diff --git a/.yetus/excludes.txt b/.yetus/excludes.txt new file mode 100644 index 00000000000..0064dc8a3a4 --- /dev/null +++ b/.yetus/excludes.txt @@ -0,0 +1,17 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +dev-support/docker/Dockerfile_windows_10 diff --git a/BUILDING.txt b/BUILDING.txt index 5f40a0d7dc3..b872d7e4194 100644 --- a/BUILDING.txt +++ b/BUILDING.txt @@ -492,39 +492,66 @@ Building on CentOS 8 ---------------------------------------------------------------------------------- -Building on Windows +Building on Windows 10 ---------------------------------------------------------------------------------- Requirements: -* Windows System +* Windows 10 * JDK 1.8 -* Maven 3.0 or later -* Boost 1.72 -* Protocol Buffers 3.7.1 -* CMake 3.19 or newer -* Visual Studio 2010 Professional or Higher -* Windows SDK 8.1 (if building CPU rate control for the container executor) -* zlib headers (if building native code bindings for zlib) +* Maven 3.0 or later (maven.apache.org) +* Boost 1.72 (boost.org) +* Protocol Buffers 3.7.1 (https://github.com/protocolbuffers/protobuf/releases) +* CMake 3.19 or newer (cmake.org) +* Visual Studio 2019 (visualstudio.com) +* Windows SDK 8.1 (optional, if building CPU rate control for the container executor. Get this from + http://msdn.microsoft.com/en-us/windows/bg162891.aspx) +* Zlib (zlib.net, if building native code bindings for zlib) +* Git (preferably, get this from https://git-scm.com/download/win since the package also contains + Unix command-line tools that are needed during packaging). +* Python (python.org, for generation of docs using 'mvn site') * Internet connection for first build (to fetch all Maven and Hadoop dependencies) -* Unix command-line tools from GnuWin32: sh, mkdir, rm, cp, tar, gzip. These - tools must be present on your PATH. -* Python ( for generation of docs using 'mvn site') - -Unix command-line tools are also included with the Windows Git package which -can be downloaded from http://git-scm.com/downloads - -If using Visual Studio, it must be Professional level or higher. -Do not use Visual Studio Express. It does not support compiling for 64-bit, -which is problematic if running a 64-bit system. - -The Windows SDK 8.1 is available to download at: - -http://msdn.microsoft.com/en-us/windows/bg162891.aspx - -Cygwin is not required. ---------------------------------------------------------------------------------- + +Building guidelines: + +Hadoop repository provides the Dockerfile for building Hadoop on Windows 10, located at +dev-support/docker/Dockerfile_windows_10. 
It is highly recommended to use this Dockerfile to create the +Docker image for building Hadoop on Windows 10, since you then don't have to install anything +other than Docker itself, and no additional steps are required to align the environment with +the necessary paths, etc. + +However, if you prefer not to use Docker, this Dockerfile_windows_10 will +still be immensely useful as a raw guide to all the steps involved in creating the environment +needed to build Hadoop on Windows 10. + +Building using Docker: +We first need to build the Docker image for building Hadoop on Windows 10. Run this command from +the root of the Hadoop repository. +> docker build -t hadoop-windows-10-builder -f .\dev-support\docker\Dockerfile_windows_10 .\dev-support\docker\ + +Start the container with the image that we just built. +> docker run --rm -it hadoop-windows-10-builder + +You can now clone the Hadoop repo inside this container and proceed with the build. + +NOTE: +While it may be tempting to mount the locally cloned (on the host filesystem) Hadoop +repository into the container (using the -v option), we have seen the build fail because +Maven could not locate some files. Thus, we suggest cloning the Hadoop repository to a +non-mounted folder inside the container and proceeding with the build there. When the build has +completed, you may use the "docker cp" command to copy the built Hadoop tar.gz file from the Docker container +to the host filesystem. If you would still like to mount the Hadoop codebase, a workaround is +to copy the mounted Hadoop codebase into another folder (which doesn't point to a mount) in the +container's filesystem and use that copy for building. + +However, we noticed no build issues when the Maven repository from the host filesystem was mounted +into the container, and doing so can greatly reduce the build time. Assuming that the Maven +repository is located at D:\Maven\Repository on the host filesystem, the following +command mounts it onto the default Maven repository location while launching the container. +> docker run --rm -v D:\Maven\Repository:C:\Users\ContainerAdministrator\.m2\repository -it hadoop-windows-10-builder + Building: Keep the source code tree in a short path to avoid running into problems related @@ -540,6 +567,24 @@ configure the bit-ness of the build, and set several optional components. Several tests require that the user must have the Create Symbolic Links privilege. +To simplify the installation of the Boost, Protocol Buffers, OpenSSL and Zlib dependencies, we can use +vcpkg (https://github.com/Microsoft/vcpkg.git). Upon cloning the vcpkg repo, check out the commit +7ffa425e1db8b0c3edf9c50f2f3a0f25a324541d to get the required versions of the dependencies +mentioned above. +> git clone https://github.com/Microsoft/vcpkg.git +> cd vcpkg +> git checkout 7ffa425e1db8b0c3edf9c50f2f3a0f25a324541d +> .\bootstrap-vcpkg.bat +> .\vcpkg.exe install boost:x64-windows +> .\vcpkg.exe install protobuf:x64-windows +> .\vcpkg.exe install openssl:x64-windows +> .\vcpkg.exe install zlib:x64-windows + +Set the following environment variables +(assuming that vcpkg was checked out at C:\vcpkg): +> set PROTOBUF_HOME=C:\vcpkg\installed\x64-windows +> set MAVEN_OPTS=-Xmx2048M -Xss128M + All Maven goals are the same as described above with the exception that native code is built by enabling the 'native-win' Maven profile.
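For instance, with the environment variables above set, the profile can also be requested explicitly on any goal. The following is only a minimal sketch of such an invocation (the full distribution command shown further below lists the additional -D options that the native parts of the build typically need):

> mvn compile -Pnative-win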
-Pnative-win is enabled by default when building on Windows since the native components @@ -557,6 +602,24 @@ the zlib 1.2.7 source tree. http://www.zlib.net/ + +Build command: +The following command builds all the modules in the Hadoop project and generates the tar.gz file in +hadoop-dist/target upon successful build. Run these commands from an +"x64 Native Tools Command Prompt for VS 2019" which can be found under "Visual Studio 2019" in the +Windows start menu. If you're using the Docker image from Dockerfile_windows_10, you'll be +logged into "x64 Native Tools Command Prompt for VS 2019" automatically when you start the +container. + +> set classpath= +> set PROTOBUF_HOME=C:\vcpkg\installed\x64-windows +> mvn clean package -Dhttps.protocols=TLSv1.2 -DskipTests -DskipDocs -Pnative-win,dist^ + -Drequire.openssl -Drequire.test.libhadoop -Pyarn-ui -Dshell-executable=C:\Git\bin\bash.exe^ + -Dtar -Dopenssl.prefix=C:\vcpkg\installed\x64-windows^ + -Dcmake.prefix.path=C:\vcpkg\installed\x64-windows^ + -Dwindows.cmake.toolchain.file=C:\vcpkg\scripts\buildsystems\vcpkg.cmake -Dwindows.cmake.build.type=RelWithDebInfo^ + -Dwindows.build.hdfspp.dll=off -Dwindows.no.sasl=on -Duse.platformToolsetVersion=v142 + ---------------------------------------------------------------------------------- Building distributions: diff --git a/LICENSE-binary b/LICENSE-binary index 0f6e7248dde..432dc5d28f7 100644 --- a/LICENSE-binary +++ b/LICENSE-binary @@ -215,17 +215,17 @@ com.aliyun:aliyun-java-sdk-ecs:4.2.0 com.aliyun:aliyun-java-sdk-ram:3.0.0 com.aliyun:aliyun-java-sdk-sts:3.0.0 com.aliyun.oss:aliyun-sdk-oss:3.13.2 -com.amazonaws:aws-java-sdk-bundle:1.12.262 +com.amazonaws:aws-java-sdk-bundle:1.12.316 com.cedarsoftware:java-util:1.9.0 com.cedarsoftware:json-io:2.5.1 com.fasterxml.jackson.core:jackson-annotations:2.12.7 com.fasterxml.jackson.core:jackson-core:2.12.7 -com.fasterxml.jackson.core:jackson-databind:2.12.7 +com.fasterxml.jackson.core:jackson-databind:2.12.7.1 com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:2.12.7 com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:2.12.7 com.fasterxml.jackson.module:jackson-module-jaxb-annotations:2.12.7 com.fasterxml.uuid:java-uuid-generator:3.1.4 -com.fasterxml.woodstox:woodstox-core:5.3.0 +com.fasterxml.woodstox:woodstox-core:5.4.0 com.github.davidmoten:rxjava-extras:0.8.0.17 com.github.stephenc.jcip:jcip-annotations:1.0-1 com.google:guice:4.0 @@ -241,17 +241,17 @@ com.google.guava:guava:27.0-jre com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava com.microsoft.azure:azure-storage:7.0.0 com.nimbusds:nimbus-jose-jwt:9.8.1 -com.squareup.okhttp3:okhttp:4.9.3 -com.squareup.okio:okio:1.6.0 +com.squareup.okhttp3:okhttp:4.10.0 +com.squareup.okio:okio:3.2.0 com.zaxxer:HikariCP:4.0.3 -commons-beanutils:commons-beanutils:1.9.3 +commons-beanutils:commons-beanutils:1.9.4 commons-cli:commons-cli:1.2 commons-codec:commons-codec:1.11 commons-collections:commons-collections:3.2.2 commons-daemon:commons-daemon:1.0.13 commons-io:commons-io:2.8.0 commons-logging:commons-logging:1.1.3 -commons-net:commons-net:3.8.0 +commons-net:commons-net:3.9.0 de.ruedigermoeller:fst:2.50 io.grpc:grpc-api:1.26.0 io.grpc:grpc-context:1.26.0 @@ -260,7 +260,6 @@ io.grpc:grpc-netty:1.26.0 io.grpc:grpc-protobuf:1.26.0 io.grpc:grpc-protobuf-lite:1.26.0 io.grpc:grpc-stub:1.26.0 -io.netty:netty:3.10.6.Final io.netty:netty-all:4.1.77.Final io.netty:netty-buffer:4.1.77.Final io.netty:netty-codec:4.1.77.Final @@ -306,11 +305,11 @@ org.apache.avro:avro:1.9.2 
org.apache.commons:commons-collections4:4.2 org.apache.commons:commons-compress:1.21 org.apache.commons:commons-configuration2:2.8.0 -org.apache.commons:commons-csv:1.0 +org.apache.commons:commons-csv:1.9.0 org.apache.commons:commons-digester:1.8.1 org.apache.commons:commons-lang3:3.12.0 org.apache.commons:commons-math3:3.6.1 -org.apache.commons:commons-text:1.9 +org.apache.commons:commons-text:1.10.0 org.apache.commons:commons-validator:1.6 org.apache.curator:curator-client:5.2.0 org.apache.curator:curator-framework:5.2.0 @@ -324,7 +323,7 @@ org.apache.htrace:htrace-core:3.1.0-incubating org.apache.htrace:htrace-core4:4.1.0-incubating org.apache.httpcomponents:httpclient:4.5.6 org.apache.httpcomponents:httpcore:4.4.10 -org.apache.kafka:kafka-clients:2.8.1 +org.apache.kafka:kafka-clients:2.8.2 org.apache.kerby:kerb-admin:2.0.2 org.apache.kerby:kerb-client:2.0.2 org.apache.kerby:kerb-common:2.0.2 @@ -343,7 +342,7 @@ org.apache.kerby:token-provider:2.0.2 org.apache.solr:solr-solrj:8.8.2 org.apache.yetus:audience-annotations:0.5.0 org.apache.zookeeper:zookeeper:3.6.3 -org.codehaus.jettison:jettison:1.1 +org.codehaus.jettison:jettison:1.5.3 org.eclipse.jetty:jetty-annotations:9.4.48.v20220622 org.eclipse.jetty:jetty-http:9.4.48.v20220622 org.eclipse.jetty:jetty-io:9.4.48.v20220622 @@ -362,8 +361,8 @@ org.ehcache:ehcache:3.3.1 org.lz4:lz4-java:1.7.1 org.objenesis:objenesis:2.6 org.xerial.snappy:snappy-java:1.0.5 -org.yaml:snakeyaml:1.32 -org.wildfly.openssl:wildfly-openssl:1.0.7.Final +org.yaml:snakeyaml:1.33 +org.wildfly.openssl:wildfly-openssl:1.1.3.Final -------------------------------------------------------------------------------- @@ -427,7 +426,7 @@ hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/bootstrap.min.js hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js hadoop-tools/hadoop-sls/src/main/html/css/bootstrap.min.css hadoop-tools/hadoop-sls/src/main/html/css/bootstrap-responsive.min.css -hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/* +hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/TERMINAL @@ -435,7 +434,7 @@ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanage bootstrap v3.3.6 broccoli-asset-rev v2.4.2 broccoli-funnel v1.0.1 -datatables v1.10.19 +datatables v1.11.5 em-helpers v0.5.13 em-table v0.1.6 ember v2.2.0 @@ -523,7 +522,7 @@ junit:junit:4.13.2 HSQL License ------------ -org.hsqldb:hsqldb:2.5.2 +org.hsqldb:hsqldb:2.7.1 JDOM License diff --git a/LICENSE.txt b/LICENSE.txt index 763cf2ce53f..2dfc0b9da47 100644 --- a/LICENSE.txt +++ b/LICENSE.txt @@ -252,7 +252,7 @@ hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/bootstrap.min.js hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js hadoop-tools/hadoop-sls/src/main/html/css/bootstrap.min.css hadoop-tools/hadoop-sls/src/main/html/css/bootstrap-responsive.min.css -hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/* +hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/TERMINAL diff --git a/dev-support/docker/Dockerfile_windows_10 b/dev-support/docker/Dockerfile_windows_10 new file mode 100644 index 00000000000..7a69a2727ae --- /dev/null +++ b/dev-support/docker/Dockerfile_windows_10 @@ -0,0 +1,81 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Dockerfile for installing the necessary dependencies for building Hadoop. +# See BUILDING.txt. + +FROM mcr.microsoft.com/windows:ltsc2019 + +# Need to disable the progress bar for speeding up the downloads. +# hadolint ignore=SC2086 +RUN powershell $Global:ProgressPreference = 'SilentlyContinue' + +# Restore the default Windows shell for correct batch processing. +SHELL ["cmd", "/S", "/C"] + +# Install Visual Studio 2019 Build Tools. +RUN curl -SL --output vs_buildtools.exe https://aka.ms/vs/16/release/vs_buildtools.exe \ + && (start /w vs_buildtools.exe --quiet --wait --norestart --nocache \ + --installPath "%ProgramFiles(x86)%\Microsoft Visual Studio\2019\BuildTools" \ + --add Microsoft.VisualStudio.Workload.VCTools \ + --add Microsoft.VisualStudio.Component.VC.ASAN \ + --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 \ + --add Microsoft.VisualStudio.Component.Windows10SDK.19041 \ + || IF "%ERRORLEVEL%"=="3010" EXIT 0) \ + && del /q vs_buildtools.exe + +# Install Chocolatey. +RUN powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" +RUN setx PATH "%PATH%;%ALLUSERSPROFILE%\chocolatey\bin" + +# Install git. +RUN choco install git.install -y +RUN powershell Copy-Item -Recurse -Path 'C:\Program Files\Git' -Destination C:\Git + +# Install vcpkg. +# hadolint ignore=DL3003 +RUN powershell git clone https://github.com/microsoft/vcpkg.git \ + && cd vcpkg \ + && git checkout 7ffa425e1db8b0c3edf9c50f2f3a0f25a324541d \ + && .\bootstrap-vcpkg.bat +RUN powershell .\vcpkg\vcpkg.exe install boost:x64-windows +RUN powershell .\vcpkg\vcpkg.exe install protobuf:x64-windows +RUN powershell .\vcpkg\vcpkg.exe install openssl:x64-windows +RUN powershell .\vcpkg\vcpkg.exe install zlib:x64-windows +ENV PROTOBUF_HOME "C:\vcpkg\installed\x64-windows" + +# Install Azul Java 8 JDK. +RUN powershell Invoke-WebRequest -URI https://cdn.azul.com/zulu/bin/zulu8.62.0.19-ca-jdk8.0.332-win_x64.zip -OutFile $Env:TEMP\zulu8.62.0.19-ca-jdk8.0.332-win_x64.zip +RUN powershell Expand-Archive -Path $Env:TEMP\zulu8.62.0.19-ca-jdk8.0.332-win_x64.zip -DestinationPath "C:\Java" +ENV JAVA_HOME "C:\Java\zulu8.62.0.19-ca-jdk8.0.332-win_x64" +RUN setx PATH "%PATH%;%JAVA_HOME%\bin" + +# Install Apache Maven. 
+RUN powershell Invoke-WebRequest -URI https://dlcdn.apache.org/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.zip -OutFile $Env:TEMP\apache-maven-3.8.6-bin.zip +RUN powershell Expand-Archive -Path $Env:TEMP\apache-maven-3.8.6-bin.zip -DestinationPath "C:\Maven" +RUN setx PATH "%PATH%;C:\Maven\apache-maven-3.8.6\bin" +ENV MAVEN_OPTS '-Xmx2048M -Xss128M' + +# Install CMake 3.19.0. +RUN powershell Invoke-WebRequest -URI https://cmake.org/files/v3.19/cmake-3.19.0-win64-x64.zip -OutFile $Env:TEMP\cmake-3.19.0-win64-x64.zip +RUN powershell Expand-Archive -Path $Env:TEMP\cmake-3.19.0-win64-x64.zip -DestinationPath "C:\CMake" +RUN setx PATH "%PATH%;C:\CMake\cmake-3.19.0-win64-x64\bin" + +# We get strange Javadoc errors without this. +RUN setx classpath "" + +# Define the entry point for the docker container. +ENTRYPOINT ["C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat", "&&", "cmd.exe"] diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml b/hadoop-client-modules/hadoop-client-runtime/pom.xml index b2bd7a4fc43..d5185f0fffc 100644 --- a/hadoop-client-modules/hadoop-client-runtime/pom.xml +++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml @@ -148,6 +148,7 @@ com.google.code.findbugs:jsr305 + io.netty:* io.dropwizard.metrics:metrics-core org.eclipse.jetty:jetty-servlet org.eclipse.jetty:jetty-security @@ -156,6 +157,8 @@ org.bouncycastle:* org.xerial.snappy:* + + org.jetbrains.kotlin:* diff --git a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml index 33d3f957817..6c8a0916802 100644 --- a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml +++ b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml @@ -127,11 +127,6 @@ hadoop-azure-datalake compile - - org.apache.hadoop - hadoop-openstack - compile - org.apache.hadoop hadoop-cos diff --git a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/util/PlatformName.java b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/util/PlatformName.java index eb52839b65a..c52d5d21351 100644 --- a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/util/PlatformName.java +++ b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/util/PlatformName.java @@ -18,6 +18,10 @@ package org.apache.hadoop.util; +import java.security.AccessController; +import java.security.PrivilegedAction; +import java.util.Arrays; + import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; @@ -33,10 +37,10 @@ public class PlatformName { * per the java-vm. */ public static final String PLATFORM_NAME = - (System.getProperty("os.name").startsWith("Windows") - ? System.getenv("os") : System.getProperty("os.name")) - + "-" + System.getProperty("os.arch") - + "-" + System.getProperty("sun.arch.data.model"); + (System.getProperty("os.name").startsWith("Windows") ? + System.getenv("os") : System.getProperty("os.name")) + + "-" + System.getProperty("os.arch") + "-" + + System.getProperty("sun.arch.data.model"); /** * The java vendor name used in this platform. @@ -44,10 +48,60 @@ public class PlatformName { public static final String JAVA_VENDOR_NAME = System.getProperty("java.vendor"); /** - * A public static variable to indicate the current java vendor is - * IBM java or not. + * Define a system class accessor that is open to changes in underlying implementations + * of the system class loader modules. 
*/ - public static final boolean IBM_JAVA = JAVA_VENDOR_NAME.contains("IBM"); + private static final class SystemClassAccessor extends ClassLoader { + public Class getSystemClass(String className) throws ClassNotFoundException { + return findSystemClass(className); + } + } + + /** + * A public static variable to indicate the current java vendor is + * IBM and the type is Java Technology Edition which provides its + * own implementations of many security packages and Cipher suites. + * Note that these are not provided in Semeru runtimes: + * See https://developer.ibm.com/languages/java/semeru-runtimes for details. + */ + public static final boolean IBM_JAVA = JAVA_VENDOR_NAME.contains("IBM") && + hasIbmTechnologyEditionModules(); + + private static boolean hasIbmTechnologyEditionModules() { + return Arrays.asList( + "com.ibm.security.auth.module.JAASLoginModule", + "com.ibm.security.auth.module.Win64LoginModule", + "com.ibm.security.auth.module.NTLoginModule", + "com.ibm.security.auth.module.AIX64LoginModule", + "com.ibm.security.auth.module.LinuxLoginModule", + "com.ibm.security.auth.module.Krb5LoginModule" + ).stream().anyMatch((module) -> isSystemClassAvailable(module)); + } + + /** + * In rare cases where different behaviour is performed based on the JVM vendor + * this method should be used to test for a unique JVM class provided by the + * vendor rather than using the vendor method. For example if on JVM provides a + * different Kerberos login module testing for that login module being loadable + * before configuring to use it is preferable to using the vendor data. + * + * @param className the name of a class in the JVM to test for + * @return true if the class is available, false otherwise. + */ + private static boolean isSystemClassAvailable(String className) { + return AccessController.doPrivileged((PrivilegedAction) () -> { + try { + // Using ClassLoader.findSystemClass() instead of + // Class.forName(className, false, null) because Class.forName with a null + // ClassLoader only looks at the boot ClassLoader with Java 9 and above + // which doesn't look at all the modules available to the findSystemClass. + new SystemClassAccessor().getSystemClass(className); + return true; + } catch (Exception ignored) { + return false; + } + }); + } public static void main(String[] args) { System.out.println(PLATFORM_NAME); diff --git a/hadoop-common-project/hadoop-auth/src/site/markdown/Configuration.md b/hadoop-common-project/hadoop-auth/src/site/markdown/Configuration.md index d9b275e5400..43597b68811 100644 --- a/hadoop-common-project/hadoop-auth/src/site/markdown/Configuration.md +++ b/hadoop-common-project/hadoop-auth/src/site/markdown/Configuration.md @@ -24,7 +24,7 @@ This filter must be configured in front of all the web application resources tha The Hadoop Auth and dependent JAR files must be in the web application classpath (commonly the `WEB-INF/lib` directory). -Hadoop Auth uses SLF4J-API for logging. Auth Maven POM dependencies define the SLF4J API dependency but it does not define the dependency on a concrete logging implementation, this must be addded explicitly to the web application. For example, if the web applicationan uses Log4j, the SLF4J-LOG4J12 and LOG4J jar files must be part part of the web application classpath as well as the Log4j configuration file. +Hadoop Auth uses SLF4J-API for logging. 
Auth Maven POM dependencies define the SLF4J API dependency but it does not define the dependency on a concrete logging implementation, this must be addded explicitly to the web application. For example, if the web applicationan uses Log4j, the SLF4J-LOG4J12 and LOG4J jar files must be part of the web application classpath as well as the Log4j configuration file. ### Common Configuration parameters diff --git a/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml b/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml index 23e39d055ff..b885891af73 100644 --- a/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml +++ b/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml @@ -379,21 +379,6 @@ - - - - - - - - - - - - diff --git a/hadoop-common-project/hadoop-common/pom.xml b/hadoop-common-project/hadoop-common/pom.xml index 5b5ffe1b006..4391995d209 100644 --- a/hadoop-common-project/hadoop-common/pom.xml +++ b/hadoop-common-project/hadoop-common/pom.xml @@ -383,6 +383,11 @@ mockwebserver test + + com.squareup.okio + okio-jvm + test + dnsjava dnsjava diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java index 0a1e8868e3a..ab7ff0bd40c 100755 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java @@ -24,7 +24,6 @@ import com.ctc.wstx.io.SystemId; import com.ctc.wstx.stax.WstxInputFactory; import com.fasterxml.jackson.core.JsonFactory; import com.fasterxml.jackson.core.JsonGenerator; -import org.apache.hadoop.classification.VisibleForTesting; import java.io.BufferedInputStream; import java.io.DataInput; @@ -87,6 +86,7 @@ import org.apache.hadoop.thirdparty.com.google.common.base.Charsets; import org.apache.commons.collections.map.UnmodifiableMap; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; @@ -98,18 +98,19 @@ import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.alias.CredentialProvider; import org.apache.hadoop.security.alias.CredentialProvider.CredentialEntry; import org.apache.hadoop.security.alias.CredentialProviderFactory; +import org.apache.hadoop.thirdparty.com.google.common.base.Strings; +import org.apache.hadoop.util.Preconditions; import org.apache.hadoop.util.ReflectionUtils; import org.apache.hadoop.util.StringInterner; import org.apache.hadoop.util.StringUtils; +import org.apache.hadoop.util.XMLUtils; + import org.codehaus.stax2.XMLStreamReader2; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.w3c.dom.Document; import org.w3c.dom.Element; -import org.apache.hadoop.util.Preconditions; -import org.apache.hadoop.thirdparty.com.google.common.base.Strings; - import static org.apache.commons.lang3.StringUtils.isBlank; import static org.apache.commons.lang3.StringUtils.isNotBlank; @@ -3604,7 +3605,7 @@ public class Configuration implements Iterable>, try { DOMSource source = new DOMSource(doc); StreamResult result = new StreamResult(out); - TransformerFactory transFactory = TransformerFactory.newInstance(); + 
TransformerFactory transFactory = XMLUtils.newSecureTransformerFactory(); Transformer transformer = transFactory.newTransformer(); // Important to not hold Configuration log while writing result, since diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java index 4d1674bd7b8..5e207251805 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java @@ -639,13 +639,14 @@ public abstract class KeyProvider implements Closeable { public abstract void flush() throws IOException; /** - * Split the versionName in to a base name. Converts "/aaa/bbb/3" to + * Split the versionName in to a base name. Converts "/aaa/bbb@3" to * "/aaa/bbb". * @param versionName the version name to split * @return the base name of the key * @throws IOException raised on errors performing I/O. */ public static String getBaseName(String versionName) throws IOException { + Objects.requireNonNull(versionName, "VersionName cannot be null"); int div = versionName.lastIndexOf('@'); if (div == -1) { throw new IOException("No version in key path " + versionName); diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AvroFSInput.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AvroFSInput.java index 7518dd2f7ef..155381de949 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AvroFSInput.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AvroFSInput.java @@ -60,7 +60,6 @@ public class AvroFSInput implements Closeable, SeekableInput { FS_OPTION_OPENFILE_READ_POLICY_SEQUENTIAL) .withFileStatus(status) .build()); - fc.open(p); } @Override diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileRange.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileRange.java index e55696e9650..97da65585d6 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileRange.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileRange.java @@ -55,6 +55,15 @@ public interface FileRange { */ void setData(CompletableFuture data); + /** + * Get any reference passed in to the file range constructor. + * This is not used by any implementation code; it is to help + * bind this API to libraries retrieving multiple stripes of + * data in parallel. + * @return a reference or null. + */ + Object getReference(); + /** * Factory method to create a FileRange object. * @param offset starting offset of the range. @@ -62,6 +71,17 @@ public interface FileRange { * @return a new instance of FileRangeImpl. */ static FileRange createFileRange(long offset, int length) { - return new FileRangeImpl(offset, length); + return new FileRangeImpl(offset, length, null); + } + + /** + * Factory method to create a FileRange object. + * @param offset starting offset of the range. + * @param length length of the range. + * @param reference nullable reference to store in the range. + * @return a new instance of FileRangeImpl. 
+ */ + static FileRange createFileRange(long offset, int length, Object reference) { + return new FileRangeImpl(offset, length, reference); } } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java index fcef578b072..1d9458148e4 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java @@ -402,7 +402,8 @@ public class FileStatus implements Writable, Comparable, } /** - * Compare this FileStatus to another FileStatus + * Compare this FileStatus to another FileStatus based on lexicographical + * order of path. * @param o the FileStatus to be compared. * @return a negative integer, zero, or a positive integer as this object * is less than, equal to, or greater than the specified object. @@ -412,7 +413,8 @@ public class FileStatus implements Writable, Comparable, } /** - * Compare this FileStatus to another FileStatus. + * Compare this FileStatus to another FileStatus based on lexicographical + * order of path. * This method was added back by HADOOP-14683 to keep binary compatibility. * * @param o the FileStatus to be compared. diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java index 0bc419b0353..df853078461 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java @@ -21,7 +21,6 @@ import javax.annotation.Nonnull; import java.io.Closeable; import java.io.FileNotFoundException; import java.io.IOException; -import java.io.InterruptedIOException; import java.lang.ref.WeakReference; import java.lang.ref.ReferenceQueue; import java.net.URI; @@ -1544,6 +1543,39 @@ public abstract class FileSystem extends Configured public abstract FSDataOutputStream append(Path f, int bufferSize, Progressable progress) throws IOException; + /** + * Append to an existing file (optional operation). + * @param f the existing file to be appended. + * @param appendToNewBlock whether to append data to a new block + * instead of the end of the last partial block + * @throws IOException IO failure + * @throws UnsupportedOperationException if the operation is unsupported + * (default). + * @return output stream. + */ + public FSDataOutputStream append(Path f, boolean appendToNewBlock) throws IOException { + return append(f, getConf().getInt(IO_FILE_BUFFER_SIZE_KEY, + IO_FILE_BUFFER_SIZE_DEFAULT), null, appendToNewBlock); + } + + /** + * Append to an existing file (optional operation). + * This function is used for being overridden by some FileSystem like DistributedFileSystem + * @param f the existing file to be appended. + * @param bufferSize the size of the buffer to be used. + * @param progress for reporting progress if it is not null. + * @param appendToNewBlock whether to append data to a new block + * instead of the end of the last partial block + * @throws IOException IO failure + * @throws UnsupportedOperationException if the operation is unsupported + * (default). + * @return output stream. 
+ */ + public FSDataOutputStream append(Path f, int bufferSize, + Progressable progress, boolean appendToNewBlock) throws IOException { + return append(f, bufferSize, progress); + } + /** * Concat existing files together. * @param trg the path to the target destination. @@ -3647,11 +3679,7 @@ public abstract class FileSystem extends Configured // to construct an instance. try (DurationInfo d = new DurationInfo(LOGGER, false, "Acquiring creator semaphore for %s", uri)) { - creatorPermits.acquire(); - } catch (InterruptedException e) { - // acquisition was interrupted; convert to an IOE. - throw (IOException)new InterruptedIOException(e.toString()) - .initCause(e); + creatorPermits.acquireUninterruptibly(); } FileSystem fsToClose = null; try { diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Trash.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Trash.java index 5c5fa0237ea..73749dd2549 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Trash.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Trash.java @@ -69,7 +69,7 @@ public class Trash extends Configured { * Hence we get the file system of the fully-qualified resolved-path and * then move the path p to the trashbin in that volume, * @param fs - the filesystem of path p - * @param p - the path being deleted - to be moved to trasg + * @param p - the path being deleted - to be moved to trash * @param conf - configuration * @return false if the item is already in the trash or trash is disabled * @throws IOException on error diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java index 50cab7dc4cc..cf1b1ef9698 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java @@ -307,9 +307,16 @@ public final class VectoredReadUtils { FileRange request) { int offsetChange = (int) (request.getOffset() - readOffset); int requestLength = request.getLength(); + // Create a new buffer that is backed by the original contents + // The buffer will have position 0 and the same limit as the original one readData = readData.slice(); + // Change the offset and the limit of the buffer as the reader wants to see + // only relevant data readData.position(offsetChange); readData.limit(offsetChange + requestLength); + // Create a new buffer after the limit change so that only that portion of the data is + // returned to the reader. + readData = readData.slice(); return readData; } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/audit/AuditConstants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/audit/AuditConstants.java index 0929c2be03a..ffca6097c47 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/audit/AuditConstants.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/audit/AuditConstants.java @@ -90,6 +90,11 @@ public final class AuditConstants { */ public static final String PARAM_PROCESS = "ps"; + /** + * Header: Range for GET request data: {@value}. + */ + public static final String PARAM_RANGE = "rg"; + /** * Task Attempt ID query header: {@value}. 
*/ diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/CombinedFileRange.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/CombinedFileRange.java index 516bbb2c70c..c9555a1e541 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/CombinedFileRange.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/CombinedFileRange.java @@ -29,10 +29,10 @@ import java.util.List; * together into a single read for efficiency. */ public class CombinedFileRange extends FileRangeImpl { - private ArrayList underlying = new ArrayList<>(); + private List underlying = new ArrayList<>(); public CombinedFileRange(long offset, long end, FileRange original) { - super(offset, (int) (end - offset)); + super(offset, (int) (end - offset), null); this.underlying.add(original); } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FileRangeImpl.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FileRangeImpl.java index 041e5f0a8d2..1239be764ba 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FileRangeImpl.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FileRangeImpl.java @@ -34,9 +34,21 @@ public class FileRangeImpl implements FileRange { private int length; private CompletableFuture reader; - public FileRangeImpl(long offset, int length) { + /** + * nullable reference to store in the range. + */ + private final Object reference; + + /** + * Create. + * @param offset offset in file + * @param length length of data to read. + * @param reference nullable reference to store in the range. + */ + public FileRangeImpl(long offset, int length, Object reference) { this.offset = offset; this.length = length; + this.reference = reference; } @Override @@ -71,4 +83,9 @@ public class FileRangeImpl implements FileRange { public CompletableFuture getData() { return reader; } + + @Override + public Object getReference() { + return reference; + } } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/WeakRefMetricsSource.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/WeakRefMetricsSource.java new file mode 100644 index 00000000000..14677385793 --- /dev/null +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/WeakRefMetricsSource.java @@ -0,0 +1,97 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.impl; + +import java.lang.ref.WeakReference; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.metrics2.MetricsCollector; +import org.apache.hadoop.metrics2.MetricsSource; + +import static java.util.Objects.requireNonNull; + +/** + * A weak referenced metrics source which avoids hanging on to large objects + * if somehow they don't get fully closed/cleaned up. + * The JVM may clean up all objects which are only weakly referenced whenever + * it does a GC, even if there is no memory pressure. + * To avoid these refs being removed, always keep a strong reference around + * somewhere. + */ +@InterfaceAudience.Private +public class WeakRefMetricsSource implements MetricsSource { + + /** + * Name to know when unregistering. + */ + private final String name; + + /** + * Underlying metrics source. + */ + private final WeakReference sourceWeakReference; + + /** + * Constructor. + * @param name Name to know when unregistering. + * @param source metrics source + */ + public WeakRefMetricsSource(final String name, final MetricsSource source) { + this.name = name; + this.sourceWeakReference = new WeakReference<>(requireNonNull(source)); + } + + /** + * If the weak reference is non null, update the metrics. + * @param collector to contain the resulting metrics snapshot + * @param all if true, return all metrics even if unchanged. + */ + @Override + public void getMetrics(final MetricsCollector collector, final boolean all) { + MetricsSource metricsSource = sourceWeakReference.get(); + if (metricsSource != null) { + metricsSource.getMetrics(collector, all); + } + } + + /** + * Name to know when unregistering. + * @return the name passed in during construction. + */ + public String getName() { + return name; + } + + /** + * Get the source, will be null if the reference has been GC'd + * @return the source reference + */ + public MetricsSource getSource() { + return sourceWeakReference.get(); + } + + @Override + public String toString() { + return "WeakRefMetricsSource{" + + "name='" + name + '\'' + + ", sourceWeakReference is " + + (sourceWeakReference.get() == null ? 
"unset" : "set") + + '}'; + } +} diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/FilePosition.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/FilePosition.java index 7cd3bb3de2b..286bdd7ae89 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/FilePosition.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/FilePosition.java @@ -116,7 +116,7 @@ public final class FilePosition { readOffset, "readOffset", startOffset, - startOffset + bufferData.getBuffer().limit() - 1); + startOffset + bufferData.getBuffer().limit()); data = bufferData; buffer = bufferData.getBuffer().duplicate(); @@ -182,7 +182,7 @@ public final class FilePosition { */ public boolean isWithinCurrentBuffer(long pos) { throwIfInvalidBuffer(); - long bufferEndOffset = bufferStartOffset + buffer.limit() - 1; + long bufferEndOffset = bufferStartOffset + buffer.limit(); return (pos >= bufferStartOffset) && (pos <= bufferEndOffset); } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/package-info.java index 48d6644e99b..7d9b829f7d3 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/package-info.java @@ -15,6 +15,11 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Filesystem implementations that allow Hadoop to read directly from + * the local file system. + */ @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"}) @InterfaceStability.Unstable package org.apache.hadoop.fs.local; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java index 0643a2e983d..1ac204f5f8a 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java @@ -333,15 +333,24 @@ class CopyCommands { */ public static class AppendToFile extends CommandWithDestination { public static final String NAME = "appendToFile"; - public static final String USAGE = " ... "; + public static final String USAGE = "[-n] ... "; public static final String DESCRIPTION = "Appends the contents of all the given local files to the " + "given dst file. The dst file will be created if it does " + "not exist. If is -, then the input is read " + - "from stdin."; + "from stdin. 
Option -n represents that use NEW_BLOCK create flag to append file."; private static final int DEFAULT_IO_LENGTH = 1024 * 1024; boolean readStdin = false; + private boolean appendToNewBlock = false; + + public boolean isAppendToNewBlock() { + return appendToNewBlock; + } + + public void setAppendToNewBlock(boolean appendToNewBlock) { + this.appendToNewBlock = appendToNewBlock; + } // commands operating on local paths have no need for glob expansion @Override @@ -372,6 +381,9 @@ class CopyCommands { throw new IOException("missing destination argument"); } + CommandFormat cf = new CommandFormat(2, Integer.MAX_VALUE, "n"); + cf.parse(args); + appendToNewBlock = cf.getOpt("n"); getRemoteDestination(args); super.processOptions(args); } @@ -385,7 +397,8 @@ class CopyCommands { } InputStream is = null; - try (FSDataOutputStream fos = dst.fs.append(dst.path)) { + try (FSDataOutputStream fos = appendToNewBlock ? + dst.fs.append(dst.path, true) : dst.fs.append(dst.path)) { if (readStdin) { if (args.size() == 0) { IOUtils.copyBytes(System.in, fos, DEFAULT_IO_LENGTH); diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java index d3ca013a3f2..1a3f18a658d 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java @@ -47,7 +47,6 @@ import org.apache.hadoop.io.DataInputBuffer; import org.apache.hadoop.io.DataOutputBuffer; import org.apache.hadoop.io.IOUtils; import org.apache.hadoop.io.SequenceFile; -import org.apache.hadoop.io.Writable; import org.apache.hadoop.io.compress.CompressionCodec; import org.apache.hadoop.io.compress.CompressionCodecFactory; import org.apache.hadoop.util.ReflectionUtils; @@ -217,8 +216,8 @@ class Display extends FsCommand { protected class TextRecordInputStream extends InputStream { SequenceFile.Reader r; - Writable key; - Writable val; + Object key; + Object val; DataInputBuffer inbuf; DataOutputBuffer outbuf; @@ -228,10 +227,8 @@ class Display extends FsCommand { final Configuration lconf = getConf(); r = new SequenceFile.Reader(lconf, SequenceFile.Reader.file(fpath)); - key = ReflectionUtils.newInstance( - r.getKeyClass().asSubclass(Writable.class), lconf); - val = ReflectionUtils.newInstance( - r.getValueClass().asSubclass(Writable.class), lconf); + key = ReflectionUtils.newInstance(r.getKeyClass(), lconf); + val = ReflectionUtils.newInstance(r.getValueClass(), lconf); inbuf = new DataInputBuffer(); outbuf = new DataOutputBuffer(); } @@ -240,8 +237,11 @@ class Display extends FsCommand { public int read() throws IOException { int ret; if (null == inbuf || -1 == (ret = inbuf.read())) { - if (!r.next(key, val)) { + key = r.next(key); + if (key == null) { return -1; + } else { + val = r.getCurrentValue(val); } byte[] tmp = key.toString().getBytes(StandardCharsets.UTF_8); outbuf.write(tmp, 0, tmp.length); diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/package-info.java index 92720bff69b..2f0542aa696 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/package-info.java @@ -15,6 +15,10 @@ * See the License for the specific language 
governing permissions and * limitations under the License. */ + +/** + * Support for the execution of a file system command. + */ @InterfaceAudience.Private @InterfaceStability.Unstable package org.apache.hadoop.fs.shell; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsStoreImpl.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsStoreImpl.java index 0471703b3b0..6db38208919 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsStoreImpl.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsStoreImpl.java @@ -190,7 +190,7 @@ final class IOStatisticsStoreImpl extends WrappedIOStatistics return counter.get(); } else { long l = incAtomicLong(counter, value); - LOG.debug("Incrementing counter {} by {} with final value {}", + LOG.trace("Incrementing counter {} by {} with final value {}", key, value, l); return l; } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/package-info.java index 32bbbf22307..b1710e0f9cb 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/package-info.java @@ -15,6 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Support for embedded HTTP services. + */ @InterfaceAudience.LimitedPrivate({"HBase", "HDFS", "MapReduce"}) @InterfaceStability.Unstable package org.apache.hadoop.http; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/DefaultStringifier.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/DefaultStringifier.java index 7453996ecab..7be50b0c539 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/DefaultStringifier.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/DefaultStringifier.java @@ -158,6 +158,9 @@ public class DefaultStringifier implements Stringifier { public static void storeArray(Configuration conf, K[] items, String keyName) throws IOException { + if (items.length == 0) { + throw new IndexOutOfBoundsException(); + } DefaultStringifier stringifier = new DefaultStringifier(conf, GenericsUtil.getClass(items[0])); try { diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/package-info.java index 785170eaf62..9973b78e39a 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/package-info.java @@ -15,6 +15,11 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Implementation of compression/decompression for the BZip2 + * compression algorithm. 
+ */ @InterfaceAudience.Private @InterfaceStability.Unstable package org.apache.hadoop.io.compress.bzip2; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/package-info.java index 11827f17486..438dfdea3e7 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/package-info.java @@ -15,6 +15,13 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Implementation of compression/decompression for the LZ4 + * compression algorithm. + * + * @see LZ4 + */ @InterfaceAudience.Private @InterfaceStability.Unstable package org.apache.hadoop.io.compress.lz4; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/package-info.java index eedf6550833..320fd026a1d 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/package-info.java @@ -15,6 +15,13 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Implementation of compression/decompression for the Snappy + * compression algorithm. + * + * @see Snappy + */ @InterfaceAudience.Private @InterfaceStability.Unstable package org.apache.hadoop.io.compress.snappy; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/package-info.java index 33d0a8d7ceb..515eb3498f2 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/package-info.java @@ -15,6 +15,13 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Implementation of compression/decompression based on the popular + * gzip compressed file format. + * + * @see gzip + */ @InterfaceAudience.Private @InterfaceStability.Unstable package org.apache.hadoop.io.compress.zlib; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/package-info.java index 9069070f73a..7214bf8582b 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/package-info.java @@ -15,6 +15,13 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Implementation of compression/decompression based on the zStandard + * compression algorithm. 
+ * + * @see zStandard + */ @InterfaceAudience.Private @InterfaceStability.Unstable package org.apache.hadoop.io.compress.zstd; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/package-info.java index 346f895e650..7e47b3b54af 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/package-info.java @@ -15,6 +15,12 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Various native IO-related calls not available in Java. These + * functions should generally be used alongside a fallback to another + * more portable mechanism. + */ @InterfaceAudience.Private @InterfaceStability.Unstable package org.apache.hadoop.io.nativeio; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java index 98d7e82c70e..b6e6f0c57a8 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java @@ -140,12 +140,8 @@ public final class CallerContext { } public Builder(String context, Configuration conf) { - if (isValid(context)) { - sb.append(context); - } - fieldSeparator = conf.get(HADOOP_CALLER_CONTEXT_SEPARATOR_KEY, - HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT); - checkFieldSeparator(fieldSeparator); + this(context, conf.get(HADOOP_CALLER_CONTEXT_SEPARATOR_KEY, + HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT)); } public Builder(String context, String separator) { diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java index f0d4f8921a3..c0f90d98bc6 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java @@ -18,10 +18,10 @@ package org.apache.hadoop.ipc; +import org.apache.commons.lang3.tuple.Pair; import org.apache.hadoop.security.AccessControlException; import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.util.Preconditions; -import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceAudience.Public; import org.apache.hadoop.classification.InterfaceStability; @@ -166,73 +166,6 @@ public class Client implements AutoCloseable { private final int maxAsyncCalls; private final AtomicInteger asyncCallCounter = new AtomicInteger(0); - /** - * Executor on which IPC calls' parameters are sent. - * Deferring the sending of parameters to a separate - * thread isolates them from thread interruptions in the - * calling code. 
- */ - private final ExecutorService sendParamsExecutor; - private final static ClientExecutorServiceFactory clientExcecutorFactory = - new ClientExecutorServiceFactory(); - - private static class ClientExecutorServiceFactory { - private int executorRefCount = 0; - private ExecutorService clientExecutor = null; - - /** - * Get Executor on which IPC calls' parameters are sent. - * If the internal reference counter is zero, this method - * creates the instance of Executor. If not, this method - * just returns the reference of clientExecutor. - * - * @return An ExecutorService instance - */ - synchronized ExecutorService refAndGetInstance() { - if (executorRefCount == 0) { - clientExecutor = Executors.newCachedThreadPool( - new ThreadFactoryBuilder() - .setDaemon(true) - .setNameFormat("IPC Parameter Sending Thread #%d") - .build()); - } - executorRefCount++; - - return clientExecutor; - } - - /** - * Cleanup Executor on which IPC calls' parameters are sent. - * If reference counter is zero, this method discards the - * instance of the Executor. If not, this method - * just decrements the internal reference counter. - * - * @return An ExecutorService instance if it exists. - * Null is returned if not. - */ - synchronized ExecutorService unrefAndCleanup() { - executorRefCount--; - assert(executorRefCount >= 0); - - if (executorRefCount == 0) { - clientExecutor.shutdown(); - try { - if (!clientExecutor.awaitTermination(1, TimeUnit.MINUTES)) { - clientExecutor.shutdownNow(); - } - } catch (InterruptedException e) { - LOG.warn("Interrupted while waiting for clientExecutor" + - " to stop"); - clientExecutor.shutdownNow(); - Thread.currentThread().interrupt(); - } - clientExecutor = null; - } - - return clientExecutor; - } - } - /** * set the ping interval value in configuration * @@ -301,11 +234,6 @@ public class Client implements AutoCloseable { conf.setInt(CommonConfigurationKeys.IPC_CLIENT_CONNECT_TIMEOUT_KEY, timeout); } - @VisibleForTesting - public static final ExecutorService getClientExecutor() { - return Client.clientExcecutorFactory.clientExecutor; - } - /** * Increment this client's reference count */ @@ -462,8 +390,10 @@ public class Client implements AutoCloseable { private AtomicLong lastActivity = new AtomicLong();// last I/O activity time private AtomicBoolean shouldCloseConnection = new AtomicBoolean(); // indicate if the connection is closed private IOException closeException; // close reason - - private final Object sendRpcRequestLock = new Object(); + + private final Thread rpcRequestThread; + private final SynchronousQueue> rpcRequestQueue = + new SynchronousQueue<>(true); private AtomicReference connectingThread = new AtomicReference<>(); private final Consumer removeMethod; @@ -472,6 +402,9 @@ public class Client implements AutoCloseable { Consumer removeMethod) { this.remoteId = remoteId; this.server = remoteId.getAddress(); + this.rpcRequestThread = new Thread(new RpcRequestSender(), + "IPC Parameter Sending Thread for " + remoteId); + this.rpcRequestThread.setDaemon(true); this.maxResponseLength = remoteId.conf.getInt( CommonConfigurationKeys.IPC_MAXIMUM_RESPONSE_LENGTH, @@ -771,7 +704,7 @@ public class Client implements AutoCloseable { * handle that, a relogin is attempted. 
*/ private synchronized void handleSaslConnectionFailure( - final int currRetries, final int maxRetries, final Exception ex, + final int currRetries, final int maxRetries, final IOException ex, final Random rand, final UserGroupInformation ugi) throws IOException, InterruptedException { ugi.doAs(new PrivilegedExceptionAction() { @@ -782,10 +715,7 @@ public class Client implements AutoCloseable { disposeSasl(); if (shouldAuthenticateOverKrb()) { if (currRetries < maxRetries) { - if(LOG.isDebugEnabled()) { - LOG.debug("Exception encountered while connecting to " - + "the server : " + ex); - } + LOG.debug("Exception encountered while connecting to the server {}", remoteId, ex); // try re-login if (UserGroupInformation.isLoginKeytabBased()) { UserGroupInformation.getLoginUser().reloginFromKeytab(); @@ -803,7 +733,11 @@ public class Client implements AutoCloseable { + UserGroupInformation.getLoginUser().getUserName() + " to " + remoteId; LOG.warn(msg, ex); - throw (IOException) new IOException(msg).initCause(ex); + throw NetUtils.wrapException(remoteId.getAddress().getHostName(), + remoteId.getAddress().getPort(), + NetUtils.getHostname(), + 0, + ex); } } else { // With RequestHedgingProxyProvider, one rpc call will send multiple @@ -811,11 +745,9 @@ public class Client implements AutoCloseable { // all other requests will be interrupted. It's not a big problem, // and should not print a warning log. if (ex instanceof InterruptedIOException) { - LOG.debug("Exception encountered while connecting to the server", - ex); + LOG.debug("Exception encountered while connecting to the server {}", remoteId, ex); } else { - LOG.warn("Exception encountered while connecting to the server ", - ex); + LOG.warn("Exception encountered while connecting to the server {}", remoteId, ex); } } if (ex instanceof RemoteException) @@ -1150,6 +1082,10 @@ public class Client implements AutoCloseable { @Override public void run() { + // Don't start the ipc parameter sending thread until we start this + // thread, because the shutdown logic only gets triggered if this + // thread is started. + rpcRequestThread.start(); if (LOG.isDebugEnabled()) LOG.debug(getName() + ": starting, having connections " + connections.size()); @@ -1173,9 +1109,52 @@ public class Client implements AutoCloseable { + connections.size()); } + /** + * A thread to write rpc requests to the socket. + */ + private class RpcRequestSender implements Runnable { + @Override + public void run() { + while (!shouldCloseConnection.get()) { + ResponseBuffer buf = null; + try { + Pair pair = + rpcRequestQueue.poll(maxIdleTime, TimeUnit.MILLISECONDS); + if (pair == null || shouldCloseConnection.get()) { + continue; + } + buf = pair.getRight(); + synchronized (ipcStreams.out) { + if (LOG.isDebugEnabled()) { + Call call = pair.getLeft(); + LOG.debug(getName() + "{} sending #{} {}", getName(), call.id, + call.rpcRequest); + } + // RpcRequestHeader + RpcRequest + ipcStreams.sendRequest(buf.toByteArray()); + ipcStreams.flush(); + } + } catch (InterruptedException ie) { + // stop this thread + return; + } catch (IOException e) { + // exception at this point would leave the connection in an + // unrecoverable state (eg half a call left on the wire). + // So, close the connection, killing any outstanding calls + markClosed(e); + } finally { + //the buffer is just an in-memory buffer, but it is still polite to + // close early + IOUtils.closeStream(buf); + } + } + } + } + /** Initiates a rpc call by sending the rpc request to the remote server. 
- * Note: this is not called from the Connection thread, but by other - * threads. + * Note: this is not called from the current thread, but by another + * thread, so that if the current thread is interrupted that the socket + * state isn't corrupted with a partially written message. * @param call - the rpc request */ public void sendRpcRequest(final Call call) @@ -1185,8 +1164,7 @@ public class Client implements AutoCloseable { } // Serialize the call to be sent. This is done from the actual - // caller thread, rather than the sendParamsExecutor thread, - + // caller thread, rather than the rpcRequestThread in the connection, // so that if the serialization throws an error, it is reported // properly. This also parallelizes the serialization. // @@ -1203,51 +1181,7 @@ public class Client implements AutoCloseable { final ResponseBuffer buf = new ResponseBuffer(); header.writeDelimitedTo(buf); RpcWritable.wrap(call.rpcRequest).writeTo(buf); - - synchronized (sendRpcRequestLock) { - Future senderFuture = sendParamsExecutor.submit(new Runnable() { - @Override - public void run() { - try { - synchronized (ipcStreams.out) { - if (shouldCloseConnection.get()) { - return; - } - if (LOG.isDebugEnabled()) { - LOG.debug(getName() + " sending #" + call.id - + " " + call.rpcRequest); - } - // RpcRequestHeader + RpcRequest - ipcStreams.sendRequest(buf.toByteArray()); - ipcStreams.flush(); - } - } catch (IOException e) { - // exception at this point would leave the connection in an - // unrecoverable state (eg half a call left on the wire). - // So, close the connection, killing any outstanding calls - markClosed(e); - } finally { - //the buffer is just an in-memory buffer, but it is still polite to - // close early - IOUtils.closeStream(buf); - } - } - }); - - try { - senderFuture.get(); - } catch (ExecutionException e) { - Throwable cause = e.getCause(); - - // cause should only be a RuntimeException as the Runnable above - // catches IOException - if (cause instanceof RuntimeException) { - throw (RuntimeException) cause; - } else { - throw new RuntimeException("unexpected checked exception", cause); - } - } - } + rpcRequestQueue.put(Pair.of(call, buf)); } /* Receive a response. 
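The Connection rewrite above replaces the shared `sendParamsExecutor` with one daemon sender thread per connection, fed through a fair `SynchronousQueue`: the caller serializes the request and hands it off, while the sender polls with a timeout so it can notice shutdown. Below is a minimal standalone sketch of that hand-off pattern; the class and variable names are invented for illustration and are not part of the patch.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class HandOffSketch {
  // Fair queue, as in the patch: requests are accepted in arrival order.
  private static final SynchronousQueue<byte[]> QUEUE = new SynchronousQueue<>(true);
  private static volatile boolean closed = false;

  public static void main(String[] args) throws Exception {
    Thread sender = new Thread(() -> {
      while (!closed) {
        try {
          // Wake up periodically so a close() is eventually observed.
          byte[] request = QUEUE.poll(1000, TimeUnit.MILLISECONDS);
          if (request == null) {
            continue;
          }
          System.out.println("sending " + request.length + " bytes");
        } catch (InterruptedException e) {
          return; // interrupted during close: stop the sender thread
        }
      }
    }, "example-sender");
    sender.setDaemon(true);
    sender.start();

    // The "caller" thread blocks only until the sender accepts the hand-off,
    // so an interrupt of the caller cannot corrupt a half-written request.
    QUEUE.put("hello".getBytes());
    closed = true;
    sender.interrupt();
    sender.join();
  }
}
```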
@@ -1396,7 +1330,6 @@ public class Client implements AutoCloseable { CommonConfigurationKeys.IPC_CLIENT_BIND_WILDCARD_ADDR_DEFAULT); this.clientId = ClientId.getClientId(); - this.sendParamsExecutor = clientExcecutorFactory.refAndGetInstance(); this.maxAsyncCalls = conf.getInt( CommonConfigurationKeys.IPC_CLIENT_ASYNC_CALLS_MAX_KEY, CommonConfigurationKeys.IPC_CLIENT_ASYNC_CALLS_MAX_DEFAULT); @@ -1440,6 +1373,7 @@ public class Client implements AutoCloseable { // wake up all connections for (Connection conn : connections.values()) { conn.interrupt(); + conn.rpcRequestThread.interrupt(); conn.interruptConnectingThread(); } @@ -1456,7 +1390,6 @@ public class Client implements AutoCloseable { } } } - clientExcecutorFactory.unrefAndCleanup(); } /** diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java index 17366eb9569..a79fc2eeb57 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java @@ -123,6 +123,7 @@ import org.apache.hadoop.util.ExitUtil; import org.apache.hadoop.util.ProtoUtil; import org.apache.hadoop.util.StringUtils; import org.apache.hadoop.util.Time; +import java.util.concurrent.atomic.AtomicBoolean; import org.apache.hadoop.tracing.Span; import org.apache.hadoop.tracing.SpanContext; import org.apache.hadoop.tracing.TraceScope; @@ -153,6 +154,13 @@ public abstract class Server { private ExceptionsHandler exceptionsHandler = new ExceptionsHandler(); private Tracer tracer; private AlignmentContext alignmentContext; + + /** + * Allow server to do force Kerberos re-login once after failure irrespective + * of the last login time. + */ + private final AtomicBoolean canTryForceLogin = new AtomicBoolean(true); + /** * Logical name of the server used in metrics and monitor. 
*/ @@ -1393,8 +1401,7 @@ public abstract class Server { bind(acceptChannel.socket(), address, backlogLength, conf, portRangeConfig); //Could be an ephemeral port this.listenPort = acceptChannel.socket().getLocalPort(); - Thread.currentThread().setName("Listener at " + - bindAddress + "/" + this.listenPort); + LOG.info("Listener at {}:{}", bindAddress, this.listenPort); // create a selector; selector= Selector.open(); readers = new Reader[readThreads]; @@ -2207,7 +2214,23 @@ public abstract class Server { AUDITLOG.warn(AUTH_FAILED_FOR + this.toString() + ":" + attemptingUser + " (" + e.getLocalizedMessage() + ") with true cause: (" + tce.getLocalizedMessage() + ")"); - throw tce; + if (!UserGroupInformation.getLoginUser().isLoginSuccess()) { + doKerberosRelogin(); + try { + // try processing message again + LOG.debug("Reprocessing sasl message for {}:{} after re-login", + this.toString(), attemptingUser); + saslResponse = processSaslMessage(saslMessage); + AUDITLOG.info("Retry {}{}:{} after failure", AUTH_SUCCESSFUL_FOR, + this.toString(), attemptingUser); + canTryForceLogin.set(true); + } catch (IOException exp) { + tce = (IOException) getTrueCause(e); + throw tce; + } + } else { + throw tce; + } } if (saslServer != null && saslServer.isComplete()) { @@ -3323,6 +3346,26 @@ public abstract class Server { metricsUpdaterInterval, metricsUpdaterInterval, TimeUnit.MILLISECONDS); } + private synchronized void doKerberosRelogin() throws IOException { + if(UserGroupInformation.getLoginUser().isLoginSuccess()){ + return; + } + LOG.warn("Initiating re-login from IPC Server"); + if (canTryForceLogin.compareAndSet(true, false)) { + if (UserGroupInformation.isLoginKeytabBased()) { + UserGroupInformation.getLoginUser().forceReloginFromKeytab(); + } else if (UserGroupInformation.isLoginTicketBased()) { + UserGroupInformation.getLoginUser().forceReloginFromTicketCache(); + } + } else { + if (UserGroupInformation.isLoginKeytabBased()) { + UserGroupInformation.getLoginUser().reloginFromKeytab(); + } else if (UserGroupInformation.isLoginTicketBased()) { + UserGroupInformation.getLoginUser().reloginFromTicketCache(); + } + } + } + public synchronized void addAuxiliaryListener(int auxiliaryPort) throws IOException { if (auxiliaryListenerMap == null) { diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogThrottlingHelper.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogThrottlingHelper.java index af5f8521433..f9ab394771d 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogThrottlingHelper.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogThrottlingHelper.java @@ -65,7 +65,7 @@ import org.apache.hadoop.util.Timer; *
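The `doKerberosRelogin()` logic above uses `canTryForceLogin.compareAndSet(true, false)` as a one-shot gate: the first authentication failure may force a re-login regardless of the last login time, later failures fall back to the normal rate-limited relogin path, and a successful retry re-arms the gate. A small sketch of that pattern, with names invented for illustration:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class OneShotGate {
  private final AtomicBoolean canForce = new AtomicBoolean(true);

  void onAuthFailure() {
    // Only the first failure after a success wins the compareAndSet.
    if (canForce.compareAndSet(true, false)) {
      System.out.println("forcing re-login, ignoring last login time");
    } else {
      System.out.println("regular re-login (respects the relogin interval)");
    }
  }

  void onAuthSuccess() {
    canForce.set(true); // re-arm the gate once a retry succeeds
  }

  public static void main(String[] args) {
    OneShotGate gate = new OneShotGate();
    gate.onAuthFailure(); // forced
    gate.onAuthFailure(); // regular
    gate.onAuthSuccess();
    gate.onAuthFailure(); // forced again
  }
}
```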

This class can also be used to coordinate multiple logging points; see * {@link #record(String, long, double...)} for more details. * - *

This class is not thread-safe. + *

This class is thread-safe. */ public class LogThrottlingHelper { @@ -192,7 +192,7 @@ public class LogThrottlingHelper { * @return A LogAction indicating whether or not the caller should write to * its log. */ - public LogAction record(double... values) { + public synchronized LogAction record(double... values) { return record(DEFAULT_RECORDER_NAME, timer.monotonicNow(), values); } @@ -244,7 +244,7 @@ public class LogThrottlingHelper { * * @see #record(double...) */ - public LogAction record(String recorderName, long currentTimeMs, + public synchronized LogAction record(String recorderName, long currentTimeMs, double... values) { if (primaryRecorderName == null) { primaryRecorderName = recorderName; @@ -262,9 +262,15 @@ public class LogThrottlingHelper { if (primaryRecorderName.equals(recorderName) && currentTimeMs - minLogPeriodMs >= lastLogTimestampMs) { lastLogTimestampMs = currentTimeMs; - for (LoggingAction log : currentLogs.values()) { - log.setShouldLog(); - } + currentLogs.replaceAll((key, log) -> { + LoggingAction newLog = log; + if (log.hasLogged()) { + // create a fresh log since the old one has already been logged + newLog = new LoggingAction(log.getValueCount()); + } + newLog.setShouldLog(); + return newLog; + }); } if (currentLog.shouldLog()) { currentLog.setHasLogged(); @@ -281,7 +287,7 @@ public class LogThrottlingHelper { * @param idx The index value. * @return The summary information. */ - public SummaryStatistics getCurrentStats(String recorderName, int idx) { + public synchronized SummaryStatistics getCurrentStats(String recorderName, int idx) { LoggingAction currentLog = currentLogs.get(recorderName); if (currentLog != null) { return currentLog.getStats(idx); @@ -308,6 +314,13 @@ public class LogThrottlingHelper { } } + @VisibleForTesting + public synchronized void reset() { + primaryRecorderName = null; + currentLogs.clear(); + lastLogTimestampMs = Long.MIN_VALUE; + } + /** * A standard log action which keeps track of all of the values which have * been logged. This is also used for internal bookkeeping via its private @@ -357,6 +370,10 @@ public class LogThrottlingHelper { hasLogged = true; } + private int getValueCount() { + return stats.length; + } + private void recordValues(double... values) { if (values.length != stats.length) { throw new IllegalArgumentException("received " + values.length + diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java index 8837c02b99d..6c5a71a708f 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java @@ -280,7 +280,6 @@ public class MetricsSystemImpl extends MetricsSystem implements MetricsSource { } return sink; } - allSinks.put(name, sink); if (config != null) { registerSink(name, description, sink); } @@ -301,6 +300,7 @@ public class MetricsSystemImpl extends MetricsSystem implements MetricsSource { ? 
newSink(name, desc, sink, conf) : newSink(name, desc, sink, config.subset(SINK_KEY)); sinks.put(name, sa); + allSinks.put(name, sink); sa.start(); LOG.info("Registered sink "+ name); } @@ -508,6 +508,7 @@ public class MetricsSystemImpl extends MetricsSystem implements MetricsSource { conf.getString(DESC_KEY, sinkName), conf); sa.start(); sinks.put(sinkName, sa); + allSinks.put(sinkName, sa.sink()); } catch (Exception e) { LOG.warn("Error creating sink '"+ sinkName +"'", e); } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableGaugeFloat.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableGaugeFloat.java index 6a52bf382df..126601fcbb6 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableGaugeFloat.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableGaugeFloat.java @@ -69,7 +69,7 @@ public class MutableGaugeFloat extends MutableGauge { private void incr(float delta) { while (true) { - float current = value.get(); + float current = Float.intBitsToFloat(value.get()); float next = current + delta; if (compareAndSet(current, next)) { setChanged(); diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableStat.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableStat.java index f2e072545ad..b130aa6ada3 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableStat.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableStat.java @@ -140,14 +140,14 @@ public class MutableStat extends MutableMetric { if (all || changed()) { numSamples += intervalStat.numSamples(); builder.addCounter(numInfo, numSamples) - .addGauge(avgInfo, lastStat().mean()); + .addGauge(avgInfo, intervalStat.mean()); if (extended) { - builder.addGauge(stdevInfo, lastStat().stddev()) - .addGauge(iMinInfo, lastStat().min()) - .addGauge(iMaxInfo, lastStat().max()) + builder.addGauge(stdevInfo, intervalStat.stddev()) + .addGauge(iMinInfo, intervalStat.min()) + .addGauge(iMaxInfo, intervalStat.max()) .addGauge(minInfo, minMax.min()) .addGauge(maxInfo, minMax.max()) - .addGauge(iNumInfo, lastStat().numSamples()); + .addGauge(iNumInfo, intervalStat.numSamples()); } if (changed()) { if (numSamples > 0) { diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java index c28471a3bda..49fd9194e5a 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java @@ -38,6 +38,8 @@ import org.apache.hadoop.thirdparty.com.google.common.collect.HashBiMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import static org.apache.hadoop.util.Shell.bashQuote; + /** * A simple shell-based implementation of {@link IdMappingServiceProvider} * Map id to user name or group name. It does update every 15 minutes. 
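The `MutableGaugeFloat` fix above matters because the gauge stores its value as raw IEEE-754 bits inside an `AtomicInteger`, so the bits must be converted back to a float before the delta is added. A self-contained sketch of that compare-and-set loop, using a hypothetical `AtomicFloat` class rather than the Hadoop metric:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicFloat {
  // The float is stored as its raw IEEE-754 bit pattern.
  private final AtomicInteger bits = new AtomicInteger(Float.floatToIntBits(0f));

  public float get() {
    return Float.intBitsToFloat(bits.get());
  }

  public void add(float delta) {
    while (true) {
      int currentBits = bits.get();
      // The bug fixed in the patch: the raw bits must be converted back to a
      // float before arithmetic, not used as the float value directly.
      float current = Float.intBitsToFloat(currentBits);
      int nextBits = Float.floatToIntBits(current + delta);
      if (bits.compareAndSet(currentBits, nextBits)) {
        return;
      }
    }
  }

  public static void main(String[] args) {
    AtomicFloat g = new AtomicFloat();
    g.add(1.5f);
    g.add(2.25f);
    System.out.println(g.get()); // 3.75
  }
}
```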
Only a @@ -472,26 +474,27 @@ public class ShellBasedIdMapping implements IdMappingServiceProvider { boolean updated = false; updateStaticMapping(); + String name2 = bashQuote(name); if (OS.startsWith("Linux") || OS.equals("SunOS") || OS.contains("BSD")) { if (isGrp) { updated = updateMapInternal(gidNameMap, "group", - getName2IdCmdNIX(name, true), ":", + getName2IdCmdNIX(name2, true), ":", staticMapping.gidMapping); } else { updated = updateMapInternal(uidNameMap, "user", - getName2IdCmdNIX(name, false), ":", + getName2IdCmdNIX(name2, false), ":", staticMapping.uidMapping); } } else { // Mac if (isGrp) { updated = updateMapInternal(gidNameMap, "group", - getName2IdCmdMac(name, true), "\\s+", + getName2IdCmdMac(name2, true), "\\s+", staticMapping.gidMapping); } else { updated = updateMapInternal(uidNameMap, "user", - getName2IdCmdMac(name, false), "\\s+", + getName2IdCmdMac(name2, false), "\\s+", staticMapping.uidMapping); } } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java index 9671d8da38f..8a5a0ee234f 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java @@ -529,6 +529,18 @@ public class UserGroupInformation { user.setLogin(login); } + /** This method checks for a successful Kerberos login + * and returns true by default if it is not using Kerberos. + * + * @return true on successful login + */ + public boolean isLoginSuccess() { + LoginContext login = user.getLogin(); + return (login instanceof HadoopLoginContext) + ? ((HadoopLoginContext) login).isLoginSuccess() + : true; + } + /** * Set the last login time for logged in user * @param loginTime the number of milliseconds since the beginning of time @@ -1276,6 +1288,23 @@ public class UserGroupInformation { relogin(login, ignoreLastLoginTime); } + /** + * Force re-Login a user in from the ticket cache irrespective of the last + * login time. This method assumes that login had happened already. The + * Subject field of this UserGroupInformation object is updated to have the + * new credentials. + * + * @throws IOException + * raised on errors performing I/O. + * @throws KerberosAuthException + * on a failure + */ + @InterfaceAudience.Public + @InterfaceStability.Evolving + public void forceReloginFromTicketCache() throws IOException { + reloginFromTicketCache(true); + } + /** * Re-Login a user in from the ticket cache. This * method assumes that login had happened already. @@ -1287,6 +1316,11 @@ public class UserGroupInformation { @InterfaceAudience.Public @InterfaceStability.Evolving public void reloginFromTicketCache() throws IOException { + reloginFromTicketCache(false); + } + + private void reloginFromTicketCache(boolean ignoreLastLoginTime) + throws IOException { if (!shouldRelogin() || !isFromTicket()) { return; } @@ -1294,7 +1328,7 @@ public class UserGroupInformation { if (login == null) { throw new KerberosAuthException(MUST_FIRST_LOGIN); } - relogin(login, false); + relogin(login, ignoreLastLoginTime); } private void relogin(HadoopLoginContext login, boolean ignoreLastLoginTime) @@ -2083,6 +2117,11 @@ public class UserGroupInformation { this.conf = conf; } + /** Get the login status. 
*/ + public boolean isLoginSuccess() { + return isLoggedIn.get(); + } + String getAppName() { return appName; } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java index e1060e2196d..3c75a2427d8 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java @@ -15,6 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Support for service-level authorization. + */ @InterfaceAudience.Public @InterfaceStability.Evolving package org.apache.hadoop.security.authorize; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/package-info.java index 8e9398eb679..a58b3cdcfb9 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/package-info.java @@ -15,6 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Filters for HTTP service security. + */ @InterfaceAudience.Public @InterfaceStability.Evolving package org.apache.hadoop.security.http; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java index fe3233d848d..5ab38aa7420 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java @@ -25,7 +25,7 @@ import org.apache.hadoop.util.ReflectionUtils; import org.apache.hadoop.util.StringUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import static org.apache.hadoop.util.PlatformName.JAVA_VENDOR_NAME; +import static org.apache.hadoop.util.PlatformName.IBM_JAVA; import javax.net.ssl.HostnameVerifier; import javax.net.ssl.HttpsURLConnection; @@ -102,11 +102,11 @@ public class SSLFactory implements ConnectionConfigurator { "ssl.server.exclude.cipher.list"; public static final String KEY_MANAGER_SSLCERTIFICATE = - JAVA_VENDOR_NAME.contains("IBM") ? "ibmX509" : + IBM_JAVA ? "ibmX509" : KeyManagerFactory.getDefaultAlgorithm(); public static final String TRUST_MANAGER_SSLCERTIFICATE = - JAVA_VENDOR_NAME.contains("IBM") ? "ibmX509" : + IBM_JAVA ? 
"ibmX509" : TrustManagerFactory.getDefaultAlgorithm(); public static final String KEYSTORES_FACTORY_CLASS_KEY = diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/package-info.java index c85f967ab67..0b3b8c46944 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/package-info.java @@ -15,6 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * ZooKeeper secret manager for TokenIdentifiers and DelegationKeys. + */ @InterfaceAudience.LimitedPrivate({"HBase", "HDFS", "MapReduce"}) @InterfaceStability.Evolving package org.apache.hadoop.security.token.delegation; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/package-info.java index e015056b43e..cdf4e61050d 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/package-info.java @@ -15,6 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Support for delegation tokens. + */ @InterfaceAudience.Public @InterfaceStability.Evolving package org.apache.hadoop.security.token; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/package-info.java index 37164855499..81409382648 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/package-info.java @@ -15,6 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ + +/** + * Support for services. 
+ */ @InterfaceAudience.Public package org.apache.hadoop.service; import org.apache.hadoop.classification.InterfaceAudience; diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java index 972bbff4cfd..4e8a9c9b275 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java @@ -108,7 +108,7 @@ public class ApplicationClassLoader extends URLClassLoader { throws MalformedURLException { List urls = new ArrayList(); for (String element : classpath.split(File.pathSeparator)) { - if (element.endsWith("/*")) { + if (element.endsWith(File.separator + "*")) { List jars = FileUtil.getJarsInDirectory(element); if (!jars.isEmpty()) { for (Path jar: jars) { diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HostsFileReader.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HostsFileReader.java index d94668356e2..300f8145c31 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HostsFileReader.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HostsFileReader.java @@ -147,8 +147,8 @@ public class HostsFileReader { String filename, InputStream fileInputStream, Map map) throws IOException { Document dom; - DocumentBuilderFactory builder = DocumentBuilderFactory.newInstance(); try { + DocumentBuilderFactory builder = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = builder.newDocumentBuilder(); dom = db.parse(fileInputStream); // Examples: diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadLock.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadLock.java index 18f6ccfdb17..c99290bc3d3 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadLock.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadLock.java @@ -44,12 +44,7 @@ public class InstrumentedReadLock extends InstrumentedLock { * there can be multiple threads that hold the read lock concurrently. 
*/ private final ThreadLocal readLockHeldTimeStamp = - new ThreadLocal() { - @Override - protected Long initialValue() { - return Long.MAX_VALUE; - }; - }; + ThreadLocal.withInitial(() -> Long.MAX_VALUE); public InstrumentedReadLock(String name, Logger logger, ReentrantReadWriteLock readWriteLock, diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedWriteLock.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedWriteLock.java index 667b1ca6a4b..4637b5efe53 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedWriteLock.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedWriteLock.java @@ -37,6 +37,9 @@ import org.slf4j.Logger; @InterfaceStability.Unstable public class InstrumentedWriteLock extends InstrumentedLock { + private final ReentrantReadWriteLock readWriteLock; + private volatile long writeLockHeldTimeStamp = 0; + public InstrumentedWriteLock(String name, Logger logger, ReentrantReadWriteLock readWriteLock, long minLoggingGapMs, long lockWarningThresholdMs) { @@ -50,5 +53,28 @@ public class InstrumentedWriteLock extends InstrumentedLock { long minLoggingGapMs, long lockWarningThresholdMs, Timer clock) { super(name, logger, readWriteLock.writeLock(), minLoggingGapMs, lockWarningThresholdMs, clock); + this.readWriteLock = readWriteLock; + } + + @Override + public void unlock() { + boolean needReport = readWriteLock.getWriteHoldCount() == 1; + long localWriteReleaseTime = getTimer().monotonicNow(); + long localWriteAcquireTime = writeLockHeldTimeStamp; + getLock().unlock(); + if (needReport) { + writeLockHeldTimeStamp = 0; + check(localWriteAcquireTime, localWriteReleaseTime, true); + } + } + + /** + * Starts timing for the instrumented write lock. + */ + @Override + protected void startLockTiming() { + if (readWriteLock.getWriteHoldCount() == 1) { + writeLockHeldTimeStamp = getTimer().monotonicNow(); + } } } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightResizableGSet.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightResizableGSet.java index 051e2680bc3..1383a7fafe7 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightResizableGSet.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightResizableGSet.java @@ -33,7 +33,7 @@ import java.util.function.Consumer; * * This class does not support null element. * - * This class is not thread safe. + * This class is thread safe. 
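The `InstrumentedWriteLock` change above times only the outermost acquisition of a reentrant write lock, using `getWriteHoldCount()` to detect it. A standalone sketch of the same idea follows; the class is hypothetical and simply prints instead of reporting through `InstrumentedLock`.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TimedWriteLock {
  private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
  private volatile long acquiredAtNanos;

  public void lock() {
    rwLock.writeLock().lock();
    // getWriteHoldCount() == 1 means this is the outermost acquisition by this thread.
    if (rwLock.getWriteHoldCount() == 1) {
      acquiredAtNanos = System.nanoTime();
    }
  }

  public void unlock() {
    // Check before unlocking: after unlock the hold count is already decremented.
    boolean outermost = rwLock.getWriteHoldCount() == 1;
    long heldNanos = System.nanoTime() - acquiredAtNanos;
    rwLock.writeLock().unlock();
    if (outermost) {
      System.out.printf("write lock held for %d us%n", heldNanos / 1000);
    }
  }

  public static void main(String[] args) {
    TimedWriteLock l = new TimedWriteLock();
    l.lock();
    try {
      l.lock();   // reentrant acquisition: not timed separately
      l.unlock();
    } finally {
      l.unlock(); // outermost release: hold time reported here
    }
  }
}
```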
* * @param Key type for looking up the elements * @param Element type, which must be diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java index 65978f3c5f5..91868365b13 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java @@ -146,7 +146,8 @@ public abstract class Shell { * @param arg the argument to quote * @return the quoted string */ - static String bashQuote(String arg) { + @InterfaceAudience.Private + public static String bashQuote(String arg) { StringBuilder buffer = new StringBuilder(arg.length() + 2); buffer.append('\'') .append(arg.replace("'", "'\\''")) diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java index ea835023e86..31fe3c6377b 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java @@ -93,6 +93,10 @@ public class VersionInfo { return info.getProperty("protocVersion", "Unknown"); } + protected String _getCompilePlatform() { + return info.getProperty("compilePlatform", "Unknown"); + } + private static VersionInfo COMMON_VERSION_INFO = new VersionInfo("common"); /** * Get the Hadoop version. @@ -167,12 +171,21 @@ public class VersionInfo { return COMMON_VERSION_INFO._getProtocVersion(); } + /** + * Returns the OS platform used for the build. + * @return the OS platform + */ + public static String getCompilePlatform() { + return COMMON_VERSION_INFO._getCompilePlatform(); + } + public static void main(String[] args) { LOG.debug("version: "+ getVersion()); System.out.println("Hadoop " + getVersion()); System.out.println("Source code repository " + getUrl() + " -r " + getRevision()); System.out.println("Compiled by " + getUser() + " on " + getDate()); + System.out.println("Compiled on platform " + getCompilePlatform()); System.out.println("Compiled with protoc " + getProtocVersion()); System.out.println("From source with checksum " + getSrcChecksum()); System.out.println("This command was run using " + diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/XMLUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/XMLUtils.java index e2b9e414ad3..8a5d2f36615 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/XMLUtils.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/XMLUtils.java @@ -18,13 +18,23 @@ package org.apache.hadoop.util; +import javax.xml.XMLConstants; +import javax.xml.parsers.DocumentBuilderFactory; +import javax.xml.parsers.ParserConfigurationException; +import javax.xml.parsers.SAXParserFactory; import javax.xml.transform.*; +import javax.xml.transform.sax.SAXTransformerFactory; import javax.xml.transform.stream.*; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.xml.sax.SAXException; + import java.io.*; +import java.util.concurrent.atomic.AtomicBoolean; /** * General xml utilities. 
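Widening `Shell.bashQuote` to public lets `ShellBasedIdMapping` quote user and group names before interpolating them into the shell command lines it runs. A quick sketch of the quoting rule, copied standalone for illustration (the real method lives in `Shell`):

```java
public class BashQuoteSketch {
  // Same approach as the patch: wrap in single quotes and escape embedded ones.
  static String bashQuote(String arg) {
    return "'" + arg.replace("'", "'\\''") + "'";
  }

  public static void main(String[] args) {
    // A malicious "user name" can no longer terminate the id/getent command line.
    System.out.println(bashQuote("alice"));          // 'alice'
    System.out.println(bashQuote("bob'; rm -rf /")); // 'bob'\''; rm -rf /'
  }
}
```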
@@ -33,6 +43,28 @@ import java.io.*; @InterfaceAudience.Private @InterfaceStability.Unstable public class XMLUtils { + + private static final Logger LOG = + LoggerFactory.getLogger(XMLUtils.class); + + public static final String DISALLOW_DOCTYPE_DECL = + "http://apache.org/xml/features/disallow-doctype-decl"; + public static final String LOAD_EXTERNAL_DECL = + "http://apache.org/xml/features/nonvalidating/load-external-dtd"; + public static final String EXTERNAL_GENERAL_ENTITIES = + "http://xml.org/sax/features/external-general-entities"; + public static final String EXTERNAL_PARAMETER_ENTITIES = + "http://xml.org/sax/features/external-parameter-entities"; + public static final String CREATE_ENTITY_REF_NODES = + "http://apache.org/xml/features/dom/create-entity-ref-nodes"; + public static final String VALIDATION = + "http://xml.org/sax/features/validation"; + + private static final AtomicBoolean CAN_SET_TRANSFORMER_ACCESS_EXTERNAL_DTD = + new AtomicBoolean(true); + private static final AtomicBoolean CAN_SET_TRANSFORMER_ACCESS_EXTERNAL_STYLESHEET = + new AtomicBoolean(true); + /** * Transform input xml given a stylesheet. * @@ -49,7 +81,7 @@ public class XMLUtils { ) throws TransformerConfigurationException, TransformerException { // Instantiate a TransformerFactory - TransformerFactory tFactory = TransformerFactory.newInstance(); + TransformerFactory tFactory = newSecureTransformerFactory(); // Use the TransformerFactory to process the // stylesheet and generate a Transformer @@ -61,4 +93,118 @@ public class XMLUtils { // and send the output to a Result object. transformer.transform(new StreamSource(xml), new StreamResult(out)); } + + /** + * This method should be used if you need a {@link DocumentBuilderFactory}. Use this method + * instead of {@link DocumentBuilderFactory#newInstance()}. The factory that is returned has + * secure configuration enabled. + * + * @return a {@link DocumentBuilderFactory} with secure configuration enabled + * @throws ParserConfigurationException if the {@code JAXP} parser does not support the + * secure configuration + */ + public static DocumentBuilderFactory newSecureDocumentBuilderFactory() + throws ParserConfigurationException { + DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + dbf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true); + dbf.setFeature(DISALLOW_DOCTYPE_DECL, true); + dbf.setFeature(LOAD_EXTERNAL_DECL, false); + dbf.setFeature(EXTERNAL_GENERAL_ENTITIES, false); + dbf.setFeature(EXTERNAL_PARAMETER_ENTITIES, false); + dbf.setFeature(CREATE_ENTITY_REF_NODES, false); + return dbf; + } + + /** + * This method should be used if you need a {@link SAXParserFactory}. Use this method + * instead of {@link SAXParserFactory#newInstance()}. The factory that is returned has + * secure configuration enabled. 
+ * + * @return a {@link SAXParserFactory} with secure configuration enabled + * @throws ParserConfigurationException if the {@code JAXP} parser does not support the + * secure configuration + * @throws SAXException if there are another issues when creating the factory + */ + public static SAXParserFactory newSecureSAXParserFactory() + throws SAXException, ParserConfigurationException { + SAXParserFactory spf = SAXParserFactory.newInstance(); + spf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true); + spf.setFeature(DISALLOW_DOCTYPE_DECL, true); + spf.setFeature(LOAD_EXTERNAL_DECL, false); + spf.setFeature(EXTERNAL_GENERAL_ENTITIES, false); + spf.setFeature(EXTERNAL_PARAMETER_ENTITIES, false); + return spf; + } + + /** + * This method should be used if you need a {@link TransformerFactory}. Use this method + * instead of {@link TransformerFactory#newInstance()}. The factory that is returned has + * secure configuration enabled. + * + * @return a {@link TransformerFactory} with secure configuration enabled + * @throws TransformerConfigurationException if the {@code JAXP} transformer does not + * support the secure configuration + */ + public static TransformerFactory newSecureTransformerFactory() + throws TransformerConfigurationException { + TransformerFactory trfactory = TransformerFactory.newInstance(); + trfactory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true); + setOptionalSecureTransformerAttributes(trfactory); + return trfactory; + } + + /** + * This method should be used if you need a {@link SAXTransformerFactory}. Use this method + * instead of {@link SAXTransformerFactory#newInstance()}. The factory that is returned has + * secure configuration enabled. + * + * @return a {@link SAXTransformerFactory} with secure configuration enabled + * @throws TransformerConfigurationException if the {@code JAXP} transformer does not + * support the secure configuration + */ + public static SAXTransformerFactory newSecureSAXTransformerFactory() + throws TransformerConfigurationException { + SAXTransformerFactory trfactory = (SAXTransformerFactory) SAXTransformerFactory.newInstance(); + trfactory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true); + setOptionalSecureTransformerAttributes(trfactory); + return trfactory; + } + + /** + * These attributes are recommended for maximum security but some JAXP transformers do + * not support them. If at any stage, we fail to set these attributes, then we won't try again + * for subsequent transformers. + * + * @param transformerFactory to update + */ + private static void setOptionalSecureTransformerAttributes( + TransformerFactory transformerFactory) { + bestEffortSetAttribute(transformerFactory, CAN_SET_TRANSFORMER_ACCESS_EXTERNAL_DTD, + XMLConstants.ACCESS_EXTERNAL_DTD, ""); + bestEffortSetAttribute(transformerFactory, CAN_SET_TRANSFORMER_ACCESS_EXTERNAL_STYLESHEET, + XMLConstants.ACCESS_EXTERNAL_STYLESHEET, ""); + } + + /** + * Set an attribute value on a {@link TransformerFactory}. If the TransformerFactory + * does not support the attribute, the method just returns false and + * logs the issue at debug level. 
+ * + * @param transformerFactory to update + * @param flag that indicates whether to do the update and the flag can be set to + * false if an update fails + * @param name of the attribute to set + * @param value to set on the attribute + */ + static void bestEffortSetAttribute(TransformerFactory transformerFactory, AtomicBoolean flag, + String name, Object value) { + if (flag.get()) { + try { + transformerFactory.setAttribute(name, value); + } catch (Throwable t) { + flag.set(false); + LOG.debug("Issue setting TransformerFactory attribute {}: {}", name, t.toString()); + } + } + } } diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java index 2effb65872e..871005adc0c 100644 --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java @@ -1,5 +1,4 @@ /* - * * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,9 +14,11 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. - * / */ +/** + * Support for concurrent execution. + */ @InterfaceAudience.Private @InterfaceStability.Unstable package org.apache.hadoop.util.concurrent; diff --git a/hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties b/hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties index 6f8558b8d4f..0f075c8139a 100644 --- a/hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties +++ b/hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties @@ -24,3 +24,4 @@ date=${version-info.build.time} url=${version-info.scm.uri} srcChecksum=${version-info.source.md5} protocVersion=${hadoop.protobuf.version} +compilePlatform=${os.detected.classifier} diff --git a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml index 17cd228dc1b..e18a50c72e8 100644 --- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml +++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml @@ -1094,14 +1094,6 @@ - - fs.viewfs.overload.scheme.target.swift.impl - org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem - The SwiftNativeFileSystem for view file system overload scheme - when child file system and ViewFSOverloadScheme's schemes are swift. - - - fs.viewfs.overload.scheme.target.oss.impl org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem @@ -1211,12 +1203,6 @@ File space usage statistics refresh interval in msec. - - fs.swift.impl - org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem - The implementation class of the OpenStack Swift Filesystem - - fs.automatic.close true @@ -2180,6 +2166,12 @@ The switch to turn S3A auditing on or off. The AbstractFileSystem for gs: uris. + + fs.azure.enable.readahead + true + Enabled readahead/prefetching in AbfsInputStream. 
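The new `XMLUtils` helpers above centralize the hardening of JAXP factories. The following rough usage sketch shows what `newSecureDocumentBuilderFactory()` configures, using only standard JAXP/Xerces feature names; the sample XML and class name are invented.

```java
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class SecureParseSketch {
  public static void main(String[] args) throws Exception {
    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    // Same hardening the patch applies: secure processing, no DOCTYPEs, no external entities.
    dbf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
    dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
    dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
    dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
    DocumentBuilder db = dbf.newDocumentBuilder();

    String xml = "<hosts><host name=\"nn1.example.com\"/></hosts>";
    Document doc = db.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    System.out.println(doc.getDocumentElement().getTagName()); // hosts
    // An input containing a <!DOCTYPE ...> declaration would now fail with a SAXParseException.
  }
}
```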
+ + io.seqfile.compress.blocksize 1000000 diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md b/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md index 4f76979ea6a..9095d6f9890 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md @@ -22,7 +22,17 @@ Purpose This document describes how to install and configure Hadoop clusters ranging from a few nodes to extremely large clusters with thousands of nodes. To play with Hadoop, you may first want to install it on a single machine (see [Single Node Setup](./SingleCluster.html)). -This document does not cover advanced topics such as [Security](./SecureMode.html) or High Availability. +This document does not cover advanced topics such as High Availability. + +*Important*: all production Hadoop clusters use Kerberos to authenticate callers +and secure access to HDFS data, as well as restricting access to computation +services (YARN etc.). + +These instructions do not cover integration with any Kerberos services; +everyone bringing up a production cluster should include connecting to their +organisation's Kerberos infrastructure as a key part of the deployment. + +See [Security](./SecureMode.html) for details on how to secure a cluster. Prerequisites ------------- diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/DeprecatedProperties.md b/hadoop-common-project/hadoop-common/src/site/markdown/DeprecatedProperties.md index 281e42dad88..a00feb039d8 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/DeprecatedProperties.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/DeprecatedProperties.md @@ -208,7 +208,8 @@ The following table lists the configuration property names that are deprecated i | mapred.task.profile.params | mapreduce.task.profile.params | | mapred.task.profile.reduces | mapreduce.task.profile.reduces | | mapred.task.timeout | mapreduce.task.timeout | -| mapred.tasktracker.indexcache.mb | mapreduce.tasktracker.indexcache.mb | +| mapred.tasktracker.indexcache.mb | mapreduce.reduce.shuffle.indexcache.mb | +| mapreduce.tasktracker.indexcache.mb | mapreduce.reduce.shuffle.indexcache.mb | | mapred.tasktracker.map.tasks.maximum | mapreduce.tasktracker.map.tasks.maximum | | mapred.tasktracker.memory\_calculator\_plugin | mapreduce.tasktracker.resourcecalculatorplugin | | mapred.tasktracker.memorycalculatorplugin | mapreduce.tasktracker.resourcecalculatorplugin | diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md index 9a690a8c5cc..451b33d74fa 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md @@ -59,7 +59,7 @@ Copies source paths to stdout. Options -* The `-ignoreCrc` option disables checkshum verification. +* The `-ignoreCrc` option disables checksum verification. Example: @@ -73,18 +73,19 @@ Returns 0 on success and -1 on error. checksum -------- -Usage: `hadoop fs -checksum [-v] URI` +Usage: `hadoop fs -checksum [-v] URI [URI ...]` -Returns the checksum information of a file. +Returns the checksum information of the file(s). Options -* The `-v` option displays blocks size for the file. +* The `-v` option displays blocks size for the file(s). 
Example: * `hadoop fs -checksum hdfs://nn1.example.com/file1` * `hadoop fs -checksum file:///etc/hosts` +* `hadoop fs -checksum file:///etc/hosts hdfs://nn1.example.com/file1` chgrp ----- @@ -177,7 +178,7 @@ Returns 0 on success and -1 on error. cp ---- -Usage: `hadoop fs -cp [-f] [-p | -p[topax]] [-t ] [-q ] URI [URI ...] ` +Usage: `hadoop fs -cp [-f] [-p | -p[topax]] [-d] [-t ] [-q ] URI [URI ...] ` Copy files from source to destination. This command allows multiple sources as well in which case the destination must be a directory. @@ -187,13 +188,14 @@ Options: * `-f` : Overwrite the destination if it already exists. * `-d` : Skip creation of temporary file with the suffix `._COPYING_`. -* `-p` : Preserve file attributes [topx] (timestamps, ownership, permission, ACL, XAttr). If -p is specified with no *arg*, then preserves timestamps, ownership, permission. If -pa is specified, then preserves permission also because ACL is a super-set of permission. Determination of whether raw namespace extended attributes are preserved is independent of the -p flag. +* `-p` : Preserve file attributes [topax] (timestamps, ownership, permission, ACL, XAttr). If -p is specified with no *arg*, then preserves timestamps, ownership, permission. If -pa is specified, then preserves permission also because ACL is a super-set of permission. Determination of whether raw namespace extended attributes are preserved is independent of the -p flag. * `-t ` : Number of threads to be used, default is 1. Useful when copying directories containing more than 1 file. * `-q ` : Thread pool queue size to be used, default is 1024. It takes effect only when thread count greater than 1. Example: * `hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2` +* `hadoop fs -cp -f -d /user/hadoop/file1 /user/hadoop/file2` * `hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir` * `hadoop fs -cp -t 5 /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir` * `hadoop fs -cp -t 10 -q 2048 /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir` @@ -403,7 +405,7 @@ Returns 0 on success and non-zero on error. getmerge -------- -Usage: `hadoop fs -getmerge [-nl] ` +Usage: `hadoop fs -getmerge [-nl] [-skip-empty-file] ` Takes a source directory and a destination file as input and concatenates files in src into the destination local file. Optionally -nl can be set to enable adding a newline character (LF) at the end of each file. -skip-empty-file can be used to avoid unwanted newline characters in case of empty files. @@ -412,6 +414,7 @@ Examples: * `hadoop fs -getmerge -nl /src /opt/output.txt` * `hadoop fs -getmerge -nl /src/file1.txt /src/file2.txt /output.txt` +* `hadoop fs -getmerge -nl -skip-empty-file /src/file1.txt /src/file2.txt /output.txt` Exit Code: @@ -852,7 +855,7 @@ Return the help for an individual command. ==================================================== The Hadoop FileSystem shell works with Object Stores such as Amazon S3, -Azure WASB and OpenStack Swift. +Azure ABFS and Google GCS. @@ -972,7 +975,7 @@ this will be in the bucket; the `rm` operation will then take time proportional to the size of the data. Furthermore, the deleted files will continue to incur storage costs. -To avoid this, use the the `-skipTrash` option. +To avoid this, use the `-skipTrash` option. 
```bash hadoop fs -rm -skipTrash s3a://bucket/dataset diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md index 04cbd9fedf8..e7d387b1131 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md @@ -220,7 +220,7 @@ Each metrics record contains tags such as ProcessName, SessionId, and Hostname a | `WarmUpEDEKTimeNumOps` | Total number of warming up EDEK | | `WarmUpEDEKTimeAvgTime` | Average time of warming up EDEK in milliseconds | | `WarmUpEDEKTime`*num*`s(50/75/90/95/99)thPercentileLatency` | The 50/75/90/95/99th percentile of time spent in warming up EDEK in milliseconds (*num* seconds granularity). Percentile measurement is off by default, by watching no intervals. The intervals are specified by `dfs.metrics.percentiles.intervals`. | -| `ResourceCheckTime`*num*`s(50/75/90/95/99)thPercentileLatency` | The 50/75/90/95/99th percentile of of NameNode resource check latency in milliseconds (*num* seconds granularity). Percentile measurement is off by default, by watching no intervals. The intervals are specified by `dfs.metrics.percentiles.intervals`. | +| `ResourceCheckTime`*num*`s(50/75/90/95/99)thPercentileLatency` | The 50/75/90/95/99th percentile of NameNode resource check latency in milliseconds (*num* seconds granularity). Percentile measurement is off by default, by watching no intervals. The intervals are specified by `dfs.metrics.percentiles.intervals`. | | `EditLogTailTimeNumOps` | Total number of times the standby NameNode tailed the edit log | | `EditLogTailTimeAvgTime` | Average time (in milliseconds) spent by standby NameNode in tailing edit log | | `EditLogTailTime`*num*`s(50/75/90/95/99)thPercentileLatency` | The 50/75/90/95/99th percentile of time spent in tailing edit logs by standby NameNode in milliseconds (*num* seconds granularity). Percentile measurement is off by default, by watching no intervals. The intervals are specified by `dfs.metrics.percentiles.intervals`. | diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md b/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md index ebfc16c1a52..98c3dd2bbb9 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md @@ -595,7 +595,7 @@ hadoop kdiag \ --keytab zk.service.keytab --principal zookeeper/devix.example.org@REALM ``` -This attempts to to perform all diagnostics without failing early, load in +This attempts to perform all diagnostics without failing early, load in the HDFS and YARN XML resources, require a minimum key length of 1024 bytes, and log in as the principal `zookeeper/devix.example.org@REALM`, whose key must be in the keytab `zk.service.keytab` diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm b/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm index 8d0a7d195a8..3c8af8fd6e9 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm +++ b/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm @@ -26,6 +26,15 @@ Purpose This document describes how to set up and configure a single-node Hadoop installation so that you can quickly perform simple operations using Hadoop MapReduce and the Hadoop Distributed File System (HDFS). 
+ +*Important*: all production Hadoop clusters use Kerberos to authenticate callers +and secure access to HDFS data as well as restricting access to computation +services (YARN etc.). + +These instructions do not cover integration with any Kerberos services; +everyone bringing up a production cluster should include connecting to their +organisation's Kerberos infrastructure as a key part of the deployment. + Prerequisites ------------- @@ -33,8 +42,6 @@ $H3 Supported Platforms * GNU/Linux is supported as a development and production platform. Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes. -* Windows is also a supported platform but the followings steps are for Linux only. To set up Hadoop on Windows, see [wiki page](http://wiki.apache.org/hadoop/Hadoop2OnWindows). - $H3 Required Software Required software for Linux include: diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md index 004220c4bed..fafe2819cf6 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md @@ -501,7 +501,7 @@ Where def blocks(FS, p, s, s + l) = a list of the blocks containing data(FS, path)[s:s+l] -Note that that as `length(FS, f) ` is defined as `0` if `isDir(FS, f)`, the result +Note that as `length(FS, f) ` is defined as `0` if `isDir(FS, f)`, the result of `getFileBlockLocations()` on a directory is `[]` @@ -701,13 +701,13 @@ The behavior of the returned stream is covered in [Output](outputstream.html). clients creating files with `overwrite==true` to fail if the file is created by another client between the two tests. -* S3A, Swift and potentially other Object Stores do not currently change the `FS` state +* The S3A and potentially other Object Store connectors do not currently change the `FS` state until the output stream `close()` operation is completed. This is a significant difference between the behavior of object stores and that of filesystems, as it allows >1 client to create a file with `overwrite=false`, and potentially confuse file/directory logic. In particular, using `create()` to acquire an exclusive lock on a file (whoever creates the file without an error is considered -the holder of the lock) may not not a safe algorithm to use when working with object stores. +the holder of the lock) may not be a safe algorithm to use when working with object stores. * Object stores may create an empty file as a marker when a file is created. However, object stores with `overwrite=true` semantics may not implement this atomically, @@ -1225,7 +1225,7 @@ the parent directories of the destination then exist: There is a check for and rejection if the `parent(dest)` is a file, but no checks for any other ancestors.
-*Other Filesystems (including Swift) * +*Other Filesystems* Other filesystems strictly reject the operation, raising a `FileNotFoundException` diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstreambuilder.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstreambuilder.md index 16a14150ef9..084c0eaff33 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstreambuilder.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstreambuilder.md @@ -167,7 +167,7 @@ rather than just any FS-specific subclass implemented by the implementation custom subclasses. This is critical to ensure safe use of the feature: directory listing/ -status serialization/deserialization can result result in the `withFileStatus()` +status serialization/deserialization can result in the `withFileStatus()` argument not being the custom subclass returned by the Filesystem instance's own `getFileStatus()`, `listFiles()`, `listLocatedStatus()` calls, etc. @@ -686,4 +686,4 @@ public T load(FileSystem fs, *Note:* : in Hadoop 3.3.2 and earlier, the `withFileStatus(status)` call required a non-null parameter; this has since been relaxed. For maximum compatibility across versions, only invoke the method -when the file status is known to be non-null. \ No newline at end of file +when the file status is known to be non-null. diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md index 59a93c5887a..ad6d107d06c 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md @@ -228,7 +228,7 @@ Accordingly: *Use if and only if you are confident that the conditions are met.* ### `fs.s3a.create.header` User-supplied header support -Options with the prefix `fs.s3a.create.header.` will be added to to the +Options with the prefix `fs.s3a.create.header.` will be added to the S3 object metadata as "user defined metadata". This metadata is visible to all applications. It can also be retrieved through the FileSystem/FileContext `listXAttrs()` and `getXAttrs()` API calls with the prefix `header.` @@ -236,4 +236,4 @@ FileSystem/FileContext `listXAttrs()` and `getXAttrs()` API calls with the prefi When an object is renamed, the metadata is propagated the copy created. It is possible to probe an S3A Filesystem instance for this capability through -the `hasPathCapability(path, "fs.s3a.create.header")` check. \ No newline at end of file +the `hasPathCapability(path, "fs.s3a.create.header")` check. diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md index 903d2bb90ff..76782b45409 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md @@ -30,8 +30,8 @@ are places where HDFS diverges from the expected behaviour of a POSIX filesystem. The bundled S3A FileSystem clients make Amazon's S3 Object Store ("blobstore") -accessible through the FileSystem API. The Swift FileSystem driver provides similar -functionality for the OpenStack Swift blobstore. 
The Azure WASB and ADL object +accessible through the FileSystem API. +The Azure ABFS, WASB and ADL object storage FileSystems talks to Microsoft's Azure storage. All of these bind to object stores, which do have different behaviors, especially regarding consistency guarantees, and atomicity of operations. @@ -314,10 +314,10 @@ child entries This specification refers to *Object Stores* in places, often using the term *Blobstore*. Hadoop does provide FileSystem client classes for some of these -even though they violate many of the requirements. This is why, although -Hadoop can read and write data in an object store, the two which Hadoop ships -with direct support for — Amazon S3 and OpenStack Swift — cannot -be used as direct replacements for HDFS. +even though they violate many of the requirements. + +Consult the documentation for a specific store to determine its compatibility +with specific applications and services. *What is an Object Store?* diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/outputstream.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/outputstream.md index 1498d8db2e2..3b486ea3d4e 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/outputstream.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/outputstream.md @@ -980,7 +980,7 @@ throw `UnsupportedOperationException`. ### `StreamCapabilities` Implementors of filesystem clients SHOULD implement the `StreamCapabilities` -interface and its `hasCapabilities()` method to to declare whether or not +interface and its `hasCapabilities()` method to declare whether or not an output streams offer the visibility and durability guarantees of `Syncable`. Implementors of `StreamCapabilities.hasCapabilities()` MUST NOT declare that @@ -1013,4 +1013,4 @@ all data to the datanodes. 1. `close()` SHALL return once the guarantees of `hflush()` are met: the data is visible to others. -1. For durability guarantees, `hsync()` MUST be called first. \ No newline at end of file +1. For durability guarantees, `hsync()` MUST be called first. diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md index 4c6fa3ff0f6..53eb9870bc1 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md @@ -66,55 +66,6 @@ Example: - -### swift:// - -The OpenStack Swift login details must be defined in the file -`/hadoop-tools/hadoop-openstack/src/test/resources/contract-test-options.xml`. -The standard hadoop-common `contract-test-options.xml` resource file cannot be -used, as that file does not get included in `hadoop-common-test.jar`. - - -In `/hadoop-tools/hadoop-openstack/src/test/resources/contract-test-options.xml` -the Swift bucket name must be defined in the property `fs.contract.test.fs.swift`, -along with the login details for the specific Swift service provider in which the -bucket is posted. - - - - fs.contract.test.fs.swift - swift://swiftbucket.rackspace/ - - - - fs.swift.service.rackspace.auth.url - https://auth.api.rackspacecloud.com/v2.0/tokens - Rackspace US (multiregion) - - - - fs.swift.service.rackspace.username - this-is-your-username - - - - fs.swift.service.rackspace.region - DFW - - - - fs.swift.service.rackspace.apikey - ab0bceyoursecretapikeyffef - - - - -1. 
Often the different public cloud Swift infrastructures exhibit different behaviors -(authentication and throttling in particular). We recommand that testers create -accounts on as many of these providers as possible and test against each of them. -1. They can be slow, especially remotely. Remote links are also the most likely -to make eventual-consistency behaviors visible, which is a mixed benefit. - ## Testing a new filesystem The core of adding a new FileSystem to the contract tests is adding a @@ -228,8 +179,6 @@ Passing all the FileSystem contract tests does not mean that a filesystem can be * Scalability: does it support files as large as HDFS, or as many in a single directory? * Durability: do files actually last -and how long for? -Proof that this is is true is the fact that the Amazon S3 and OpenStack Swift object stores are eventually consistent object stores with non-atomic rename and delete operations. Single threaded test cases are unlikely to see some of the concurrency issues, while consistency is very often only visible in tests that span a datacenter. - There are also some specific aspects of the use of the FileSystem API: * Compatibility with the `hadoop -fs` CLI. diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-configuration.md b/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-configuration.md index e2646e05624..817863027f7 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-configuration.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-configuration.md @@ -143,7 +143,7 @@ too must have this context defined. ### Identifying the system accounts `hadoop.registry.system.acls` -These are the the accounts which are given full access to the base of the +These are the accounts which are given full access to the base of the registry. The Resource Manager needs this option to create the root paths. Client applications writing to the registry access to the nodes it creates. diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-security.md b/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-security.md index 6317681a716..71c868b557a 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-security.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-security.md @@ -29,7 +29,7 @@ a secure registry: 1. Allow the RM to create per-user regions of the registration space 1. Allow applications belonging to a user to write registry entries into their part of the space. These may be short-lived or long-lived -YARN applications, or they may be be static applications. +YARN applications, or they may be static applications. 1. Prevent other users from writing into another user's part of the registry. 1. Allow system services to register to a `/services` section of the registry. 1. Provide read access to clients of a registry. 
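Returning to the `create()` discussion in the filesystem.md hunk above: because the S3A connector (and potentially other object store connectors) only makes a new file visible when its output stream is closed, the classic create-with-`overwrite=false` exclusive-lock idiom can appear to succeed for more than one client. The sketch below shows that idiom so the hazard is concrete; the class name and lock payload are illustrative, and an existing `FileSystem` instance and lock path are assumed.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class CreateAsLock {
  /**
   * Try to take a "lock" by creating a marker file with overwrite=false.
   * On HDFS the losing caller fails fast with FileAlreadyExistsException;
   * on an object store the marker may only become visible at close(), so
   * two callers can both believe they hold the lock.
   */
  public static boolean tryLock(FileSystem fs, Path lockFile) throws IOException {
    try (FSDataOutputStream out = fs.create(lockFile, false)) {
      out.writeUTF("owner");   // illustrative lock payload
      return true;             // "success" is not a safe signal on object stores
    } catch (FileAlreadyExistsException e) {
      return false;            // someone else created the marker first
    }
  }
}
```

This is exactly the pattern the filesystem specification above warns may not be a safe algorithm to use when working with object stores.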
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/CLITestHelper.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/CLITestHelper.java index ada4cd80e48..f80c62535a1 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/CLITestHelper.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/CLITestHelper.java @@ -24,6 +24,8 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeys; import org.apache.hadoop.util.Shell; import org.apache.hadoop.util.StringUtils; +import org.apache.hadoop.util.XMLUtils; + import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; @@ -34,7 +36,6 @@ import org.xml.sax.SAXException; import org.xml.sax.helpers.DefaultHandler; import javax.xml.parsers.SAXParser; -import javax.xml.parsers.SAXParserFactory; import java.io.File; import java.util.ArrayList; @@ -76,7 +77,7 @@ public class CLITestHelper { boolean success = false; testConfigFile = TEST_CACHE_DATA_DIR + File.separator + testConfigFile; try { - SAXParser p = (SAXParserFactory.newInstance()).newSAXParser(); + SAXParser p = XMLUtils.newSecureSAXParserFactory().newSAXParser(); p.parse(testConfigFile, getConfigParser()); success = true; } catch (Exception e) { diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java index c31229ba9fc..74b2f55065d 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java @@ -135,7 +135,6 @@ public class TestCommonConfigurationFields extends TestConfigurationFieldsBase { xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.s3a.impl"); xmlPropsToSkipCompare. add("fs.viewfs.overload.scheme.target.swebhdfs.impl"); - xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.swift.impl"); xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.webhdfs.impl"); xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.wasb.impl"); @@ -223,8 +222,7 @@ public class TestCommonConfigurationFields extends TestConfigurationFieldsBase { xmlPropsToSkipCompare.add("hadoop.common.configuration.version"); // - org.apache.hadoop.fs.FileSystem xmlPropsToSkipCompare.add("fs.har.impl.disable.cache"); - // - org.apache.hadoop.fs.FileSystem#getFileSystemClass() - xmlPropsToSkipCompare.add("fs.swift.impl"); + // - package org.apache.hadoop.tracing.TraceUtils ? 
xmlPropsToSkipCompare.add("hadoop.htrace.span.receiver.classes"); // Private keys diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfServlet.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfServlet.java index 9d7f4255978..6db47d6d22f 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfServlet.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfServlet.java @@ -41,9 +41,12 @@ import org.xml.sax.InputSource; import org.apache.hadoop.thirdparty.com.google.common.base.Strings; import org.apache.hadoop.http.HttpServer2; +import org.apache.hadoop.util.XMLUtils; + import org.junit.BeforeClass; import org.junit.Test; import org.mockito.Mockito; + import static org.mockito.Mockito.when; import static org.mockito.Mockito.mock; import static org.junit.Assert.*; @@ -223,8 +226,7 @@ public class TestConfServlet { ConfServlet.writeResponse(getTestConf(), sw, "xml"); String xml = sw.toString(); - DocumentBuilderFactory docBuilderFactory - = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory docBuilderFactory = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder builder = docBuilderFactory.newDocumentBuilder(); Document doc = builder.parse(new InputSource(new StringReader(xml))); NodeList nameNodes = doc.getElementsByTagName("name"); diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java index 152159b3f3e..879f1781d74 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java @@ -194,7 +194,7 @@ public abstract class TestConfigurationFieldsBase { HashMap retVal = new HashMap<>(); // Setup regexp for valid properties - String propRegex = "^[A-Za-z][A-Za-z0-9_-]+(\\.[A-Za-z0-9_-]+)+$"; + String propRegex = "^[A-Za-z][A-Za-z0-9_-]+(\\.[A-Za-z%s0-9_-]+)+$"; Pattern p = Pattern.compile(propRegex); // Iterate through class member variables diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProvider.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProvider.java index cb6a1fb31e6..b0c3b090022 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProvider.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProvider.java @@ -36,6 +36,7 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import static org.apache.hadoop.test.LambdaTestUtils.intercept; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; @@ -62,6 +63,8 @@ public class TestKeyProvider { } catch (IOException e) { assertTrue(true); } + intercept(NullPointerException.class, () -> + KeyProvider.getBaseName(null)); } @Test diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java index c884e223365..94d90b2eb97 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java +++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java @@ -1321,16 +1321,16 @@ public class TestFileUtil { if (wildcardPath.equals(classPath)) { // add wildcard matches for (File wildcardMatch: wildcardMatches) { - expectedClassPaths.add(wildcardMatch.toURI().toURL() + expectedClassPaths.add(wildcardMatch.getCanonicalFile().toURI().toURL() .toExternalForm()); } } else { File fileCp = null; if(!new Path(classPath).isAbsolute()) { - fileCp = new File(tmp, classPath); + fileCp = new File(tmp, classPath).getCanonicalFile(); } else { - fileCp = new File(classPath); + fileCp = new File(classPath).getCanonicalFile(); } if (nonExistentSubdir.equals(classPath)) { // expect to maintain trailing path separator if present in input, even @@ -1385,7 +1385,8 @@ public class TestFileUtil { for (Path jar: jars) { URL url = jar.toUri().toURL(); assertTrue("the jar should match either of the jars", - url.equals(jar1.toURI().toURL()) || url.equals(jar2.toURI().toURL())); + url.equals(jar1.getCanonicalFile().toURI().toURL()) || + url.equals(jar2.getCanonicalFile().toURI().toURL())); } } diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java index 5ed4d9bc9a7..3d8ea0e826c 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java @@ -143,6 +143,11 @@ public class TestFilterFileSystem { of the filter such as checksums. */ MultipartUploaderBuilder createMultipartUploader(Path basePath); + + FSDataOutputStream append(Path f, boolean appendToNewBlock) throws IOException; + + FSDataOutputStream append(Path f, int bufferSize, + Progressable progress, boolean appendToNewBlock) throws IOException; } @Test diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java index 711ab94fdf1..b227e169088 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java @@ -250,6 +250,11 @@ public class TestHarFileSystem { MultipartUploaderBuilder createMultipartUploader(Path basePath) throws IOException; + + FSDataOutputStream append(Path f, boolean appendToNewBlock) throws IOException; + + FSDataOutputStream append(Path f, int bufferSize, + Progressable progress, boolean appendToNewBlock) throws IOException; } @Test diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestVectoredReadUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestVectoredReadUtils.java index ebf0e14053b..e964d23f4b7 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestVectoredReadUtils.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestVectoredReadUtils.java @@ -61,6 +61,9 @@ public class TestVectoredReadUtils extends HadoopTestBase { .describedAs("Slicing on the same offset shouldn't " + "create a new buffer") .isEqualTo(slice); + Assertions.assertThat(slice.position()) + .describedAs("Slicing should return buffers starting from position 0") + .isEqualTo(0); // try slicing a range final int offset = 100; 
@@ -77,6 +80,9 @@ public class TestVectoredReadUtils extends HadoopTestBase { .describedAs("Slicing should use the same underlying " + "data") .isEqualTo(slice.array()); + Assertions.assertThat(slice.position()) + .describedAs("Slicing should return buffers starting from position 0") + .isEqualTo(0); // test the contents of the slice intBuffer = slice.asIntBuffer(); for(int i=0; i < sliceLength / Integer.BYTES; ++i) { @@ -96,7 +102,10 @@ public class TestVectoredReadUtils extends HadoopTestBase { @Test public void testMerge() { - FileRange base = FileRange.createFileRange(2000, 1000); + // a reference to use for tracking + Object tracker1 = "one"; + Object tracker2 = "two"; + FileRange base = FileRange.createFileRange(2000, 1000, tracker1); CombinedFileRange mergeBase = new CombinedFileRange(2000, 3000, base); // test when the gap between is too big @@ -104,44 +113,48 @@ public class TestVectoredReadUtils extends HadoopTestBase { FileRange.createFileRange(5000, 1000), 2000, 4000)); assertEquals("Number of ranges in merged range shouldn't increase", 1, mergeBase.getUnderlying().size()); - assertEquals("post merge offset", 2000, mergeBase.getOffset()); - assertEquals("post merge length", 1000, mergeBase.getLength()); + assertFileRange(mergeBase, 2000, 1000); // test when the total size gets exceeded assertFalse("Large size ranges shouldn't get merged", mergeBase.merge(5000, 6000, FileRange.createFileRange(5000, 1000), 2001, 3999)); assertEquals("Number of ranges in merged range shouldn't increase", 1, mergeBase.getUnderlying().size()); - assertEquals("post merge offset", 2000, mergeBase.getOffset()); - assertEquals("post merge length", 1000, mergeBase.getLength()); + assertFileRange(mergeBase, 2000, 1000); // test when the merge works assertTrue("ranges should get merged ", mergeBase.merge(5000, 6000, - FileRange.createFileRange(5000, 1000), 2001, 4000)); + FileRange.createFileRange(5000, 1000, tracker2), + 2001, 4000)); assertEquals("post merge size", 2, mergeBase.getUnderlying().size()); - assertEquals("post merge offset", 2000, mergeBase.getOffset()); - assertEquals("post merge length", 4000, mergeBase.getLength()); + assertFileRange(mergeBase, 2000, 4000); + + Assertions.assertThat(mergeBase.getUnderlying().get(0).getReference()) + .describedAs("reference of range %s", mergeBase.getUnderlying().get(0)) + .isSameAs(tracker1); + Assertions.assertThat(mergeBase.getUnderlying().get(1).getReference()) + .describedAs("reference of range %s", mergeBase.getUnderlying().get(1)) + .isSameAs(tracker2); // reset the mergeBase and test with a 10:1 reduction mergeBase = new CombinedFileRange(200, 300, base); - assertEquals(200, mergeBase.getOffset()); - assertEquals(100, mergeBase.getLength()); + assertFileRange(mergeBase, 200, 100); + assertTrue("ranges should get merged ", mergeBase.merge(500, 600, FileRange.createFileRange(5000, 1000), 201, 400)); assertEquals("post merge size", 2, mergeBase.getUnderlying().size()); - assertEquals("post merge offset", 200, mergeBase.getOffset()); - assertEquals("post merge length", 400, mergeBase.getLength()); + assertFileRange(mergeBase, 200, 400); } @Test public void testSortAndMerge() { List input = Arrays.asList( - FileRange.createFileRange(3000, 100), - FileRange.createFileRange(2100, 100), - FileRange.createFileRange(1000, 100) + FileRange.createFileRange(3000, 100, "1"), + FileRange.createFileRange(2100, 100, null), + FileRange.createFileRange(1000, 100, "3") ); assertFalse("Ranges are non disjoint", VectoredReadUtils.isOrderedDisjoint(input, 100, 800)); - 
List outputList = VectoredReadUtils.mergeSortedRanges( + final List outputList = VectoredReadUtils.mergeSortedRanges( Arrays.asList(sortRanges(input)), 100, 1001, 2500); Assertions.assertThat(outputList) .describedAs("merged range size") @@ -150,51 +163,105 @@ public class TestVectoredReadUtils extends HadoopTestBase { Assertions.assertThat(output.getUnderlying()) .describedAs("merged range underlying size") .hasSize(3); - assertEquals("range[1000,3100)", output.toString()); + // range[1000,3100) + assertFileRange(output, 1000, 2100); assertTrue("merged output ranges are disjoint", VectoredReadUtils.isOrderedDisjoint(outputList, 100, 800)); // the minSeek doesn't allow the first two to merge assertFalse("Ranges are non disjoint", VectoredReadUtils.isOrderedDisjoint(input, 100, 1000)); - outputList = VectoredReadUtils.mergeSortedRanges(Arrays.asList(sortRanges(input)), + final List list2 = VectoredReadUtils.mergeSortedRanges( + Arrays.asList(sortRanges(input)), 100, 1000, 2100); - Assertions.assertThat(outputList) + Assertions.assertThat(list2) .describedAs("merged range size") .hasSize(2); - assertEquals("range[1000,1100)", outputList.get(0).toString()); - assertEquals("range[2100,3100)", outputList.get(1).toString()); + assertFileRange(list2.get(0), 1000, 100); + + // range[2100,3100) + assertFileRange(list2.get(1), 2100, 1000); + assertTrue("merged output ranges are disjoint", - VectoredReadUtils.isOrderedDisjoint(outputList, 100, 1000)); + VectoredReadUtils.isOrderedDisjoint(list2, 100, 1000)); // the maxSize doesn't allow the third range to merge assertFalse("Ranges are non disjoint", VectoredReadUtils.isOrderedDisjoint(input, 100, 800)); - outputList = VectoredReadUtils.mergeSortedRanges(Arrays.asList(sortRanges(input)), + final List list3 = VectoredReadUtils.mergeSortedRanges( + Arrays.asList(sortRanges(input)), 100, 1001, 2099); - Assertions.assertThat(outputList) + Assertions.assertThat(list3) .describedAs("merged range size") .hasSize(2); - assertEquals("range[1000,2200)", outputList.get(0).toString()); - assertEquals("range[3000,3100)", outputList.get(1).toString()); + // range[1000,2200) + CombinedFileRange range0 = list3.get(0); + assertFileRange(range0, 1000, 1200); + assertFileRange(range0.getUnderlying().get(0), + 1000, 100, "3"); + assertFileRange(range0.getUnderlying().get(1), + 2100, 100, null); + CombinedFileRange range1 = list3.get(1); + // range[3000,3100) + assertFileRange(range1, 3000, 100); + assertFileRange(range1.getUnderlying().get(0), + 3000, 100, "1"); + assertTrue("merged output ranges are disjoint", - VectoredReadUtils.isOrderedDisjoint(outputList, 100, 800)); + VectoredReadUtils.isOrderedDisjoint(list3, 100, 800)); // test the round up and round down (the maxSize doesn't allow any merges) assertFalse("Ranges are non disjoint", VectoredReadUtils.isOrderedDisjoint(input, 16, 700)); - outputList = VectoredReadUtils.mergeSortedRanges(Arrays.asList(sortRanges(input)), + final List list4 = VectoredReadUtils.mergeSortedRanges( + Arrays.asList(sortRanges(input)), 16, 1001, 100); - Assertions.assertThat(outputList) + Assertions.assertThat(list4) .describedAs("merged range size") .hasSize(3); - assertEquals("range[992,1104)", outputList.get(0).toString()); - assertEquals("range[2096,2208)", outputList.get(1).toString()); - assertEquals("range[2992,3104)", outputList.get(2).toString()); + // range[992,1104) + assertFileRange(list4.get(0), 992, 112); + // range[2096,2208) + assertFileRange(list4.get(1), 2096, 112); + // range[2992,3104) + assertFileRange(list4.get(2), 
2992, 112); assertTrue("merged output ranges are disjoint", - VectoredReadUtils.isOrderedDisjoint(outputList, 16, 700)); + VectoredReadUtils.isOrderedDisjoint(list4, 16, 700)); } + /** + * Assert that a file range satisfies the conditions. + * @param range range to validate + * @param offset offset of range + * @param length range length + */ + private void assertFileRange(FileRange range, long offset, int length) { + Assertions.assertThat(range) + .describedAs("file range %s", range) + .isNotNull(); + Assertions.assertThat(range.getOffset()) + .describedAs("offset of %s", range) + .isEqualTo(offset); + Assertions.assertThat(range.getLength()) + .describedAs("length of %s", range) + .isEqualTo(length); + } + + /** + * Assert that a file range satisfies the conditions. + * @param range range to validate + * @param offset offset of range + * @param length range length + * @param reference reference; may be null. + */ + private void assertFileRange(FileRange range, long offset, int length, Object reference) { + assertFileRange(range, offset, length); + Assertions.assertThat(range.getReference()) + .describedAs("reference field of file range %s", range) + .isEqualTo(reference); + } + + @Test public void testSortAndMergeMoreCases() throws Exception { List input = Arrays.asList( @@ -214,7 +281,9 @@ public class TestVectoredReadUtils extends HadoopTestBase { Assertions.assertThat(output.getUnderlying()) .describedAs("merged range underlying size") .hasSize(4); - assertEquals("range[1000,3110)", output.toString()); + + assertFileRange(output, 1000, 2110); + assertTrue("merged output ranges are disjoint", VectoredReadUtils.isOrderedDisjoint(outputList, 1, 800)); @@ -227,7 +296,8 @@ public class TestVectoredReadUtils extends HadoopTestBase { Assertions.assertThat(output.getUnderlying()) .describedAs("merged range underlying size") .hasSize(4); - assertEquals("range[1000,3200)", output.toString()); + assertFileRange(output, 1000, 2200); + assertTrue("merged output ranges are disjoint", VectoredReadUtils.isOrderedDisjoint(outputList, 1, 800)); diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/prefetch/TestFilePosition.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/prefetch/TestFilePosition.java index e86c4be97b9..12ab62556a1 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/prefetch/TestFilePosition.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/prefetch/TestFilePosition.java @@ -26,6 +26,7 @@ import org.junit.Test; import org.apache.hadoop.test.AbstractHadoopTestBase; import static org.apache.hadoop.test.LambdaTestUtils.intercept; +import static org.assertj.core.api.Assertions.assertThat; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; @@ -43,6 +44,7 @@ public class TestFilePosition extends AbstractHadoopTestBase { new FilePosition(10, 5); new FilePosition(5, 10); new FilePosition(10, 5).setData(data, 3, 4); + new FilePosition(10, 10).setData(data, 3, 13); // Verify it throws correctly. 
@@ -94,11 +96,11 @@ public class TestFilePosition extends AbstractHadoopTestBase { "'readOffset' must not be negative", () -> pos.setData(data, 4, -4)); intercept(IllegalArgumentException.class, - "'readOffset' (15) must be within the range [4, 13]", + "'readOffset' (15) must be within the range [4, 14]", () -> pos.setData(data, 4, 15)); intercept(IllegalArgumentException.class, - "'readOffset' (3) must be within the range [4, 13]", + "'readOffset' (3) must be within the range [4, 14]", () -> pos.setData(data, 4, 3)); } @@ -192,4 +194,31 @@ public class TestFilePosition extends AbstractHadoopTestBase { } assertTrue(pos.bufferFullyRead()); } + + @Test + public void testBounds() { + int bufferSize = 8; + long fileSize = bufferSize; + + ByteBuffer buffer = ByteBuffer.allocate(bufferSize); + BufferData data = new BufferData(0, buffer); + FilePosition pos = new FilePosition(fileSize, bufferSize); + + long eofOffset = fileSize; + pos.setData(data, 0, eofOffset); + + assertThat(pos.isWithinCurrentBuffer(eofOffset)) + .describedAs("EOF offset %d should be within the current buffer", eofOffset) + .isTrue(); + assertThat(pos.absolute()) + .describedAs("absolute() should return the EOF offset") + .isEqualTo(eofOffset); + + assertThat(pos.setAbsolute(eofOffset)) + .describedAs("setAbsolute() should return true on the EOF offset %d", eofOffset) + .isTrue(); + assertThat(pos.absolute()) + .describedAs("absolute() should return the EOF offset") + .isEqualTo(eofOffset); + } } diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestTextCommand.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestTextCommand.java index 4eb1d433bee..e806dde11b0 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestTextCommand.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestTextCommand.java @@ -31,6 +31,8 @@ import java.nio.file.Files; import org.apache.commons.io.IOUtils; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.io.SequenceFile; import org.apache.hadoop.test.GenericTestUtils; import org.junit.Test; @@ -46,22 +48,19 @@ public class TestTextCommand { private static final String TEXT_FILENAME = new File(TEST_ROOT_DIR, "testtextfile.txt").toURI().getPath(); + private static final String SEPARATOR = System.getProperty("line.separator"); + /** * Tests whether binary Avro data files are displayed correctly. 
*/ @Test (timeout = 30000) public void testDisplayForAvroFiles() throws Exception { String expectedOutput = - "{\"station\":\"011990-99999\",\"time\":-619524000000,\"temp\":0}" + - System.getProperty("line.separator") + - "{\"station\":\"011990-99999\",\"time\":-619506000000,\"temp\":22}" + - System.getProperty("line.separator") + - "{\"station\":\"011990-99999\",\"time\":-619484400000,\"temp\":-11}" + - System.getProperty("line.separator") + - "{\"station\":\"012650-99999\",\"time\":-655531200000,\"temp\":111}" + - System.getProperty("line.separator") + - "{\"station\":\"012650-99999\",\"time\":-655509600000,\"temp\":78}" + - System.getProperty("line.separator"); + "{\"station\":\"011990-99999\",\"time\":-619524000000,\"temp\":0}" + SEPARATOR + + "{\"station\":\"011990-99999\",\"time\":-619506000000,\"temp\":22}" + SEPARATOR + + "{\"station\":\"011990-99999\",\"time\":-619484400000,\"temp\":-11}" + SEPARATOR + + "{\"station\":\"012650-99999\",\"time\":-655531200000,\"temp\":111}" + SEPARATOR + + "{\"station\":\"012650-99999\",\"time\":-655509600000,\"temp\":78}" + SEPARATOR; String output = readUsingTextCommand(AVRO_FILENAME, generateWeatherAvroBinaryData()); @@ -104,11 +103,16 @@ public class TestTextCommand { throws Exception { createFile(fileName, fileContents); - // Prepare and call the Text command's protected getInputStream method - // using reflection. Configuration conf = new Configuration(); URI localPath = new URI(fileName); - PathData pathData = new PathData(localPath, conf); + return readUsingTextCommand(localPath, conf); + } + // Read a file using Display.Text class. + private String readUsingTextCommand(URI uri, Configuration conf) + throws Exception { + // Prepare and call the Text command's protected getInputStream method + // using reflection. 
+ PathData pathData = new PathData(uri, conf); Display.Text text = new Display.Text() { @Override public InputStream getInputStream(PathData item) throws IOException { @@ -116,7 +120,7 @@ public class TestTextCommand { } }; text.setConf(conf); - InputStream stream = (InputStream) text.getInputStream(pathData); + InputStream stream = text.getInputStream(pathData); return inputStreamToString(stream); } @@ -232,5 +236,21 @@ public class TestTextCommand { return contents; } + + @Test + public void testDisplayForNonWritableSequenceFile() throws Exception { + Configuration conf = new Configuration(); + conf.set("io.serializations", "org.apache.hadoop.io.serializer.JavaSerialization"); + Path path = new Path(String.valueOf(TEST_ROOT_DIR), "NonWritableSequenceFile"); + SequenceFile.Writer writer = SequenceFile.createWriter(conf, SequenceFile.Writer.file(path), + SequenceFile.Writer.keyClass(String.class), SequenceFile.Writer.valueClass(String.class)); + writer.append("Key1", "Value1"); + writer.append("Key2", "Value2"); + writer.close(); + String expected = "Key1\tValue1" + SEPARATOR + "Key2\tValue2" + SEPARATOR; + URI uri = path.toUri(); + System.out.println(expected); + assertEquals(expected, readUsingTextCommand(uri, conf)); + } } diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestDefaultStringifier.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestDefaultStringifier.java index b70e011f6aa..c15ec8caa4f 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestDefaultStringifier.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestDefaultStringifier.java @@ -26,6 +26,7 @@ import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import static org.apache.hadoop.test.LambdaTestUtils.intercept; import static org.junit.Assert.assertEquals; public class TestDefaultStringifier { @@ -98,7 +99,7 @@ public class TestDefaultStringifier { } @Test - public void testStoreLoadArray() throws IOException { + public void testStoreLoadArray() throws Exception { LOG.info("Testing DefaultStringifier#storeArray() and #loadArray()"); conf.set("io.serializations", "org.apache.hadoop.io.serializer.JavaSerialization"); @@ -107,6 +108,8 @@ public class TestDefaultStringifier { Integer[] array = new Integer[] {1,2,3,4,5}; + intercept(IndexOutOfBoundsException.class, () -> + DefaultStringifier.storeArray(conf, new Integer[] {}, keyName)); DefaultStringifier.storeArray(conf, array, keyName); Integer[] claimedArray = DefaultStringifier.loadArray(conf, keyName, Integer.class); diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java index ffa17224b03..25c69765494 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java @@ -1216,11 +1216,6 @@ public class TestIPC { @Test(timeout=30000) public void testInterrupted() { Client client = new Client(LongWritable.class, conf); - Client.getClientExecutor().submit(new Runnable() { - public void run() { - while(true); - } - }); Thread.currentThread().interrupt(); client.stop(); try { diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java index 
101750d72c8..084a3dbd4ae 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java @@ -55,6 +55,7 @@ import org.apache.hadoop.test.MockitoUtil; import org.junit.Assert; import org.junit.Before; import org.junit.Test; +import org.mockito.Mockito; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.slf4j.event.Level; @@ -62,13 +63,16 @@ import org.slf4j.event.Level; import javax.net.SocketFactory; import java.io.Closeable; import java.io.IOException; +import java.io.InputStream; import java.io.InterruptedIOException; +import java.io.OutputStream; import java.lang.reflect.InvocationHandler; import java.lang.reflect.Method; import java.lang.reflect.Proxy; import java.net.ConnectException; import java.net.InetAddress; import java.net.InetSocketAddress; +import java.net.Socket; import java.net.SocketTimeoutException; import java.nio.ByteBuffer; import java.security.PrivilegedAction; @@ -89,6 +93,7 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; +import java.util.concurrent.locks.ReentrantLock; import static org.assertj.core.api.Assertions.assertThat; import static org.apache.hadoop.test.MetricsAsserts.assertCounter; @@ -993,6 +998,196 @@ public class TestRPC extends TestRpcBase { } } + /** + * This tests the case where the server isn't receiving new data and + * multiple threads queue up to send rpc requests. Only one of the requests + * should be written and all of the calling threads should be interrupted. + * + * We use a mock SocketFactory so that we can control when the input and + * output streams are frozen. + */ + @Test(timeout=30000) + public void testSlowConnection() throws Exception { + SocketFactory mockFactory = Mockito.mock(SocketFactory.class); + Socket mockSocket = Mockito.mock(Socket.class); + Mockito.when(mockFactory.createSocket()).thenReturn(mockSocket); + Mockito.when(mockSocket.getPort()).thenReturn(1234); + Mockito.when(mockSocket.getLocalPort()).thenReturn(2345); + MockOutputStream mockOutputStream = new MockOutputStream(); + Mockito.when(mockSocket.getOutputStream()).thenReturn(mockOutputStream); + // Use an input stream that always blocks + Mockito.when(mockSocket.getInputStream()).thenReturn(new InputStream() { + @Override + public int read() throws IOException { + // wait forever + while (true) { + try { + Thread.sleep(TimeUnit.DAYS.toMillis(1)); + } catch (InterruptedException ie) { + Thread.currentThread().interrupt(); + throw new InterruptedIOException("test"); + } + } + } + }); + Configuration clientConf = new Configuration(); + // disable ping & timeout to minimize traffic + clientConf.setBoolean(CommonConfigurationKeys.IPC_CLIENT_PING_KEY, false); + clientConf.setInt(CommonConfigurationKeys.IPC_CLIENT_RPC_TIMEOUT_KEY, 0); + RPC.setProtocolEngine(clientConf, TestRpcService.class, ProtobufRpcEngine.class); + // set async mode so that we don't need to implement the input stream + final boolean wasAsync = Client.isAsynchronousMode(); + TestRpcService client = null; + try { + Client.setAsynchronousMode(true); + client = RPC.getProtocolProxy( + TestRpcService.class, + 0, + new InetSocketAddress("localhost", 1234), + UserGroupInformation.getCurrentUser(), + clientConf, + mockFactory).getProxy(); + // The connection isn't actually made until the first call. 
+ client.ping(null, newEmptyRequest()); + mockOutputStream.waitForFlush(1); + final long headerAndFirst = mockOutputStream.getBytesWritten(); + client.ping(null, newEmptyRequest()); + mockOutputStream.waitForFlush(2); + final long second = mockOutputStream.getBytesWritten() - headerAndFirst; + // pause the writer thread + mockOutputStream.pause(); + // create a set of threads to create calls that will back up + ExecutorService pool = Executors.newCachedThreadPool(); + Future[] futures = new Future[numThreads]; + final AtomicInteger doneThreads = new AtomicInteger(0); + for(int thread = 0; thread < numThreads; ++thread) { + final TestRpcService finalClient = client; + futures[thread] = pool.submit(new Callable() { + @Override + public Void call() throws Exception { + finalClient.ping(null, newEmptyRequest()); + doneThreads.incrementAndGet(); + return null; + } + }); + } + // wait until the threads have started writing + mockOutputStream.waitForWriters(); + // interrupt all the threads + for(int thread=0; thread < numThreads; ++thread) { + assertTrue("cancel thread " + thread, + futures[thread].cancel(true)); + } + // wait until all the writers are cancelled + pool.shutdown(); + pool.awaitTermination(10, TimeUnit.SECONDS); + mockOutputStream.resume(); + // wait for the in flight rpc request to be flushed + mockOutputStream.waitForFlush(3); + // All the threads should have been interrupted + assertEquals(0, doneThreads.get()); + // make sure that only one additional rpc request was sent + assertEquals(headerAndFirst + second * 2, + mockOutputStream.getBytesWritten()); + } finally { + Client.setAsynchronousMode(wasAsync); + if (client != null) { + RPC.stopProxy(client); + } + } + } + + private static final class MockOutputStream extends OutputStream { + private long bytesWritten = 0; + private AtomicInteger flushCount = new AtomicInteger(0); + private ReentrantLock lock = new ReentrantLock(true); + + @Override + public synchronized void write(int b) throws IOException { + lock.lock(); + bytesWritten += 1; + lock.unlock(); + } + + @Override + public void flush() { + flushCount.incrementAndGet(); + } + + public synchronized long getBytesWritten() { + return bytesWritten; + } + + public void pause() { + lock.lock(); + } + + public void resume() { + lock.unlock(); + } + + private static final int DELAY_MS = 250; + + /** + * Wait for the Nth flush, which we assume will happen exactly when the + * Nth RPC request is sent. + * @param flush the total flush count to wait for + * @throws InterruptedException + */ + public void waitForFlush(int flush) throws InterruptedException { + while (flushCount.get() < flush) { + Thread.sleep(DELAY_MS); + } + } + + public void waitForWriters() throws InterruptedException { + while (!lock.hasQueuedThreads()) { + Thread.sleep(DELAY_MS); + } + } + } + + /** + * This test causes an exception in the RPC connection setup to make + * sure that threads aren't leaked. 
+ */ + @Test(timeout=30000) + public void testBadSetup() throws Exception { + SocketFactory mockFactory = Mockito.mock(SocketFactory.class); + Mockito.when(mockFactory.createSocket()) + .thenThrow(new IOException("can't connect")); + Configuration clientConf = new Configuration(); + // Set an illegal value to cause an exception in the constructor + clientConf.set(CommonConfigurationKeys.IPC_MAXIMUM_RESPONSE_LENGTH, + "xxx"); + RPC.setProtocolEngine(clientConf, TestRpcService.class, + ProtobufRpcEngine.class); + TestRpcService client = null; + int threadCount = Thread.getAllStackTraces().size(); + try { + try { + client = RPC.getProtocolProxy( + TestRpcService.class, + 0, + new InetSocketAddress("localhost", 1234), + UserGroupInformation.getCurrentUser(), + clientConf, + mockFactory).getProxy(); + client.ping(null, newEmptyRequest()); + assertTrue("Didn't throw exception!", false); + } catch (ServiceException nfe) { + // ensure no extra threads are running. + assertEquals(threadCount, Thread.getAllStackTraces().size()); + } catch (Throwable t) { + assertTrue("wrong exception: " + t, false); + } + } finally { + if (client != null) { + RPC.stopProxy(client); + } + } + } + @Test public void testConnectionPing() throws Exception { Server server; diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogThrottlingHelper.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogThrottlingHelper.java index d0eeea3e513..6c627116f8c 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogThrottlingHelper.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogThrottlingHelper.java @@ -142,6 +142,18 @@ public class TestLogThrottlingHelper { assertTrue(helper.record("bar", 0).shouldLog()); } + @Test + public void testInfrequentPrimaryAndDependentLoggers() { + helper = new LogThrottlingHelper(LOG_PERIOD, "foo", timer); + + assertTrue(helper.record("foo", 0).shouldLog()); + assertTrue(helper.record("bar", 0).shouldLog()); + + // Both should log once the period has elapsed + assertTrue(helper.record("foo", LOG_PERIOD).shouldLog()); + assertTrue(helper.record("bar", LOG_PERIOD).shouldLog()); + } + @Test public void testMultipleLoggersWithValues() { helper = new LogThrottlingHelper(LOG_PERIOD, "foo", timer); diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java index 5a1f1d1376d..1e841a68654 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java @@ -438,6 +438,8 @@ public class TestMetricsSystemImpl { r = recs.get(1); assertTrue("NumActiveSinks should be 3", Iterables.contains(r.metrics(), new MetricGaugeInt(MsInfo.NumActiveSinks, 3))); + assertTrue("NumAllSinks should be 3", + Iterables.contains(r.metrics(), new MetricGaugeInt(MsInfo.NumAllSinks, 3))); } @Test diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/lib/TestMutableMetrics.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/lib/TestMutableMetrics.java index 10c8057c69e..9984c9b95fb 100644 --- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/lib/TestMutableMetrics.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/lib/TestMutableMetrics.java @@ -18,6 +18,7 @@ package org.apache.hadoop.metrics2.lib; +import static org.apache.hadoop.metrics2.impl.MsInfo.Context; import static org.apache.hadoop.metrics2.lib.Interns.info; import static org.apache.hadoop.test.MetricsAsserts.*; import static org.mockito.AdditionalMatchers.eq; @@ -290,6 +291,27 @@ public class TestMutableMetrics { } } + /** + * MutableStat should output 0 instead of the previous state when there is no change. + */ + @Test public void testMutableWithoutChanged() { + MetricsRecordBuilder builderWithChange = mockMetricsRecordBuilder(); + MetricsRecordBuilder builderWithoutChange = mockMetricsRecordBuilder(); + MetricsRegistry registry = new MetricsRegistry("test"); + MutableStat stat = registry.newStat("Test", "Test", "Ops", "Val", true); + stat.add(1000, 1000); + stat.add(1000, 2000); + registry.snapshot(builderWithChange, true); + + assertCounter("TestNumOps", 2000L, builderWithChange); + assertGauge("TestINumOps", 2000L, builderWithChange); + assertGauge("TestAvgVal", 1.5, builderWithChange); + + registry.snapshot(builderWithoutChange, true); + assertGauge("TestINumOps", 0L, builderWithoutChange); + assertGauge("TestAvgVal", 0.0, builderWithoutChange); + } + @Test public void testDuplicateMetrics() { MutableRatesWithAggregation rates = new MutableRatesWithAggregation(); @@ -479,4 +501,15 @@ public class TestMutableMetrics { verify(mb, times(2)).addGauge( info("FooNumOps", "Number of ops for stat with 5s interval"), (long) 0); } + + /** + * Test {@link MutableGaugeFloat#incr()}. + */ + @Test(timeout = 30000) + public void testMutableGaugeFloat() { + MutableGaugeFloat mgf = new MutableGaugeFloat(Context, 3.2f); + assertEquals(3.2f, mgf.value(), 0.0); + mgf.incr(); + assertEquals(4.2f, mgf.value(), 0.0); + } } diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestInstrumentedReadWriteLock.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestInstrumentedReadWriteLock.java index 1ea3ef18608..4d0f8d2e04f 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestInstrumentedReadWriteLock.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestInstrumentedReadWriteLock.java @@ -233,4 +233,111 @@ public class TestInstrumentedReadWriteLock { assertEquals(2, wlogged.get()); assertEquals(1, wsuppresed.get()); } + + + /** + * Tests the warning when the write lock is held longer than threshold. 
+ */ + @Test(timeout=10000) + public void testWriteLockLongHoldingReportWithReentrant() { + String testname = name.getMethodName(); + final AtomicLong time = new AtomicLong(0); + Timer mclock = new Timer() { + @Override + public long monotonicNow() { + return time.get(); + } + }; + + final AtomicLong wlogged = new AtomicLong(0); + final AtomicLong wsuppresed = new AtomicLong(0); + final AtomicLong totalHeldTime = new AtomicLong(0); + ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock(true); + InstrumentedWriteLock writeLock = new InstrumentedWriteLock(testname, LOG, + readWriteLock, 2000, 300, mclock) { + @Override + protected void logWarning(long lockHeldTime, SuppressedSnapshot stats) { + totalHeldTime.addAndGet(lockHeldTime); + wlogged.incrementAndGet(); + wsuppresed.set(stats.getSuppressedCount()); + } + }; + + InstrumentedReadLock readLock = new InstrumentedReadLock(testname, LOG, + readWriteLock, 2000, 300, mclock) { + @Override + protected void logWarning(long lockHeldTime, SuppressedSnapshot stats) { + totalHeldTime.addAndGet(lockHeldTime); + wlogged.incrementAndGet(); + wsuppresed.set(stats.getSuppressedCount()); + } + }; + + writeLock.lock(); // t = 0 + time.set(100); + + writeLock.lock(); // t = 100 + time.set(500); + + writeLock.lock(); // t = 500 + time.set(2900); + writeLock.unlock(); // t = 2900 + + readLock.lock(); // t = 2900 + time.set(3000); + readLock.unlock(); // t = 3000 + + writeLock.unlock(); // t = 3000 + + writeLock.unlock(); // t = 3000 + assertEquals(1, wlogged.get()); + assertEquals(0, wsuppresed.get()); + assertEquals(3000, totalHeldTime.get()); + } + + /** + * Tests the warning when the read lock is held longer than threshold. + */ + @Test(timeout=10000) + public void testReadLockLongHoldingReportWithReentrant() { + String testname = name.getMethodName(); + final AtomicLong time = new AtomicLong(0); + Timer mclock = new Timer() { + @Override + public long monotonicNow() { + return time.get(); + } + }; + + final AtomicLong wlogged = new AtomicLong(0); + final AtomicLong wsuppresed = new AtomicLong(0); + final AtomicLong totalHelpTime = new AtomicLong(0); + ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock(true); + InstrumentedReadLock readLock = new InstrumentedReadLock(testname, LOG, + readWriteLock, 2000, 300, mclock) { + @Override + protected void logWarning(long lockHeldTime, SuppressedSnapshot stats) { + totalHelpTime.addAndGet(lockHeldTime); + wlogged.incrementAndGet(); + wsuppresed.set(stats.getSuppressedCount()); + } + }; + + readLock.lock(); // t = 0 + time.set(100); + + readLock.lock(); // t = 100 + time.set(500); + + readLock.lock(); // t = 500 + time.set(3000); + readLock.unlock(); // t = 3000 + + readLock.unlock(); // t = 3000 + + readLock.unlock(); // t = 3000 + assertEquals(1, wlogged.get()); + assertEquals(0, wsuppresed.get()); + assertEquals(3000, totalHelpTime.get()); + } } diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestXMLUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestXMLUtils.java new file mode 100644 index 00000000000..6db16b6c0c5 --- /dev/null +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestXMLUtils.java @@ -0,0 +1,153 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.util; + +import java.io.InputStream; +import java.io.StringReader; +import java.io.StringWriter; +import java.util.concurrent.atomic.AtomicBoolean; +import javax.xml.XMLConstants; +import javax.xml.parsers.DocumentBuilder; +import javax.xml.parsers.SAXParser; +import javax.xml.transform.Transformer; +import javax.xml.transform.TransformerException; +import javax.xml.transform.TransformerFactory; +import javax.xml.transform.dom.DOMSource; +import javax.xml.transform.stream.StreamResult; +import javax.xml.transform.stream.StreamSource; + +import org.apache.hadoop.test.AbstractHadoopTestBase; + +import org.assertj.core.api.Assertions; +import org.junit.Assert; +import org.junit.Test; +import org.w3c.dom.Document; +import org.xml.sax.InputSource; +import org.xml.sax.SAXException; +import org.xml.sax.helpers.DefaultHandler; + +public class TestXMLUtils extends AbstractHadoopTestBase { + + @Test + public void testSecureDocumentBuilderFactory() throws Exception { + DocumentBuilder db = XMLUtils.newSecureDocumentBuilderFactory().newDocumentBuilder(); + Document doc = db.parse(new InputSource(new StringReader(""))); + Assertions.assertThat(doc).describedAs("parsed document").isNotNull(); + } + + @Test(expected = SAXException.class) + public void testExternalDtdWithSecureDocumentBuilderFactory() throws Exception { + DocumentBuilder db = XMLUtils.newSecureDocumentBuilderFactory().newDocumentBuilder(); + try (InputStream stream = getResourceStream("/xml/external-dtd.xml")) { + Document doc = db.parse(stream); + } + } + + @Test(expected = SAXException.class) + public void testEntityDtdWithSecureDocumentBuilderFactory() throws Exception { + DocumentBuilder db = XMLUtils.newSecureDocumentBuilderFactory().newDocumentBuilder(); + try (InputStream stream = getResourceStream("/xml/entity-dtd.xml")) { + Document doc = db.parse(stream); + } + } + + @Test + public void testSecureSAXParserFactory() throws Exception { + SAXParser parser = XMLUtils.newSecureSAXParserFactory().newSAXParser(); + parser.parse(new InputSource(new StringReader("")), new DefaultHandler()); + } + + @Test(expected = SAXException.class) + public void testExternalDtdWithSecureSAXParserFactory() throws Exception { + SAXParser parser = XMLUtils.newSecureSAXParserFactory().newSAXParser(); + try (InputStream stream = getResourceStream("/xml/external-dtd.xml")) { + parser.parse(stream, new DefaultHandler()); + } + } + + @Test(expected = SAXException.class) + public void testEntityDtdWithSecureSAXParserFactory() throws Exception { + SAXParser parser = XMLUtils.newSecureSAXParserFactory().newSAXParser(); + try (InputStream stream = getResourceStream("/xml/entity-dtd.xml")) { + parser.parse(stream, new DefaultHandler()); + } + } + + @Test + public void testSecureTransformerFactory() throws Exception { + Transformer transformer = XMLUtils.newSecureTransformerFactory().newTransformer(); + DocumentBuilder db = 
XMLUtils.newSecureDocumentBuilderFactory().newDocumentBuilder(); + Document doc = db.parse(new InputSource(new StringReader(""))); + try (StringWriter stringWriter = new StringWriter()) { + transformer.transform(new DOMSource(doc), new StreamResult(stringWriter)); + Assertions.assertThat(stringWriter.toString()).contains(""))); + try (StringWriter stringWriter = new StringWriter()) { + transformer.transform(new DOMSource(doc), new StreamResult(stringWriter)); + Assertions.assertThat(stringWriter.toString()).contains(" + + + + ]> +&lol; diff --git a/hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml b/hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml new file mode 100644 index 00000000000..08a13938f5f --- /dev/null +++ b/hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml @@ -0,0 +1,23 @@ + + + +

+ First Last + Acme + (555) 123-4567 +
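The two test resources above exercise the hardened XML parsing added in org.apache.hadoop.util.XMLUtils: both DTD-bearing files are expected to be rejected with a SAXException. The following self-contained sketch is illustrative only and not part of this patch; the class name and input strings are made up, and it assumes, as the tests suggest, that the secure DocumentBuilderFactory refuses any document that declares a DOCTYPE.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import org.apache.hadoop.util.XMLUtils;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

public class SecureXmlParseExample {
  public static void main(String[] args) throws Exception {
    // Factory pre-configured with the same protections ECPolicyLoader now relies on.
    DocumentBuilder db = XMLUtils.newSecureDocumentBuilderFactory().newDocumentBuilder();

    // Ordinary XML without a DOCTYPE parses as usual.
    Document doc = db.parse(new InputSource(new StringReader("<conf><name>x</name></conf>")));
    System.out.println("root element: " + doc.getDocumentElement().getTagName());

    // A document that declares a DTD (like the entity-dtd.xml / external-dtd.xml
    // resources above) should be refused, which blocks XXE-style attacks.
    String withDtd = "<!DOCTYPE x SYSTEM \"http://example.invalid/x.dtd\"><x/>";
    try {
      db.parse(new InputSource(new StringReader(withDtd)));
      System.out.println("unexpected: DTD was accepted");
    } catch (SAXException expected) {
      System.out.println("DTD rejected as expected: " + expected.getMessage());
    }
  }
}

The SAXParser and Transformer factories covered by TestXMLUtils follow the same pattern: obtain the factory from XMLUtils rather than the JAXP default so every parser in the codebase gets the same protections.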
diff --git a/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc/TestMiniKdc.java b/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc/TestMiniKdc.java index 74130cff19b..45684053a03 100644 --- a/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc/TestMiniKdc.java +++ b/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc/TestMiniKdc.java @@ -38,8 +38,35 @@ import java.util.HashMap; import java.util.Arrays; public class TestMiniKdc extends KerberosSecurityTestcase { - private static final boolean IBM_JAVA = System.getProperty("java.vendor") - .contains("IBM"); + private static final boolean IBM_JAVA = shouldUseIbmPackages(); + // duplicated to avoid cycles in the build + private static boolean shouldUseIbmPackages() { + final List ibmTechnologyEditionSecurityModules = Arrays.asList( + "com.ibm.security.auth.module.JAASLoginModule", + "com.ibm.security.auth.module.Win64LoginModule", + "com.ibm.security.auth.module.NTLoginModule", + "com.ibm.security.auth.module.AIX64LoginModule", + "com.ibm.security.auth.module.LinuxLoginModule", + "com.ibm.security.auth.module.Krb5LoginModule" + ); + + if (System.getProperty("java.vendor").contains("IBM")) { + return ibmTechnologyEditionSecurityModules + .stream().anyMatch((module) -> isSystemClassAvailable(module)); + } + + return false; + } + + private static boolean isSystemClassAvailable(String className) { + try { + Class.forName(className); + return true; + } catch (Exception ignored) { + return false; + } + } + @Test public void testMiniKdcStart() { MiniKdc kdc = getKdc(); @@ -117,9 +144,9 @@ public class TestMiniKdc extends KerberosSecurityTestcase { options.put("debug", "true"); return new AppConfigurationEntry[]{ - new AppConfigurationEntry(getKrb5LoginModuleName(), - AppConfigurationEntry.LoginModuleControlFlag.REQUIRED, - options)}; + new AppConfigurationEntry(getKrb5LoginModuleName(), + AppConfigurationEntry.LoginModuleControlFlag.REQUIRED, + options)}; } } diff --git a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java index 252eae64b53..53647126463 100644 --- a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java +++ b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java @@ -26,6 +26,7 @@ import io.netty.buffer.ByteBuf; import io.netty.buffer.Unpooled; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.util.ReferenceCountUtil; import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.oncrpc.RpcAcceptedReply.AcceptState; import org.apache.hadoop.oncrpc.security.VerifierNone; @@ -163,8 +164,16 @@ public abstract class RpcProgram extends ChannelInboundHandlerAdapter { public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception { RpcInfo info = (RpcInfo) msg; + try { + channelRead(ctx, info); + } finally { + ReferenceCountUtil.release(info.data()); + } + } + + private void channelRead(ChannelHandlerContext ctx, RpcInfo info) + throws Exception { RpcCall call = (RpcCall) info.header(); - SocketAddress remoteAddress = info.remoteAddress(); if (LOG.isTraceEnabled()) { LOG.trace(program + " procedure #" + call.getProcedure()); @@ -256,4 +265,4 @@ public abstract class RpcProgram extends ChannelInboundHandlerAdapter { public int 
getPortmapUdpTimeoutMillis() { return portmapUdpTimeoutMillis; } -} \ No newline at end of file +} diff --git a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcUtil.java b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcUtil.java index caba13105cc..d814052e43d 100644 --- a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcUtil.java +++ b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcUtil.java @@ -17,6 +17,7 @@ */ package org.apache.hadoop.oncrpc; +import java.net.InetSocketAddress; import java.net.SocketAddress; import java.nio.ByteBuffer; import java.util.List; @@ -26,6 +27,7 @@ import io.netty.buffer.Unpooled; import io.netty.channel.ChannelHandler; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.channel.SimpleChannelInboundHandler; import io.netty.channel.socket.DatagramPacket; import io.netty.handler.codec.ByteToMessageDecoder; import org.apache.hadoop.classification.VisibleForTesting; @@ -129,15 +131,17 @@ public final class RpcUtil { RpcInfo info = null; try { RpcCall callHeader = RpcCall.read(in); - ByteBuf dataBuffer = Unpooled.wrappedBuffer(in.buffer() - .slice()); + ByteBuf dataBuffer = buf.slice(b.position(), b.remaining()); info = new RpcInfo(callHeader, dataBuffer, ctx, ctx.channel(), remoteAddress); } catch (Exception exc) { LOG.info("Malformed RPC request from " + remoteAddress); } finally { - buf.release(); + // only release buffer if it is not passed to downstream handler + if (info == null) { + buf.release(); + } } if (info != null) { @@ -170,15 +174,18 @@ public final class RpcUtil { */ @ChannelHandler.Sharable private static final class RpcUdpResponseStage extends - ChannelInboundHandlerAdapter { + SimpleChannelInboundHandler { + public RpcUdpResponseStage() { + // do not auto release the RpcResponse message. 
+ super(false); + } @Override - public void channelRead(ChannelHandlerContext ctx, Object msg) - throws Exception { - RpcResponse r = (RpcResponse) msg; - // TODO: check out https://github.com/netty/netty/issues/1282 for - // correct usage - ctx.channel().writeAndFlush(r.data()); + protected void channelRead0(ChannelHandlerContext ctx, + RpcResponse response) throws Exception { + ByteBuf buf = Unpooled.wrappedBuffer(response.data()); + ctx.writeAndFlush(new DatagramPacket( + buf, (InetSocketAddress) response.recipient())); } } } diff --git a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/Portmap.java b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/Portmap.java index 7d1130b40ff..953d74648db 100644 --- a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/Portmap.java +++ b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/Portmap.java @@ -117,15 +117,13 @@ final class Portmap { .childOption(ChannelOption.SO_REUSEADDR, true) .channel(NioServerSocketChannel.class) .childHandler(new ChannelInitializer() { - private final IdleStateHandler idleStateHandler = new IdleStateHandler( - 0, 0, idleTimeMilliSeconds, TimeUnit.MILLISECONDS); - @Override protected void initChannel(SocketChannel ch) throws Exception { ChannelPipeline p = ch.pipeline(); p.addLast(RpcUtil.constructRpcFrameDecoder(), - RpcUtil.STAGE_RPC_MESSAGE_PARSER, idleStateHandler, handler, + RpcUtil.STAGE_RPC_MESSAGE_PARSER, new IdleStateHandler(0, 0, + idleTimeMilliSeconds, TimeUnit.MILLISECONDS), handler, RpcUtil.STAGE_RPC_TCP_RESPONSE); }}); diff --git a/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java b/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java index 84fa71a269d..35ab5cdc3da 100644 --- a/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java +++ b/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java @@ -23,8 +23,10 @@ import java.net.DatagramPacket; import java.net.DatagramSocket; import java.net.InetSocketAddress; import java.net.Socket; +import java.util.Arrays; import java.util.Map; +import org.apache.hadoop.oncrpc.RpcReply; import org.junit.Assert; import org.apache.hadoop.oncrpc.RpcCall; @@ -35,6 +37,8 @@ import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; +import static org.junit.Assert.assertEquals; + public class TestPortmap { private static Portmap pm = new Portmap(); private static final int SHORT_TIMEOUT_MILLISECONDS = 10; @@ -92,6 +96,19 @@ public class TestPortmap { pm.getUdpServerLoAddress()); try { s.send(p); + + // verify that portmap server responds a UDF packet back to the client + byte[] receiveData = new byte[65535]; + DatagramPacket receivePacket = new DatagramPacket(receiveData, + receiveData.length); + s.setSoTimeout(2000); + s.receive(receivePacket); + + // verify that the registration is accepted. 
+ XDR xdr = new XDR(Arrays.copyOfRange(receiveData, 0, + receivePacket.getLength())); + RpcReply reply = RpcReply.read(xdr); + assertEquals(reply.getState(), RpcReply.ReplyState.MSG_ACCEPTED); } finally { s.close(); } diff --git a/hadoop-dist/pom.xml b/hadoop-dist/pom.xml index 0a5db2565b8..0b1c6012673 100644 --- a/hadoop-dist/pom.xml +++ b/hadoop-dist/pom.xml @@ -41,11 +41,21 @@ hadoop-hdfs-client provided + + org.apache.hadoop + hadoop-hdfs-native-client + provided + org.apache.hadoop hadoop-mapreduce-client-app provided + + org.apache.hadoop + hadoop-mapreduce-client-nativetask + provided + org.apache.hadoop hadoop-yarn-api diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml index 9bb0932d328..3337f7d4089 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml +++ b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml @@ -37,6 +37,16 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd"> com.squareup.okhttp3 okhttp + + + com.squareup.okio + okio-jvm + + + + + com.squareup.okio + okio-jvm org.jetbrains.kotlin diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientGSIContext.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientGSIContext.java index bcbb4b96c2a..7b03e1f3518 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientGSIContext.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientGSIContext.java @@ -20,13 +20,19 @@ package org.apache.hadoop.hdfs; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.classification.VisibleForTesting; +import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.RouterFederatedStateProto; import org.apache.hadoop.ipc.AlignmentContext; import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto; import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; import java.io.IOException; +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; import java.util.concurrent.atomic.LongAccumulator; import org.apache.hadoop.thirdparty.protobuf.ByteString; +import org.apache.hadoop.thirdparty.protobuf.InvalidProtocolBufferException; /** * Global State Id context for the client. @@ -77,12 +83,46 @@ public class ClientGSIContext implements AlignmentContext { @Override public synchronized void receiveResponseState(RpcResponseHeaderProto header) { if (header.hasRouterFederatedState()) { - routerFederatedState = header.getRouterFederatedState(); + routerFederatedState = mergeRouterFederatedState( + this.routerFederatedState, header.getRouterFederatedState()); } else { lastSeenStateId.accumulate(header.getStateId()); } } + /** + * Utility function to parse routerFederatedState field in RPC headers. + */ + public static Map getRouterFederatedStateMap(ByteString byteString) { + if (byteString != null) { + try { + RouterFederatedStateProto federatedState = RouterFederatedStateProto.parseFrom(byteString); + return federatedState.getNamespaceStateIdsMap(); + } catch (InvalidProtocolBufferException e) { + // Ignore this exception and will return an empty map + } + } + return Collections.emptyMap(); + } + + /** + * Merge state1 and state2 to get the max value for each namespace. + * @param state1 input ByteString. + * @param state2 input ByteString. + * @return one ByteString object which contains the max value of each namespace. 
+ */ + public static ByteString mergeRouterFederatedState(ByteString state1, ByteString state2) { + Map mapping1 = new HashMap<>(getRouterFederatedStateMap(state1)); + Map mapping2 = getRouterFederatedStateMap(state2); + mapping2.forEach((k, v) -> { + long localValue = mapping1.getOrDefault(k, 0L); + mapping1.put(k, Math.max(v, localValue)); + }); + RouterFederatedStateProto.Builder federatedBuilder = RouterFederatedStateProto.newBuilder(); + mapping1.forEach(federatedBuilder::putNamespaceStateIds); + return federatedBuilder.build().toByteString(); + } + /** * Client side implementation for providing state alignment info in requests. */ @@ -106,4 +146,9 @@ public class ClientGSIContext implements AlignmentContext { // Do nothing. return 0; } + + @VisibleForTesting + public ByteString getRouterFederatedState() { + return this.routerFederatedState; + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java index 1233c033ee0..931c2bba36a 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java @@ -293,9 +293,7 @@ public class DFSStripedOutputStream extends DFSOutputStream DataChecksum checksum, String[] favoredNodes) throws IOException { super(dfsClient, src, stat, flag, progress, checksum, favoredNodes, false); - if (LOG.isDebugEnabled()) { - LOG.debug("Creating DFSStripedOutputStream for " + src); - } + LOG.debug("Creating DFSStripedOutputStream for {}", src); ecPolicy = stat.getErasureCodingPolicy(); final int numParityBlocks = ecPolicy.getNumParityUnits(); diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java index 93db332d738..9050a4bddee 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java @@ -18,6 +18,7 @@ package org.apache.hadoop.hdfs; +import org.apache.hadoop.hdfs.protocol.LocatedBlocks; import org.apache.hadoop.security.AccessControlException; import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.util.Preconditions; @@ -421,6 +422,16 @@ public class DistributedFileSystem extends FileSystem return append(f, EnumSet.of(CreateFlag.APPEND), bufferSize, progress); } + @Override + public FSDataOutputStream append(Path f, final int bufferSize, + final Progressable progress, boolean appendToNewBlock) throws IOException { + EnumSet flag = EnumSet.of(CreateFlag.APPEND); + if (appendToNewBlock) { + flag.add(CreateFlag.NEW_BLOCK); + } + return append(f, flag, bufferSize, progress); + } + /** * Append to an existing file (optional operation). * @@ -567,7 +578,7 @@ public class DistributedFileSystem extends FileSystem /** * Same as - * {@link #create(Path, FsPermission, EnumSet, int, short, long, + * {@link #create(Path, FsPermission, EnumSet, int, short, long, * Progressable, ChecksumOpt)} with a few additions. First, addition of * favoredNodes that is a hint to where the namenode should place the file * blocks. The favored nodes hint is not persisted in HDFS. 
Hence it may be @@ -636,12 +647,12 @@ public class DistributedFileSystem extends FileSystem /** * Similar to {@link #create(Path, FsPermission, EnumSet, int, short, long, - * Progressable, ChecksumOpt, InetSocketAddress[], String)}, it provides a + * Progressable, ChecksumOpt, InetSocketAddress[], String, String)}, it provides a * HDFS-specific version of {@link #createNonRecursive(Path, FsPermission, * EnumSet, int, short, long, Progressable)} with a few additions. * * @see #create(Path, FsPermission, EnumSet, int, short, long, Progressable, - * ChecksumOpt, InetSocketAddress[], String) for the descriptions of + * ChecksumOpt, InetSocketAddress[], String, String) for the descriptions of * additional parameters, i.e., favoredNodes, ecPolicyName and * storagePolicyName. */ @@ -3898,4 +3909,36 @@ public class DistributedFileSystem extends FileSystem return dfs.slowDatanodeReport(); } + /** + * Returns LocatedBlocks of the corresponding HDFS file p from offset start + * for length len. + * This is similar to {@link #getFileBlockLocations(Path, long, long)} except + * that it returns LocatedBlocks rather than BlockLocation array. + * @param p path representing the file of interest. + * @param start offset + * @param len length + * @return a LocatedBlocks object + * @throws IOException + */ + public LocatedBlocks getLocatedBlocks(Path p, long start, long len) + throws IOException { + final Path absF = fixRelativePart(p); + return new FileSystemLinkResolver() { + @Override + public LocatedBlocks doCall(final Path p) throws IOException { + return dfs.getLocatedBlocks(getPathName(p), start, len); + } + @Override + public LocatedBlocks next(final FileSystem fs, final Path p) + throws IOException { + if (fs instanceof DistributedFileSystem) { + DistributedFileSystem myDfs = (DistributedFileSystem)fs; + return myDfs.getLocatedBlocks(p, start, len); + } + throw new UnsupportedOperationException("Cannot getLocatedBlocks " + + "through a symlink to a non-DistributedFileSystem: " + fs + " -> "+ + p); + } + }.resolve(this, absF); + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HAUtilClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HAUtilClient.java index 47288f77df8..bfbee41d144 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HAUtilClient.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HAUtilClient.java @@ -135,16 +135,12 @@ public class HAUtilClient { HdfsConstants.HDFS_URI_SCHEME) + "//" + specificToken.getService()); ugi.addToken(alias, specificToken); - if (LOG.isDebugEnabled()) { - LOG.debug("Mapped HA service delegation token for logical URI " + - haUri + " to namenode " + singleNNAddr); - } + LOG.debug("Mapped HA service delegation token for logical URI {}" + + " to namenode {}", haUri, singleNNAddr); } } else { - if (LOG.isDebugEnabled()) { - LOG.debug("No HA service delegation token found for logical URI " + - haUri); - } + LOG.debug("No HA service delegation token found for logical URI {}", + haUri); } } } diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java index d8dd485101b..ee97b96ea78 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java +++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java @@ -68,8 +68,11 @@ public class KeyProviderCache { }) .build(); - ShutdownHookManager.get().addShutdownHook(new KeyProviderCacheFinalizer(), - SHUTDOWN_HOOK_PRIORITY); + // Register the shutdown hook when not in shutdown + if (!ShutdownHookManager.get().isShutdownInProgress()) { + ShutdownHookManager.get().addShutdownHook( + new KeyProviderCacheFinalizer(), SHUTDOWN_HOOK_PRIORITY); + } } public KeyProvider get(final Configuration conf, diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java index 4acec828242..2e553238197 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java @@ -349,9 +349,12 @@ public class NameNodeProxiesClient { boolean withRetries, AtomicBoolean fallbackToSimpleAuth, AlignmentContext alignmentContext) throws IOException { - if (alignmentContext == null) { + if (alignmentContext == null && + conf.getBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE, + HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE_DEFAULT)) { alignmentContext = new ClientGSIContext(); } + RPC.setProtocolEngine(conf, ClientNamenodeProtocolPB.class, ProtobufRpcEngine2.class); diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/CreateEncryptionZoneFlag.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/CreateEncryptionZoneFlag.java index ad4cea6468d..fe87158c1cd 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/CreateEncryptionZoneFlag.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/CreateEncryptionZoneFlag.java @@ -19,6 +19,9 @@ package org.apache.hadoop.hdfs.client; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.fs.Path; + +import java.util.EnumSet; /** * CreateEncryptionZoneFlag is used in diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java index e3e01fde3a5..2b511bfc2eb 100755 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java @@ -78,6 +78,8 @@ public interface HdfsClientConfigKeys { int DFS_NAMENODE_HTTPS_PORT_DEFAULT = 9871; String DFS_NAMENODE_HTTPS_ADDRESS_KEY = "dfs.namenode.https-address"; String DFS_HA_NAMENODES_KEY_PREFIX = "dfs.ha.namenodes"; + String DFS_RBF_OBSERVER_READ_ENABLE = "dfs.client.rbf.observer.read.enable"; + boolean DFS_RBF_OBSERVER_READ_ENABLE_DEFAULT = false; int DFS_NAMENODE_RPC_PORT_DEFAULT = 8020; String DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY = "dfs.namenode.kerberos.principal"; diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/ECPolicyLoader.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/ECPolicyLoader.java index 
fcba618c94a..0d1be4b8e67 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/ECPolicyLoader.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/ECPolicyLoader.java @@ -20,6 +20,8 @@ package org.apache.hadoop.hdfs.util; import org.apache.hadoop.io.erasurecode.ECSchema; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy; +import org.apache.hadoop.util.XMLUtils; + import org.w3c.dom.Node; import org.w3c.dom.Text; import org.w3c.dom.Element; @@ -87,13 +89,8 @@ public class ECPolicyLoader { LOG.info("Loading EC policy file " + policyFile); // Read and parse the EC policy file. - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); dbf.setIgnoringComments(true); - dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true); - dbf.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false); - dbf.setFeature("http://xml.org/sax/features/external-general-entities", false); - dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false); - dbf.setFeature("http://apache.org/xml/features/dom/create-entity-ref-nodes", false); DocumentBuilder builder = dbf.newDocumentBuilder(); Document doc = builder.parse(policyFile); Element root = doc.getDocumentElement(); diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java index 75163c16d7e..c3dd556ba53 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java @@ -19,6 +19,9 @@ package org.apache.hadoop.hdfs.web; import com.fasterxml.jackson.databind.ObjectMapper; import com.fasterxml.jackson.databind.ObjectReader; + +import org.apache.hadoop.classification.VisibleForTesting; +import org.apache.hadoop.fs.BlockLocation; import org.apache.hadoop.util.Preconditions; import org.apache.hadoop.thirdparty.com.google.common.collect.Maps; import org.apache.hadoop.fs.ContentSummary; @@ -965,4 +968,53 @@ public class JsonUtilClient { SnapshotStatus.getParentPath(fullPath))); return snapshotStatus; } + + @VisibleForTesting + public static BlockLocation[] toBlockLocationArray(Map json) + throws IOException { + final Map rootmap = + (Map) json.get(BlockLocation.class.getSimpleName() + "s"); + final List array = + JsonUtilClient.getList(rootmap, BlockLocation.class.getSimpleName()); + Preconditions.checkNotNull(array); + final BlockLocation[] locations = new BlockLocation[array.size()]; + int i = 0; + for (Object object : array) { + final Map m = (Map) object; + locations[i++] = JsonUtilClient.toBlockLocation(m); + } + return locations; + } + + /** Convert a Json map to BlockLocation. 
**/ + private static BlockLocation toBlockLocation(Map m) throws IOException { + if (m == null) { + return null; + } + long length = ((Number) m.get("length")).longValue(); + long offset = ((Number) m.get("offset")).longValue(); + boolean corrupt = Boolean.getBoolean(m.get("corrupt").toString()); + String[] storageIds = toStringArray(getList(m, "storageIds")); + String[] cachedHosts = toStringArray(getList(m, "cachedHosts")); + String[] hosts = toStringArray(getList(m, "hosts")); + String[] names = toStringArray(getList(m, "names")); + String[] topologyPaths = toStringArray(getList(m, "topologyPaths")); + StorageType[] storageTypes = toStorageTypeArray(getList(m, "storageTypes")); + return new BlockLocation(names, hosts, cachedHosts, topologyPaths, + storageIds, storageTypes, offset, length, corrupt); + } + + @VisibleForTesting + static String[] toStringArray(List list) { + if (list == null) { + return null; + } else { + final String[] array = new String[list.size()]; + int i = 0; + for (Object object : list) { + array[i++] = object.toString(); + } + return array; + } + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java index 5afb5266751..f0774e98d1f 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java @@ -183,6 +183,8 @@ public class WebHdfsFileSystem extends FileSystem private KeyProvider testProvider; private boolean isTLSKrb; + private boolean isServerHCFSCompatible = true; + /** * Return the protocol scheme for the FileSystem. * @@ -1882,18 +1884,51 @@ public class WebHdfsFileSystem extends FileSystem } @Override - public BlockLocation[] getFileBlockLocations(final Path p, - final long offset, final long length) throws IOException { + public BlockLocation[] getFileBlockLocations(final Path p, final long offset, + final long length) throws IOException { statistics.incrementReadOps(1); storageStatistics.incrementOpCounter(OpType.GET_FILE_BLOCK_LOCATIONS); + BlockLocation[] locations; + try { + if (isServerHCFSCompatible) { + locations = getFileBlockLocations(GetOpParam.Op.GETFILEBLOCKLOCATIONS, p, offset, length); + } else { + locations = getFileBlockLocations(GetOpParam.Op.GET_BLOCK_LOCATIONS, p, offset, length); + } + } catch (RemoteException e) { + // parsing the exception is needed only if the client thinks the service is compatible + if (isServerHCFSCompatible && isGetFileBlockLocationsException(e)) { + LOG.warn("Server does not appear to support GETFILEBLOCKLOCATIONS." + + "Fallback to the old GET_BLOCK_LOCATIONS. 
Exception: {}", + e.getMessage()); + isServerHCFSCompatible = false; + locations = getFileBlockLocations(GetOpParam.Op.GET_BLOCK_LOCATIONS, p, offset, length); + } else { + throw e; + } + } + return locations; + } - final HttpOpParam.Op op = GetOpParam.Op.GET_BLOCK_LOCATIONS; - return new FsPathResponseRunner(op, p, + private boolean isGetFileBlockLocationsException(RemoteException e) { + return e.getMessage() != null && e.getMessage().contains("Invalid value for webhdfs parameter") + && e.getMessage().contains(GetOpParam.Op.GETFILEBLOCKLOCATIONS.toString()); + } + + private BlockLocation[] getFileBlockLocations(final GetOpParam.Op operation, + final Path p, final long offset, final long length) throws IOException { + return new FsPathResponseRunner(operation, p, new OffsetParam(offset), new LengthParam(length)) { @Override - BlockLocation[] decodeResponse(Map json) throws IOException { - return DFSUtilClient.locatedBlocks2Locations( - JsonUtilClient.toLocatedBlocks(json)); + BlockLocation[] decodeResponse(Map json) throws IOException { + switch (operation) { + case GETFILEBLOCKLOCATIONS: + return JsonUtilClient.toBlockLocationArray(json); + case GET_BLOCK_LOCATIONS: + return DFSUtilClient.locatedBlocks2Locations(JsonUtilClient.toLocatedBlocks(json)); + default: + throw new IOException("Unknown operation " + operation.name()); + } } }.run(); } diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto index a4d36180c2c..e1e7f7d780d 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto @@ -723,3 +723,15 @@ message BlockTokenSecretProto { repeated string storageIds = 8; optional bytes handshakeSecret = 9; } + +///////////////////////////////////////////////// +// Alignment state for namespaces. +///////////////////////////////////////////////// +/** + * Clients should receive this message in RPC responses and forward it + * in RPC requests without interpreting it. It should be encoded + * as an obscure byte array when being sent to clients. + */ +message RouterFederatedStateProto { + map namespaceStateIds = 1; // Last seen state IDs for multiple namespaces. 
+} diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java index a47ffa77136..b624f18bd14 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java @@ -342,7 +342,7 @@ public class TestByteArrayManager { } if ((i & 0xFF) == 0) { - LOG.info("randomRecycler sleep, i=" + i); + LOG.info("randomRecycler sleep, i={}", i); sleepMs(100); } } diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml index a1b3ab1f923..b471fd062dd 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml +++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml @@ -199,6 +199,16 @@ bcprov-jdk15on test + + com.squareup.okhttp3 + mockwebserver + test + + + com.squareup.okhttp3 + okhttp + test + diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java index 10dc787fa12..f34a27e0277 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java +++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java @@ -24,8 +24,12 @@ import java.util.EnumSet; import java.util.List; import org.apache.hadoop.thirdparty.com.google.common.base.Charsets; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.type.MapType; import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.BlockLocation; import org.apache.hadoop.fs.CommonPathCapabilities; import org.apache.hadoop.fs.ContentSummary; import org.apache.hadoop.fs.DelegationTokenRenewer; @@ -140,6 +144,8 @@ public class HttpFSFileSystem extends FileSystem public static final String SNAPSHOT_DIFF_INDEX = "snapshotdiffindex"; public static final String FSACTION_MODE_PARAM = "fsaction"; public static final String EC_POLICY_NAME_PARAM = "ecpolicy"; + public static final String OFFSET_PARAM = "offset"; + public static final String LENGTH_PARAM = "length"; public static final Short DEFAULT_PERMISSION = 0755; public static final String ACLSPEC_DEFAULT = ""; @@ -239,6 +245,7 @@ public class HttpFSFileSystem extends FileSystem public static final String STORAGE_POLICIES_JSON = "BlockStoragePolicies"; public static final String STORAGE_POLICY_JSON = "BlockStoragePolicy"; + public static final String BLOCK_LOCATIONS_JSON = "BlockLocations"; public static final int HTTP_TEMPORARY_REDIRECT = 307; @@ -269,7 +276,8 @@ public class HttpFSFileSystem extends FileSystem GETSNAPSHOTTABLEDIRECTORYLIST(HTTP_GET), GETSNAPSHOTLIST(HTTP_GET), GETSERVERDEFAULTS(HTTP_GET), CHECKACCESS(HTTP_GET), SETECPOLICY(HTTP_PUT), GETECPOLICY(HTTP_GET), UNSETECPOLICY( - HTTP_POST), SATISFYSTORAGEPOLICY(HTTP_PUT), GETSNAPSHOTDIFFLISTING(HTTP_GET); + HTTP_POST), SATISFYSTORAGEPOLICY(HTTP_PUT), GETSNAPSHOTDIFFLISTING(HTTP_GET), + GET_BLOCK_LOCATIONS(HTTP_GET); private String httpMethod; @@ -1710,4 +1718,41 @@ public class HttpFSFileSystem extends FileSystem 
Operation.SATISFYSTORAGEPOLICY.getMethod(), params, path, true); HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK); } -} + + @Override + public BlockLocation[] getFileBlockLocations(Path path, long start, long len) + throws IOException { + Map params = new HashMap<>(); + params.put(OP_PARAM, Operation.GETFILEBLOCKLOCATIONS.toString()); + params.put(OFFSET_PARAM, Long.toString(start)); + params.put(LENGTH_PARAM, Long.toString(len)); + HttpURLConnection conn = getConnection( + Operation.GETFILEBLOCKLOCATIONS.getMethod(), params, path, true); + HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK); + JSONObject json = (JSONObject) HttpFSUtils.jsonParse(conn); + return toBlockLocations(json); + } + + @Override + public BlockLocation[] getFileBlockLocations(final FileStatus status, + final long offset, final long length) throws IOException { + if (status == null) { + return null; + } + return getFileBlockLocations(status.getPath(), offset, length); + } + + @VisibleForTesting + static BlockLocation[] toBlockLocations(JSONObject json) throws IOException { + ObjectMapper mapper = new ObjectMapper(); + MapType subType = mapper.getTypeFactory().constructMapType(Map.class, + String.class, BlockLocation[].class); + MapType rootType = mapper.getTypeFactory().constructMapType(Map.class, + mapper.constructType(String.class), mapper.constructType(subType)); + + Map> jsonMap = + mapper.readValue(json.toJSONString(), rootType); + Map locationMap = jsonMap.get(BLOCK_LOCATIONS_JSON); + return locationMap.get(BlockLocation.class.getSimpleName()); + } +} \ No newline at end of file diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java index 33f2abbb319..9f70351dfca 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java +++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java @@ -19,6 +19,7 @@ package org.apache.hadoop.fs.http.server; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.BlockLocation; import org.apache.hadoop.fs.BlockStoragePolicySpi; import org.apache.hadoop.fs.ContentSummary; import org.apache.hadoop.fs.FileChecksum; @@ -44,6 +45,7 @@ import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy; import org.apache.hadoop.hdfs.protocol.HdfsConstants; import org.apache.hadoop.hdfs.protocol.HdfsFileStatus; +import org.apache.hadoop.hdfs.protocol.LocatedBlocks; import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport; import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing; import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus; @@ -2192,4 +2194,75 @@ public final class FSOperations { return null; } } + + /** + * Executor that performs a getFileBlockLocations operation. + */ + + @InterfaceAudience.Private + @SuppressWarnings("rawtypes") + public static class FSFileBlockLocations implements FileSystemAccess.FileSystemExecutor { + final private Path path; + final private long offsetValue; + final private long lengthValue; + + /** + * Creates a file-block-locations executor. 
+ * + * @param path the path to retrieve the location + * @param offsetValue offset into the given file + * @param lengthValue length for which to get locations for + */ + public FSFileBlockLocations(String path, long offsetValue, long lengthValue) { + this.path = new Path(path); + this.offsetValue = offsetValue; + this.lengthValue = lengthValue; + } + + @Override + public Map execute(FileSystem fs) throws IOException { + BlockLocation[] locations = fs.getFileBlockLocations(this.path, + this.offsetValue, this.lengthValue); + return JsonUtil.toJsonMap(locations); + } + } + + /** + * Executor that performs a getFileBlockLocations operation for legacy + * clients that supports only GET_BLOCK_LOCATIONS. + */ + + @InterfaceAudience.Private + @SuppressWarnings("rawtypes") + public static class FSFileBlockLocationsLegacy + implements FileSystemAccess.FileSystemExecutor { + final private Path path; + final private long offsetValue; + final private long lengthValue; + + /** + * Creates a file-block-locations executor. + * + * @param path the path to retrieve the location + * @param offsetValue offset into the given file + * @param lengthValue length for which to get locations for + */ + public FSFileBlockLocationsLegacy(String path, long offsetValue, long lengthValue) { + this.path = new Path(path); + this.offsetValue = offsetValue; + this.lengthValue = lengthValue; + } + + @Override + public Map execute(FileSystem fs) throws IOException { + if (fs instanceof DistributedFileSystem) { + DistributedFileSystem dfs = (DistributedFileSystem)fs; + LocatedBlocks locations = dfs.getLocatedBlocks( + this.path, this.offsetValue, this.lengthValue); + return JsonUtil.toJsonMap(locations); + } + throw new IOException("Unable to support FSFileBlockLocationsLegacy " + + "because the file system is not DistributedFileSystem."); + } + } } \ No newline at end of file diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java index 41009c7b516..6943636f8fc 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java +++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java @@ -60,7 +60,8 @@ public class HttpFSParametersProvider extends ParametersProvider { PARAMS_DEF.put(Operation.GETQUOTAUSAGE, new Class[]{}); PARAMS_DEF.put(Operation.GETFILECHECKSUM, new Class[]{NoRedirectParam.class}); - PARAMS_DEF.put(Operation.GETFILEBLOCKLOCATIONS, new Class[]{}); + PARAMS_DEF.put(Operation.GETFILEBLOCKLOCATIONS, + new Class[] {OffsetParam.class, LenParam.class}); PARAMS_DEF.put(Operation.GETACLSTATUS, new Class[]{}); PARAMS_DEF.put(Operation.GETTRASHROOT, new Class[]{}); PARAMS_DEF.put(Operation.INSTRUMENTATION, new Class[]{}); @@ -127,6 +128,7 @@ public class HttpFSParametersProvider extends ParametersProvider { PARAMS_DEF.put(Operation.GETECPOLICY, new Class[] {}); PARAMS_DEF.put(Operation.UNSETECPOLICY, new Class[] {}); PARAMS_DEF.put(Operation.SATISFYSTORAGEPOLICY, new Class[] {}); + PARAMS_DEF.put(Operation.GET_BLOCK_LOCATIONS, new Class[] {OffsetParam.class, LenParam.class}); } public HttpFSParametersProvider() { diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java index 399ff3bde9c..b50d24900ac 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java +++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java @@ -370,7 +370,23 @@ public class HttpFSServer { break; } case GETFILEBLOCKLOCATIONS: { - response = Response.status(Response.Status.BAD_REQUEST).build(); + long offset = 0; + long len = Long.MAX_VALUE; + Long offsetParam = params.get(OffsetParam.NAME, OffsetParam.class); + Long lenParam = params.get(LenParam.NAME, LenParam.class); + AUDIT_LOG.info("[{}] offset [{}] len [{}]", path, offsetParam, lenParam); + if (offsetParam != null && offsetParam > 0) { + offset = offsetParam; + } + if (lenParam != null && lenParam > 0) { + len = lenParam; + } + FSOperations.FSFileBlockLocations command = + new FSOperations.FSFileBlockLocations(path, offset, len); + @SuppressWarnings("rawtypes") + Map locations = fsExecute(user, command); + final String json = JsonUtil.toJsonString("BlockLocations", locations); + response = Response.ok(json).type(MediaType.APPLICATION_JSON).build(); break; } case GETACLSTATUS: { @@ -510,6 +526,26 @@ public class HttpFSServer { response = Response.ok(js).type(MediaType.APPLICATION_JSON).build(); break; } + case GET_BLOCK_LOCATIONS: { + long offset = 0; + long len = Long.MAX_VALUE; + Long offsetParam = params.get(OffsetParam.NAME, OffsetParam.class); + Long lenParam = params.get(LenParam.NAME, LenParam.class); + AUDIT_LOG.info("[{}] offset [{}] len [{}]", path, offsetParam, lenParam); + if (offsetParam != null && offsetParam > 0) { + offset = offsetParam; + } + if (lenParam != null && lenParam > 0) { + len = lenParam; + } + FSOperations.FSFileBlockLocationsLegacy command = + new FSOperations.FSFileBlockLocationsLegacy(path, offset, len); + @SuppressWarnings("rawtypes") + Map locations = fsExecute(user, command); + final String json = JsonUtil.toJsonString("LocatedBlocks", locations); + response = Response.ok(json).type(MediaType.APPLICATION_JSON).build(); + break; + } default: { throw new IOException( MessageFormat.format("Invalid HTTP GET operation [{0}]", op.value())); diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java index 9ef24ec734a..41dc03d59e2 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java +++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java @@ -19,6 +19,7 @@ package org.apache.hadoop.fs.http.client; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.BlockLocation; import org.apache.hadoop.fs.BlockStoragePolicySpi; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.fs.ContentSummary; @@ -73,6 +74,10 @@ import org.apache.hadoop.test.TestHdfsHelper; import org.apache.hadoop.test.TestJetty; import org.apache.hadoop.test.TestJettyHelper; import org.apache.hadoop.util.Lists; + +import org.json.simple.JSONObject; +import org.json.simple.parser.ContainerFactory; +import org.json.simple.parser.JSONParser; import org.junit.Assert; import org.junit.Assume; import org.junit.Test; @@ -101,11 +106,11 @@ import java.util.regex.Pattern; import static 
org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertTrue; @RunWith(value = Parameterized.class) public abstract class BaseTestHttpFSWith extends HFSTestCase { - protected abstract Path getProxiedFSTestDir(); protected abstract String getProxiedFSURI(); @@ -191,7 +196,7 @@ public abstract class BaseTestHttpFSWith extends HFSTestCase { protected void testGet() throws Exception { FileSystem fs = getHttpFSFileSystem(); - Assert.assertNotNull(fs); + assertNotNull(fs); URI uri = new URI(getScheme() + "://" + TestJettyHelper.getJettyURL().toURI().getAuthority()); assertEquals(fs.getUri(), uri); @@ -1201,7 +1206,7 @@ public abstract class BaseTestHttpFSWith extends HFSTestCase { ALLOW_SNAPSHOT, DISALLOW_SNAPSHOT, DISALLOW_SNAPSHOT_EXCEPTION, FILE_STATUS_ATTR, GET_SNAPSHOT_DIFF, GET_SNAPSHOTTABLE_DIRECTORY_LIST, GET_SNAPSHOT_LIST, GET_SERVERDEFAULTS, CHECKACCESS, SETECPOLICY, - SATISFYSTORAGEPOLICY, GET_SNAPSHOT_DIFF_LISTING + SATISFYSTORAGEPOLICY, GET_SNAPSHOT_DIFF_LISTING, GETFILEBLOCKLOCATIONS } private void operation(Operation op) throws Exception { @@ -1341,6 +1346,9 @@ public abstract class BaseTestHttpFSWith extends HFSTestCase { case GET_SNAPSHOT_DIFF_LISTING: testGetSnapshotDiffListing(); break; + case GETFILEBLOCKLOCATIONS: + testGetFileBlockLocations(); + break; } } @@ -1959,6 +1967,37 @@ public abstract class BaseTestHttpFSWith extends HFSTestCase { } } + private void testGetFileBlockLocations() throws Exception { + BlockLocation[] blockLocations; + Path testFile; + if (!this.isLocalFS()) { + FileSystem fs = this.getHttpFSFileSystem(); + testFile = new Path(getProxiedFSTestDir(), "singleBlock.txt"); + DFSTestUtil.createFile(fs, testFile, 1, (short) 1, 0L); + if (fs instanceof HttpFSFileSystem) { + HttpFSFileSystem httpFS = (HttpFSFileSystem) fs; + blockLocations = httpFS.getFileBlockLocations(testFile, 0, 1); + assertNotNull(blockLocations); + + // verify HttpFSFileSystem.toBlockLocations() + String jsonString = JsonUtil.toJsonString(blockLocations); + JSONParser parser = new JSONParser(); + JSONObject jsonObject = (JSONObject) parser.parse(jsonString, (ContainerFactory) null); + BlockLocation[] deserializedLocation = HttpFSFileSystem.toBlockLocations(jsonObject); + assertEquals(blockLocations.length, deserializedLocation.length); + for (int i = 0; i < blockLocations.length; i++) { + assertEquals(blockLocations[i].toString(), deserializedLocation[i].toString()); + } + } else if (fs instanceof WebHdfsFileSystem) { + WebHdfsFileSystem webHdfsFileSystem = (WebHdfsFileSystem) fs; + blockLocations = webHdfsFileSystem.getFileBlockLocations(testFile, 0, 1); + assertNotNull(blockLocations); + } else { + Assert.fail(fs.getClass().getSimpleName() + " doesn't support access"); + } + } + } + private void testGetSnapshotDiffListing() throws Exception { if (!this.isLocalFS()) { // Create a directory with snapshot allowed diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java index f584b56ebbc..89efdb24edf 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java +++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java @@ -36,12 +36,14 @@ import 
org.apache.hadoop.hdfs.protocol.SnapshotStatus; import org.apache.hadoop.hdfs.protocol.SystemErasureCodingPolicies; import org.apache.hadoop.hdfs.server.common.HdfsServerConstants; import org.apache.hadoop.hdfs.web.JsonUtil; +import org.apache.hadoop.hdfs.web.JsonUtilClient; import org.apache.hadoop.lib.service.FileSystemAccess; import org.apache.hadoop.security.authentication.util.SignerSecretProvider; import org.apache.hadoop.security.authentication.util.StringSignerSecretProviderCreator; import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier; import org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator; import org.apache.hadoop.security.token.delegation.web.KerberosDelegationTokenAuthenticationHandler; +import org.apache.hadoop.util.JsonSerialization; import org.json.simple.JSONArray; import org.junit.Assert; @@ -70,6 +72,7 @@ import java.util.Set; import org.apache.commons.io.IOUtils; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.BlockLocation; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.FsServerDefaults; import org.apache.hadoop.fs.Path; @@ -2003,4 +2006,38 @@ public class TestHttpFSServer extends HFSTestCase { () -> HttpFSUtils.jsonParse(conn)); conn.disconnect(); } + + @Test + @TestDir + @TestJetty + @TestHdfs + public void testGetFileBlockLocations() throws Exception { + createHttpFSServer(false, false); + // Create a test directory + String pathStr = "/tmp/tmp-get-block-location-test"; + createDirWithHttp(pathStr, "700", null); + + Path path = new Path(pathStr); + DistributedFileSystem dfs = (DistributedFileSystem) FileSystem + .get(path.toUri(), TestHdfsHelper.getHdfsConf()); + + String file1 = pathStr + "/file1"; + createWithHttp(file1, null); + HttpURLConnection conn = sendRequestToHttpFSServer(file1, + "GETFILEBLOCKLOCATIONS", "length=10&offset10"); + Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode()); + BlockLocation[] locations1 = dfs.getFileBlockLocations(new Path(file1), 0, 1); + Assert.assertNotNull(locations1); + + Map jsonMap = JsonSerialization.mapReader().readValue(conn.getInputStream()); + + BlockLocation[] httpfsBlockLocations = JsonUtilClient.toBlockLocationArray(jsonMap); + + assertEquals(locations1.length, httpfsBlockLocations.length); + for (int i = 0; i < locations1.length; i++) { + assertEquals(locations1[i].toString(), httpfsBlockLocations[i].toString()); + } + + conn.getInputStream().close(); + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CONTRIBUTING.md b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CONTRIBUTING.md index 0d081b2c1b0..4e403e6f706 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CONTRIBUTING.md +++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CONTRIBUTING.md @@ -124,7 +124,7 @@ Please make sure you write code that is portable. * Don't write code that could force a non-aligned word access. * This causes performance issues on most architectures and isn't supported at all on some. * Generally the compiler will prevent this unless you are doing clever things with pointers e.g. abusing placement new or reinterpreting a pointer into a pointer to a wider type. -* If a type needs to be a a specific width make sure to specify it. +* If a type needs to be a specific width make sure to specify it. * `int32_t my_32_bit_wide_int` * Avoid using compiler dependent pragmas or attributes. 
* If there is a justified and unavoidable reason for using these you must document why. See examples below. diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java index d65ebdb628e..db1dcdf1818 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java @@ -128,9 +128,13 @@ public class MembershipNamenodeResolver // Our cache depends on the store, update it first try { MembershipStore membership = getMembershipStore(); - membership.loadCache(force); + if (!membership.loadCache(force)) { + return false; + } DisabledNameserviceStore disabled = getDisabledNameserviceStore(); - disabled.loadCache(force); + if (!disabled.loadCache(force)) { + return false; + } } catch (IOException e) { LOG.error("Cannot update membership from the State Store", e); } @@ -189,13 +193,53 @@ public class MembershipNamenodeResolver } } + /** + * Try to shuffle the multiple observer namenodes if listObserversFirst is true. + * @param inputNameNodes the input FederationNamenodeContext list. If listObserversFirst is true, + * all observers will be placed at the front of the collection. + * @param listObserversFirst true if we need to shuffle the multiple front observer namenodes. + * @return a list of FederationNamenodeContext. + * @param a subclass of FederationNamenodeContext. + */ + private List shuffleObserverNN( + List inputNameNodes, boolean listObserversFirst) { + if (!listObserversFirst) { + return inputNameNodes; + } + // Get Observers first. + List observerList = new ArrayList<>(); + for (T t : inputNameNodes) { + if (t.getState() == OBSERVER) { + observerList.add(t); + } else { + // The inputNameNodes are already sorted, so it can break + // when the first non-observer is encountered. 
+ break; + } + } + // Returns the inputNameNodes if no shuffle is required + if (observerList.size() <= 1) { + return inputNameNodes; + } + + // Shuffle multiple Observers + Collections.shuffle(observerList); + + List ret = new ArrayList<>(inputNameNodes.size()); + ret.addAll(observerList); + for (int i = observerList.size(); i < inputNameNodes.size(); i++) { + ret.add(inputNameNodes.get(i)); + } + return Collections.unmodifiableList(ret); + } + @Override public List getNamenodesForNameserviceId( final String nsId, boolean listObserversFirst) throws IOException { List ret = cacheNS.get(Pair.of(nsId, listObserversFirst)); if (ret != null) { - return ret; + return shuffleObserverNN(ret, listObserversFirst); } // Not cached, generate the value diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java index 2888d8cc501..1fdd4cdfba8 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java @@ -398,7 +398,9 @@ public class MountTableResolver try { // Our cache depends on the store, update it first MountTableStore mountTable = this.getMountTableStore(); - mountTable.loadCache(force); + if (!mountTable.loadCache(force)) { + return false; + } GetMountTableEntriesRequest request = GetMountTableEntriesRequest.newInstance("/"); diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java index c6db9837c7c..e9253427783 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java @@ -77,10 +77,6 @@ public class ConnectionManager { * Global federated namespace context for router. */ private final RouterStateIdContext routerStateIdContext; - /** - * Map from connection pool ID to namespace. - */ - private final Map connectionPoolToNamespaceMap; /** Max size of queue for creating new connections. 
*/ private final int creatorQueueMaxSize; @@ -105,7 +101,6 @@ public class ConnectionManager { public ConnectionManager(Configuration config, RouterStateIdContext routerStateIdContext) { this.conf = config; this.routerStateIdContext = routerStateIdContext; - this.connectionPoolToNamespaceMap = new HashMap<>(); // Configure minimum, maximum and active connection pools this.maxSize = this.conf.getInt( RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE, @@ -172,10 +167,6 @@ public class ConnectionManager { pool.close(); } this.pools.clear(); - for (String nsID: connectionPoolToNamespaceMap.values()) { - routerStateIdContext.removeNamespaceStateId(nsID); - } - connectionPoolToNamespaceMap.clear(); } finally { writeLock.unlock(); } @@ -224,15 +215,15 @@ public class ConnectionManager { this.minActiveRatio, protocol, new PoolAlignmentContext(this.routerStateIdContext, nsId)); this.pools.put(connectionId, pool); - this.connectionPoolToNamespaceMap.put(connectionId, nsId); } - long clientStateId = RouterStateIdContext.getClientStateIdFromCurrentCall(nsId); - pool.getPoolAlignmentContext().advanceClientStateId(clientStateId); } finally { writeLock.unlock(); } } + long clientStateId = RouterStateIdContext.getClientStateIdFromCurrentCall(nsId); + pool.getPoolAlignmentContext().advanceClientStateId(clientStateId); + ConnectionContext conn = pool.getConnection(); // Add a new connection to the pool if it wasn't usable @@ -450,11 +441,6 @@ public class ConnectionManager { try { for (ConnectionPoolId poolId : toRemove) { pools.remove(poolId); - String nsID = connectionPoolToNamespaceMap.get(poolId); - connectionPoolToNamespaceMap.remove(poolId); - if (!connectionPoolToNamespaceMap.values().contains(nsID)) { - routerStateIdContext.removeNamespaceStateId(nsID); - } } } finally { writeLock.unlock(); diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PoolAlignmentContext.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PoolAlignmentContext.java index 571f41c4d54..1f2b12d445f 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PoolAlignmentContext.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PoolAlignmentContext.java @@ -20,6 +20,8 @@ package org.apache.hadoop.hdfs.server.federation.router; import java.io.IOException; import java.util.concurrent.atomic.LongAccumulator; + +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.ipc.AlignmentContext; import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos; @@ -71,8 +73,7 @@ public class PoolAlignmentContext implements AlignmentContext { */ @Override public void updateRequestState(RpcHeaderProtos.RpcRequestHeaderProto.Builder header) { - long maxStateId = Long.max(poolLocalStateId.get(), sharedGlobalStateId.get()); - header.setStateId(maxStateId); + header.setStateId(poolLocalStateId.get()); } /** @@ -100,4 +101,9 @@ public class PoolAlignmentContext implements AlignmentContext { public void advanceClientStateId(Long clientStateId) { poolLocalStateId.accumulate(clientStateId); } + + @VisibleForTesting + public long getPoolLocalStateId() { + return this.poolLocalStateId.get(); + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java index c598076f636..7e07d7b6549 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java @@ -239,6 +239,18 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic { public static final long FEDERATION_STORE_ROUTER_EXPIRATION_DELETION_MS_DEFAULT = -1; + // HDFS Router-based federation State Store ZK DRIVER + public static final String FEDERATION_STORE_ZK_DRIVER_PREFIX = + RBFConfigKeys.FEDERATION_STORE_PREFIX + "driver.zk."; + public static final String FEDERATION_STORE_ZK_PARENT_PATH = + FEDERATION_STORE_ZK_DRIVER_PREFIX + "parent-path"; + public static final String FEDERATION_STORE_ZK_PARENT_PATH_DEFAULT = + "/hdfs-federation"; + public static final String FEDERATION_STORE_ZK_ASYNC_MAX_THREADS = + FEDERATION_STORE_ZK_DRIVER_PREFIX + "async.max.threads"; + public static final int FEDERATION_STORE_ZK_ASYNC_MAX_THREADS_DEFAULT = + -1; + // HDFS Router safe mode public static final String DFS_ROUTER_SAFEMODE_ENABLE = FEDERATION_ROUTER_PREFIX + "safemode.enable"; diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RemoteMethod.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RemoteMethod.java index e5df4893a91..ecaa97b9330 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RemoteMethod.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RemoteMethod.java @@ -131,7 +131,7 @@ public class RemoteMethod { /** * Get the represented java method. * - * @return Method + * @return {@link Method} * @throws IOException If the method cannot be found. 
*/ public Method getMethod() throws IOException { diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java index a5f83c95b7b..ee8ae5885a6 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java @@ -702,8 +702,9 @@ public class RouterClientProtocol implements ClientProtocol { RemoteMethod method = new RemoteMethod("truncate", new Class[] {String.class, long.class, String.class}, new RemoteParam(), newLength, clientName); + // Truncate can return true/false, so don't expect a result return rpcClient.invokeSequential(locations, method, Boolean.class, - Boolean.TRUE); + null); } @Override diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java index c4173163436..45676cea63a 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java @@ -53,6 +53,7 @@ import java.util.concurrent.ExecutionException; import java.util.concurrent.Executors; import java.util.concurrent.ThreadFactory; import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hdfs.HAUtil; @@ -203,6 +204,9 @@ public class RouterRpcServer extends AbstractService implements ClientProtocol, /** Router using this RPC server. */ private final Router router; + /** Alignment context storing state IDs for all namespaces this router serves. */ + private final RouterStateIdContext routerStateIdContext; + /** The RPC server that listens to requests from clients. */ private final Server rpcServer; /** The address for this RPC server. */ @@ -321,7 +325,7 @@ public class RouterRpcServer extends AbstractService implements ClientProtocol, // Create security manager this.securityManager = new RouterSecurityManager(this.conf); - RouterStateIdContext routerStateIdContext = new RouterStateIdContext(conf); + routerStateIdContext = new RouterStateIdContext(conf); this.rpcServer = new RPC.Builder(this.conf) .setProtocol(ClientNamenodeProtocolPB.class) @@ -410,9 +414,38 @@ public class RouterRpcServer extends AbstractService implements ClientProtocol, .forEach(this.dnCache::refresh), 0, dnCacheExpire, TimeUnit.MILLISECONDS); + + Executors + .newSingleThreadScheduledExecutor() + .scheduleWithFixedDelay(this::clearStaleNamespacesInRouterStateIdContext, + 0, + conf.getLong(RBFConfigKeys.FEDERATION_STORE_MEMBERSHIP_EXPIRATION_MS, + RBFConfigKeys.FEDERATION_STORE_MEMBERSHIP_EXPIRATION_MS_DEFAULT), + TimeUnit.MILLISECONDS); + initRouterFedRename(); } + /** + * Clear expired namespace in the shared RouterStateIdContext. 
+ */ + private void clearStaleNamespacesInRouterStateIdContext() { + try { + final Set resolvedNamespaces = namenodeResolver.getNamespaces() + .stream() + .map(FederationNamespaceInfo::getNameserviceId) + .collect(Collectors.toSet()); + + routerStateIdContext.getNamespaces().forEach(namespace -> { + if (!resolvedNamespaces.contains(namespace)) { + routerStateIdContext.removeNamespaceStateId(namespace); + } + }); + } catch (IOException e) { + LOG.warn("Could not fetch current list of namespaces.", e); + } + } + /** * Init the router federation rename environment. Each router has its own * journal path. @@ -510,6 +543,15 @@ public class RouterRpcServer extends AbstractService implements ClientProtocol, return this.fedRenameScheduler; } + /** + * Get the routerStateIdContext used by this server. + * @return routerStateIdContext + */ + @VisibleForTesting + protected RouterStateIdContext getRouterStateIdContext() { + return routerStateIdContext; + } + /** * Get the RPC security manager. * diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStateIdContext.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStateIdContext.java index 9d2b75b0b55..b3bab732c02 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStateIdContext.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStateIdContext.java @@ -22,14 +22,15 @@ import java.lang.reflect.Method; import java.util.Collections; import java.util.HashSet; +import java.util.List; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.atomic.LongAccumulator; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos; import org.apache.hadoop.hdfs.protocol.ClientProtocol; +import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.RouterFederatedStateProto; import org.apache.hadoop.hdfs.server.namenode.ha.ReadOnly; import org.apache.hadoop.ipc.AlignmentContext; import org.apache.hadoop.ipc.RetriableException; @@ -83,16 +84,19 @@ class RouterStateIdContext implements AlignmentContext { if (namespaceIdMap.isEmpty()) { return; } - HdfsServerFederationProtos.RouterFederatedStateProto.Builder federatedStateBuilder = - HdfsServerFederationProtos.RouterFederatedStateProto.newBuilder(); - namespaceIdMap.forEach((k, v) -> federatedStateBuilder.putNamespaceStateIds(k, v.get())); - headerBuilder.setRouterFederatedState(federatedStateBuilder.build().toByteString()); + RouterFederatedStateProto.Builder builder = RouterFederatedStateProto.newBuilder(); + namespaceIdMap.forEach((k, v) -> builder.putNamespaceStateIds(k, v.get())); + headerBuilder.setRouterFederatedState(builder.build().toByteString()); } public LongAccumulator getNamespaceStateId(String nsId) { return namespaceIdMap.computeIfAbsent(nsId, key -> new LongAccumulator(Math::max, Long.MIN_VALUE)); } + public List getNamespaces() { + return Collections.list(namespaceIdMap.keys()); + } + public void removeNamespaceStateId(String nsId) { namespaceIdMap.remove(nsId); } @@ -102,9 +106,9 @@ class RouterStateIdContext implements AlignmentContext { */ public static Map getRouterFederatedStateMap(ByteString byteString) { if (byteString != null) { 
- HdfsServerFederationProtos.RouterFederatedStateProto federatedState; + RouterFederatedStateProto federatedState; try { - federatedState = HdfsServerFederationProtos.RouterFederatedStateProto.parseFrom(byteString); + federatedState = RouterFederatedStateProto.parseFrom(byteString); } catch (InvalidProtocolBufferException e) { throw new RuntimeException(e); } diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/CachedRecordStore.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/CachedRecordStore.java index 2b693aa936f..6fea9b9946d 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/CachedRecordStore.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/CachedRecordStore.java @@ -100,7 +100,7 @@ public abstract class CachedRecordStore * @throws StateStoreUnavailableException If the cache is not initialized. */ private void checkCacheAvailable() throws StateStoreUnavailableException { - if (!this.initialized) { + if (!getDriver().isDriverReady() || !this.initialized) { throw new StateStoreUnavailableException( "Cached State Store not initialized, " + getRecordClass().getSimpleName() + " records not valid"); @@ -125,7 +125,6 @@ public abstract class CachedRecordStore } catch (IOException e) { LOG.error("Cannot get \"{}\" records from the State Store", getRecordClass().getSimpleName()); - this.initialized = false; return false; } diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreService.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreService.java index 201c7a325f1..77939799e72 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreService.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreService.java @@ -272,6 +272,15 @@ public class StateStoreService extends CompositeService { return null; } + /** + * Get the list of all RecordStores. + * @return a list of each RecordStore. + */ + @SuppressWarnings("unchecked") + public > List getRecordStores() { + return new ArrayList<>((Collection) recordStores.values()); + } + /** * List of records supported by this State Store. * diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/StateStoreRecordOperations.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/StateStoreRecordOperations.java index b5ce8f8d411..3b781cb485b 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/StateStoreRecordOperations.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/StateStoreRecordOperations.java @@ -56,7 +56,7 @@ public interface StateStoreRecordOperations { * @param clazz Class of record to fetch. * @param query Query to filter results. * @return A single record matching the query. Null if there are no matching - * records or more than one matching record in the store. + * records. * @throws IOException If multiple records match or if the data store cannot * be queried. 
*/ diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java index 871919594f5..c93d919aea0 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java @@ -85,7 +85,8 @@ public abstract class StateStoreFileBaseImpl * @param path Path of the record to write. * @return Writer for the record. */ - protected abstract BufferedWriter getWriter( + @VisibleForTesting + public abstract BufferedWriter getWriter( String path); /** @@ -348,25 +349,18 @@ public abstract class StateStoreFileBaseImpl for (Entry entry : toWrite.entrySet()) { String recordPath = entry.getKey(); String recordPathTemp = recordPath + "." + now() + TMP_MARK; - BufferedWriter writer = getWriter(recordPathTemp); - try { + boolean recordWrittenSuccessfully = true; + try (BufferedWriter writer = getWriter(recordPathTemp)) { T record = entry.getValue(); String line = serializeString(record); writer.write(line); } catch (IOException e) { LOG.error("Cannot write {}", recordPathTemp, e); + recordWrittenSuccessfully = false; success = false; - } finally { - if (writer != null) { - try { - writer.close(); - } catch (IOException e) { - LOG.error("Cannot close the writer for {}", recordPathTemp, e); - } - } } // Commit - if (!rename(recordPathTemp, recordPath)) { + if (recordWrittenSuccessfully && !rename(recordPathTemp, recordPath)) { LOG.error("Failed committing record into {}", recordPath); success = false; } diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileImpl.java index 9d2b1ab2fb7..6ca26637161 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileImpl.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileImpl.java @@ -31,6 +31,7 @@ import java.util.Collections; import java.util.List; import org.apache.commons.lang3.ArrayUtils; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys; import org.apache.hadoop.hdfs.server.federation.store.records.BaseRecord; import org.slf4j.Logger; @@ -125,7 +126,8 @@ public class StateStoreFileImpl extends StateStoreFileBaseImpl { } @Override - protected BufferedWriter getWriter(String filename) { + @VisibleForTesting + public BufferedWriter getWriter(String filename) { BufferedWriter writer = null; try { LOG.debug("Writing file: {}", filename); diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileSystemImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileSystemImpl.java index e6bf159e2f5..ee34d8a4cab 100644 --- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileSystemImpl.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileSystemImpl.java @@ -28,13 +28,14 @@ import java.util.ArrayList; import java.util.Collections; import java.util.List; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.fs.FSDataInputStream; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.fs.Options; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hdfs.DistributedFileSystem; import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys; import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver; import org.apache.hadoop.hdfs.server.federation.store.records.BaseRecord; @@ -82,17 +83,8 @@ public class StateStoreFileSystemImpl extends StateStoreFileBaseImpl { @Override protected boolean rename(String src, String dst) { try { - if (fs instanceof DistributedFileSystem) { - DistributedFileSystem dfs = (DistributedFileSystem)fs; - dfs.rename(new Path(src), new Path(dst), Options.Rename.OVERWRITE); - return true; - } else { - // Replace should be atomic but not available - if (fs.exists(new Path(dst))) { - fs.delete(new Path(dst), true); - } - return fs.rename(new Path(src), new Path(dst)); - } + FileUtil.rename(fs, new Path(src), new Path(dst), Options.Rename.OVERWRITE); + return true; } catch (Exception e) { LOG.error("Cannot rename {} to {}", src, dst, e); return false; @@ -148,7 +140,8 @@ public class StateStoreFileSystemImpl extends StateStoreFileBaseImpl { } @Override - protected BufferedWriter getWriter(String pathName) { + @VisibleForTesting + public BufferedWriter getWriter(String pathName) { BufferedWriter writer = null; Path path = new Path(pathName); try { diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreZooKeeperImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreZooKeeperImpl.java index 45442da0ab5..7882c8f8273 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreZooKeeperImpl.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreZooKeeperImpl.java @@ -25,7 +25,16 @@ import static org.apache.hadoop.util.Time.monotonicNow; import java.io.IOException; import java.util.ArrayList; import java.util.List; +import java.util.concurrent.Callable; +import java.util.concurrent.Future; +import java.util.concurrent.LinkedBlockingQueue; +import java.util.concurrent.ThreadFactory; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import org.apache.hadoop.classification.VisibleForTesting; +import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; import org.apache.curator.framework.CuratorFramework; import org.apache.curator.framework.imps.CuratorFrameworkState; import org.apache.hadoop.conf.Configuration; @@ -57,14 +66,9 @@ public class StateStoreZooKeeperImpl extends StateStoreSerializableImpl { private static final Logger LOG = 
LoggerFactory.getLogger(StateStoreZooKeeperImpl.class); - - /** Configuration keys. */ - public static final String FEDERATION_STORE_ZK_DRIVER_PREFIX = - RBFConfigKeys.FEDERATION_STORE_PREFIX + "driver.zk."; - public static final String FEDERATION_STORE_ZK_PARENT_PATH = - FEDERATION_STORE_ZK_DRIVER_PREFIX + "parent-path"; - public static final String FEDERATION_STORE_ZK_PARENT_PATH_DEFAULT = - "/hdfs-federation"; + /** Service to get/update zk state. */ + private ThreadPoolExecutor executorService; + private boolean enableConcurrent; /** Directory to store the state store data. */ @@ -82,8 +86,22 @@ public class StateStoreZooKeeperImpl extends StateStoreSerializableImpl { Configuration conf = getConf(); baseZNode = conf.get( - FEDERATION_STORE_ZK_PARENT_PATH, - FEDERATION_STORE_ZK_PARENT_PATH_DEFAULT); + RBFConfigKeys.FEDERATION_STORE_ZK_PARENT_PATH, + RBFConfigKeys.FEDERATION_STORE_ZK_PARENT_PATH_DEFAULT); + int numThreads = conf.getInt( + RBFConfigKeys.FEDERATION_STORE_ZK_ASYNC_MAX_THREADS, + RBFConfigKeys.FEDERATION_STORE_ZK_ASYNC_MAX_THREADS_DEFAULT); + enableConcurrent = numThreads > 0; + if (enableConcurrent) { + ThreadFactory threadFactory = new ThreadFactoryBuilder() + .setNameFormat("StateStore ZK Client-%d") + .build(); + this.executorService = new ThreadPoolExecutor(numThreads, numThreads, + 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>(), threadFactory); + LOG.info("Init StateStoreZookeeperImpl by async mode with {} threads.", numThreads); + } else { + LOG.info("Init StateStoreZookeeperImpl by sync mode."); + } try { this.zkManager = new ZKCuratorManager(conf); this.zkManager.start(); @@ -109,8 +127,16 @@ public class StateStoreZooKeeperImpl extends StateStoreSerializableImpl { } } + @VisibleForTesting + public void setEnableConcurrent(boolean enableConcurrent) { + this.enableConcurrent = enableConcurrent; + } + @Override public void close() throws Exception { + if (executorService != null) { + executorService.shutdown(); + } if (zkManager != null) { zkManager.close(); } @@ -136,34 +162,21 @@ public class StateStoreZooKeeperImpl extends StateStoreSerializableImpl { List ret = new ArrayList<>(); String znode = getZNodeForClass(clazz); try { - List children = zkManager.getChildren(znode); - for (String child : children) { - try { - String path = getNodePath(znode, child); - Stat stat = new Stat(); - String data = zkManager.getStringData(path, stat); - boolean corrupted = false; - if (data == null || data.equals("")) { - // All records should have data, otherwise this is corrupted - corrupted = true; - } else { - try { - T record = createRecord(data, stat, clazz); - ret.add(record); - } catch (IOException e) { - LOG.error("Cannot create record type \"{}\" from \"{}\": {}", - clazz.getSimpleName(), data, e.getMessage()); - corrupted = true; - } + List> callables = new ArrayList<>(); + zkManager.getChildren(znode).forEach(c -> callables.add(() -> getRecord(clazz, znode, c))); + if (enableConcurrent) { + List> futures = executorService.invokeAll(callables); + for (Future future : futures) { + if (future.get() != null) { + ret.add(future.get()); } - - if (corrupted) { - LOG.error("Cannot get data for {} at {}, cleaning corrupted data", - child, path); - zkManager.delete(path); + } + } else { + for (Callable callable : callables) { + T record = callable.call(); + if (record != null) { + ret.add(record); } - } catch (Exception e) { - LOG.error("Cannot get data for {}: {}", child, e.getMessage()); } } } catch (Exception e) { @@ -178,6 +191,44 @@ public class 
StateStoreZooKeeperImpl extends StateStoreSerializableImpl { return new QueryResult(ret, getTime()); } + /** + * Get one data record in the StateStore or delete it if it's corrupted. + * + * @param clazz Record class to evaluate. + * @param znode The ZNode for the class. + * @param child The child for znode to get. + * @return The record to get. + */ + private T getRecord(Class clazz, String znode, String child) { + T record = null; + try { + String path = getNodePath(znode, child); + Stat stat = new Stat(); + String data = zkManager.getStringData(path, stat); + boolean corrupted = false; + if (data == null || data.equals("")) { + // All records should have data, otherwise this is corrupted + corrupted = true; + } else { + try { + record = createRecord(data, stat, clazz); + } catch (IOException e) { + LOG.error("Cannot create record type \"{}\" from \"{}\": {}", + clazz.getSimpleName(), data, e.getMessage()); + corrupted = true; + } + } + + if (corrupted) { + LOG.error("Cannot get data for {} at {}, cleaning corrupted data", child, path); + zkManager.delete(path); + } + } catch (Exception e) { + LOG.error("Cannot get data for {}: {}", child, e.getMessage()); + } + return record; + } + @Override public boolean putAll( List records, boolean update, boolean error) throws IOException { @@ -192,22 +243,40 @@ public class StateStoreZooKeeperImpl extends StateStoreSerializableImpl { String znode = getZNodeForClass(recordClass); long start = monotonicNow(); - boolean status = true; - for (T record : records) { - String primaryKey = getPrimaryKey(record); - String recordZNode = getNodePath(znode, primaryKey); - byte[] data = serialize(record); - if (!writeNode(recordZNode, data, update, error)){ - status = false; + final AtomicBoolean status = new AtomicBoolean(true); + List> callables = new ArrayList<>(); + records.forEach(record -> + callables.add( + () -> { + String primaryKey = getPrimaryKey(record); + String recordZNode = getNodePath(znode, primaryKey); + byte[] data = serialize(record); + if (!writeNode(recordZNode, data, update, error)) { + status.set(false); + } + return null; + } + ) + ); + try { + if (enableConcurrent) { + executorService.invokeAll(callables); + } else { + for(Callable callable : callables) { + callable.call(); + } } + } catch (Exception e) { + LOG.error("Write record failed : {}", e.getMessage(), e); + throw new IOException(e); } long end = monotonicNow(); - if (status) { + if (status.get()) { getMetrics().addWrite(end - start); } else { getMetrics().addFailure(end - start); } - return status; + return status.get(); } @Override diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java index a63a0f3b3ab..5d22b77afe2 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java @@ -185,7 +185,9 @@ public class MembershipStoreImpl @Override public boolean loadCache(boolean force) throws IOException { - super.loadCache(force); + if (!super.loadCache(force)) { + return false; + } // Update local cache atomically cacheWriteLock.lock(); diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java index d7fcf862fb6..3ecb4c2caba 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java @@ -18,6 +18,7 @@ package org.apache.hadoop.hdfs.tools.federation; import java.io.IOException; +import java.io.PrintStream; import java.net.InetSocketAddress; import java.util.Arrays; import java.util.Collection; @@ -26,6 +27,7 @@ import java.util.LinkedList; import java.util.List; import java.util.Map; import java.util.Map.Entry; +import java.util.TreeMap; import java.util.regex.Pattern; import org.apache.hadoop.classification.InterfaceAudience.Private; @@ -46,6 +48,10 @@ import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys; import org.apache.hadoop.hdfs.server.federation.router.RouterClient; import org.apache.hadoop.hdfs.server.federation.router.RouterQuotaUsage; import org.apache.hadoop.hdfs.server.federation.router.RouterStateManager; +import org.apache.hadoop.hdfs.server.federation.store.CachedRecordStore; +import org.apache.hadoop.hdfs.server.federation.store.RecordStore; +import org.apache.hadoop.hdfs.server.federation.store.StateStoreService; +import org.apache.hadoop.hdfs.server.federation.store.StateStoreUtils; import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest; import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse; import org.apache.hadoop.hdfs.server.federation.store.protocol.DisableNameserviceRequest; @@ -70,7 +76,9 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableE import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse; import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest; import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryResponse; +import org.apache.hadoop.hdfs.server.federation.store.records.BaseRecord; import org.apache.hadoop.hdfs.server.federation.store.records.MountTable; +import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord; import org.apache.hadoop.ipc.ProtobufRpcEngine2; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RefreshResponse; @@ -97,6 +105,7 @@ import static org.apache.hadoop.hdfs.server.federation.router.Quota.andByStorage public class RouterAdmin extends Configured implements Tool { private static final Logger LOG = LoggerFactory.getLogger(RouterAdmin.class); + private static final String DUMP_COMMAND = "-dumpState"; private RouterClient client; @@ -133,7 +142,7 @@ public class RouterAdmin extends Configured implements Tool { String[] commands = {"-add", "-update", "-rm", "-ls", "-getDestination", "-setQuota", "-setStorageTypeQuota", "-clrQuota", "-clrStorageTypeQuota", - "-safemode", "-nameservice", "-getDisabledNameservices", + DUMP_COMMAND, "-safemode", "-nameservice", "-getDisabledNameservices", "-refresh", "-refreshRouterArgs", "-refreshSuperUserGroupsConfiguration", "-refreshCallQueue"}; StringBuilder usage = new StringBuilder(); @@ -187,6 +196,8 @@ public class RouterAdmin extends Configured implements Tool { return "\t[-refreshSuperUserGroupsConfiguration]"; } else if (cmd.equals("-refreshCallQueue")) { return "\t[-refreshCallQueue]"; + } else if (cmd.equals(DUMP_COMMAND)) { + return "\t[" + DUMP_COMMAND + "]"; } 
return getUsage(null); } @@ -224,7 +235,8 @@ public class RouterAdmin extends Configured implements Tool { if (arg.length > 1) { throw new IllegalArgumentException("No arguments allowed"); } - } else if (arg[0].equals("-refreshCallQueue")) { + } else if (arg[0].equals("-refreshCallQueue") || + arg[0].equals(DUMP_COMMAND)) { if (arg.length > 1) { throw new IllegalArgumentException("No arguments allowed"); } @@ -286,6 +298,15 @@ public class RouterAdmin extends Configured implements Tool { return true; } + /** + * Does this command run in the local process? + * @param cmd the string of the command + * @return is this a local command? + */ + boolean isLocalCommand(String cmd) { + return DUMP_COMMAND.equals(cmd); + } + @Override public int run(String[] argv) throws Exception { if (argv.length < 1) { @@ -303,6 +324,10 @@ public class RouterAdmin extends Configured implements Tool { System.err.println("Not enough parameters specificed for cmd " + cmd); printUsage(cmd); return exitCode; + } else if (isLocalCommand(argv[0])) { + if (DUMP_COMMAND.equals(argv[0])) { + return dumpStateStore(getConf(), System.out) ? 0 : -1; + } } String address = null; // Initialize RouterClient @@ -1301,6 +1326,49 @@ public class RouterAdmin extends Configured implements Tool { return returnCode; } + /** + * Dumps the contents of the StateStore to stdout. + * @return true if it was successful + */ + public static boolean dumpStateStore(Configuration conf, + PrintStream output) throws IOException { + StateStoreService service = new StateStoreService(); + conf.setBoolean(RBFConfigKeys.DFS_ROUTER_METRICS_ENABLE, false); + service.init(conf); + service.loadDriver(); + if (!service.isDriverReady()) { + System.err.println("Can't initialize driver"); + return false; + } + // Get the stores sorted by name + Map> stores = new TreeMap<>(); + for(RecordStore store: service.getRecordStores()) { + String recordName = StateStoreUtils.getRecordName(store.getRecordClass()); + stores.put(recordName, store); + } + for (Entry> pair: stores.entrySet()) { + String recordName = pair.getKey(); + RecordStore store = pair.getValue(); + output.println("---- " + recordName + " ----"); + if (store instanceof CachedRecordStore) { + for (Object record: ((CachedRecordStore) store).getCachedRecords()) { + if (record instanceof BaseRecord && record instanceof PBRecord) { + BaseRecord baseRecord = (BaseRecord) record; + // Generate the pseudo-json format of the protobuf record + String recordString = ((PBRecord) record).getProto().toString(); + // Indent each line + recordString = " " + recordString.replaceAll("\n", "\n "); + output.println(String.format(" %s:", baseRecord.getPrimaryKey())); + output.println(recordString); + } + } + output.println(); + } + } + service.stop(); + return true; + } + /** * Normalize a path for that filesystem. 
 *
@@ -1341,4 +1409,4 @@ public class RouterAdmin extends Configured implements Tool {
       return mode;
     }
   }
-}
\ No newline at end of file
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
index 7f61d80fe1a..c8636826c3c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
@@ -311,17 +311,4 @@ message GetDisabledNameservicesRequestProto {
 message GetDisabledNameservicesResponseProto {
   repeated string nameServiceIds = 1;
-}
-
-/////////////////////////////////////////////////
-// Alignment state for namespaces.
-/////////////////////////////////////////////////
-
-/**
- * Clients should receive this message in RPC responses and forward it
- * in RPC requests without interpreting it. It should be encoded
- * as an obscure byte array when being sent to clients.
- */
-message RouterFederatedStateProto {
-  map<string, int64> namespaceStateIds = 1; // Last seen state IDs for multiple namespaces.
 }
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 52a1e3a3bd1..b5096cd253d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -377,6 +377,26 @@
+  <property>
+    <name>dfs.federation.router.store.driver.zk.parent-path</name>
+    <value>/hdfs-federation</value>
+    <description>
+      The parent path in ZooKeeper used by StateStoreZooKeeperImpl.
+    </description>
+  </property>
+
+  <property>
+    <name>dfs.federation.router.store.driver.zk.async.max.threads</name>
+    <value>-1</value>
+    <description>
+      Maximum number of threads used by StateStoreZooKeeperImpl in async mode.
+      The only class currently supported is
+      org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.
+      The default value is -1, which means StateStoreZooKeeperImpl works in sync mode.
+      Set a positive integer value to enable async mode.
+    </description>
+  </property>
+
   <property>
     <name>dfs.federation.router.cache.ttl</name>
     <value>1m</value>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index 5a9c2fd4285..098c73a3b71 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -328,6 +328,17 @@ To trigger a runtime-refresh of the resource specified by \ on \
    [arg1..argn]
+### Router state dump
+
+To diagnose the current state of the routers, you can use the dumpState command. It generates
+a text dump of the records in the State Store. Since it uses the configuration to locate and
+read the State Store, it is usually easiest to run it on a machine where the routers run. The
+command runs locally, so the routers do not have to be up to use it.
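+
+Each record type is printed as its own section, with every record's primary key followed by a
+pseudo-JSON rendering of its protobuf fields. The sketch below is an editorial illustration only
+(the record type, key, and field values are hypothetical examples, not guaranteed output); the
+actual command is shown after it:
+
+    ---- MountTable ----
+     /data:
+      srcPath: "/data"
+      destinations {
+        nameserviceId: "ns0"
+        path: "/data"
+      }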
+ + [hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -dumpState + Client configuration -------------------- diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java index 4fcdf6595e4..2c703958704 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java @@ -91,6 +91,7 @@ import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer; import org.apache.hadoop.hdfs.server.namenode.FSImage; import org.apache.hadoop.hdfs.server.namenode.NameNode; import org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider; +import org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider; import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo; import org.apache.hadoop.http.HttpConfig; import org.apache.hadoop.net.NetUtils; @@ -233,6 +234,25 @@ public class MiniRouterDFSCluster { return DistributedFileSystem.get(conf); } + public FileSystem getFileSystem(Configuration configuration) throws IOException { + configuration.addResource(conf); + return DistributedFileSystem.get(configuration); + } + + public FileSystem getFileSystemWithObserverReadProxyProvider() throws IOException { + Configuration observerReadConf = new Configuration(conf); + observerReadConf.set(DFS_NAMESERVICES, + observerReadConf.get(DFS_NAMESERVICES)+ ",router-service"); + observerReadConf.set(DFS_HA_NAMENODES_KEY_PREFIX + ".router-service", "router1"); + observerReadConf.set(DFS_NAMENODE_RPC_ADDRESS_KEY+ ".router-service.router1", + getFileSystemURI().toString()); + observerReadConf.set(HdfsClientConfigKeys.Failover.PROXY_PROVIDER_KEY_PREFIX + + "." + "router-service", ObserverReadProxyProvider.class.getName()); + DistributedFileSystem.setDefaultUri(observerReadConf, "hdfs://router-service"); + + return DistributedFileSystem.get(observerReadConf); + } + public DFSClient getClient(UserGroupInformation user) throws IOException, URISyntaxException, InterruptedException { diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java index b602a27c95f..05d21b9b275 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java @@ -90,6 +90,98 @@ public class TestNamenodeResolver { assertTrue(cleared); } + @Test + public void testShuffleObserverNNs() throws Exception { + // Add an active entry to the store + NamenodeStatusReport activeReport = createNamenodeReport( + NAMESERVICES[0], NAMENODES[0], HAServiceState.ACTIVE); + assertTrue(namenodeResolver.registerNamenode(activeReport)); + + // Add a standby entry to the store + NamenodeStatusReport standbyReport = createNamenodeReport( + NAMESERVICES[0], NAMENODES[1], HAServiceState.STANDBY); + assertTrue(namenodeResolver.registerNamenode(standbyReport)); + + // Load cache + stateStore.refreshCaches(true); + + // Get namenodes from state store. 
+ List withoutObserver = + namenodeResolver.getNamenodesForNameserviceId(NAMESERVICES[0], true); + assertEquals(2, withoutObserver.size()); + assertEquals(FederationNamenodeServiceState.ACTIVE, withoutObserver.get(0).getState()); + assertEquals(FederationNamenodeServiceState.STANDBY, withoutObserver.get(1).getState()); + + // Get namenodes from cache. + withoutObserver = namenodeResolver.getNamenodesForNameserviceId(NAMESERVICES[0], true); + assertEquals(2, withoutObserver.size()); + assertEquals(FederationNamenodeServiceState.ACTIVE, withoutObserver.get(0).getState()); + assertEquals(FederationNamenodeServiceState.STANDBY, withoutObserver.get(1).getState()); + + // Add an observer entry to the store + NamenodeStatusReport observerReport1 = createNamenodeReport( + NAMESERVICES[0], NAMENODES[2], HAServiceState.OBSERVER); + assertTrue(namenodeResolver.registerNamenode(observerReport1)); + + // Load cache + stateStore.refreshCaches(true); + + // Get namenodes from state store. + List observerList = + namenodeResolver.getNamenodesForNameserviceId(NAMESERVICES[0], true); + assertEquals(3, observerList.size()); + assertEquals(FederationNamenodeServiceState.OBSERVER, observerList.get(0).getState()); + assertEquals(FederationNamenodeServiceState.ACTIVE, observerList.get(1).getState()); + assertEquals(FederationNamenodeServiceState.STANDBY, observerList.get(2).getState()); + + // Get namenodes from cache. + observerList = namenodeResolver.getNamenodesForNameserviceId(NAMESERVICES[0], true); + assertEquals(3, observerList.size()); + assertEquals(FederationNamenodeServiceState.OBSERVER, observerList.get(0).getState()); + assertEquals(FederationNamenodeServiceState.ACTIVE, observerList.get(1).getState()); + assertEquals(FederationNamenodeServiceState.STANDBY, observerList.get(2).getState()); + + // Add one new observer entry to the store + NamenodeStatusReport observerReport2 = createNamenodeReport( + NAMESERVICES[0], NAMENODES[3], HAServiceState.OBSERVER); + assertTrue(namenodeResolver.registerNamenode(observerReport2)); + + // Load cache + stateStore.refreshCaches(true); + + // Get namenodes from state store. + List observerList2 = + namenodeResolver.getNamenodesForNameserviceId(NAMESERVICES[0], true); + assertEquals(4, observerList2.size()); + assertEquals(FederationNamenodeServiceState.OBSERVER, observerList2.get(0).getState()); + assertEquals(FederationNamenodeServiceState.OBSERVER, observerList2.get(1).getState()); + assertEquals(FederationNamenodeServiceState.ACTIVE, observerList2.get(2).getState()); + assertEquals(FederationNamenodeServiceState.STANDBY, observerList2.get(3).getState()); + + // Get namenodes from cache. 
+ observerList2 = namenodeResolver.getNamenodesForNameserviceId(NAMESERVICES[0], true); + assertEquals(4, observerList2.size()); + assertEquals(FederationNamenodeServiceState.OBSERVER, observerList2.get(0).getState()); + assertEquals(FederationNamenodeServiceState.OBSERVER, observerList2.get(1).getState()); + assertEquals(FederationNamenodeServiceState.ACTIVE, observerList2.get(2).getState()); + assertEquals(FederationNamenodeServiceState.STANDBY, observerList2.get(3).getState()); + + // Test shuffler + List observerList3; + boolean hit = false; + for (int i = 0; i < 1000; i++) { + observerList3 = namenodeResolver.getNamenodesForNameserviceId(NAMESERVICES[0], true); + assertEquals(FederationNamenodeServiceState.OBSERVER, observerList3.get(0).getState()); + assertEquals(FederationNamenodeServiceState.OBSERVER, observerList3.get(1).getState()); + if (observerList3.get(0).getNamenodeId().equals(observerList2.get(1).getNamenodeId()) && + observerList3.get(1).getNamenodeId().equals(observerList2.get(0).getNamenodeId())) { + hit = true; + break; + } + } + assertTrue(hit); + } + @Test public void testStateStoreDisconnected() throws Exception { diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java index 067d43dabd5..920c9c4e519 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java @@ -19,7 +19,10 @@ package org.apache.hadoop.hdfs.server.federation.router; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hdfs.protocol.ClientProtocol; +import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.RouterFederatedStateProto; import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol; +import org.apache.hadoop.ipc.RPC; +import org.apache.hadoop.ipc.Server; import org.apache.hadoop.net.NetUtils; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.test.GenericTestUtils; @@ -31,6 +34,7 @@ import org.junit.Rule; import org.junit.rules.ExpectedException; import java.io.IOException; +import java.util.HashMap; import java.util.Map; import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.BlockingQueue; @@ -305,6 +309,51 @@ public class TestConnectionManager { } } + @Test + public void testAdvanceClientStateId() throws IOException { + // Start one ConnectionManager + Configuration tmpConf = new Configuration(); + ConnectionManager tmpConnManager = new ConnectionManager(tmpConf); + tmpConnManager.start(); + Map poolMap = tmpConnManager.getPools(); + + // Mock one Server.Call with FederatedNamespaceState that ns0 = 1L. 
+ Server.Call mockCall1 = new Server.Call(1, 1, null, null, + RPC.RpcKind.RPC_BUILTIN, new byte[] {1, 2, 3}); + Map nsStateId = new HashMap<>(); + nsStateId.put("ns0", 1L); + RouterFederatedStateProto.Builder stateBuilder = RouterFederatedStateProto.newBuilder(); + nsStateId.forEach(stateBuilder::putNamespaceStateIds); + mockCall1.setFederatedNamespaceState(stateBuilder.build().toByteString()); + + Server.getCurCall().set(mockCall1); + + // Create one new connection pool + tmpConnManager.getConnection(TEST_USER1, TEST_NN_ADDRESS, NamenodeProtocol.class, "ns0"); + assertEquals(1, poolMap.size()); + ConnectionPoolId connectionPoolId = new ConnectionPoolId(TEST_USER1, + TEST_NN_ADDRESS, NamenodeProtocol.class); + ConnectionPool pool = poolMap.get(connectionPoolId); + assertEquals(1L, pool.getPoolAlignmentContext().getPoolLocalStateId()); + + // Mock one Server.Call with FederatedNamespaceState that ns0 = 2L. + Server.Call mockCall2 = new Server.Call(2, 1, null, null, + RPC.RpcKind.RPC_BUILTIN, new byte[] {1, 2, 3}); + nsStateId.clear(); + nsStateId.put("ns0", 2L); + stateBuilder = RouterFederatedStateProto.newBuilder(); + nsStateId.forEach(stateBuilder::putNamespaceStateIds); + mockCall2.setFederatedNamespaceState(stateBuilder.build().toByteString()); + + Server.getCurCall().set(mockCall2); + + // Get one existed connection for ns0 + tmpConnManager.getConnection(TEST_USER1, TEST_NN_ADDRESS, NamenodeProtocol.class, "ns0"); + assertEquals(1, poolMap.size()); + pool = poolMap.get(connectionPoolId); + assertEquals(2L, pool.getPoolAlignmentContext().getPoolLocalStateId()); + } + @Test public void testConfigureConnectionActiveRatio() throws IOException { // test 1 conn below the threshold and these conns are closed diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java index fbd731c073f..45001b461ba 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java @@ -21,35 +21,70 @@ import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.NAMEN import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNotEquals; import static org.junit.Assert.assertTrue; +import static org.junit.Assert.assertThrows; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_HA_NAMENODE_ID_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMESERVICE_ID; -import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_MONITOR_NAMENODE; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_STATE_CONTEXT_ENABLED_KEY; import java.io.IOException; +import java.util.HashMap; import java.util.List; +import java.util.Map; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.LongAccumulator; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hdfs.ClientGSIContext; import org.apache.hadoop.hdfs.DFSConfigKeys; +import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys; +import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.RouterFederatedStateProto; import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster; 
import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.RouterContext; +import org.apache.hadoop.hdfs.server.federation.MockResolver; import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder; import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster; import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext; import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState; import org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver; import org.apache.hadoop.hdfs.server.namenode.NameNode; -import org.junit.After; -import org.junit.Test; +import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Tag; +import org.junit.jupiter.api.TestInfo; + public class TestObserverWithRouter { - + private static final String SKIP_BEFORE_EACH_CLUSTER_STARTUP = "SkipBeforeEachClusterStartup"; private MiniRouterDFSCluster cluster; + private RouterContext routerContext; + private FileSystem fileSystem; - public void startUpCluster(int numberOfObserver) throws Exception { - startUpCluster(numberOfObserver, null); + @BeforeEach + void init(TestInfo info) throws Exception { + if (info.getTags().contains(SKIP_BEFORE_EACH_CLUSTER_STARTUP)) { + return; + } + startUpCluster(2, null); + } + + @AfterEach + public void teardown() throws IOException { + if (cluster != null) { + cluster.shutdown(); + cluster = null; + } + + routerContext = null; + + if (fileSystem != null) { + fileSystem.close(); + fileSystem = null; + } } public void startUpCluster(int numberOfObserver, Configuration confOverrides) throws Exception { @@ -58,6 +93,7 @@ public class TestObserverWithRouter { conf.setBoolean(RBFConfigKeys.DFS_ROUTER_OBSERVER_READ_DEFAULT_KEY, true); conf.setBoolean(DFSConfigKeys.DFS_HA_TAILEDITS_INPROGRESS_KEY, true); conf.set(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, "0ms"); + conf.setBoolean(DFS_NAMENODE_STATE_CONTEXT_ENABLED_KEY, true); if (confOverrides != null) { conf.addResource(confOverrides); } @@ -95,31 +131,44 @@ public class TestObserverWithRouter { cluster.installMockLocations(); cluster.waitActiveNamespaces(); + routerContext = cluster.getRandomRouter(); } - @After - public void teardown() throws IOException { - if (cluster != null) { - cluster.shutdown(); - cluster = null; - } + private static Configuration getConfToEnableObserverReads() { + Configuration conf = new Configuration(); + conf.setBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE, true); + return conf; } @Test public void testObserverRead() throws Exception { - startUpCluster(1); - RouterContext routerContext = cluster.getRandomRouter(); + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); + internalTestObserverRead(); + } + + /** + * Tests that without adding config to use ObserverProxyProvider, the client shouldn't + * have reads served by Observers. + * Fixes regression in HDFS-13522. 
+ */ + @Test + public void testReadWithoutObserverClientConfigurations() throws Exception { + fileSystem = routerContext.getFileSystem(); + assertThrows(AssertionError.class, this::internalTestObserverRead); + } + + public void internalTestObserverRead() + throws Exception { List namenodes = routerContext .getRouter().getNamenodeResolver() .getNamenodesForNameserviceId(cluster.getNameservices().get(0), true); assertEquals("First namenode should be observer", namenodes.get(0).getState(), FederationNamenodeServiceState.OBSERVER); - FileSystem fileSystem = routerContext.getFileSystem(); Path path = new Path("/testFile"); - // Send Create call to active + // Send create call fileSystem.create(path).close(); - // Send read request to observer + // Send read request fileSystem.open(path).close(); long rpcCountForActive = routerContext.getRouter().getRpcServer() @@ -131,21 +180,20 @@ public class TestObserverWithRouter { .getRPCMetrics().getObserverProxyOps(); // getBlockLocations should be sent to observer assertEquals("One call should be sent to observer", 1, rpcCountForObserver); - fileSystem.close(); } @Test + @Tag(SKIP_BEFORE_EACH_CLUSTER_STARTUP) public void testObserverReadWithoutFederatedStatePropagation() throws Exception { Configuration confOverrides = new Configuration(false); confOverrides.setInt(RBFConfigKeys.DFS_ROUTER_OBSERVER_FEDERATED_STATE_PROPAGATION_MAXSIZE, 0); - startUpCluster(1, confOverrides); - RouterContext routerContext = cluster.getRandomRouter(); + startUpCluster(2, confOverrides); + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); List namenodes = routerContext .getRouter().getNamenodeResolver() .getNamenodesForNameserviceId(cluster.getNameservices().get(0), true); assertEquals("First namenode should be observer", namenodes.get(0).getState(), FederationNamenodeServiceState.OBSERVER); - FileSystem fileSystem = routerContext.getFileSystem(); Path path = new Path("/testFile"); // Send Create call to active fileSystem.create(path).close(); @@ -161,22 +209,20 @@ public class TestObserverWithRouter { long rpcCountForObserver = routerContext.getRouter().getRpcServer() .getRPCMetrics().getObserverProxyOps(); assertEquals("No call should be sent to observer", 0, rpcCountForObserver); - fileSystem.close(); } @Test + @Tag(SKIP_BEFORE_EACH_CLUSTER_STARTUP) public void testDisablingObserverReadUsingNameserviceOverride() throws Exception { // Disable observer reads using per-nameservice override Configuration confOverrides = new Configuration(false); confOverrides.set(RBFConfigKeys.DFS_ROUTER_OBSERVER_READ_OVERRIDES, "ns0"); - startUpCluster(1, confOverrides); + startUpCluster(2, confOverrides); + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); - RouterContext routerContext = cluster.getRandomRouter(); - FileSystem fileSystem = routerContext.getFileSystem(); Path path = new Path("/testFile"); fileSystem.create(path).close(); fileSystem.open(path).close(); - fileSystem.close(); long rpcCountForActive = routerContext.getRouter().getRpcServer() .getRPCMetrics().getActiveProxyOps(); @@ -190,17 +236,16 @@ public class TestObserverWithRouter { @Test public void testReadWhenObserverIsDown() throws Exception { - startUpCluster(1); - RouterContext routerContext = cluster.getRandomRouter(); - FileSystem fileSystem = routerContext.getFileSystem(); + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); Path path = new Path("/testFile1"); // Send Create call to active fileSystem.create(path).close(); // Stop observer NN 
int nnIndex = stopObserver(1); - assertNotEquals("No observer found", 3, nnIndex); + nnIndex = stopObserver(1); + assertNotEquals("No observer found", 4, nnIndex); // Send read request fileSystem.open(path).close(); @@ -215,14 +260,11 @@ public class TestObserverWithRouter { .getRPCMetrics().getObserverProxyOps(); assertEquals("No call should send to observer", 0, rpcCountForObserver); - fileSystem.close(); } @Test public void testMultipleObserver() throws Exception { - startUpCluster(2); - RouterContext routerContext = cluster.getRandomRouter(); - FileSystem fileSystem = routerContext.getFileSystem(); + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); Path path = new Path("/testFile1"); // Send Create call to active fileSystem.create(path).close(); @@ -267,7 +309,6 @@ public class TestObserverWithRouter { .getRpcServer().getRPCMetrics().getObserverProxyOps(); assertEquals("No call should send to observer", expectedObserverRpc, rpcCountForObserver); - fileSystem.close(); } private int stopObserver(int num) { @@ -288,9 +329,9 @@ public class TestObserverWithRouter { // test router observer with multiple to know which observer NN received // requests @Test + @Tag(SKIP_BEFORE_EACH_CLUSTER_STARTUP) public void testMultipleObserverRouter() throws Exception { StateStoreDFSCluster innerCluster; - RouterContext routerContext; MembershipNamenodeResolver resolver; String ns0; @@ -318,7 +359,7 @@ public class TestObserverWithRouter { } sb.append(suffix); } - routerConf.set(DFS_ROUTER_MONITOR_NAMENODE, sb.toString()); + routerConf.set(RBFConfigKeys.DFS_ROUTER_MONITOR_NAMENODE, sb.toString()); routerConf.setBoolean(RBFConfigKeys.DFS_ROUTER_OBSERVER_READ_DEFAULT_KEY, true); routerConf.setBoolean(DFSConfigKeys.DFS_HA_TAILEDITS_INPROGRESS_KEY, true); routerConf.set(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, "0ms"); @@ -356,14 +397,13 @@ public class TestObserverWithRouter { namespaceInfo0.get(1).getNamenodeId()); assertEquals(namespaceInfo1.get(0).getState(), FederationNamenodeServiceState.OBSERVER); + + innerCluster.shutdown(); } @Test public void testUnavailableObserverNN() throws Exception { - startUpCluster(2); - RouterContext routerContext = cluster.getRandomRouter(); - FileSystem fileSystem = routerContext.getFileSystem(); - + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); stopObserver(2); Path path = new Path("/testFile"); @@ -399,10 +439,7 @@ public class TestObserverWithRouter { @Test public void testRouterMsync() throws Exception { - startUpCluster(1); - RouterContext routerContext = cluster.getRandomRouter(); - - FileSystem fileSystem = routerContext.getFileSystem(); + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); Path path = new Path("/testFile"); // Send Create call to active @@ -420,6 +457,186 @@ public class TestObserverWithRouter { // 2 msync calls should be sent. One to each active namenode in the two namespaces. 
assertEquals("Four calls should be sent to active", 4, rpcCountForActive); - fileSystem.close(); } -} \ No newline at end of file + + @Test + public void testSingleRead() throws Exception { + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); + List namenodes = routerContext + .getRouter().getNamenodeResolver() + .getNamenodesForNameserviceId(cluster.getNameservices().get(0), true); + assertEquals("First namenode should be observer", namenodes.get(0).getState(), + FederationNamenodeServiceState.OBSERVER); + Path path = new Path("/"); + + long rpcCountForActive; + long rpcCountForObserver; + + // Send read request + fileSystem.listFiles(path, false); + fileSystem.close(); + + rpcCountForActive = routerContext.getRouter().getRpcServer() + .getRPCMetrics().getActiveProxyOps(); + // getListingCall sent to active. + assertEquals("Only one call should be sent to active", 1, rpcCountForActive); + + rpcCountForObserver = routerContext.getRouter().getRpcServer() + .getRPCMetrics().getObserverProxyOps(); + // getList call should be sent to observer + assertEquals("No calls should be sent to observer", 0, rpcCountForObserver); + } + + @Test + public void testSingleReadUsingObserverReadProxyProvider() throws Exception { + fileSystem = routerContext.getFileSystemWithObserverReadProxyProvider(); + List namenodes = routerContext + .getRouter().getNamenodeResolver() + .getNamenodesForNameserviceId(cluster.getNameservices().get(0), true); + assertEquals("First namenode should be observer", namenodes.get(0).getState(), + FederationNamenodeServiceState.OBSERVER); + Path path = new Path("/"); + + long rpcCountForActive; + long rpcCountForObserver; + + // Send read request + fileSystem.listFiles(path, false); + fileSystem.close(); + + rpcCountForActive = routerContext.getRouter().getRpcServer() + .getRPCMetrics().getActiveProxyOps(); + // Two msync calls to the active namenodes. 
+ assertEquals("Two calls should be sent to active", 2, rpcCountForActive); + + rpcCountForObserver = routerContext.getRouter().getRpcServer() + .getRPCMetrics().getObserverProxyOps(); + // getList call should be sent to observer + assertEquals("One call should be sent to observer", 1, rpcCountForObserver); + } + + @Test + @Tag(SKIP_BEFORE_EACH_CLUSTER_STARTUP) + public void testClientReceiveResponseState() { + ClientGSIContext clientGSIContext = new ClientGSIContext(); + + Map mockMapping = new HashMap<>(); + mockMapping.put("ns0", 10L); + RouterFederatedStateProto.Builder builder = RouterFederatedStateProto.newBuilder(); + mockMapping.forEach(builder::putNamespaceStateIds); + RpcHeaderProtos.RpcResponseHeaderProto header = RpcHeaderProtos.RpcResponseHeaderProto + .newBuilder() + .setCallId(1) + .setStatus(RpcHeaderProtos.RpcResponseHeaderProto.RpcStatusProto.SUCCESS) + .setRouterFederatedState(builder.build().toByteString()) + .build(); + clientGSIContext.receiveResponseState(header); + + Map mockLowerMapping = new HashMap<>(); + mockLowerMapping.put("ns0", 8L); + builder = RouterFederatedStateProto.newBuilder(); + mockLowerMapping.forEach(builder::putNamespaceStateIds); + header = RpcHeaderProtos.RpcResponseHeaderProto.newBuilder() + .setRouterFederatedState(builder.build().toByteString()) + .setCallId(2) + .setStatus(RpcHeaderProtos.RpcResponseHeaderProto.RpcStatusProto.SUCCESS) + .build(); + clientGSIContext.receiveResponseState(header); + + Map latestFederateState = ClientGSIContext.getRouterFederatedStateMap( + clientGSIContext.getRouterFederatedState()); + Assertions.assertEquals(1, latestFederateState.size()); + Assertions.assertEquals(10L, latestFederateState.get("ns0")); + } + + @Test + public void testStateIdProgressionInRouter() throws Exception { + Path rootPath = new Path("/"); + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); + RouterStateIdContext routerStateIdContext = routerContext + .getRouterRpcServer() + .getRouterStateIdContext(); + for (int i = 0; i < 10; i++) { + fileSystem.create(new Path(rootPath, "file" + i)).close(); + } + + // Get object storing state of the namespace in the shared RouterStateIdContext + LongAccumulator namespaceStateId = routerStateIdContext.getNamespaceStateId("ns0"); + assertEquals("Router's shared should have progressed.", 21, namespaceStateId.get()); + } + + @Test + @Tag(SKIP_BEFORE_EACH_CLUSTER_STARTUP) + public void testSharedStateInRouterStateIdContext() throws Exception { + Path rootPath = new Path("/"); + long cleanupPeriodMs = 1000; + + Configuration conf = new Configuration(false); + conf.setLong(RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_CLEAN, cleanupPeriodMs); + conf.setLong(RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_CLEAN_MS, cleanupPeriodMs / 10); + startUpCluster(1, conf); + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); + RouterStateIdContext routerStateIdContext = routerContext.getRouterRpcServer() + .getRouterStateIdContext(); + + // First read goes to active and creates connection pool for this user to active + fileSystem.listStatus(rootPath); + // Second read goes to observer and creates connection pool for this user to observer + fileSystem.listStatus(rootPath); + // Get object storing state of the namespace in the shared RouterStateIdContext + LongAccumulator namespaceStateId1 = routerStateIdContext.getNamespaceStateId("ns0"); + + // Wait for connection pools to expire and be cleaned up. 
+ Thread.sleep(cleanupPeriodMs * 2); + + // Third read goes to observer. + // New connection pool to observer is created since existing one expired. + fileSystem.listStatus(rootPath); + fileSystem.close(); + // Get object storing state of the namespace in the shared RouterStateIdContext + LongAccumulator namespaceStateId2 = routerStateIdContext.getNamespaceStateId("ns0"); + + long rpcCountForActive = routerContext.getRouter().getRpcServer() + .getRPCMetrics().getActiveProxyOps(); + long rpcCountForObserver = routerContext.getRouter().getRpcServer() + .getRPCMetrics().getObserverProxyOps(); + + // First list status goes to active + assertEquals("One call should be sent to active", 1, rpcCountForActive); + // Last two listStatuses go to observer. + assertEquals("Two calls should be sent to observer", 2, rpcCountForObserver); + + Assertions.assertSame(namespaceStateId1, namespaceStateId2, + "The same object should be used in the shared RouterStateIdContext"); + } + + + @Test + @Tag(SKIP_BEFORE_EACH_CLUSTER_STARTUP) + public void testRouterStateIdContextCleanup() throws Exception { + Path rootPath = new Path("/"); + long recordExpiry = TimeUnit.SECONDS.toMillis(1); + + Configuration confOverride = new Configuration(false); + confOverride.setLong(RBFConfigKeys.FEDERATION_STORE_MEMBERSHIP_EXPIRATION_MS, recordExpiry); + + startUpCluster(1, confOverride); + fileSystem = routerContext.getFileSystem(getConfToEnableObserverReads()); + RouterStateIdContext routerStateIdContext = routerContext.getRouterRpcServer() + .getRouterStateIdContext(); + + fileSystem.listStatus(rootPath); + List namespace1 = routerStateIdContext.getNamespaces(); + fileSystem.close(); + + MockResolver mockResolver = (MockResolver) routerContext.getRouter().getNamenodeResolver(); + mockResolver.cleanRegistrations(); + mockResolver.setDisableRegistration(true); + Thread.sleep(recordExpiry * 2); + + List namespace2 = routerStateIdContext.getNamespaces(); + assertEquals(1, namespace1.size()); + assertEquals("ns0", namespace1.get(0)); + assertTrue(namespace2.isEmpty()); + } +} diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestPoolAlignmentContext.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestPoolAlignmentContext.java new file mode 100644 index 00000000000..ef6745654cf --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestPoolAlignmentContext.java @@ -0,0 +1,53 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hdfs.server.federation.router; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; + + +public class TestPoolAlignmentContext { + @Test + public void testNamenodeRequestsOnlyUsePoolLocalStateID() { + RouterStateIdContext routerStateIdContext = new RouterStateIdContext(new Configuration()); + String namespaceId = "namespace1"; + routerStateIdContext.getNamespaceStateId(namespaceId).accumulate(20L); + PoolAlignmentContext poolContext1 = new PoolAlignmentContext(routerStateIdContext, namespaceId); + PoolAlignmentContext poolContext2 = new PoolAlignmentContext(routerStateIdContext, namespaceId); + + assertRequestHeaderStateId(poolContext1, Long.MIN_VALUE); + assertRequestHeaderStateId(poolContext2, Long.MIN_VALUE); + Assertions.assertEquals(20L, poolContext1.getLastSeenStateId()); + Assertions.assertEquals(20L, poolContext2.getLastSeenStateId()); + + poolContext1.advanceClientStateId(30L); + assertRequestHeaderStateId(poolContext1, 30L); + assertRequestHeaderStateId(poolContext2, Long.MIN_VALUE); + Assertions.assertEquals(20L, poolContext1.getLastSeenStateId()); + Assertions.assertEquals(20L, poolContext2.getLastSeenStateId()); + } + + private void assertRequestHeaderStateId(PoolAlignmentContext poolAlignmentContext, + Long expectedValue) { + RpcRequestHeaderProto.Builder builder = RpcRequestHeaderProto.newBuilder(); + poolAlignmentContext.updateRequestState(builder); + Assertions.assertEquals(expectedValue, builder.getStateId()); + } +} diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java index 677f3b5e947..761fad2fb7a 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information @@ -42,16 +42,20 @@ import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder; import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster; import org.apache.hadoop.hdfs.server.federation.metrics.RBFMetrics; import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver; +import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState; import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager; import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver; import org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver; import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation; import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder; import org.apache.hadoop.hdfs.server.federation.store.StateStoreService; +import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver; import org.apache.hadoop.hdfs.server.federation.store.impl.DisabledNameserviceStoreImpl; import org.apache.hadoop.hdfs.server.federation.store.impl.MountTableStoreImpl; import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest; import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse; +import org.apache.hadoop.hdfs.server.federation.store.records.MembershipState; +import org.apache.hadoop.hdfs.server.federation.store.records.MockStateStoreDriver; import org.apache.hadoop.hdfs.server.federation.store.records.MountTable; import org.apache.hadoop.hdfs.tools.federation.RouterAdmin; import org.apache.hadoop.security.UserGroupInformation; @@ -852,6 +856,7 @@ public class TestRouterAdminCLI { + " ]\n" + "\t[-clrQuota ]\n" + "\t[-clrStorageTypeQuota ]\n" + + "\t[-dumpState]\n" + "\t[-safemode enter | leave | get]\n" + "\t[-nameservice enable | disable ]\n" + "\t[-getDisabledNameservices]\n" @@ -1759,6 +1764,72 @@ public class TestRouterAdminCLI { assertTrue(err.toString().contains("No arguments allowed")); } + @Test + public void testDumpState() throws Exception { + MockStateStoreDriver driver = new MockStateStoreDriver(); + driver.clearAll(); + // Add two records for block1 + driver.put(MembershipState.newInstance("routerId", "ns1", + "ns1-ha1", "cluster1", "block1", "rpc1", + "service1", "lifeline1", "https", "nn01", + FederationNamenodeServiceState.ACTIVE, false), false, false); + driver.put(MembershipState.newInstance("routerId", "ns1", + "ns1-ha2", "cluster1", "block1", "rpc2", + "service2", "lifeline2", "https", "nn02", + FederationNamenodeServiceState.STANDBY, false), false, false); + Configuration conf = new Configuration(); + conf.setClass(RBFConfigKeys.FEDERATION_STORE_DRIVER_CLASS, + MockStateStoreDriver.class, + StateStoreDriver.class); + ByteArrayOutputStream buffer = new ByteArrayOutputStream(); + try (PrintStream stream = new PrintStream(buffer)) { + RouterAdmin.dumpStateStore(conf, stream); + } + final String expected = + "---- DisabledNameservice ----\n" + + "\n" + + "---- MembershipState ----\n" + + " ns1-ha1-ns1-routerId:\n" + + " dateCreated: XXX\n" + + " dateModified: XXX\n" + + " routerId: \"routerId\"\n" + + " nameserviceId: \"ns1\"\n" + + " namenodeId: \"ns1-ha1\"\n" + + " clusterId: \"cluster1\"\n" + + " blockPoolId: \"block1\"\n" + + " webAddress: \"nn01\"\n" + + " rpcAddress: \"rpc1\"\n" + + " serviceAddress: \"service1\"\n" + + " lifelineAddress: \"lifeline1\"\n" + + " state: \"ACTIVE\"\n" + + 
" isSafeMode: false\n" + + " webScheme: \"https\"\n" + + " \n" + + " ns1-ha2-ns1-routerId:\n" + + " dateCreated: XXX\n" + + " dateModified: XXX\n" + + " routerId: \"routerId\"\n" + + " nameserviceId: \"ns1\"\n" + + " namenodeId: \"ns1-ha2\"\n" + + " clusterId: \"cluster1\"\n" + + " blockPoolId: \"block1\"\n" + + " webAddress: \"nn02\"\n" + + " rpcAddress: \"rpc2\"\n" + + " serviceAddress: \"service2\"\n" + + " lifelineAddress: \"lifeline2\"\n" + + " state: \"STANDBY\"\n" + + " isSafeMode: false\n" + + " webScheme: \"https\"\n" + + " \n" + + "\n" + + "---- MountTable ----\n" + + "\n" + + "---- RouterState ----"; + // Replace the time values with XXX + assertEquals(expected, + buffer.toString().trim().replaceAll("[0-9]{4,}+", "XXX")); + } + private void addMountTable(String src, String nsId, String dst) throws Exception { String[] argv = new String[] {"-add", src, nsId, dst}; diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAllResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAllResolver.java index 715b627f694..075917bfbe0 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAllResolver.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAllResolver.java @@ -33,6 +33,7 @@ import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hdfs.DistributedFileSystem; import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder; import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.NamenodeContext; import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.RouterContext; @@ -46,6 +47,7 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntr import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest; import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse; import org.apache.hadoop.hdfs.server.federation.store.records.MountTable; +import org.apache.hadoop.hdfs.server.namenode.TestFileTruncate; import org.junit.After; import org.junit.Before; import org.junit.Test; @@ -191,6 +193,18 @@ public class TestRouterAllResolver { assertDirsEverywhere(path, 9); assertFilesDistributed(path, 15); + // Test truncate + String testTruncateFile = path + "/dir2/dir22/dir220/file-truncate.txt"; + createTestFile(routerFs, testTruncateFile); + Path testTruncateFilePath = new Path(testTruncateFile); + routerFs.truncate(testTruncateFilePath, 10); + TestFileTruncate.checkBlockRecovery(testTruncateFilePath, + (DistributedFileSystem) routerFs); + assertEquals("Truncate file fails", 10, + routerFs.getFileStatus(testTruncateFilePath).getLen()); + assertDirsEverywhere(path, 9); + assertFilesDistributed(path, 16); + // Removing a directory should remove it from every subcluster routerFs.delete(new Path(path + "/dir2/dir22/dir220"), true); assertDirsEverywhere(path, 8); diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterFederatedState.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterFederatedState.java index 2bc8cfc21b2..be8fcf682bd 100644 --- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterFederatedState.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterFederatedState.java @@ -19,12 +19,13 @@ package org.apache.hadoop.hdfs.server.federation.router; import java.util.HashMap; import java.util.Map; + +import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.RouterFederatedStateProto; import org.apache.hadoop.ipc.AlignmentContext; import org.apache.hadoop.ipc.ClientId; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.ipc.RpcConstants; import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos; -import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RouterFederatedStateProto; import org.apache.hadoop.thirdparty.protobuf.InvalidProtocolBufferException; import org.apache.hadoop.util.ProtoUtil; import org.junit.Test; diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java index b8bb7c4d2d1..4eb38b06b12 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java @@ -119,7 +119,7 @@ public class TestStateStoreDriverBase { } @SuppressWarnings("unchecked") - private T generateFakeRecord(Class recordClass) + protected T generateFakeRecord(Class recordClass) throws IllegalArgumentException, IllegalAccessException, IOException { if (recordClass == MembershipState.class) { @@ -234,6 +234,25 @@ public class TestStateStoreDriverBase { assertEquals(11, records2.size()); } + public void testInsertWithErrorDuringWrite( + StateStoreDriver driver, Class recordClass) + throws IllegalArgumentException, IllegalAccessException, IOException { + + assertTrue(driver.removeAll(recordClass)); + QueryResult queryResult0 = driver.get(recordClass); + List records0 = queryResult0.getRecords(); + assertTrue(records0.isEmpty()); + + // Insert single + BaseRecord record = generateFakeRecord(recordClass); + driver.put(record, true, false); + + // Verify that no record was inserted. 
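+    // The put above is expected to fail while the record is being written out; a failed write
+    // must not leave a partial or phantom record behind in the store.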
+ QueryResult queryResult1 = driver.get(recordClass); + List records1 = queryResult1.getRecords(); + assertEquals(0, records1.size()); + } + public void testFetchErrors(StateStoreDriver driver, Class clazz) throws IllegalAccessException, IOException { diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreFileSystem.java index 8c4b188cc47..dbd4b9bdae2 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreFileSystem.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreFileSystem.java @@ -17,16 +17,26 @@ */ package org.apache.hadoop.hdfs.server.federation.store.driver; +import java.io.BufferedWriter; import java.io.IOException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hdfs.MiniDFSCluster; import org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils; +import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileBaseImpl; import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileSystemImpl; +import org.apache.hadoop.hdfs.server.federation.store.records.MembershipState; import org.junit.AfterClass; import org.junit.Before; import org.junit.BeforeClass; import org.junit.Test; +import org.mockito.stubbing.Answer; + +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.spy; + /** * Test the FileSystem (e.g., HDFS) implementation of the State Store driver. 
@@ -91,4 +101,18 @@ public class TestStateStoreFileSystem extends TestStateStoreDriverBase { throws IllegalArgumentException, IllegalAccessException, IOException { testMetrics(getStateStoreDriver()); } + + @Test + public void testInsertWithErrorDuringWrite() + throws IllegalArgumentException, IllegalAccessException, IOException { + StateStoreFileBaseImpl driver = spy((StateStoreFileBaseImpl)getStateStoreDriver()); + doAnswer((Answer) a -> { + BufferedWriter writer = (BufferedWriter) a.callRealMethod(); + BufferedWriter spyWriter = spy(writer); + doThrow(IOException.class).when(spyWriter).write(any(String.class)); + return spyWriter; + }).when(driver).getWriter(any()); + + testInsertWithErrorDuringWrite(driver, MembershipState.class); + } } \ No newline at end of file diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreZK.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreZK.java index f8be9f0a05b..3ad106697ac 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreZK.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreZK.java @@ -18,12 +18,13 @@ package org.apache.hadoop.hdfs.server.federation.store.driver; import static org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.getStateStoreConfiguration; -import static org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.FEDERATION_STORE_ZK_PARENT_PATH; -import static org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.FEDERATION_STORE_ZK_PARENT_PATH_DEFAULT; import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertTrue; import java.io.IOException; +import java.util.ArrayList; +import java.util.List; import java.util.concurrent.TimeUnit; import org.apache.curator.framework.CuratorFramework; @@ -40,6 +41,7 @@ import org.apache.hadoop.hdfs.server.federation.store.records.DisabledNameservic import org.apache.hadoop.hdfs.server.federation.store.records.MembershipState; import org.apache.hadoop.hdfs.server.federation.store.records.MountTable; import org.apache.hadoop.hdfs.server.federation.store.records.RouterState; +import org.apache.hadoop.util.Time; import org.apache.zookeeper.CreateMode; import org.junit.AfterClass; import org.junit.Before; @@ -73,9 +75,10 @@ public class TestStateStoreZK extends TestStateStoreDriverBase { // Disable auto-repair of connection conf.setLong(RBFConfigKeys.FEDERATION_STORE_CONNECTION_TEST_MS, TimeUnit.HOURS.toMillis(1)); + conf.setInt(RBFConfigKeys.FEDERATION_STORE_ZK_ASYNC_MAX_THREADS, 10); - baseZNode = conf.get(FEDERATION_STORE_ZK_PARENT_PATH, - FEDERATION_STORE_ZK_PARENT_PATH_DEFAULT); + baseZNode = conf.get(RBFConfigKeys.FEDERATION_STORE_ZK_PARENT_PATH, + RBFConfigKeys.FEDERATION_STORE_ZK_PARENT_PATH_DEFAULT); getStateStore(conf); } @@ -91,6 +94,8 @@ public class TestStateStoreZK extends TestStateStoreDriverBase { @Before public void startup() throws IOException { removeAll(getStateStoreDriver()); + StateStoreZooKeeperImpl stateStoreZooKeeper = (StateStoreZooKeeperImpl) getStateStoreDriver(); + stateStoreZooKeeper.setEnableConcurrent(false); } private String generateFakeZNode( @@ -126,33 +131,79 @@ public class TestStateStoreZK extends 
TestStateStoreDriverBase { assertNull(curatorFramework.checkExists().forPath(znode)); } + @Test + public void testAsyncPerformance() throws Exception { + StateStoreZooKeeperImpl stateStoreDriver = (StateStoreZooKeeperImpl) getStateStoreDriver(); + List insertList = new ArrayList<>(); + for (int i = 0; i < 1000; i++) { + MountTable newRecord = generateFakeRecord(MountTable.class); + insertList.add(newRecord); + } + // Insert Multiple on sync mode + long startSync = Time.now(); + stateStoreDriver.putAll(insertList, true, false); + long endSync = Time.now(); + stateStoreDriver.removeAll(MembershipState.class); + + stateStoreDriver.setEnableConcurrent(true); + // Insert Multiple on async mode + long startAsync = Time.now(); + stateStoreDriver.putAll(insertList, true, false); + long endAsync = Time.now(); + assertTrue((endSync - startSync) > (endAsync - startAsync)); + } + @Test public void testGetNullRecord() throws Exception { - testGetNullRecord(getStateStoreDriver()); + StateStoreZooKeeperImpl stateStoreDriver = (StateStoreZooKeeperImpl) getStateStoreDriver(); + testGetNullRecord(stateStoreDriver); + + // test async mode + stateStoreDriver.setEnableConcurrent(true); + testGetNullRecord(stateStoreDriver); } @Test public void testInsert() throws IllegalArgumentException, IllegalAccessException, IOException { - testInsert(getStateStoreDriver()); + StateStoreZooKeeperImpl stateStoreDriver = (StateStoreZooKeeperImpl) getStateStoreDriver(); + testInsert(stateStoreDriver); + // test async mode + stateStoreDriver.setEnableConcurrent(true); + testInsert(stateStoreDriver); } @Test public void testUpdate() throws IllegalArgumentException, ReflectiveOperationException, IOException, SecurityException { - testPut(getStateStoreDriver()); + StateStoreZooKeeperImpl stateStoreDriver = (StateStoreZooKeeperImpl) getStateStoreDriver(); + testPut(stateStoreDriver); + + // test async mode + stateStoreDriver.setEnableConcurrent(true); + testPut(stateStoreDriver); } @Test public void testDelete() throws IllegalArgumentException, IllegalAccessException, IOException { - testRemove(getStateStoreDriver()); + StateStoreZooKeeperImpl stateStoreDriver = (StateStoreZooKeeperImpl) getStateStoreDriver(); + testRemove(stateStoreDriver); + + // test async mode + stateStoreDriver.setEnableConcurrent(true); + testRemove(stateStoreDriver); } @Test public void testFetchErrors() throws IllegalArgumentException, IllegalAccessException, IOException { - testFetchErrors(getStateStoreDriver()); + StateStoreZooKeeperImpl stateStoreDriver = (StateStoreZooKeeperImpl) getStateStoreDriver(); + testFetchErrors(stateStoreDriver); + + // test async mode + stateStoreDriver.setEnableConcurrent(true); + testFetchErrors(stateStoreDriver); } } \ No newline at end of file diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/MockStateStoreDriver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/MockStateStoreDriver.java new file mode 100644 index 00000000000..9f600cb6f3f --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/MockStateStoreDriver.java @@ -0,0 +1,146 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs.server.federation.store.records; + +import org.apache.hadoop.hdfs.server.federation.store.StateStoreUtils; +import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map; + +/** + * A mock StateStoreDriver that runs in memory that can force IOExceptions + * upon demand. + */ +public class MockStateStoreDriver extends StateStoreBaseImpl { + private boolean giveErrors = false; + private boolean initialized = false; + private static final Map> VALUE_MAP = new HashMap<>(); + + @Override + public boolean initDriver() { + initialized = true; + return true; + } + + @Override + public boolean initRecordStorage(String className, + Class clazz) { + return true; + } + + @Override + public boolean isDriverReady() { + return initialized; + } + + @Override + public void close() throws Exception { + VALUE_MAP.clear(); + initialized = false; + } + + /** + * Should this object throw an IOException on each following call? + * @param value should we throw errors? + */ + public void setGiveErrors(boolean value) { + giveErrors = value; + } + + /** + * Check to see if this StateStore should throw IOException on each call. + * @throws IOException thrown if giveErrors has been set + */ + private void checkErrors() throws IOException { + if (giveErrors) { + throw new IOException("Induced errors"); + } + } + + @Override + @SuppressWarnings("unchecked") + public QueryResult get(Class clazz) throws IOException { + checkErrors(); + Map map = VALUE_MAP.get(StateStoreUtils.getRecordName(clazz)); + List results = + map != null ? new ArrayList<>((Collection) map.values()) : new ArrayList<>(); + return new QueryResult<>(results, System.currentTimeMillis()); + } + + @Override + public boolean putAll(List records, + boolean allowUpdate, + boolean errorIfExists) + throws IOException { + checkErrors(); + for (T record : records) { + Map map = + VALUE_MAP.computeIfAbsent(StateStoreUtils.getRecordName(record.getClass()), + k -> new HashMap<>()); + String key = record.getPrimaryKey(); + BaseRecord oldRecord = map.get(key); + if (oldRecord == null || allowUpdate) { + map.put(key, record); + } else if (errorIfExists) { + throw new IOException("Record already exists for " + record.getClass() + + ": " + key); + } + } + return true; + } + + /** + * Clear all records from the store. 
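+   * The backing map is static, so records written through one MockStateStoreDriver instance are
+   * visible to every other instance until clearAll() or close() is called.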
+ */ + public void clearAll() { + VALUE_MAP.clear(); + } + + @Override + public boolean removeAll(Class clazz) throws IOException { + checkErrors(); + return VALUE_MAP.remove(StateStoreUtils.getRecordName(clazz)) != null; + } + + @Override + @SuppressWarnings("unchecked") + public int remove(Class clazz, + Query query) + throws IOException { + checkErrors(); + int result = 0; + Map map = + VALUE_MAP.get(StateStoreUtils.getRecordName(clazz)); + if (map != null) { + for (Iterator itr = map.values().iterator(); itr.hasNext();) { + BaseRecord record = itr.next(); + if (query.matches((T) record)) { + itr.remove(); + result += 1; + } + } + } + return result; + } +} diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestRouterState.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestRouterState.java index dfe2bc98bf4..8226178fe76 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestRouterState.java +++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestRouterState.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -20,8 +20,16 @@ package org.apache.hadoop.hdfs.server.federation.store.records; import static org.junit.Assert.assertEquals; import java.io.IOException; +import java.util.List; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext; +import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState; +import org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver; +import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys; import org.apache.hadoop.hdfs.server.federation.router.RouterServiceState; +import org.apache.hadoop.hdfs.server.federation.store.StateStoreService; +import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver; import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer; import org.junit.Test; @@ -40,7 +48,7 @@ public class TestRouterState { private static final RouterServiceState STATE = RouterServiceState.RUNNING; - private RouterState generateRecord() throws IOException { + private RouterState generateRecord() { RouterState record = RouterState.newInstance(ADDRESS, START_TIME, STATE); record.setVersion(VERSION); record.setCompileInfo(COMPILE_INFO); @@ -82,4 +90,46 @@ public class TestRouterState { validateRecord(newRecord); } + + @Test + public void testStateStoreResilience() throws Exception { + StateStoreService service = new StateStoreService(); + Configuration conf = new Configuration(); + conf.setClass(RBFConfigKeys.FEDERATION_STORE_DRIVER_CLASS, + MockStateStoreDriver.class, + StateStoreDriver.class); + conf.setBoolean(RBFConfigKeys.DFS_ROUTER_METRICS_ENABLE, false); + service.init(conf); + MockStateStoreDriver driver = (MockStateStoreDriver) service.getDriver(); + driver.clearAll(); + // Add two records for block1 + driver.put(MembershipState.newInstance("routerId", "ns1", + "ns1-ha1", "cluster1", "block1", "rpc1", + "service1", "lifeline1", "https", "nn01", + FederationNamenodeServiceState.ACTIVE, false), false, false); + driver.put(MembershipState.newInstance("routerId", "ns1", 
+ "ns1-ha2", "cluster1", "block1", "rpc2", + "service2", "lifeline2", "https", "nn02", + FederationNamenodeServiceState.STANDBY, false), false, false); + // load the cache + service.loadDriver(); + MembershipNamenodeResolver resolver = new MembershipNamenodeResolver(conf, service); + service.refreshCaches(true); + + // look up block1 + List result = + resolver.getNamenodesForBlockPoolId("block1"); + assertEquals(2, result.size()); + + // cause io errors and then reload the cache + driver.setGiveErrors(true); + long previousUpdate = service.getCacheUpdateTime(); + service.refreshCaches(true); + assertEquals(previousUpdate, service.getCacheUpdateTime()); + + // make sure the old cache is still there + result = resolver.getNamenodesForBlockPoolId("block1"); + assertEquals(2, result.size()); + service.stop(); + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs/pom.xml index b51c7154f7b..ab8934f9368 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml +++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml @@ -152,6 +152,10 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd"> xml-apis xml-apis + + xerces + xercesImpl + @@ -175,11 +179,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd"> slf4j-log4j12 provided - - io.netty - netty - compile - io.netty netty-all diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java index 8ff46e37aee..1ab7edd6adc 100755 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java @@ -628,6 +628,12 @@ public class DFSConfigKeys extends CommonConfigurationKeys { public static final String DFS_NAMENODE_READ_LOCK_REPORTING_THRESHOLD_MS_KEY = "dfs.namenode.read-lock-reporting-threshold-ms"; public static final long DFS_NAMENODE_READ_LOCK_REPORTING_THRESHOLD_MS_DEFAULT = 5000L; + + public static final String DFS_NAMENODE_ACCESS_CONTROL_ENFORCER_REPORTING_THRESHOLD_MS_KEY + = "dfs.namenode.access-control-enforcer-reporting-threshold-ms"; + public static final long DFS_NAMENODE_ACCESS_CONTROL_ENFORCER_REPORTING_THRESHOLD_MS_DEFAULT + = 1000L; + // Threshold for how long the lock warnings must be suppressed public static final String DFS_LOCK_SUPPRESS_WARNING_INTERVAL_KEY = "dfs.lock.suppress.warning.interval"; @@ -636,14 +642,6 @@ public class DFSConfigKeys extends CommonConfigurationKeys { public static final String DFS_DATANODE_LOCK_FAIR_KEY = "dfs.datanode.lock.fair"; public static final boolean DFS_DATANODE_LOCK_FAIR_DEFAULT = true; - public static final String DFS_DATANODE_LOCK_READ_WRITE_ENABLED_KEY = - "dfs.datanode.lock.read.write.enabled"; - public static final Boolean DFS_DATANODE_LOCK_READ_WRITE_ENABLED_DEFAULT = - true; - public static final String DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_KEY = - "dfs.datanode.lock-reporting-threshold-ms"; - public static final long - DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_DEFAULT = 300L; public static final String DFS_UPGRADE_DOMAIN_FACTOR = "dfs.namenode.upgrade.domain.factor"; public static final int DFS_UPGRADE_DOMAIN_FACTOR_DEFAULT = DFS_REPLICATION_DEFAULT; @@ -1432,7 +1430,10 @@ public class DFSConfigKeys extends CommonConfigurationKeys { public static final long DFS_JOURNALNODE_SYNC_INTERVAL_DEFAULT = 2*60*1000L; public static final String DFS_JOURNALNODE_EDIT_CACHE_SIZE_KEY = "dfs.journalnode.edit-cache-size.bytes"; - public 
static final int DFS_JOURNALNODE_EDIT_CACHE_SIZE_DEFAULT = 1024 * 1024; + + public static final String DFS_JOURNALNODE_EDIT_CACHE_SIZE_FRACTION_KEY = + "dfs.journalnode.edit-cache-size.fraction"; + public static final float DFS_JOURNALNODE_EDIT_CACHE_SIZE_FRACTION_DEFAULT = 0.5f; // Journal-node related configs for the client side. public static final String DFS_QJOURNAL_QUEUE_SIZE_LIMIT_KEY = "dfs.qjournal.queued-edits.limit.mb"; diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLoggerSet.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLoggerSet.java index 624e574024c..a65120e3610 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLoggerSet.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLoggerSet.java @@ -87,7 +87,7 @@ class AsyncLoggerSet { /** * @return the epoch number for this writer. This may only be called after - * a successful call to {@link #createNewUniqueEpoch(NamespaceInfo)}. + * a successful call to {@link QuorumJournalManager#createNewUniqueEpoch()}. */ long getEpoch() { Preconditions.checkState(myEpoch != INVALID_EPOCH, diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournaledEditsCache.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournaledEditsCache.java index 65f54609ef3..339b7fa7b68 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournaledEditsCache.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournaledEditsCache.java @@ -40,6 +40,7 @@ import org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream; import org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader; import org.apache.hadoop.hdfs.server.namenode.FSEditLogOp; import org.apache.hadoop.util.AutoCloseableLock; +import org.apache.hadoop.util.Preconditions; /** * An in-memory cache of edits in their serialized form. This is used to serve @@ -121,12 +122,18 @@ class JournaledEditsCache { // ** End lock-protected fields ** JournaledEditsCache(Configuration conf) { + float fraction = conf.getFloat(DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_FRACTION_KEY, + DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_FRACTION_DEFAULT); + Preconditions.checkArgument((fraction > 0 && fraction < 1.0f), + String.format("Cache config %s is set at %f, it should be a positive float value, " + + "less than 1.0. The recommended value is less than 0.9.", + DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_FRACTION_KEY, fraction)); capacity = conf.getInt(DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_KEY, - DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_DEFAULT); + (int) (Runtime.getRuntime().maxMemory() * fraction)); if (capacity > 0.9 * Runtime.getRuntime().maxMemory()) { Journal.LOG.warn(String.format("Cache capacity is set at %d bytes but " + "maximum JVM memory is only %d bytes. 
It is recommended that you " + - "decrease the cache size or increase the heap size.", + "decrease the cache size/fraction or increase the heap size.", capacity, Runtime.getRuntime().maxMemory())); } Journal.LOG.info("Enabling the journaled edits cache with a capacity " + @@ -277,11 +284,12 @@ class JournaledEditsCache { initialize(INVALID_TXN_ID); Journal.LOG.warn(String.format("A single batch of edits was too " + "large to fit into the cache: startTxn = %d, endTxn = %d, " + - "input length = %d. The capacity of the cache (%s) must be " + + "input length = %d. The cache size (%s) or cache fraction (%s) must be " + "increased for it to work properly (current capacity %d)." + "Cache is now empty.", newStartTxn, newEndTxn, inputData.length, - DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_KEY, capacity)); + DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_KEY, + DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_FRACTION_KEY, capacity)); return; } if (dataMap.isEmpty()) { @@ -388,10 +396,11 @@ class JournaledEditsCache { } else { return new CacheMissException(lowestTxnId - requestedTxnId, "Oldest txn ID available in the cache is %d, but requested txns " + - "starting at %d. The cache size (%s) may need to be increased " + - "to hold more transactions (currently %d bytes containing %d " + + "starting at %d. The cache size (%s) or cache fraction (%s) may need to be " + + "increased to hold more transactions (currently %d bytes containing %d " + "transactions)", lowestTxnId, requestedTxnId, - DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_KEY, capacity, + DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_KEY, + DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_FRACTION_KEY, capacity, highestTxnId - lowestTxnId + 1); } } @@ -414,4 +423,9 @@ class JournaledEditsCache { } + @VisibleForTesting + int getCapacity() { + return capacity; + } + } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java index 02004f337c1..07736514712 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java @@ -708,12 +708,12 @@ public class Balancer { Result newResult(ExitStatus exitStatus, long bytesLeftToMove, long bytesBeingMoved) { return new Result(exitStatus, bytesLeftToMove, bytesBeingMoved, - dispatcher.getBytesMoved(), dispatcher.getBblocksMoved()); + dispatcher.getBytesMoved(), dispatcher.getBlocksMoved()); } Result newResult(ExitStatus exitStatus) { return new Result(exitStatus, -1, -1, dispatcher.getBytesMoved(), - dispatcher.getBblocksMoved()); + dispatcher.getBlocksMoved()); } /** Run an iteration for all datanodes. */ diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java index 5c66d669120..98a6d8449b6 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java @@ -164,7 +164,7 @@ public class Dispatcher { } } - /** Aloocate a single lot of items */ + /** Allocate a single lot of items. 
*/ int allocate() { return allocate(lotSize); } @@ -1127,7 +1127,7 @@ public class Dispatcher { return nnc.getBytesMoved().get(); } - long getBblocksMoved() { + long getBlocksMoved() { return nnc.getBlocksMoved().get(); } @@ -1234,7 +1234,7 @@ public class Dispatcher { */ private long dispatchBlockMoves() throws InterruptedException { final long bytesLastMoved = getBytesMoved(); - final long blocksLastMoved = getBblocksMoved(); + final long blocksLastMoved = getBlocksMoved(); final Future[] futures = new Future[sources.size()]; int concurrentThreads = Math.min(sources.size(), @@ -1284,7 +1284,7 @@ public class Dispatcher { waitForMoveCompletion(targets); LOG.info("Total bytes (blocks) moved in this iteration {} ({})", StringUtils.byteDesc(getBytesMoved() - bytesLastMoved), - (getBblocksMoved() - blocksLastMoved)); + (getBlocksMoved() - blocksLastMoved)); return getBytesMoved() - bytesLastMoved; } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java index dfe48f7bde1..4e5e1234716 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java @@ -395,12 +395,12 @@ public class BlockManager implements BlockStatsMXBean { * The maximum number of outgoing replication streams a given node should have * at one time considering all but the highest priority replications needed. */ - int maxReplicationStreams; + private volatile int maxReplicationStreams; /** * The maximum number of outgoing replication streams a given node should have * at one time. */ - int replicationStreamsHardLimit; + private volatile int replicationStreamsHardLimit; /** Minimum copies needed or else write is disallowed */ public final short minReplication; /** Default number of replicas */ @@ -409,7 +409,7 @@ public class BlockManager implements BlockStatsMXBean { final int maxCorruptFilesReturned; final float blocksInvalidateWorkPct; - private int blocksReplWorkMultiplier; + private volatile int blocksReplWorkMultiplier; // whether or not to issue block encryption keys. final boolean encryptDataTransfer; @@ -1017,12 +1017,19 @@ public class BlockManager implements BlockStatsMXBean { * * @param newVal - Must be a positive non-zero integer. */ - public void setMaxReplicationStreams(int newVal) { - ensurePositiveInt(newVal, - DFSConfigKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY); + @VisibleForTesting + public void setMaxReplicationStreams(int newVal, boolean ensurePositiveInt) { + if (ensurePositiveInt) { + ensurePositiveInt(newVal, + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY); + } maxReplicationStreams = newVal; } + public void setMaxReplicationStreams(int newVal) { + setMaxReplicationStreams(newVal, true); + } + /** Returns the current setting for maxReplicationStreamsHardLimit, set by * {@code DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_KEY}. 
* @@ -1117,7 +1124,7 @@ public class BlockManager implements BlockStatsMXBean { return minReplicationToBeInMaintenance; } - private short getMinMaintenanceStorageNum(BlockInfo block) { + short getMinMaintenanceStorageNum(BlockInfo block) { if (block.isStriped()) { return ((BlockInfoStriped) block).getRealDataBlockNum(); } else { @@ -2599,7 +2606,8 @@ public class BlockManager implements BlockStatsMXBean { if (priority != LowRedundancyBlocks.QUEUE_HIGHEST_PRIORITY && (!node.isDecommissionInProgress() && !node.isEnteringMaintenance()) - && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams) { + && node.getNumberOfBlocksToBeReplicated() + + node.getNumberOfBlocksToBeErasureCoded() >= maxReplicationStreams) { if (isStriped && (state == StoredReplicaState.LIVE || state == StoredReplicaState.DECOMMISSIONING)) { liveBusyBlockIndices.add(blockIndex); @@ -2609,7 +2617,8 @@ public class BlockManager implements BlockStatsMXBean { continue; // already reached replication limit } - if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit) { + if (node.getNumberOfBlocksToBeReplicated() + + node.getNumberOfBlocksToBeErasureCoded() >= replicationStreamsHardLimit) { if (isStriped && (state == StoredReplicaState.LIVE || state == StoredReplicaState.DECOMMISSIONING)) { liveBusyBlockIndices.add(blockIndex); @@ -3616,7 +3625,7 @@ public class BlockManager implements BlockStatsMXBean { if (storedBlock == null || storedBlock.isDeleted()) { // If this block does not belong to anyfile, then we are done. blockLog.debug("BLOCK* addStoredBlock: {} on {} size {} but it does not belong to any file", - block, node, block.getNumBytes()); + reportedBlock, node, reportedBlock.getNumBytes()); // we could add this block to invalidate set of this datanode. // it will happen in next block report otherwise. return block; @@ -3631,12 +3640,12 @@ public class BlockManager implements BlockStatsMXBean { (node.isDecommissioned() || node.isDecommissionInProgress()) ? 
0 : 1; if (logEveryBlock) { blockLog.info("BLOCK* addStoredBlock: {} is added to {} (size={})", - node, storedBlock, storedBlock.getNumBytes()); + node, reportedBlock, reportedBlock.getNumBytes()); } } else if (result == AddBlockResult.REPLACED) { curReplicaDelta = 0; blockLog.warn("BLOCK* addStoredBlock: block {} moved to storageType " + - "{} on node {}", storedBlock, storageInfo.getStorageType(), node); + "{} on node {}", reportedBlock, storageInfo.getStorageType(), node); } else { // if the same block is added again and the replica was corrupt // previously because of a wrong gen stamp, remove it from the @@ -3646,8 +3655,8 @@ public class BlockManager implements BlockStatsMXBean { curReplicaDelta = 0; if (blockLog.isDebugEnabled()) { blockLog.debug("BLOCK* addStoredBlock: Redundant addStoredBlock request" - + " received for {} on node {} size {}", storedBlock, node, - storedBlock.getNumBytes()); + + " received for {} on node {} size {}", reportedBlock, node, + reportedBlock.getNumBytes()); } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminBackoffMonitor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminBackoffMonitor.java index a7d72d019bd..79d5a065b08 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminBackoffMonitor.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminBackoffMonitor.java @@ -24,6 +24,7 @@ import org.apache.hadoop.hdfs.server.namenode.INodeFile; import org.apache.hadoop.hdfs.server.namenode.INodeId; import org.apache.hadoop.hdfs.util.LightWeightHashSet; import org.apache.hadoop.hdfs.util.LightWeightLinkedSet; +import org.apache.hadoop.classification.VisibleForTesting; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.HashMap; @@ -70,10 +71,10 @@ public class DatanodeAdminBackoffMonitor extends DatanodeAdminMonitorBase outOfServiceNodeBlocks = new HashMap<>(); /** - * The numbe of blocks to process when moving blocks to pendingReplication + * The number of blocks to process when moving blocks to pendingReplication * before releasing and reclaiming the namenode lock. */ - private int blocksPerLock; + private volatile int blocksPerLock; /** * The number of blocks that have been checked on this tick. @@ -82,7 +83,7 @@ public class DatanodeAdminBackoffMonitor extends DatanodeAdminMonitorBase /** * The maximum number of blocks to hold in PendingRep at any time. 
*/ - private int pendingRepLimit; + private volatile int pendingRepLimit; /** * The list of blocks which have been placed onto the replication queue @@ -801,6 +802,26 @@ public class DatanodeAdminBackoffMonitor extends DatanodeAdminMonitorBase return false; } + @VisibleForTesting + @Override + public int getPendingRepLimit() { + return pendingRepLimit; + } + + public void setPendingRepLimit(int pendingRepLimit) { + this.pendingRepLimit = pendingRepLimit; + } + + @VisibleForTesting + @Override + public int getBlocksPerLock() { + return blocksPerLock; + } + + public void setBlocksPerLock(int blocksPerLock) { + this.blocksPerLock = blocksPerLock; + } + static class BlockStats { private LightWeightHashSet openFiles = new LightWeightLinkedSet<>(); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java index e642dfba351..94049b35dc4 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java @@ -27,6 +27,7 @@ import org.apache.hadoop.hdfs.util.CyclicIteration; import org.apache.hadoop.hdfs.util.LightWeightHashSet; import org.apache.hadoop.hdfs.util.LightWeightLinkedSet; import org.apache.hadoop.util.ChunkedArrayList; +import org.apache.hadoop.classification.VisibleForTesting; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -113,6 +114,15 @@ public class DatanodeAdminDefaultMonitor extends DatanodeAdminMonitorBase numBlocksPerCheck = DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BLOCKS_PER_INTERVAL_DEFAULT; } + + final String deprecatedKey = "dfs.namenode.decommission.nodes.per.interval"; + final String strNodes = conf.get(deprecatedKey); + if (strNodes != null) { + LOG.warn("Deprecated configuration key {} will be ignored.", deprecatedKey); + LOG.warn("Please update your configuration to use {} instead.", + DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BLOCKS_PER_INTERVAL_KEY); + } + LOG.info("Initialized the Default Decommission and Maintenance monitor"); } @@ -137,6 +147,28 @@ public class DatanodeAdminDefaultMonitor extends DatanodeAdminMonitorBase return numNodesChecked; } + @VisibleForTesting + @Override + public int getPendingRepLimit() { + return 0; + } + + @Override + public void setPendingRepLimit(int pendingRepLimit) { + // nothing. + } + + @VisibleForTesting + @Override + public int getBlocksPerLock() { + return 0; + } + + @Override + public void setBlocksPerLock(int blocksPerLock) { + // nothing. 
+ } + @Override public void run() { LOG.debug("DatanodeAdminMonitor is running."); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java index 887cb1072d9..af207a843fd 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java @@ -108,32 +108,6 @@ public class DatanodeAdminManager { Preconditions.checkArgument(intervalSecs >= 0, "Cannot set a negative " + "value for " + DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_INTERVAL_KEY); - int blocksPerInterval = conf.getInt( - DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BLOCKS_PER_INTERVAL_KEY, - DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BLOCKS_PER_INTERVAL_DEFAULT); - - final String deprecatedKey = - "dfs.namenode.decommission.nodes.per.interval"; - final String strNodes = conf.get(deprecatedKey); - if (strNodes != null) { - LOG.warn("Deprecated configuration key {} will be ignored.", - deprecatedKey); - LOG.warn("Please update your configuration to use {} instead.", - DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BLOCKS_PER_INTERVAL_KEY); - } - - Preconditions.checkArgument(blocksPerInterval > 0, - "Must set a positive value for " - + DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BLOCKS_PER_INTERVAL_KEY); - - final int maxConcurrentTrackedNodes = conf.getInt( - DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_MAX_CONCURRENT_TRACKED_NODES, - DFSConfigKeys - .DFS_NAMENODE_DECOMMISSION_MAX_CONCURRENT_TRACKED_NODES_DEFAULT); - Preconditions.checkArgument(maxConcurrentTrackedNodes >= 0, - "Cannot set a negative value for " - + DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_MAX_CONCURRENT_TRACKED_NODES); - Class cls = null; try { cls = conf.getClass( @@ -152,12 +126,7 @@ public class DatanodeAdminManager { executor.scheduleWithFixedDelay(monitor, intervalSecs, intervalSecs, TimeUnit.SECONDS); - if (LOG.isDebugEnabled()) { - LOG.debug("Activating DatanodeAdminManager with interval {} seconds, " + - "{} max blocks per interval, " + - "{} max concurrently tracked nodes.", intervalSecs, - blocksPerInterval, maxConcurrentTrackedNodes); - } + LOG.debug("Activating DatanodeAdminManager with interval {} seconds.", intervalSecs); } /** @@ -352,8 +321,7 @@ public class DatanodeAdminManager { } } } - if (isMaintenance - && numLive >= blockManager.getMinReplicationToBeInMaintenance()) { + if (isMaintenance && numLive >= blockManager.getMinMaintenanceStorageNum(block)) { return true; } return false; @@ -419,4 +387,30 @@ public class DatanodeAdminManager { executor.submit(monitor).get(); } + public void refreshPendingRepLimit(int pendingRepLimit, String key) { + ensurePositiveInt(pendingRepLimit, key); + this.monitor.setPendingRepLimit(pendingRepLimit); + } + + @VisibleForTesting + public int getPendingRepLimit() { + return this.monitor.getPendingRepLimit(); + } + + public void refreshBlocksPerLock(int blocksPerLock, String key) { + ensurePositiveInt(blocksPerLock, key); + this.monitor.setBlocksPerLock(blocksPerLock); + } + + @VisibleForTesting + public int getBlocksPerLock() { + return this.monitor.getBlocksPerLock(); + } + + private void ensurePositiveInt(int val, String key) { + Preconditions.checkArgument( + (val > 0), + key + " = '" + val + "' is invalid. 
" + + "It should be a positive, non-zero integer value."); + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminMonitorBase.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminMonitorBase.java index 5aab1b4a8a1..1403161a0f5 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminMonitorBase.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminMonitorBase.java @@ -123,6 +123,10 @@ public abstract class DatanodeAdminMonitorBase DFSConfigKeys .DFS_NAMENODE_DECOMMISSION_MAX_CONCURRENT_TRACKED_NODES_DEFAULT; } + + LOG.debug("Activating DatanodeAdminMonitor with {} max concurrently tracked nodes.", + maxConcurrentTrackedNodes); + processConf(); } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminMonitorInterface.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminMonitorInterface.java index 89673a759ea..a4774742108 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminMonitorInterface.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminMonitorInterface.java @@ -37,4 +37,12 @@ public interface DatanodeAdminMonitorInterface extends Runnable { void setBlockManager(BlockManager bm); void setDatanodeAdminManager(DatanodeAdminManager dnm); void setNameSystem(Namesystem ns); + + int getPendingRepLimit(); + + void setPendingRepLimit(int pendingRepLimit); + + int getBlocksPerLock(); + + void setBlocksPerLock(int blocksPerLock); } \ No newline at end of file diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java index a2b7afedfdd..c77d54591a9 100755 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java @@ -197,8 +197,10 @@ public class DatanodeDescriptor extends DatanodeInfo { /** A queue of blocks to be replicated by this datanode */ private final BlockQueue replicateBlocks = new BlockQueue<>(); - /** A queue of blocks to be erasure coded by this datanode */ - private final BlockQueue erasurecodeBlocks = + /** A queue of ec blocks to be replicated by this datanode. */ + private final BlockQueue ecBlocksToBeReplicated = new BlockQueue<>(); + /** A queue of ec blocks to be erasure coded by this datanode. */ + private final BlockQueue ecBlocksToBeErasureCoded = new BlockQueue<>(); /** A queue of blocks to be recovered by this datanode */ private final BlockQueue recoverBlocks = new BlockQueue<>(); @@ -358,7 +360,8 @@ public class DatanodeDescriptor extends DatanodeInfo { } this.recoverBlocks.clear(); this.replicateBlocks.clear(); - this.erasurecodeBlocks.clear(); + this.ecBlocksToBeReplicated.clear(); + this.ecBlocksToBeErasureCoded.clear(); // pendingCached, cached, and pendingUncached are protected by the // FSN lock. 
this.pendingCached.clear(); @@ -678,6 +681,15 @@ public class DatanodeDescriptor extends DatanodeInfo { replicateBlocks.offer(new BlockTargetPair(block, targets)); } + /** + * Store ec block to be replicated work. + */ + @VisibleForTesting + public void addECBlockToBeReplicated(Block block, DatanodeStorageInfo[] targets) { + assert (block != null && targets != null && targets.length > 0); + ecBlocksToBeReplicated.offer(new BlockTargetPair(block, targets)); + } + /** * Store block erasure coding work. */ @@ -687,9 +699,9 @@ public class DatanodeDescriptor extends DatanodeInfo { assert (block != null && sources != null && sources.length > 0); BlockECReconstructionInfo task = new BlockECReconstructionInfo(block, sources, targets, liveBlockIndices, excludeReconstrutedIndices, ecPolicy); - erasurecodeBlocks.offer(task); + ecBlocksToBeErasureCoded.offer(task); BlockManager.LOG.debug("Adding block reconstruction task " + task + "to " - + getName() + ", current queue size is " + erasurecodeBlocks.size()); + + getName() + ", current queue size is " + ecBlocksToBeErasureCoded.size()); } /** @@ -720,7 +732,8 @@ public class DatanodeDescriptor extends DatanodeInfo { * The number of work items that are pending to be replicated. */ int getNumberOfBlocksToBeReplicated() { - return pendingReplicationWithoutTargets + replicateBlocks.size(); + return pendingReplicationWithoutTargets + replicateBlocks.size() + + ecBlocksToBeReplicated.size(); } /** @@ -728,7 +741,15 @@ public class DatanodeDescriptor extends DatanodeInfo { */ @VisibleForTesting public int getNumberOfBlocksToBeErasureCoded() { - return erasurecodeBlocks.size(); + return ecBlocksToBeErasureCoded.size(); + } + + /** + * The number of ec work items that are pending to be replicated. + */ + @VisibleForTesting + public int getNumberOfECBlocksToBeReplicated() { + return ecBlocksToBeReplicated.size(); } @VisibleForTesting @@ -740,9 +761,13 @@ public class DatanodeDescriptor extends DatanodeInfo { return replicateBlocks.poll(maxTransfers); } + List getECReplicatedCommand(int maxTransfers) { + return ecBlocksToBeReplicated.poll(maxTransfers); + } + public List getErasureCodeCommand( int maxTransfers) { - return erasurecodeBlocks.poll(maxTransfers); + return ecBlocksToBeErasureCoded.poll(maxTransfers); } public BlockInfo[] getLeaseRecoveryCommand(int maxTransfers) { @@ -994,7 +1019,11 @@ public class DatanodeDescriptor extends DatanodeInfo { if (repl > 0) { sb.append(" ").append(repl).append(" blocks to be replicated;"); } - int ec = erasurecodeBlocks.size(); + int ecRepl = ecBlocksToBeReplicated.size(); + if (ecRepl > 0) { + sb.append(" ").append(ecRepl).append(" ec blocks to be replicated;"); + } + int ec = ecBlocksToBeErasureCoded.size(); if(ec > 0) { sb.append(" ").append(ec).append(" blocks to be erasure coded;"); } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java index b2c5cb0b557..88f3ac4e7c4 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java @@ -1825,28 +1825,41 @@ public class DatanodeManager { // Allocate _approximately_ maxTransfers pending tasks to DataNode. 
// NN chooses pending tasks based on the ratio between the lengths of // replication and erasure-coded block queues. - int totalReplicateBlocks = nodeinfo.getNumberOfReplicateBlocks(); - int totalECBlocks = nodeinfo.getNumberOfBlocksToBeErasureCoded(); - int totalBlocks = totalReplicateBlocks + totalECBlocks; + int replicationBlocks = nodeinfo.getNumberOfReplicateBlocks(); + int ecBlocksToBeReplicated = nodeinfo.getNumberOfECBlocksToBeReplicated(); + int ecBlocksToBeErasureCoded = nodeinfo.getNumberOfBlocksToBeErasureCoded(); + int totalBlocks = replicationBlocks + ecBlocksToBeReplicated + ecBlocksToBeErasureCoded; if (totalBlocks > 0) { - int maxTransfers; + int maxTransfers = blockManager.getMaxReplicationStreams() - xmitsInProgress; + int maxECReplicatedTransfers; if (nodeinfo.isDecommissionInProgress()) { - maxTransfers = blockManager.getReplicationStreamsHardLimit() + maxECReplicatedTransfers = blockManager.getReplicationStreamsHardLimit() - xmitsInProgress; } else { - maxTransfers = blockManager.getMaxReplicationStreams() - - xmitsInProgress; + maxECReplicatedTransfers = maxTransfers; } int numReplicationTasks = (int) Math.ceil( - (double) (totalReplicateBlocks * maxTransfers) / totalBlocks); - int numECTasks = (int) Math.ceil( - (double) (totalECBlocks * maxTransfers) / totalBlocks); - LOG.debug("Pending replication tasks: {} erasure-coded tasks: {}.", - numReplicationTasks, numECTasks); + (double) (replicationBlocks * maxTransfers) / totalBlocks); + int numEcReplicatedTasks = (int) Math.ceil( + (double) (ecBlocksToBeReplicated * maxECReplicatedTransfers) / totalBlocks); + int numECReconstructedTasks = (int) Math.ceil( + (double) (ecBlocksToBeErasureCoded * maxTransfers) / totalBlocks); + LOG.debug("Pending replication tasks: {} ec to be replicated tasks: {} " + + "ec reconstruction tasks: {}.", + numReplicationTasks, numEcReplicatedTasks, numECReconstructedTasks); // check pending replication tasks - List pendingList = nodeinfo.getReplicationCommand( + List pendingReplicationList = nodeinfo.getReplicationCommand( numReplicationTasks); - if (pendingList != null && !pendingList.isEmpty()) { + List pendingECReplicatedList = nodeinfo.getECReplicatedCommand( + numEcReplicatedTasks); + List pendingList = new ArrayList(); + if(pendingReplicationList != null && !pendingReplicationList.isEmpty()) { + pendingList.addAll(pendingReplicationList); + } + if(pendingECReplicatedList != null && !pendingECReplicatedList.isEmpty()) { + pendingList.addAll(pendingECReplicatedList); + } + if (!pendingList.isEmpty()) { // If the block is deleted, the block size will become // BlockCommand.NO_ACK (LONG.MAX_VALUE) . 
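The DatanodeManager hunk above divides the node's transfer budget across the replication, EC-replication and EC-reconstruction queues in proportion to their lengths (with a separate hard limit applied to decommissioning nodes). A small arithmetic sketch of that split, assuming plain int inputs:

class TransferBudget {
  /** Proportionally divide maxTransfers across the three pending-work queues. */
  static int[] split(int replicationBlocks, int ecToReplicate, int ecToReconstruct,
      int maxTransfers) {
    int totalBlocks = replicationBlocks + ecToReplicate + ecToReconstruct;
    if (totalBlocks == 0) {
      return new int[] {0, 0, 0};
    }
    int repl = (int) Math.ceil((double) (replicationBlocks * maxTransfers) / totalBlocks);
    int ecRepl = (int) Math.ceil((double) (ecToReplicate * maxTransfers) / totalBlocks);
    int ecRecon = (int) Math.ceil((double) (ecToReconstruct * maxTransfers) / totalBlocks);
    // Example: split(6, 2, 2, 5) yields {3, 1, 1}.
    return new int[] {repl, ecRepl, ecRecon};
  }
}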
This kind of block we don't // need @@ -1868,7 +1881,7 @@ public class DatanodeManager { } // check pending erasure coding tasks List pendingECList = nodeinfo - .getErasureCodeCommand(numECTasks); + .getErasureCodeCommand(numECReconstructedTasks); if (pendingECList != null && !pendingECList.isEmpty()) { cmds.add(new BlockECReconstructionCommand( DNA_ERASURE_CODING_RECONSTRUCTION, pendingECList)); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ErasureCodingWork.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ErasureCodingWork.java index e5303a28d71..147f4c3fd62 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ErasureCodingWork.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ErasureCodingWork.java @@ -164,7 +164,7 @@ class ErasureCodingWork extends BlockReconstructionWork { stripedBlk.getDataBlockNum(), blockIndex); final Block targetBlk = new Block(stripedBlk.getBlockId() + blockIndex, internBlkLen, stripedBlk.getGenerationStamp()); - source.addBlockToBeReplicated(targetBlk, + source.addECBlockToBeReplicated(targetBlk, new DatanodeStorageInfo[] {target}); LOG.debug("Add replication task from source {} to " + "target {} for EC block {}", source, target, targetBlk); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java index 6c3b4c97bed..553b8218421 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java @@ -59,7 +59,7 @@ class PendingReconstructionBlocks { // It might take anywhere between 5 to 10 minutes before // a request is timed out. 
// - private long timeout = + private volatile long timeout = DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_DEFAULT * 1000; private final static long DEFAULT_RECHECK_INTERVAL = 5 * 60 * 1000; diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HostRestrictingAuthorizationFilter.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HostRestrictingAuthorizationFilter.java index 0308e55e4cf..afed1e9e6e7 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HostRestrictingAuthorizationFilter.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HostRestrictingAuthorizationFilter.java @@ -226,9 +226,8 @@ public class HostRestrictingAuthorizationFilter implements Filter { final String query = interaction.getQueryString(); final String uri = interaction.getRequestURI(); if (!uri.startsWith(WebHdfsFileSystem.PATH_PREFIX)) { - LOG.trace("Rejecting interaction; wrong URI: {}", uri); - interaction.sendError(HttpServletResponse.SC_NOT_FOUND, - "The request URI must start with " + WebHdfsFileSystem.PATH_PREFIX); + LOG.trace("Proceeding with interaction since the request doesn't access WebHDFS API"); + interaction.proceed(); return; } final String path = uri.substring(WebHdfsFileSystem.PATH_PREFIX.length()); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java index 844b67ce1a8..f7b09d5fc18 100755 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java @@ -202,6 +202,7 @@ class BPServiceActor implements Runnable { Map getActorInfoMap() { final Map info = new HashMap(); info.put("NamenodeAddress", getNameNodeAddress()); + info.put("NamenodeHaState", state != null ? state.toString() : "Unknown"); info.put("BlockPoolID", bpos.getBlockPoolId()); info.put("ActorState", getRunningState()); info.put("LastHeartbeat", @@ -697,6 +698,8 @@ class BPServiceActor implements Runnable { // Every so often, send heartbeat or block-report // final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime); + LOG.debug("BP offer service run start time: {}, sendHeartbeat: {}", startTime, + sendHeartbeat); HeartbeatResponse resp = null; if (sendHeartbeat) { // @@ -709,6 +712,8 @@ class BPServiceActor implements Runnable { boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) && scheduler.isBlockReportDue(startTime); if (!dn.areHeartbeatsDisabledForTests()) { + LOG.debug("Before sending heartbeat to namenode {}, the state of the namenode known" + + " to datanode so far is {}", this.getNameNodeAddress(), state); resp = sendHeartBeat(requestBlockReportLease); assert resp != null; if (resp.getFullBlockReportLeaseId() != 0) { @@ -733,7 +738,12 @@ class BPServiceActor implements Runnable { // that we should actually process. 
bpos.updateActorStatesFromHeartbeat( this, resp.getNameNodeHaState()); - state = resp.getNameNodeHaState().getState(); + HAServiceState stateFromResp = resp.getNameNodeHaState().getState(); + if (state != stateFromResp) { + LOG.info("After receiving heartbeat response, updating state of namenode {} to {}", + this.getNameNodeAddress(), stateFromResp); + } + state = stateFromResp; if (state == HAServiceState.ACTIVE) { handleRollingUpgradeStatus(resp); @@ -794,6 +804,7 @@ class BPServiceActor implements Runnable { long sleepTime = Math.min(1000, dnConf.heartBeatInterval); Thread.sleep(sleepTime); } catch (InterruptedException ie) { + LOG.info("BPServiceActor {} is interrupted", this); Thread.currentThread().interrupt(); } } @@ -995,6 +1006,8 @@ class BPServiceActor implements Runnable { while (!duplicateQueue.isEmpty()) { BPServiceActorAction actionItem = duplicateQueue.remove(); try { + LOG.debug("BPServiceActor ( {} ) processing queued messages. Action item: {}", this, + actionItem); actionItem.reportTo(bpNamenode, bpRegistration); } catch (BPServiceActorActionException baae) { LOG.warn(baae.getMessage() + nnAddr , baae); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java index 58334bf5c07..c42abda72bc 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java @@ -72,6 +72,8 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_OUTLIERS_REPORT_ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_OUTLIERS_REPORT_INTERVAL_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_DEFAULT; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_KEY; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_DEFAULT; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_STARTUP_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_BALANCE_MAX_NUM_CONCURRENT_MOVES_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_BALANCE_MAX_NUM_CONCURRENT_MOVES_DEFAULT; @@ -353,6 +355,7 @@ public class DataNode extends ReconfigurableBase DFS_DATANODE_OUTLIERS_REPORT_INTERVAL_KEY, DFS_DATANODE_MIN_OUTLIER_DETECTION_DISKS_KEY, DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_KEY, + DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_KEY, FS_DU_INTERVAL_KEY, FS_GETSPACEUSED_JITTER_KEY, FS_GETSPACEUSED_CLASSNAME)); @@ -699,6 +702,7 @@ public class DataNode extends ReconfigurableBase case DFS_DATANODE_OUTLIERS_REPORT_INTERVAL_KEY: case DFS_DATANODE_MIN_OUTLIER_DETECTION_DISKS_KEY: case DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_KEY: + case DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_KEY: return reconfSlowDiskParameters(property, newVal); case FS_DU_INTERVAL_KEY: case FS_GETSPACEUSED_JITTER_KEY: @@ -877,6 +881,12 @@ public class DataNode extends ReconfigurableBase Long.parseLong(newVal)); result = Long.toString(threshold); diskMetrics.setLowThresholdMs(threshold); + } else if (property.equals(DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_KEY)) { + checkNotNull(diskMetrics, "DataNode disk stats may be disabled."); + int maxSlowDisksToExclude = (newVal == null ? 
+ DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_DEFAULT : Integer.parseInt(newVal)); + result = Integer.toString(maxSlowDisksToExclude); + diskMetrics.setMaxSlowDisksToExclude(maxSlowDisksToExclude); } LOG.info("RECONFIGURE* changed {} to {}", property, newVal); return result; diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java index e9a9c690163..7b116d9e566 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java @@ -157,4 +157,9 @@ public class DataNodeFaultInjector { public void badDecoding(ByteBuffer[] outputs) {} public void markSlow(String dnAddr, int[] replies) {} + + /** + * Just delay delete replica a while. + */ + public void delayDeleteReplica() {} } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataSetLockManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataSetLockManager.java index 1d59f87ab2b..eac1259fb84 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataSetLockManager.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataSetLockManager.java @@ -233,7 +233,6 @@ public class DataSetLockManager implements DataNodeLockManager { @Override public Set deepCopyReplica(String bpid) throws IOException { - try (AutoCloseDataSetLock l = lockManager.readLock(LockLevel.BLOCK_POOl, bpid)) { - Set replicas = new HashSet<>(); - volumeMap.replicas(bpid, (iterator) -> { - while (iterator.hasNext()) { - ReplicaInfo b = iterator.next(); - replicas.add(b); - } - }); - return replicas; - } + Set replicas = new HashSet<>(); + volumeMap.replicas(bpid, (iterator) -> { + while (iterator.hasNext()) { + ReplicaInfo b = iterator.next(); + replicas.add(b); + } + }); + return replicas; } /** @@ -522,7 +520,7 @@ class FsDatasetImpl implements FsDatasetSpi { for (final NamespaceInfo nsInfo : nsInfos) { String bpid = nsInfo.getBlockPoolID(); - try (AutoCloseDataSetLock l = lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) { + try { fsVolume.addBlockPool(bpid, this.conf, this.timer); fsVolume.getVolumeMap(bpid, tempVolumeMap, ramDiskReplicaTracker); } catch (IOException e) { @@ -2169,19 +2167,16 @@ class FsDatasetImpl implements FsDatasetSpi { */ @Override public List getFinalizedBlocks(String bpid) { - try (AutoCloseDataSetLock l = lockManager.readLock(LockLevel.BLOCK_POOl, bpid)) { - ArrayList finalized = - new ArrayList<>(volumeMap.size(bpid)); - volumeMap.replicas(bpid, (iterator) -> { - while (iterator.hasNext()) { - ReplicaInfo b = iterator.next(); - if (b.getState() == ReplicaState.FINALIZED) { - finalized.add(new FinalizedReplica((FinalizedReplica)b)); - } + ArrayList finalized = new ArrayList<>(); + volumeMap.replicas(bpid, (iterator) -> { + while (iterator.hasNext()) { + ReplicaInfo b = iterator.next(); + if (b.getState() == ReplicaState.FINALIZED) { + finalized.add(new FinalizedReplica((FinalizedReplica)b)); } - }); - return finalized; - } + } + }); + return finalized; } /** @@ -2310,10 +2305,10 @@ class FsDatasetImpl implements FsDatasetSpi { throws IOException { final List errors = new ArrayList(); for (int i = 0; i < invalidBlks.length; 
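The DataNode hunk above makes the slow-disk exclusion count reconfigurable: a null value falls back to the default, otherwise the string is parsed and pushed into the disk metrics. A simplified sketch of that null-means-default handling; the default of 0 used here is an assumption for illustration.

class SlowDiskReconf {
  /** Assumed default for this sketch; the real default lives in DFSConfigKeys. */
  static final int MAX_SLOWDISKS_TO_EXCLUDE_DEFAULT = 0;

  /** A null reconfiguration value restores the default; anything else must parse as an int. */
  static int parseMaxSlowDisksToExclude(String newVal) {
    return newVal == null
        ? MAX_SLOWDISKS_TO_EXCLUDE_DEFAULT
        : Integer.parseInt(newVal);
  }
}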
i++) { - final ReplicaInfo removing; + final ReplicaInfo info; final FsVolumeImpl v; - try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) { - final ReplicaInfo info = volumeMap.get(bpid, invalidBlks[i]); + try (AutoCloseableLock lock = lockManager.readLock(LockLevel.BLOCK_POOl, bpid)) { + info = volumeMap.get(bpid, invalidBlks[i]); if (info == null) { ReplicaInfo infoByBlockId = volumeMap.get(bpid, invalidBlks[i].getBlockId()); @@ -2347,48 +2342,21 @@ class FsDatasetImpl implements FsDatasetSpi { LOG.warn("Parent directory check failed; replica {} is " + "not backed by a local file", info); } - removing = volumeMap.remove(bpid, invalidBlks[i]); - addDeletingBlock(bpid, removing.getBlockId()); - LOG.debug("Block file {} is to be deleted", removing.getBlockURI()); - datanode.getMetrics().incrBlocksRemoved(1); - if (removing instanceof ReplicaInPipeline) { - ((ReplicaInPipeline) removing).releaseAllBytesReserved(); - } } - if (v.isTransientStorage()) { - RamDiskReplica replicaInfo = - ramDiskReplicaTracker.getReplica(bpid, invalidBlks[i].getBlockId()); - if (replicaInfo != null) { - if (!replicaInfo.getIsPersisted()) { - datanode.getMetrics().incrRamDiskBlocksDeletedBeforeLazyPersisted(); - } - ramDiskReplicaTracker.discardReplica(replicaInfo.getBlockPoolId(), - replicaInfo.getBlockId(), true); - } - } - - // If a DFSClient has the replica in its cache of short-circuit file - // descriptors (and the client is using ShortCircuitShm), invalidate it. - datanode.getShortCircuitRegistry().processBlockInvalidation( - new ExtendedBlockId(invalidBlks[i].getBlockId(), bpid)); - - // If the block is cached, start uncaching it. - cacheManager.uncacheBlock(bpid, invalidBlks[i].getBlockId()); - try { if (async) { // Delete the block asynchronously to make sure we can do it fast // enough. // It's ok to unlink the block file before the uncache operation // finishes. - asyncDiskService.deleteAsync(v.obtainReference(), removing, + asyncDiskService.deleteAsync(v.obtainReference(), info, new ExtendedBlock(bpid, invalidBlks[i]), - dataStorage.getTrashDirectoryForReplica(bpid, removing)); + dataStorage.getTrashDirectoryForReplica(bpid, info)); } else { - asyncDiskService.deleteSync(v.obtainReference(), removing, + asyncDiskService.deleteSync(v.obtainReference(), info, new ExtendedBlock(bpid, invalidBlks[i]), - dataStorage.getTrashDirectoryForReplica(bpid, removing)); + dataStorage.getTrashDirectoryForReplica(bpid, info)); } } catch (ClosedChannelException e) { LOG.warn("Volume {} is closed, ignore the deletion task for " + @@ -2427,6 +2395,91 @@ class FsDatasetImpl implements FsDatasetSpi { block.getStorageUuid()); } + /** + * Remove Replica from ReplicaMap. + * + * @param block + * @param volume + * @return + */ + boolean removeReplicaFromMem(final ExtendedBlock block, final FsVolumeImpl volume) { + final String bpid = block.getBlockPoolId(); + final Block localBlock = block.getLocalBlock(); + final long blockId = localBlock.getBlockId(); + try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) { + final ReplicaInfo info = volumeMap.get(bpid, localBlock); + if (info == null) { + ReplicaInfo infoByBlockId = volumeMap.get(bpid, blockId); + if (infoByBlockId == null) { + // It is okay if the block is not found -- it + // may be deleted earlier. 
+ LOG.info("Failed to delete replica {}: ReplicaInfo not found " + + "in removeReplicaFromMem.", localBlock); + } else { + LOG.error("Failed to delete replica {}: GenerationStamp not matched, " + + "existing replica is {} in removeReplicaFromMem.", + localBlock, Block.toString(infoByBlockId)); + } + return false; + } + + FsVolumeImpl v = (FsVolumeImpl) info.getVolume(); + if (v == null) { + LOG.error("Failed to delete replica {}. No volume for this replica {} " + + "in removeReplicaFromMem.", localBlock, info); + return false; + } + + try { + File blockFile = new File(info.getBlockURI()); + if (blockFile.getParentFile() == null) { + LOG.error("Failed to delete replica {}. Parent not found for block file: {} " + + "in removeReplicaFromMem.", localBlock, blockFile); + return false; + } + } catch(IllegalArgumentException e) { + LOG.warn("Parent directory check failed; replica {} is " + + "not backed by a local file in removeReplicaFromMem.", info); + } + + if (!volume.getStorageID().equals(v.getStorageID())) { + LOG.error("Failed to delete replica {}. Appear different volumes, oldVolume: {} " + + "and newVolume: {} for this replica in removeReplicaFromMem.", + localBlock, volume, v); + return false; + } + + ReplicaInfo removing = volumeMap.remove(bpid, localBlock); + addDeletingBlock(bpid, removing.getBlockId()); + LOG.debug("Block file {} is to be deleted", removing.getBlockURI()); + datanode.getMetrics().incrBlocksRemoved(1); + if (removing instanceof ReplicaInPipeline) { + ((ReplicaInPipeline) removing).releaseAllBytesReserved(); + } + } + + if (volume.isTransientStorage()) { + RamDiskReplicaTracker.RamDiskReplica replicaInfo = ramDiskReplicaTracker. + getReplica(bpid, blockId); + if (replicaInfo != null) { + if (!replicaInfo.getIsPersisted()) { + datanode.getMetrics().incrRamDiskBlocksDeletedBeforeLazyPersisted(); + } + ramDiskReplicaTracker.discardReplica(replicaInfo.getBlockPoolId(), + replicaInfo.getBlockId(), true); + } + } + + // If a DFSClient has the replica in its cache of short-circuit file + // descriptors (and the client is using ShortCircuitShm), invalidate it. + datanode.getShortCircuitRegistry().processBlockInvalidation( + ExtendedBlockId.fromExtendedBlock(block)); + + // If the block is cached, start uncaching it. + cacheManager.uncacheBlock(bpid, blockId); + return true; + } + /** * Asynchronously attempts to cache a single block via {@link FsDatasetCache}. */ @@ -3633,8 +3686,8 @@ class FsDatasetImpl implements FsDatasetSpi { } } } - - private void addDeletingBlock(String bpid, Long blockId) { + + protected void addDeletingBlock(String bpid, Long blockId) { synchronized(deletingBlock) { Set s = deletingBlock.get(bpid); if (s == null) { diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java index 1103468d3c8..15bd9dec604 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java @@ -184,7 +184,7 @@ public abstract class RamDiskReplicaTracker { * {@link org.apache.hadoop.hdfs.DFSConfigKeys#DFS_DATANODE_RAM_DISK_REPLICA_TRACKER_KEY}. * * @param conf the configuration to be used - * @param dataset the FsDataset object. 
+ * @param fsDataset the FsDataset object. * @return an instance of RamDiskReplicaTracker */ static RamDiskReplicaTracker getInstance(final Configuration conf, diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java index 409084cfe8b..a8ccd6d4ec4 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java @@ -80,7 +80,7 @@ public class DataNodeDiskMetrics { /** * The number of slow disks that needs to be excluded. */ - private int maxSlowDisksToExclude; + private volatile int maxSlowDisksToExclude; /** * List of slow disks that need to be excluded. */ @@ -274,6 +274,14 @@ public class DataNodeDiskMetrics { return slowDisksToExclude; } + public int getMaxSlowDisksToExclude() { + return maxSlowDisksToExclude; + } + + public void setMaxSlowDisksToExclude(int maxSlowDisksToExclude) { + this.maxSlowDisksToExclude = maxSlowDisksToExclude; + } + public void setLowThresholdMs(long thresholdMs) { Preconditions.checkArgument(thresholdMs > 0, DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_KEY + " should be larger than 0"); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerCluster.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerCluster.java index c801f36ea52..7e935a3f820 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerCluster.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerCluster.java @@ -389,6 +389,6 @@ public class DiskBalancerCluster { * @return DiskBalancerDataNode. 
*/ public DiskBalancerDataNode getNodeByName(String hostName) { - return hostNames.get(hostName); + return hostNames.get(hostName.toLowerCase(Locale.US)); } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java index c129d1928ab..64bc46d9016 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java @@ -152,8 +152,8 @@ class FSDirRenameOp { * @param srcIIP source path * @param dstIIP destination path * @return true INodesInPath if rename succeeds; null otherwise - * @deprecated See {@link #renameToInt(FSDirectory, String, String, - * boolean, Options.Rename...)} + * @deprecated See {@link #renameToInt(FSDirectory, FSPermissionChecker, + * String, String, boolean, Options.Rename...)} */ @Deprecated static INodesInPath unprotectedRenameTo(FSDirectory fsd, @@ -248,8 +248,8 @@ class FSDirRenameOp { String src = srcArg; String dst = dstArg; if (NameNode.stateChangeLog.isDebugEnabled()) { - NameNode.stateChangeLog.debug("DIR* NameSystem.renameTo: with options -" + - " " + src + " to " + dst); + NameNode.stateChangeLog.debug("DIR* NameSystem.renameTo: with options={} {} to {}", + Arrays.toString(options), src, dst); } BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo(); @@ -258,8 +258,8 @@ class FSDirRenameOp { } /** - * @see {@link #unprotectedRenameTo(FSDirectory, String, String, INodesInPath, - * INodesInPath, long, BlocksMapUpdateInfo, Options.Rename...)} + * @see {@link #unprotectedRenameTo(FSDirectory, INodesInPath, INodesInPath, + * long, BlocksMapUpdateInfo, Options.Rename...)} */ static RenameResult renameTo(FSDirectory fsd, FSPermissionChecker pc, String src, String dst, BlocksMapUpdateInfo collectedBlocks, @@ -482,8 +482,8 @@ class FSDirRenameOp { } /** - * @deprecated Use {@link #renameToInt(FSDirectory, String, String, - * boolean, Options.Rename...)} + * @deprecated Use {@link #renameToInt(FSDirectory, FSPermissionChecker, + * String, String, boolean, Options.Rename...)} */ @Deprecated private static RenameResult renameTo(FSDirectory fsd, FSPermissionChecker pc, diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java index d4fed21e98e..52ba4729a71 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java @@ -87,6 +87,8 @@ import java.util.concurrent.RecursiveAction; import static org.apache.hadoop.fs.CommonConfigurationKeys.FS_PROTECTED_DIRECTORIES; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_ACCESSTIME_PRECISION_DEFAULT; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_ACCESSTIME_PRECISION_KEY; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_ACCESS_CONTROL_ENFORCER_REPORTING_THRESHOLD_MS_DEFAULT; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_ACCESS_CONTROL_ENFORCER_REPORTING_THRESHOLD_MS_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_QUOTA_BY_STORAGETYPE_ENABLED_DEFAULT; import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_QUOTA_BY_STORAGETYPE_ENABLED_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PROTECTED_SUBDIRECTORIES_ENABLE; @@ -181,6 +183,8 @@ public class FSDirectory implements Closeable { * ACL-related operations. */ private final boolean aclsEnabled; + /** Threshold to print a warning. */ + private final long accessControlEnforcerReportingThresholdMs; /** * Support for POSIX ACL inheritance. Not final for testing purpose. */ @@ -388,6 +392,10 @@ public class FSDirectory implements Closeable { DFS_PROTECTED_SUBDIRECTORIES_ENABLE, DFS_PROTECTED_SUBDIRECTORIES_ENABLE_DEFAULT); + this.accessControlEnforcerReportingThresholdMs = conf.getLong( + DFS_NAMENODE_ACCESS_CONTROL_ENFORCER_REPORTING_THRESHOLD_MS_KEY, + DFS_NAMENODE_ACCESS_CONTROL_ENFORCER_REPORTING_THRESHOLD_MS_DEFAULT); + Preconditions.checkArgument(this.inodeXAttrsLimit >= 0, "Cannot set a negative limit on the number of xattrs per inode (%s).", DFSConfigKeys.DFS_NAMENODE_MAX_XATTRS_PER_INODE_KEY); @@ -1869,7 +1877,8 @@ public class FSDirectory implements Closeable { UserGroupInformation ugi) throws AccessControlException { return new FSPermissionChecker( fsOwner, superGroup, ugi, getUserFilteredAttributeProvider(ugi), - useAuthorizationWithContextAPI); + useAuthorizationWithContextAPI, + accessControlEnforcerReportingThresholdMs); } void checkOwner(FSPermissionChecker pc, INodesInPath iip) diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java index ff39494e3bc..68c1187608b 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java @@ -1673,18 +1673,31 @@ public class FSEditLog implements LogsPurgeable { endTransaction(start); } + void recoverUnclosedStreams() throws IOException { + recoverUnclosedStreams(false); + } + /** * Run recovery on all journals to recover any unclosed segments */ - synchronized void recoverUnclosedStreams() { + synchronized void recoverUnclosedStreams(boolean terminateOnFailure) throws IOException { Preconditions.checkState( state == State.BETWEEN_LOG_SEGMENTS, "May not recover segments - wrong state: %s", state); try { journalSet.recoverUnfinalizedSegments(); } catch (IOException ex) { - // All journals have failed, it is handled in logSync. - // TODO: are we sure this is OK? + if (terminateOnFailure) { + final String msg = "Unable to recover log segments: " + + "too few journals successfully recovered."; + LOG.error(msg, ex); + synchronized (journalSetLock) { + IOUtils.cleanupWithLogger(LOG, journalSet); + } + terminate(1, msg); + } else { + throw ex; + } } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java index a065fe6c0cf..5158058a056 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java @@ -132,7 +132,8 @@ public class FSEditLogLoader { /** Limit logging about edit loading to every 5 seconds max. 
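recoverUnclosedStreams above now either terminates the process or rethrows, depending on whether the caller asked to fail fast. A stripped-down sketch of that terminate-or-propagate choice; recoverUnfinalizedSegments here is a placeholder for the real journal recovery call.

import java.io.IOException;

class SegmentRecovery {
  void recoverUnclosedStreams(boolean terminateOnFailure) throws IOException {
    try {
      recoverUnfinalizedSegments();
    } catch (IOException ex) {
      if (terminateOnFailure) {
        // Fail fast: without recovered segments this node cannot safely take the writer role.
        System.err.println("Unable to recover log segments: " + ex.getMessage());
        System.exit(1);
      } else {
        // Propagate so the caller can report or retry the failure itself.
        throw ex;
      }
    }
  }

  /** Placeholder for the real recovery call, which fails when too few journals recover. */
  void recoverUnfinalizedSegments() throws IOException {
  }
}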
*/ @VisibleForTesting static final long LOAD_EDIT_LOG_INTERVAL_MS = 5000; - private final LogThrottlingHelper loadEditsLogHelper = + @VisibleForTesting + static final LogThrottlingHelper LOAD_EDITS_LOG_HELPER = new LogThrottlingHelper(LOAD_EDIT_LOG_INTERVAL_MS); private final FSNamesystem fsNamesys; @@ -173,7 +174,7 @@ public class FSEditLogLoader { fsNamesys.writeLock(); try { long startTime = timer.monotonicNow(); - LogAction preLogAction = loadEditsLogHelper.record("pre", startTime); + LogAction preLogAction = LOAD_EDITS_LOG_HELPER.record("pre", startTime); if (preLogAction.shouldLog()) { FSImage.LOG.info("Start loading edits file " + edits.getName() + " maxTxnsToRead = " + maxTxnsToRead + @@ -182,7 +183,7 @@ public class FSEditLogLoader { long numEdits = loadEditRecords(edits, false, expectedStartingTxId, maxTxnsToRead, startOpt, recovery); long endTime = timer.monotonicNow(); - LogAction postLogAction = loadEditsLogHelper.record("post", endTime, + LogAction postLogAction = LOAD_EDITS_LOG_HELPER.record("post", endTime, numEdits, edits.length(), endTime - startTime); if (postLogAction.shouldLog()) { FSImage.LOG.info("Loaded {} edits file(s) (the last named {}) of " + @@ -912,6 +913,7 @@ public class FSEditLogLoader { fsNamesys.getFSImage().updateStorageVersion(); fsNamesys.getFSImage().renameCheckpoint(NameNodeFile.IMAGE_ROLLBACK, NameNodeFile.IMAGE); + fsNamesys.setNeedRollbackFsImage(false); break; } case OP_ADD_CACHE_DIRECTIVE: { diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java index 3f0c9faa97c..1f21871ac7b 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java @@ -23,9 +23,9 @@ import java.io.InputStream; import java.io.OutputStream; import java.util.ArrayList; import java.util.Collection; +import java.util.Collections; import java.util.Iterator; import java.util.List; -import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; @@ -227,8 +227,7 @@ public final class FSImageFormatPBINode { LOG.info("Loading the INodeDirectory section in parallel with {} sub-" + "sections", sections.size()); CountDownLatch latch = new CountDownLatch(sections.size()); - final CopyOnWriteArrayList exceptions = - new CopyOnWriteArrayList<>(); + final List exceptions = Collections.synchronizedList(new ArrayList<>()); for (FileSummary.Section s : sections) { service.submit(() -> { InputStream ins = null; @@ -237,8 +236,7 @@ public final class FSImageFormatPBINode { compressionCodec); loadINodeDirectorySection(ins); } catch (Exception e) { - LOG.error("An exception occurred loading INodeDirectories in " + - "parallel", e); + LOG.error("An exception occurred loading INodeDirectories in parallel", e); exceptions.add(new IOException(e)); } finally { latch.countDown(); @@ -424,8 +422,7 @@ public final class FSImageFormatPBINode { long expectedInodes = 0; CountDownLatch latch = new CountDownLatch(sections.size()); AtomicInteger totalLoaded = new AtomicInteger(0); - final CopyOnWriteArrayList exceptions = - new CopyOnWriteArrayList<>(); + final List exceptions = Collections.synchronizedList(new ArrayList<>()); for (int i=0; i < 
sections.size(); i++) { FileSummary.Section s = sections.get(i); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java index 99f2089fe8d..5e4f0d520a9 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java @@ -1389,7 +1389,7 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean, // During startup, we're already open for write during initialization. editLog.initJournalsForWrite(); // May need to recover - editLog.recoverUnclosedStreams(); + editLog.recoverUnclosedStreams(true); LOG.info("Catching up to latest edits from old active before " + "taking over writer role in edits logs"); @@ -3044,12 +3044,12 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean, LocatedBlock[] onRetryBlock = new LocatedBlock[1]; FSDirWriteFileOp.ValidateAddBlockResult r; - checkOperation(OperationCategory.READ); + checkOperation(OperationCategory.WRITE); final FSPermissionChecker pc = getPermissionChecker(); FSPermissionChecker.setOperationType(operationName); readLock(); try { - checkOperation(OperationCategory.READ); + checkOperation(OperationCategory.WRITE); r = FSDirWriteFileOp.validateAddBlock(this, pc, src, fileId, clientName, previous, onRetryBlock); } finally { @@ -3095,12 +3095,15 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean, final byte storagePolicyID; final List chosen; final BlockType blockType; - checkOperation(OperationCategory.READ); + checkOperation(OperationCategory.WRITE); final FSPermissionChecker pc = getPermissionChecker(); FSPermissionChecker.setOperationType(null); readLock(); try { - checkOperation(OperationCategory.READ); + // Changing this operation category to WRITE instead of making getAdditionalDatanode as a + // read method is aim to let Active NameNode to handle this RPC, because Active NameNode + // contains a more complete DN selection context than Observer NameNode. + checkOperation(OperationCategory.WRITE); //check safe mode checkNameNodeSafeMode("Cannot add datanode; src=" + src + ", blk=" + blk); final INodesInPath iip = dir.resolvePath(pc, src, fileId); @@ -3621,10 +3624,10 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean, final String operationName = getQuotaCommand(nsQuota, ssQuota); final FSPermissionChecker pc = getPermissionChecker(); FSPermissionChecker.setOperationType(operationName); + if(!allowOwnerSetQuota) { + checkSuperuserPrivilege(operationName, src); + } try { - if(!allowOwnerSetQuota) { - checkSuperuserPrivilege(operationName, src); - } writeLock(); try { checkOperation(OperationCategory.WRITE); @@ -7761,8 +7764,8 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean, checkOperation(OperationCategory.WRITE); String poolInfoStr = null; String poolName = req == null ? null : req.getPoolName(); + checkSuperuserPrivilege(operationName, poolName); try { - checkSuperuserPrivilege(operationName, poolName); writeLock(); try { checkOperation(OperationCategory.WRITE); @@ -7788,8 +7791,8 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean, checkOperation(OperationCategory.WRITE); String poolNameStr = "{poolName: " + (req == null ? 
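The FSImageFormatPBINode hunks above collect worker exceptions in a Collections.synchronizedList while sub-sections load in parallel behind a CountDownLatch. A minimal sketch of that fan-out-and-collect shape, assuming generic Runnable tasks rather than image sections:

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ParallelLoader {
  void loadAll(List<Runnable> sections) throws IOException, InterruptedException {
    ExecutorService service = Executors.newFixedThreadPool(4);
    CountDownLatch latch = new CountDownLatch(sections.size());
    // A synchronized list is enough: workers only append, and it is read after await().
    List<IOException> exceptions = Collections.synchronizedList(new ArrayList<>());
    for (Runnable section : sections) {
      service.submit(() -> {
        try {
          section.run();
        } catch (Exception e) {
          exceptions.add(new IOException(e));
        } finally {
          latch.countDown();
        }
      });
    }
    latch.await();
    service.shutdown();
    if (!exceptions.isEmpty()) {
      throw exceptions.get(0);
    }
  }
}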
null : req.getPoolName()) + "}"; + checkSuperuserPrivilege(operationName, poolNameStr); try { - checkSuperuserPrivilege(operationName, poolNameStr); writeLock(); try { checkOperation(OperationCategory.WRITE); @@ -7815,8 +7818,8 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean, final String operationName = "removeCachePool"; checkOperation(OperationCategory.WRITE); String poolNameStr = "{poolName: " + cachePoolName + "}"; + checkSuperuserPrivilege(operationName, poolNameStr); try { - checkSuperuserPrivilege(operationName, poolNameStr); writeLock(); try { checkOperation(OperationCategory.WRITE); @@ -8017,11 +8020,11 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean, SafeModeException, AccessControlException { final String operationName = "createEncryptionZone"; FileStatus resultingStat = null; + checkSuperuserPrivilege(operationName, src); try { Metadata metadata = FSDirEncryptionZoneOp.ensureKeyIsInitialized(dir, keyName, src); final FSPermissionChecker pc = getPermissionChecker(); - checkSuperuserPrivilege(operationName, src); checkOperation(OperationCategory.WRITE); writeLock(); try { @@ -8100,11 +8103,11 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean, final boolean logRetryCache) throws IOException { final String operationName = "reencryptEncryptionZone"; boolean success = false; + checkSuperuserPrivilege(operationName, zone); try { Preconditions.checkNotNull(zone, "zone is null."); checkOperation(OperationCategory.WRITE); final FSPermissionChecker pc = dir.getPermissionChecker(); - checkSuperuserPrivilege(operationName, zone); checkNameNodeSafeMode("NameNode in safemode, cannot " + action + " re-encryption on zone " + zone); reencryptEncryptionZoneInt(pc, zone, action, logRetryCache); @@ -9034,9 +9037,15 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean, private void checkBlockLocationsWhenObserver(LocatedBlocks blocks, String src) throws ObserverRetryOnActiveException { - for (LocatedBlock b : blocks.getLocatedBlocks()) { - if (b.getLocations() == null || b.getLocations().length == 0) { - throw new ObserverRetryOnActiveException("Zero blocklocations for " + src); + if (blocks == null) { + return; + } + List locatedBlockList = blocks.getLocatedBlocks(); + if (locatedBlockList != null) { + for (LocatedBlock b : locatedBlockList) { + if (b.getLocations() == null || b.getLocations().length == 0) { + throw new ObserverRetryOnActiveException("Zero blocklocations for " + src); + } } } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java index c7430e38cd0..00b726b928a 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java @@ -21,10 +21,13 @@ import java.io.IOException; import java.util.ArrayList; import java.util.Collection; import java.util.List; +import java.util.Optional; import java.util.Stack; +import java.util.function.LongFunction; import org.apache.hadoop.util.Preconditions; import org.apache.hadoop.ipc.CallerContext; +import org.apache.hadoop.util.Time; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.hadoop.fs.FSExceptionMessages; @@ -87,19 +90,21 @@ public class FSPermissionChecker implements AccessControlEnforcer 
{ private final boolean isSuper; private final INodeAttributeProvider attributeProvider; private final boolean authorizeWithContext; + private final long accessControlEnforcerReportingThresholdMs; private static ThreadLocal operationType = new ThreadLocal<>(); protected FSPermissionChecker(String fsOwner, String supergroup, UserGroupInformation callerUgi, INodeAttributeProvider attributeProvider) { - this(fsOwner, supergroup, callerUgi, attributeProvider, false); + this(fsOwner, supergroup, callerUgi, attributeProvider, false, 0); } protected FSPermissionChecker(String fsOwner, String supergroup, UserGroupInformation callerUgi, INodeAttributeProvider attributeProvider, - boolean useAuthorizationWithContextAPI) { + boolean useAuthorizationWithContextAPI, + long accessControlEnforcerReportingThresholdMs) { this.fsOwner = fsOwner; this.supergroup = supergroup; this.callerUgi = callerUgi; @@ -117,6 +122,38 @@ public class FSPermissionChecker implements AccessControlEnforcer { } else { authorizeWithContext = useAuthorizationWithContextAPI; } + this.accessControlEnforcerReportingThresholdMs + = accessControlEnforcerReportingThresholdMs; + } + + private String checkAccessControlEnforcerSlowness( + long elapsedMs, AccessControlEnforcer ace, + boolean checkSuperuser, AuthorizationContext context) { + return checkAccessControlEnforcerSlowness(elapsedMs, + accessControlEnforcerReportingThresholdMs, ace.getClass(), checkSuperuser, + context.getPath(), context.getOperationName(), + context.getCallerContext()); + } + + /** @return the warning message if there is any. */ + static String checkAccessControlEnforcerSlowness( + long elapsedMs, long thresholdMs, Class clazz, + boolean checkSuperuser, String path, String op, Object caller) { + if (!LOG.isWarnEnabled()) { + return null; + } + if (thresholdMs <= 0) { + return null; + } + if (elapsedMs > thresholdMs) { + final String message = clazz + " ran for " + + elapsedMs + "ms (threshold=" + thresholdMs + "ms) to check " + + (checkSuperuser ? "superuser" : "permission") + + " on " + path + " for " + op + " from caller " + caller; + LOG.warn(message, new Throwable("TRACE")); + return message; + } + return null; } public static void setOperationType(String opType) { @@ -139,9 +176,70 @@ public class FSPermissionChecker implements AccessControlEnforcer { return attributeProvider; } + @FunctionalInterface + interface CheckPermission { + void run() throws AccessControlException; + } + + static String runCheckPermission(CheckPermission checker, + LongFunction checkElapsedMs) throws AccessControlException { + final String message; + final long start = Time.monotonicNow(); + try { + checker.run(); + } finally { + final long end = Time.monotonicNow(); + message = checkElapsedMs.apply(end - start); + } + return message; + } + private AccessControlEnforcer getAccessControlEnforcer() { - return (attributeProvider != null) - ? attributeProvider.getExternalAccessControlEnforcer(this) : this; + final AccessControlEnforcer e = Optional.ofNullable(attributeProvider) + .map(p -> p.getExternalAccessControlEnforcer(this)) + .orElse(this); + if (e == this) { + return this; + } + // For an external AccessControlEnforcer, check for slowness. 
+ return new AccessControlEnforcer() { + @Override + public void checkPermission( + String filesystemOwner, String superGroup, UserGroupInformation ugi, + INodeAttributes[] inodeAttrs, INode[] inodes, byte[][] pathByNameArr, + int snapshotId, String path, int ancestorIndex, boolean doCheckOwner, + FsAction ancestorAccess, FsAction parentAccess, FsAction access, + FsAction subAccess, boolean ignoreEmptyDir) + throws AccessControlException { + runCheckPermission( + () -> e.checkPermission(filesystemOwner, superGroup, ugi, + inodeAttrs, inodes, pathByNameArr, snapshotId, path, + ancestorIndex, doCheckOwner, ancestorAccess, parentAccess, + access, subAccess, ignoreEmptyDir), + elapsedMs -> checkAccessControlEnforcerSlowness(elapsedMs, + accessControlEnforcerReportingThresholdMs, + e.getClass(), false, path, operationType.get(), + CallerContext.getCurrent())); + } + + @Override + public void checkPermissionWithContext(AuthorizationContext context) + throws AccessControlException { + runCheckPermission( + () -> e.checkPermissionWithContext(context), + elapsedMs -> checkAccessControlEnforcerSlowness(elapsedMs, + e, false, context)); + } + + @Override + public void checkSuperUserPermissionWithContext( + AuthorizationContext context) throws AccessControlException { + runCheckPermission( + () -> e.checkSuperUserPermissionWithContext(context), + elapsedMs -> checkAccessControlEnforcerSlowness(elapsedMs, + e, true, context)); + } + }; } private AuthorizationContext getAuthorizationContextForSuperUser( diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java index 1047f852a16..08ea2d4bc40 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java @@ -205,6 +205,10 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_MAX_SLOWPEER_COL import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_MAX_SLOWPEER_COLLECT_NODES_DEFAULT; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_DEFAULT; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT_DEFAULT; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK_DEFAULT; import static org.apache.hadoop.util.ExitUtil.terminate; import static org.apache.hadoop.util.ToolRunner.confirmPrompt; @@ -353,7 +357,9 @@ public class NameNode extends ReconfigurableBase implements DFS_BLOCK_INVALIDATE_LIMIT_KEY, DFS_DATANODE_PEER_STATS_ENABLED_KEY, DFS_DATANODE_MAX_NODES_TO_REPORT_KEY, - DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_KEY)); + DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_KEY, + DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT, + DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK)); private static final String USAGE = "Usage: hdfs namenode [" + StartupOption.BACKUP.getName() + "] | \n\t[" @@ -2038,6 +2044,9 @@ public class 
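The FSPermissionChecker changes above wrap an external AccessControlEnforcer so each permission check is timed and a WARN is emitted when it exceeds the configured threshold. A self-contained sketch of the same measure-then-report pattern; the names here (SlownessGuard, reporter) are illustrative, while the real code uses Time.monotonicNow() and logs the warning with a stack trace:

import java.util.function.LongFunction;

public class SlownessGuard {
  interface Check {
    void run() throws Exception;
  }

  /** Runs the check, then reports the elapsed milliseconds, even when the check fails. */
  static void runTimed(Check check, LongFunction<String> reporter) throws Exception {
    final long start = System.nanoTime();
    try {
      check.run();
    } finally {
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      String warning = reporter.apply(elapsedMs);
      if (warning != null) {
        System.err.println(warning);   // the real code logs a WARN with a stack trace
      }
    }
  }

  public static void main(String[] args) throws Exception {
    long thresholdMs = 1000;           // mirrors the new reporting-threshold-ms setting
    runTimed(() -> Thread.sleep(5),
        elapsed -> elapsed > thresholdMs
            ? "permission check took " + elapsed + "ms (threshold=" + thresholdMs + "ms)"
            : null);
  }
}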
NameNode extends ReconfigurableBase implements synchronized void transitionToObserver() throws IOException { String operationName = "transitionToObserver"; namesystem.checkSuperuserPrivilege(operationName); + if (notBecomeActiveInSafemode && isInSafeMode()) { + throw new ServiceFailedException(getRole() + " still not leave safemode"); + } if (!haEnabled) { throw new ServiceFailedException("HA for namenode is not enabled"); } @@ -2356,6 +2365,10 @@ public class NameNode extends ReconfigurableBase implements return reconfigureSlowNodesParameters(datanodeManager, property, newVal); } else if (property.equals(DFS_BLOCK_INVALIDATE_LIMIT_KEY)) { return reconfigureBlockInvalidateLimit(datanodeManager, property, newVal); + } else if (property.equals(DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT) || + (property.equals(DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK))) { + return reconfigureDecommissionBackoffMonitorParameters(datanodeManager, property, + newVal); } else { throw new ReconfigurationException(property, newVal, getConf().get( property)); @@ -2636,6 +2649,34 @@ public class NameNode extends ReconfigurableBase implements } } + private String reconfigureDecommissionBackoffMonitorParameters( + final DatanodeManager datanodeManager, final String property, final String newVal) + throws ReconfigurationException { + String newSetting = null; + try { + if (property.equals(DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT)) { + int pendingRepLimit = (newVal == null ? + DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT_DEFAULT : + Integer.parseInt(newVal)); + datanodeManager.getDatanodeAdminManager().refreshPendingRepLimit(pendingRepLimit, + DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT); + newSetting = String.valueOf(datanodeManager.getDatanodeAdminManager().getPendingRepLimit()); + } else if (property.equals( + DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK)) { + int blocksPerLock = (newVal == null ? + DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK_DEFAULT : + Integer.parseInt(newVal)); + datanodeManager.getDatanodeAdminManager().refreshBlocksPerLock(blocksPerLock, + DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK); + newSetting = String.valueOf(datanodeManager.getDatanodeAdminManager().getBlocksPerLock()); + } + LOG.info("RECONFIGURE* changed reconfigureDecommissionBackoffMonitorParameters {} to {}", + property, newSetting); + return newSetting; + } catch (IllegalArgumentException e) { + throw new ReconfigurationException(property, newVal, getConf().get(property), e); + } + } @Override // ReconfigurableBase protected Configuration getNewConf() { diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java index 1e8517edeca..050eda628ee 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java @@ -320,6 +320,10 @@ public class NamenodeFsck implements DataEncryptionKeyFactory { } out.println("No. of corrupted Replica: " + numberReplicas.corruptReplicas()); + // for striped blocks only and number of redundant internal block replicas. + if (blockInfo.isStriped()) { + out.println("No. 
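The reconfigureDecommissionBackoffMonitorParameters method above follows the NameNode's usual live-reconfiguration shape: a null new value means "reset to the default", the parsed value is applied, and the effective setting is returned so it can be reported back to the admin client, with IllegalArgumentException translated into a ReconfigurationException. A simplified sketch of that shape, with a hypothetical default and field rather than the DatanodeAdminManager API:

public class ReconfigSketch {
  static final int PENDING_LIMIT_DEFAULT = 10_000;   // hypothetical default value
  private int pendingRepLimit = PENDING_LIMIT_DEFAULT;

  /** Returns the effective value so it can be echoed back to the admin client. */
  String reconfigurePendingLimit(String newVal) {
    int parsed = (newVal == null) ? PENDING_LIMIT_DEFAULT : Integer.parseInt(newVal);
    if (parsed < 0) {
      // the real code wraps this into a ReconfigurationException
      throw new IllegalArgumentException("pending limit must be >= 0: " + parsed);
    }
    pendingRepLimit = parsed;
    return String.valueOf(pendingRepLimit);
  }
}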
of redundant Replica: " + numberReplicas.redundantInternalBlocks()); + } //record datanodes that have corrupted block replica Collection corruptionRecord = null; if (blockManager.getCorruptReplicas(block) != null) { diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RedundantEditLogInputStream.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RedundantEditLogInputStream.java index 5c046cdce0a..cd18804f654 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RedundantEditLogInputStream.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RedundantEditLogInputStream.java @@ -47,7 +47,7 @@ class RedundantEditLogInputStream extends EditLogInputStream { /** Limit logging about fast forwarding the stream to every 5 seconds max. */ private static final long FAST_FORWARD_LOGGING_INTERVAL_MS = 5000; - private final LogThrottlingHelper fastForwardLoggingHelper = + private static final LogThrottlingHelper FAST_FORWARD_LOGGING_HELPER = new LogThrottlingHelper(FAST_FORWARD_LOGGING_INTERVAL_MS); /** @@ -182,7 +182,7 @@ class RedundantEditLogInputStream extends EditLogInputStream { case SKIP_UNTIL: try { if (prevTxId != HdfsServerConstants.INVALID_TXID) { - LogAction logAction = fastForwardLoggingHelper.record(); + LogAction logAction = FAST_FORWARD_LOGGING_HELPER.record(); if (logAction.shouldLog()) { LOG.info("Fast-forwarding stream '" + streams[curIdx].getName() + "' to transaction ID " + (prevTxId + 1) + diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SerialNumberMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SerialNumberMap.java index d9a41428b55..ff116b51188 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SerialNumberMap.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SerialNumberMap.java @@ -68,6 +68,7 @@ public class SerialNumberMap { } Integer old = t2i.putIfAbsent(t, sn); if (old != null) { + current.getAndDecrement(); return old; } i2t.put(sn, t); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java index f72ec7c9177..d43035ba731 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java @@ -123,7 +123,7 @@ public class EditLogTailer { /** * The timeout in milliseconds of calling rollEdits RPC to Active NN. - * @see HDFS-4176. + * See HDFS-4176. */ private final long rollEditsTimeoutMs; @@ -311,7 +311,8 @@ public class EditLogTailer { startTime - lastLoadTimeMs); // It is already under the name system lock and the checkpointer // thread is already stopped. No need to acquire any other lock. - editsTailed = doTailEdits(); + // HDFS-16689. 
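Making FAST_FORWARD_LOGGING_HELPER static above means all RedundantEditLogInputStream instances share one throttle, so the fast-forward message is emitted at most once per interval process-wide rather than once per stream. A rough sketch of such a throttle (a hypothetical helper, not Hadoop's LogThrottlingHelper):

import java.util.concurrent.atomic.AtomicLong;

public class ThrottleSketch {
  private final long intervalMs;
  private final AtomicLong lastLoggedMs = new AtomicLong(0);

  ThrottleSketch(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  /** Returns true for at most one caller per interval. */
  boolean shouldLog() {
    long now = System.currentTimeMillis();
    long last = lastLoggedMs.get();
    return now - last >= intervalMs && lastLoggedMs.compareAndSet(last, now);
  }
}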
Disable inProgress to use the streaming mechanism + editsTailed = doTailEdits(false); } catch (InterruptedException e) { throw new IOException(e); } finally { @@ -323,9 +324,13 @@ public class EditLogTailer { } }); } - + @VisibleForTesting public long doTailEdits() throws IOException, InterruptedException { + return doTailEdits(inProgressOk); + } + + private long doTailEdits(boolean enableInProgress) throws IOException, InterruptedException { Collection streams; FSImage image = namesystem.getFSImage(); @@ -334,7 +339,7 @@ public class EditLogTailer { long startTime = timer.monotonicNow(); try { streams = editLog.selectInputStreams(lastTxnId + 1, 0, - null, inProgressOk, true); + null, enableInProgress, true); } catch (IOException ioe) { // This is acceptable. If we try to tail edits in the middle of an edits // log roll, i.e. the last one has been finalized but the new inprogress diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java index 21642da9c24..527d767b09a 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java @@ -70,7 +70,7 @@ public class SnapshotFSImageFormat { /** * Save SnapshotDiff list for an INodeDirectoryWithSnapshot. - * @param sNode The directory that the SnapshotDiff list belongs to. + * @param diffs The directory that the SnapshotDiff list belongs to. * @param out The {@link DataOutput} to write. */ private static > @@ -186,7 +186,7 @@ public class SnapshotFSImageFormat { * @param createdList The created list associated with the deleted list in * the same Diff. * @param in The {@link DataInput} to read. - * @param loader The {@link Loader} instance. + * @param loader The {@link FSImageFormat.Loader} instance. * @return The deleted list. */ private static List loadDeletedList(INodeDirectory parent, @@ -260,7 +260,7 @@ public class SnapshotFSImageFormat { * Load the snapshotINode field of {@link AbstractINodeDiff}. * @param snapshot The Snapshot associated with the {@link AbstractINodeDiff}. * @param in The {@link DataInput} to read. - * @param loader The {@link Loader} instance that this loading procedure is + * @param loader The {@link FSImageFormat.Loader} instance that this loading procedure is * using. * @return The snapshotINode. */ @@ -281,7 +281,7 @@ public class SnapshotFSImageFormat { * Load {@link DirectoryDiff} from fsimage. * @param parent The directory that the SnapshotDiff belongs to. * @param in The {@link DataInput} instance to read. - * @param loader The {@link Loader} instance that this loading procedure is + * @param loader The {@link FSImageFormat.Loader} instance that this loading procedure is * using. * @return A {@link DirectoryDiff}. 
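The doTailEdits change above splits the method so the existing public entry point keeps its configured in-progress behaviour, while the catch-up path that runs when a standby becomes active can force in-progress tailing off and fall back to the streaming mechanism (HDFS-16689). A simplified sketch of the overload split:

public class TailerSketch {
  private final boolean inProgressOk = true;   // normally taken from configuration

  /** Existing entry point: keeps the configured behaviour. */
  public long doTailEdits() throws Exception {
    return doTailEdits(inProgressOk);
  }

  /** The failover catch-up path calls this with false to use the streaming mechanism. */
  private long doTailEdits(boolean enableInProgress) throws Exception {
    // select input streams with enableInProgress and apply the edits...
    return 0L;
  }
}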
*/ diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/PhaseTracking.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/PhaseTracking.java index 3f1d9030297..b01a4c2845f 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/PhaseTracking.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/PhaseTracking.java @@ -20,6 +20,7 @@ import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; +import org.apache.commons.lang3.builder.ToStringBuilder; import org.apache.hadoop.classification.InterfaceAudience; /** @@ -43,4 +44,15 @@ final class PhaseTracking extends AbstractTracking { } return clone; } + + @Override + public String toString() { + return new ToStringBuilder(this) + .append("file", file) + .append("size", size) + .append("steps", steps) + .append("beginTime", beginTime) + .append("endTime", endTime) + .toString(); + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgress.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgress.java index 6249a84e7f9..0ca338b34b1 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgress.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgress.java @@ -24,6 +24,9 @@ import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + import org.apache.hadoop.classification.InterfaceAudience; /** @@ -48,6 +51,9 @@ import org.apache.hadoop.classification.InterfaceAudience; */ @InterfaceAudience.Private public class StartupProgress { + + private static final Logger LOG = LoggerFactory.getLogger(StartupProgress.class); + // package-private for access by StartupProgressView final Map phases = new ConcurrentHashMap(); @@ -81,6 +87,7 @@ public class StartupProgress { if (!isComplete()) { phases.get(phase).beginTime = monotonicNow(); } + LOG.debug("Beginning of the phase: {}", phase); } /** @@ -94,6 +101,7 @@ public class StartupProgress { if (!isComplete(phase)) { lazyInitStep(phase, step).beginTime = monotonicNow(); } + LOG.debug("Beginning of the step. Phase: {}, Step: {}", phase, step); } /** @@ -105,6 +113,7 @@ public class StartupProgress { if (!isComplete()) { phases.get(phase).endTime = monotonicNow(); } + LOG.debug("End of the phase: {}", phase); } /** @@ -118,6 +127,7 @@ public class StartupProgress { if (!isComplete(phase)) { lazyInitStep(phase, step).endTime = monotonicNow(); } + LOG.debug("End of the step. 
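A note on the debug lines added to StartupProgress above: with SLF4J's "{}" placeholders the message is only assembled, and the argument's toString() only invoked, when DEBUG is actually enabled for the logger, so the added calls cost essentially nothing in production. Minimal standalone sketch:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class StartupLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(StartupLoggingSketch.class);

  void beginPhase(String phase) {
    // formatting is deferred until the DEBUG level is known to be enabled
    LOG.debug("Beginning of the phase: {}", phase);
  }
}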
Phase: {}, Step: {}", phase, step); } /** diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Step.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Step.java index 0baf99d994e..5dee13d2a5e 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Step.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Step.java @@ -21,6 +21,7 @@ import java.util.concurrent.atomic.AtomicInteger; import org.apache.commons.lang3.builder.CompareToBuilder; import org.apache.commons.lang3.builder.EqualsBuilder; import org.apache.commons.lang3.builder.HashCodeBuilder; +import org.apache.commons.lang3.builder.ToStringBuilder; import org.apache.hadoop.classification.InterfaceAudience; /** @@ -139,4 +140,14 @@ public class Step implements Comparable { return new HashCodeBuilder().append(file).append(size).append(type) .toHashCode(); } + + @Override + public String toString() { + return new ToStringBuilder(this) + .append("file", file) + .append("sequenceNumber", sequenceNumber) + .append("size", size) + .append("type", type) + .toString(); + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepTracking.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepTracking.java index bc224ec5670..799b4d0b09f 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepTracking.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepTracking.java @@ -18,6 +18,7 @@ package org.apache.hadoop.hdfs.server.namenode.startupprogress; import java.util.concurrent.atomic.AtomicLong; +import org.apache.commons.lang3.builder.ToStringBuilder; import org.apache.hadoop.classification.InterfaceAudience; /** @@ -36,4 +37,14 @@ final class StepTracking extends AbstractTracking { clone.total = total; return clone; } + + @Override + public String toString() { + return new ToStringBuilder(this) + .append("count", count) + .append("total", total) + .append("beginTime", beginTime) + .append("endTime", endTime) + .toString(); + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java index 4d0d56c05a7..a462cafb458 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java @@ -247,7 +247,7 @@ public class DFSHAAdmin extends HAAdmin { } private int transitionToObserver(final CommandLine cmd) - throws IOException, ServiceFailedException { + throws IOException { String[] argv = cmd.getArgs(); if (argv.length != 1) { errOut.println("transitionToObserver: incorrect number of arguments"); @@ -262,8 +262,13 @@ public class DFSHAAdmin extends HAAdmin { if (!checkManualStateManagementOK(target)) { return -1; } - HAServiceProtocol proto = target.getProxy(getConf(), 0); - HAServiceProtocolHelper.transitionToObserver(proto, createReqInfo()); + try { + HAServiceProtocol proto = target.getProxy(getConf(), 0); + HAServiceProtocolHelper.transitionToObserver(proto, createReqInfo()); + } catch 
(ServiceFailedException e) { + errOut.println("transitionToObserver failed! " + e.getLocalizedMessage()); + return -1; + } return 0; } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java index 32e8248adc6..7116c2578ca 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java @@ -432,8 +432,13 @@ public class DebugAdmin extends Configured implements Tool { VerifyECCommand() { super("verifyEC", - "verifyEC -file ", - " Verify HDFS erasure coding on all block groups of the file."); + "verifyEC -file [-blockId ] [-skipFailureBlocks]", + " -file Verify HDFS erasure coding on all block groups of the file." + + System.lineSeparator() + + " -skipFailureBlocks specify will skip any block group failures during verify," + + " and continues verify all block groups of the file," + System.lineSeparator() + + " the default is not to skip failure blocks." + System.lineSeparator() + + " -blockId specify blk_Id to verify for a specific one block group."); } int run(List args) throws IOException { @@ -480,30 +485,48 @@ public class DebugAdmin extends Configured implements Tool { this.parityBlkNum = ecPolicy.getNumParityUnits(); this.cellSize = ecPolicy.getCellSize(); this.encoder = CodecUtil.createRawEncoder(getConf(), ecPolicy.getCodecName(), - new ErasureCoderOptions( - ecPolicy.getNumDataUnits(), ecPolicy.getNumParityUnits())); + new ErasureCoderOptions(dataBlkNum, parityBlkNum)); int blockNum = dataBlkNum + parityBlkNum; this.readService = new ExecutorCompletionService<>( DFSUtilClient.getThreadPoolExecutor(blockNum, blockNum, 60, new LinkedBlockingQueue<>(), "read-", false)); - this.blockReaders = new BlockReader[dataBlkNum + parityBlkNum]; + this.blockReaders = new BlockReader[blockNum]; + + String needToVerifyBlockId = StringUtils.popOptionWithArgument("-blockId", args); + boolean skipFailureBlocks = StringUtils.popOption("-skipFailureBlocks", args); + boolean isHealthy = true; for (LocatedBlock locatedBlock : locatedBlocks.getLocatedBlocks()) { - System.out.println("Checking EC block group: blk_" + locatedBlock.getBlock().getBlockId()); - LocatedStripedBlock blockGroup = (LocatedStripedBlock) locatedBlock; + String blockName = locatedBlock.getBlock().getBlockName(); + if (needToVerifyBlockId == null || needToVerifyBlockId.equals(blockName)) { + System.out.println("Checking EC block group: " + blockName); + LocatedStripedBlock blockGroup = (LocatedStripedBlock) locatedBlock; - try { - verifyBlockGroup(blockGroup); - System.out.println("Status: OK"); - } catch (Exception e) { - System.err.println("Status: ERROR, message: " + e.getMessage()); - return 1; - } finally { - closeBlockReaders(); + try { + verifyBlockGroup(blockGroup); + System.out.println("Status: OK"); + } catch (Exception e) { + System.err.println("Status: ERROR, message: " + e.getMessage()); + isHealthy = false; + if (!skipFailureBlocks) { + break; + } + } finally { + closeBlockReaders(); + } + + if (needToVerifyBlockId != null) { + break; + } } } - System.out.println("\nAll EC block group status: OK"); - return 0; + if (isHealthy) { + if (needToVerifyBlockId == null) { + System.out.println("\nAll EC block group status: OK"); + } + return 0; + } + return 1; } private void verifyBlockGroup(LocatedStripedBlock blockGroup) throws Exception { diff --git 
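The verifyEC changes above add two flags: -blockId limits the check to a single block group, and -skipFailureBlocks keeps verifying after a failure, with the exit code reflecting whether every checked group was healthy. A standalone sketch of that control flow, using a hypothetical Verifier type rather than the DebugAdmin classes:

import java.util.List;

public class VerifyLoopSketch {
  interface Verifier {
    void verify(String blockGroup) throws Exception;
  }

  static int run(List<String> blockGroups, String onlyBlock, boolean skipFailures, Verifier v) {
    boolean healthy = true;
    for (String group : blockGroups) {
      if (onlyBlock != null && !onlyBlock.equals(group)) {
        continue;                                  // -blockId: verify a single group only
      }
      try {
        v.verify(group);
        System.out.println(group + ": OK");
      } catch (Exception e) {
        System.err.println(group + ": ERROR " + e.getMessage());
        healthy = false;
        if (!skipFailures) {
          break;                                   // default: stop at the first failure
        }
      }
      if (onlyBlock != null) {
        break;                                     // the requested group has been handled
      }
    }
    return healthy ? 0 : 1;
  }
}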
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java index ddf7933f032..9fabd1887ce 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java @@ -60,8 +60,8 @@ public class XmlEditsVisitor implements OfflineEditsVisitor { public XmlEditsVisitor(OutputStream out) throws IOException { this.out = out; - factory =(SAXTransformerFactory)SAXTransformerFactory.newInstance(); try { + factory = org.apache.hadoop.util.XMLUtils.newSecureSAXTransformerFactory(); TransformerHandler handler = factory.newTransformerHandler(); handler.getTransformer().setOutputProperty(OutputKeys.METHOD, "xml"); handler.getTransformer().setOutputProperty(OutputKeys.ENCODING, "UTF-8"); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java index 78a7301db04..6a2049acb4b 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageReconstructor.java @@ -56,6 +56,7 @@ import org.apache.hadoop.thirdparty.protobuf.TextFormat; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; import org.apache.hadoop.fs.permission.AclEntry; @@ -147,6 +148,8 @@ class OfflineImageReconstructor { InputStreamReader reader) throws XMLStreamException { this.out = out; XMLInputFactory factory = XMLInputFactory.newInstance(); + factory.setProperty(XMLInputFactory.SUPPORT_DTD, false); + factory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, false); this.events = factory.createXMLEventReader(reader); this.sections = new HashMap<>(); this.sections.put(NameSectionProcessor.NAME, new NameSectionProcessor()); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java index 2233a3c3d24..fea8a8fdfbb 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java @@ -81,6 +81,8 @@ public class OfflineImageViewerPB { + " changed via the -delimiter argument.\n" + " -sp print storage policy, used by delimiter only.\n" + " -ec print erasure coding policy, used by delimiter only.\n" + + " -m defines multiThread to process sub-sections, \n" + + " used by delimiter only.\n" + " * DetectCorruption: Detect potential corruption of the image by\n" + " selectively loading parts of it and actively searching for\n" + " inconsistencies. Outputs a summary of the found corruptions\n" @@ -101,8 +103,24 @@ public class OfflineImageViewerPB { + " against image file. 
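The XmlEditsVisitor and OfflineImageReconstructor hunks above harden XML handling against XXE: the visitor obtains its transformer factory through Hadoop's XMLUtils.newSecureSAXTransformerFactory, and the reconstructor disables DTDs and external entities on its StAX factory. A minimal sketch of the JDK-level settings used by the latter:

import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import java.io.StringReader;

public class SecureXmlSketch {
  static XMLEventReader reader(String xml) throws XMLStreamException {
    XMLInputFactory factory = XMLInputFactory.newInstance();
    factory.setProperty(XMLInputFactory.SUPPORT_DTD, false);                      // no DTDs
    factory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, false);  // no XXE
    return factory.createXMLEventReader(new StringReader(xml));
  }
}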
(XML|FileDistribution|\n" + " ReverseXML|Web|Delimited|DetectCorruption)\n" + " The default is Web.\n" + + "-addr Specify the address(host:port) to listen.\n" + + " (localhost:5978 by default). This option is\n" + + " used with Web processor.\n" + + "-maxSize Specify the range [0, maxSize] of file sizes\n" + + " to be analyzed in bytes (128GB by default).\n" + + " This option is used with FileDistribution processor.\n" + + "-step Specify the granularity of the distribution in bytes\n" + + " (2MB by default). This option is used\n" + + " with FileDistribution processor.\n" + + "-format Format the output result in a human-readable fashion rather\n" + + " than a number of bytes. (false by default).\n" + + " This option is used with FileDistribution processor.\n" + "-delimiter Delimiting string to use with Delimited or \n" + " DetectCorruption processor. \n" + + "-sp Whether to print storage policy (default is false). \n" + + " Is used by Delimited processor only. \n" + + "-ec Whether to print erasure coding policy (default is false). \n" + + " Is used by Delimited processor only. \n" + "-t,--temp Use temporary dir to cache intermediate\n" + " result to generate DetectCorruption or\n" + " Delimited outputs. If not set, the processor\n" diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java index 5773d7fecf0..bd6c860ccf0 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java @@ -85,7 +85,7 @@ import static org.apache.hadoop.hdfs.server.common.HdfsServerConstants.XATTR_ERA /** * This class reads the protobuf-based fsimage and generates text output * for each inode to {@link PBImageTextWriter#out}. The sub-class can override - * {@link getEntry()} to generate formatted string for each inode. + * {@link #getEntry(String, INode)} to generate formatted string for each inode. * * Since protobuf-based fsimage does not guarantee the order of inodes and * directories, PBImageTextWriter runs two-phase scans: diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/MD5FileUtils.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/MD5FileUtils.java index 2bc63ec77eb..77ec7890588 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/MD5FileUtils.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/MD5FileUtils.java @@ -68,7 +68,7 @@ public abstract class MD5FileUtils { /** * Read the md5 file stored alongside the given data file * and match the md5 file content. - * @param dataFile the file containing data + * @param md5File the file containing md5 data * @return a matcher with two matched groups * where group(1) is the md5 string and group(2) is the data file path. 
*/ diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java index 1744b3dde05..b91399e399e 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java @@ -17,6 +17,7 @@ */ package org.apache.hadoop.hdfs.web; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.fs.BlockLocation; import org.apache.hadoop.fs.ContentSummary; import org.apache.hadoop.fs.FileChecksum; @@ -335,6 +336,17 @@ public class JsonUtil { return null; } + final Map m = toJsonMap(locatedblocks); + return toJsonString(LocatedBlocks.class, m); + } + + /** Convert LocatedBlocks to a Map. */ + public static Map toJsonMap(final LocatedBlocks locatedblocks) + throws IOException { + if (locatedblocks == null) { + return null; + } + final Map m = new TreeMap(); m.put("fileLength", locatedblocks.getFileLength()); m.put("isUnderConstruction", locatedblocks.isUnderConstruction()); @@ -342,7 +354,7 @@ public class JsonUtil { m.put("locatedBlocks", toJsonArray(locatedblocks.getLocatedBlocks())); m.put("lastLocatedBlock", toJsonMap(locatedblocks.getLastLocatedBlock())); m.put("isLastBlockComplete", locatedblocks.isLastBlockComplete()); - return toJsonString(LocatedBlocks.class, m); + return m; } /** Convert a ContentSummary to a Json string. */ @@ -676,7 +688,8 @@ public class JsonUtil { return m; } - private static Map toJsonMap( + @VisibleForTesting + static Map toJsonMap( final BlockLocation blockLocation) throws IOException { if (blockLocation == null) { return null; @@ -696,15 +709,20 @@ public class JsonUtil { public static String toJsonString(BlockLocation[] locations) throws IOException { + return toJsonString("BlockLocations", JsonUtil.toJsonMap(locations)); + } + + public static Map toJsonMap(BlockLocation[] locations) + throws IOException { if (locations == null) { return null; } final Map m = new HashMap<>(); Object[] blockLocations = new Object[locations.length]; - for(int i=0; i + + dfs.namenode.access-control-enforcer-reporting-threshold-ms + 1000 + + If an external AccessControlEnforcer runs for a long time to check permission with the FSnamesystem lock, + print a WARN log message. This sets how long must be run for logging to occur. + + + dfs.namenode.lock.detailed-metrics.enabled false @@ -3465,30 +3474,6 @@ - - dfs.datanode.lock.read.write.enabled - true - If this is true, the FsDataset lock will be a read write lock. If - it is false, all locks will be a write lock. - Enabling this should give better datanode throughput, as many read only - functions can run concurrently under the read lock, when they would - previously have required the exclusive write lock. As the feature is - experimental, this switch can be used to disable the shared read lock, and - cause all lock acquisitions to use the exclusive write lock. - - - - - dfs.datanode.lock-reporting-threshold-ms - 300 - When thread waits to obtain a lock, or a thread holds a lock for - more than the threshold, a log message will be written. Note that - dfs.lock.suppress.warning.interval ensures a single log message is - emitted per interval for waiting threads and a single message for holding - threads to avoid excessive logging. 
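The new dfs.namenode.access-control-enforcer-reporting-threshold-ms property above (default 1000) feeds the slowness check added to FSPermissionChecker; a value of 0 or less disables the warning. A small sketch of how such a setting is typically read from the Configuration (the constant names here are illustrative):

import org.apache.hadoop.conf.Configuration;

public class ThresholdConfSketch {
  static final String KEY = "dfs.namenode.access-control-enforcer-reporting-threshold-ms";
  static final long DEFAULT_MS = 1000;

  /** Read once at startup and pass to the permission checker. */
  static long readThreshold(Configuration conf) {
    return conf.getLong(KEY, DEFAULT_MS);
  }
}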
- - - dfs.namenode.startup.delay.block.deletion.sec 0 @@ -3749,7 +3734,7 @@ dfs.ha.nn.not-become-active-in-safemode false - This will prevent safe mode namenodes to become active while other standby + This will prevent safe mode namenodes to become active or observer while other standby namenodes might be ready to serve requests when it is set to true. @@ -4969,7 +4954,7 @@ dfs.journalnode.edit-cache-size.bytes - 1048576 + The size, in bytes, of the in-memory cache of edits to keep on the JournalNode. This cache is used to serve edits for tailing via the RPC-based @@ -4979,6 +4964,22 @@ + + dfs.journalnode.edit-cache-size.fraction + 0.5f + + This ratio refers to the proportion of the maximum memory of the JVM. + Used to calculate the size of the edits cache that is kept in the JournalNode's memory. + This config is an alternative to the dfs.journalnode.edit-cache-size.bytes. + And it is used to serve edits for tailing via the RPC-based mechanism, and is only + enabled when dfs.ha.tail-edits.in-progress is true. Transactions range in size but + are around 200 bytes on average, so the default of 1MB can store around 5000 transactions. + So we can configure a reasonable value based on the maximum memory. The recommended value + is less than 0.9. If we set dfs.journalnode.edit-cache-size.bytes, this parameter will + not take effect. + + + dfs.journalnode.kerberos.internal.spnego.principal @@ -5465,8 +5466,6 @@ - dfs.storage.policy.satisfier.enabled - false dfs.storage.policy.satisfier.mode none @@ -6466,4 +6465,11 @@ If the namespace is DEFAULT, it's best to change this conf to other value. + + dfs.client.rbf.observer.read.enable + false + + Enables observer reads for clients. This should only be enabled when clients are using routers. + + diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html index caab81ef686..b491d5a04e3 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html @@ -81,6 +81,7 @@ Namenode Address + Namenode HA State Block Pool ID Actor State Last Heartbeat @@ -91,6 +92,7 @@ {#dn.BPServiceActorInfo} {NamenodeAddress} + {NamenodeHaState} {BlockPoolID} {ActorState} {LastHeartbeat}s diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js index 07af1c39938..b3fd6981556 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js @@ -1,166 +1,4 @@ /*! 
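The dfs.journalnode.edit-cache-size.fraction description above specifies a heap-relative sizing rule: an explicit dfs.journalnode.edit-cache-size.bytes value takes precedence, otherwise the cache is sized as a fraction of the JVM's maximum memory (0.5 by default, recommended below 0.9). A sketch of that rule with illustrative names, not the JournalNode implementation:

public class EditCacheSizeSketch {
  static long cacheSizeBytes(Long explicitBytes, float fraction) {
    if (explicitBytes != null) {
      return explicitBytes;                                        // ...edit-cache-size.bytes wins
    }
    return (long) (Runtime.getRuntime().maxMemory() * fraction);   // e.g. fraction = 0.5f
  }
}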
- DataTables 1.10.19 - ©2008-2018 SpryMedia Ltd - datatables.net/license -*/
-[... remaining removed lines of the minified DataTables 1.10.19 source elided ...]
-sClass:null,sContentPadding:null,sDefaultContent:null,sName:null,sSortDataType:"std",sSortingClass:null,sSortingClassJUI:null,sTitle:null,sType:null,sWidth:null,sWidthOrig:null};n.defaults={aaData:null,aaSorting:[[0,"asc"]],aaSortingFixed:[],ajax:null,aLengthMenu:[10,25,50,100],aoColumns:null,aoColumnDefs:null,aoSearchCols:[],asStripeClasses:null,bAutoWidth:!0,bDeferRender:!1,bDestroy:!1,bFilter:!0,bInfo:!0,bLengthChange:!0,bPaginate:!0,bProcessing:!1,bRetrieve:!1,bScrollCollapse:!1,bServerSide:!1, -bSort:!0,bSortMulti:!0,bSortCellsTop:!1,bSortClasses:!0,bStateSave:!1,fnCreatedRow:null,fnDrawCallback:null,fnFooterCallback:null,fnFormatNumber:function(a){return a.toString().replace(/\B(?=(\d{3})+(?!\d))/g,this.oLanguage.sThousands)},fnHeaderCallback:null,fnInfoCallback:null,fnInitComplete:null,fnPreDrawCallback:null,fnRowCallback:null,fnServerData:null,fnServerParams:null,fnStateLoadCallback:function(a){try{return JSON.parse((-1===a.iStateDuration?sessionStorage:localStorage).getItem("DataTables_"+ -a.sInstance+"_"+location.pathname))}catch(b){}},fnStateLoadParams:null,fnStateLoaded:null,fnStateSaveCallback:function(a,b){try{(-1===a.iStateDuration?sessionStorage:localStorage).setItem("DataTables_"+a.sInstance+"_"+location.pathname,JSON.stringify(b))}catch(c){}},fnStateSaveParams:null,iStateDuration:7200,iDeferLoading:null,iDisplayLength:10,iDisplayStart:0,iTabIndex:0,oClasses:{},oLanguage:{oAria:{sSortAscending:": activate to sort column ascending",sSortDescending:": activate to sort column descending"}, -oPaginate:{sFirst:"First",sLast:"Last",sNext:"Next",sPrevious:"Previous"},sEmptyTable:"No data available in table",sInfo:"Showing _START_ to _END_ of _TOTAL_ entries",sInfoEmpty:"Showing 0 to 0 of 0 entries",sInfoFiltered:"(filtered from _MAX_ total entries)",sInfoPostFix:"",sDecimal:"",sThousands:",",sLengthMenu:"Show _MENU_ entries",sLoadingRecords:"Loading...",sProcessing:"Processing...",sSearch:"Search:",sSearchPlaceholder:"",sUrl:"",sZeroRecords:"No matching records found"},oSearch:h.extend({}, -n.models.oSearch),sAjaxDataProp:"data",sAjaxSource:null,sDom:"lfrtip",searchDelay:null,sPaginationType:"simple_numbers",sScrollX:"",sScrollXInner:"",sScrollY:"",sServerMethod:"GET",renderer:null,rowId:"DT_RowId"};Z(n.defaults);n.defaults.column={aDataSort:null,iDataSort:-1,asSorting:["asc","desc"],bSearchable:!0,bSortable:!0,bVisible:!0,fnCreatedCell:null,mData:null,mRender:null,sCellType:"td",sClass:"",sContentPadding:"",sDefaultContent:null,sName:"",sSortDataType:"std",sTitle:null,sType:null,sWidth:null}; -Z(n.defaults.column);n.models.oSettings={oFeatures:{bAutoWidth:null,bDeferRender:null,bFilter:null,bInfo:null,bLengthChange:null,bPaginate:null,bProcessing:null,bServerSide:null,bSort:null,bSortMulti:null,bSortClasses:null,bStateSave:null},oScroll:{bCollapse:null,iBarWidth:0,sX:null,sXInner:null,sY:null},oLanguage:{fnInfoCallback:null},oBrowser:{bScrollOversize:!1,bScrollbarLeft:!1,bBounding:!1,barWidth:0},ajax:null,aanFeatures:[],aoData:[],aiDisplay:[],aiDisplayMaster:[],aIds:{},aoColumns:[],aoHeader:[], 
-aoFooter:[],oPreviousSearch:{},aoPreSearchCols:[],aaSorting:null,aaSortingFixed:[],asStripeClasses:null,asDestroyStripes:[],sDestroyWidth:0,aoRowCallback:[],aoHeaderCallback:[],aoFooterCallback:[],aoDrawCallback:[],aoRowCreatedCallback:[],aoPreDrawCallback:[],aoInitComplete:[],aoStateSaveParams:[],aoStateLoadParams:[],aoStateLoaded:[],sTableId:"",nTable:null,nTHead:null,nTFoot:null,nTBody:null,nTableWrapper:null,bDeferLoading:!1,bInitialised:!1,aoOpenRows:[],sDom:null,searchDelay:null,sPaginationType:"two_button", -iStateDuration:0,aoStateSave:[],aoStateLoad:[],oSavedState:null,oLoadedState:null,sAjaxSource:null,sAjaxDataProp:null,bAjaxDataGet:!0,jqXHR:null,json:k,oAjaxData:k,fnServerData:null,aoServerParams:[],sServerMethod:null,fnFormatNumber:null,aLengthMenu:null,iDraw:0,bDrawing:!1,iDrawError:-1,_iDisplayLength:10,_iDisplayStart:0,_iRecordsTotal:0,_iRecordsDisplay:0,oClasses:{},bFiltered:!1,bSorted:!1,bSortCellsTop:null,oInit:null,aoDestroyCallback:[],fnRecordsTotal:function(){return"ssp"==y(this)?1*this._iRecordsTotal: -this.aiDisplayMaster.length},fnRecordsDisplay:function(){return"ssp"==y(this)?1*this._iRecordsDisplay:this.aiDisplay.length},fnDisplayEnd:function(){var a=this._iDisplayLength,b=this._iDisplayStart,c=b+a,d=this.aiDisplay.length,e=this.oFeatures,f=e.bPaginate;return e.bServerSide?!1===f||-1===a?b+d:Math.min(b+a,this._iRecordsDisplay):!f||c>d||-1===a?d:c},oInstance:null,sInstance:null,iTabIndex:0,nScrollHead:null,nScrollFoot:null,aLastSort:[],oPlugins:{},rowIdFn:null,rowId:null};n.ext=x={buttons:{}, -classes:{},builder:"-source-",errMode:"alert",feature:[],search:[],selector:{cell:[],column:[],row:[]},internal:{},legacy:{ajax:null},pager:{},renderer:{pageButton:{},header:{}},order:{},type:{detect:[],search:{},order:{}},_unique:0,fnVersionCheck:n.fnVersionCheck,iApiIndex:0,oJUIClasses:{},sVersion:n.version};h.extend(x,{afnFiltering:x.search,aTypes:x.type.detect,ofnSearch:x.type.search,oSort:x.type.order,afnSortData:x.order,aoFeatures:x.feature,oApi:x.internal,oStdClasses:x.classes,oPagination:x.pager}); -h.extend(n.ext.classes,{sTable:"dataTable",sNoFooter:"no-footer",sPageButton:"paginate_button",sPageButtonActive:"current",sPageButtonDisabled:"disabled",sStripeOdd:"odd",sStripeEven:"even",sRowEmpty:"dataTables_empty",sWrapper:"dataTables_wrapper",sFilter:"dataTables_filter",sInfo:"dataTables_info",sPaging:"dataTables_paginate paging_",sLength:"dataTables_length",sProcessing:"dataTables_processing",sSortAsc:"sorting_asc",sSortDesc:"sorting_desc",sSortable:"sorting",sSortableAsc:"sorting_asc_disabled", -sSortableDesc:"sorting_desc_disabled",sSortableNone:"sorting_disabled",sSortColumn:"sorting_",sFilterInput:"",sLengthSelect:"",sScrollWrapper:"dataTables_scroll",sScrollHead:"dataTables_scrollHead",sScrollHeadInner:"dataTables_scrollHeadInner",sScrollBody:"dataTables_scrollBody",sScrollFoot:"dataTables_scrollFoot",sScrollFootInner:"dataTables_scrollFootInner",sHeaderTH:"",sFooterTH:"",sSortJUIAsc:"",sSortJUIDesc:"",sSortJUI:"",sSortJUIAscAllowed:"",sSortJUIDescAllowed:"",sSortJUIWrapper:"",sSortIcon:"", -sJUIHeader:"",sJUIFooter:""});var 
Lb=n.ext.pager;h.extend(Lb,{simple:function(){return["previous","next"]},full:function(){return["first","previous","next","last"]},numbers:function(a,b){return[ia(a,b)]},simple_numbers:function(a,b){return["previous",ia(a,b),"next"]},full_numbers:function(a,b){return["first","previous",ia(a,b),"next","last"]},first_last_numbers:function(a,b){return["first",ia(a,b),"last"]},_numbers:ia,numbers_length:7});h.extend(!0,n.ext.renderer,{pageButton:{_:function(a,b,c,d,e, -f){var g=a.oClasses,j=a.oLanguage.oPaginate,i=a.oLanguage.oAria.paginate||{},m,l,n=0,o=function(b,d){var k,s,u,r,v=function(b){Ta(a,b.data.action,true)};k=0;for(s=d.length;k").appendTo(b);o(u,r)}else{m=null;l="";switch(r){case "ellipsis":b.append('');break;case "first":m=j.sFirst;l=r+(e>0?"":" "+g.sPageButtonDisabled);break;case "previous":m=j.sPrevious;l=r+(e>0?"":" "+g.sPageButtonDisabled);break;case "next":m= -j.sNext;l=r+(e",{"class":g.sPageButton+" "+l,"aria-controls":a.sTableId,"aria-label":i[r],"data-dt-idx":n,tabindex:a.iTabIndex,id:c===0&&typeof r==="string"?a.sTableId+"_"+r:null}).html(m).appendTo(b);Wa(u,{action:r},v);n++}}}},s;try{s=h(b).find(H.activeElement).data("dt-idx")}catch(u){}o(h(b).empty(),d);s!==k&&h(b).find("[data-dt-idx="+ -s+"]").focus()}}});h.extend(n.ext.type.detect,[function(a,b){var c=b.oLanguage.sDecimal;return $a(a,c)?"num"+c:null},function(a){if(a&&!(a instanceof Date)&&!Zb.test(a))return null;var b=Date.parse(a);return null!==b&&!isNaN(b)||M(a)?"date":null},function(a,b){var c=b.oLanguage.sDecimal;return $a(a,c,!0)?"num-fmt"+c:null},function(a,b){var c=b.oLanguage.sDecimal;return Qb(a,c)?"html-num"+c:null},function(a,b){var c=b.oLanguage.sDecimal;return Qb(a,c,!0)?"html-num-fmt"+c:null},function(a){return M(a)|| -"string"===typeof a&&-1!==a.indexOf("<")?"html":null}]);h.extend(n.ext.type.search,{html:function(a){return M(a)?a:"string"===typeof a?a.replace(Nb," ").replace(Aa,""):""},string:function(a){return M(a)?a:"string"===typeof a?a.replace(Nb," "):a}});var za=function(a,b,c,d){if(0!==a&&(!a||"-"===a))return-Infinity;b&&(a=Pb(a,b));a.replace&&(c&&(a=a.replace(c,"")),d&&(a=a.replace(d,"")));return 1*a};h.extend(x.type.order,{"date-pre":function(a){a=Date.parse(a);return isNaN(a)?-Infinity:a},"html-pre":function(a){return M(a)? -"":a.replace?a.replace(/<.*?>/g,"").toLowerCase():a+""},"string-pre":function(a){return M(a)?"":"string"===typeof a?a.toLowerCase():!a.toString?"":a.toString()},"string-asc":function(a,b){return ab?1:0},"string-desc":function(a,b){return ab?-1:0}});Da("");h.extend(!0,n.ext.renderer,{header:{_:function(a,b,c,d){h(a.nTable).on("order.dt.DT",function(e,f,g,h){if(a===f){e=c.idx;b.removeClass(c.sSortingClass+" "+d.sSortAsc+" "+d.sSortDesc).addClass(h[e]=="asc"?d.sSortAsc:h[e]=="desc"?d.sSortDesc: -c.sSortingClass)}})},jqueryui:function(a,b,c,d){h("
").addClass(d.sSortJUIWrapper).append(b.contents()).append(h("").addClass(d.sSortIcon+" "+c.sSortingClassJUI)).appendTo(b);h(a.nTable).on("order.dt.DT",function(e,f,g,h){if(a===f){e=c.idx;b.removeClass(d.sSortAsc+" "+d.sSortDesc).addClass(h[e]=="asc"?d.sSortAsc:h[e]=="desc"?d.sSortDesc:c.sSortingClass);b.find("span."+d.sSortIcon).removeClass(d.sSortJUIAsc+" "+d.sSortJUIDesc+" "+d.sSortJUI+" "+d.sSortJUIAscAllowed+" "+d.sSortJUIDescAllowed).addClass(h[e]== -"asc"?d.sSortJUIAsc:h[e]=="desc"?d.sSortJUIDesc:c.sSortingClassJUI)}})}}});var eb=function(a){return"string"===typeof a?a.replace(//g,">").replace(/"/g,"""):a};n.render={number:function(a,b,c,d,e){return{display:function(f){if("number"!==typeof f&&"string"!==typeof f)return f;var g=0>f?"-":"",h=parseFloat(f);if(isNaN(h))return eb(f);h=h.toFixed(c);f=Math.abs(h);h=parseInt(f,10);f=c?b+(f-h).toFixed(c).substring(2):"";return g+(d||"")+h.toString().replace(/\B(?=(\d{3})+(?!\d))/g, -a)+f+(e||"")}}},text:function(){return{display:eb,filter:eb}}};h.extend(n.ext.internal,{_fnExternApiFunc:Mb,_fnBuildAjax:sa,_fnAjaxUpdate:mb,_fnAjaxParameters:vb,_fnAjaxUpdateDraw:wb,_fnAjaxDataSrc:ta,_fnAddColumn:Ea,_fnColumnOptions:ka,_fnAdjustColumnSizing:$,_fnVisibleToColumnIndex:aa,_fnColumnIndexToVisible:ba,_fnVisbleColumns:V,_fnGetColumns:ma,_fnColumnTypes:Ga,_fnApplyColumnDefs:jb,_fnHungarianMap:Z,_fnCamelToHungarian:J,_fnLanguageCompat:Ca,_fnBrowserDetect:hb,_fnAddData:O,_fnAddTr:na,_fnNodeToDataIndex:function(a, -b){return b._DT_RowIndex!==k?b._DT_RowIndex:null},_fnNodeToColumnIndex:function(a,b,c){return h.inArray(c,a.aoData[b].anCells)},_fnGetCellData:B,_fnSetCellData:kb,_fnSplitObjNotation:Ja,_fnGetObjectDataFn:S,_fnSetObjectDataFn:N,_fnGetDataMaster:Ka,_fnClearTable:oa,_fnDeleteIndex:pa,_fnInvalidate:da,_fnGetRowElements:Ia,_fnCreateTr:Ha,_fnBuildHead:lb,_fnDrawHead:fa,_fnDraw:P,_fnReDraw:T,_fnAddOptionsHtml:ob,_fnDetectHeader:ea,_fnGetUniqueThs:ra,_fnFeatureHtmlFilter:qb,_fnFilterComplete:ga,_fnFilterCustom:zb, -_fnFilterColumn:yb,_fnFilter:xb,_fnFilterCreateSearch:Pa,_fnEscapeRegex:Qa,_fnFilterData:Ab,_fnFeatureHtmlInfo:tb,_fnUpdateInfo:Db,_fnInfoMacros:Eb,_fnInitialise:ha,_fnInitComplete:ua,_fnLengthChange:Ra,_fnFeatureHtmlLength:pb,_fnFeatureHtmlPaginate:ub,_fnPageChange:Ta,_fnFeatureHtmlProcessing:rb,_fnProcessingDisplay:C,_fnFeatureHtmlTable:sb,_fnScrollDraw:la,_fnApplyToChildren:I,_fnCalculateColumnWidths:Fa,_fnThrottle:Oa,_fnConvertToWidth:Fb,_fnGetWidestNode:Gb,_fnGetMaxLenString:Hb,_fnStringToCss:v, -_fnSortFlatten:X,_fnSort:nb,_fnSortAria:Jb,_fnSortListener:Va,_fnSortAttachListener:Ma,_fnSortingClasses:wa,_fnSortData:Ib,_fnSaveState:xa,_fnLoadState:Kb,_fnSettingsFromNode:ya,_fnLog:K,_fnMap:F,_fnBindAction:Wa,_fnCallbackReg:z,_fnCallbackFire:r,_fnLengthOverflow:Sa,_fnRenderer:Na,_fnDataSource:y,_fnRowAttributes:La,_fnExtend:Xa,_fnCalculateEnd:function(){}});h.fn.dataTable=n;n.$=h;h.fn.dataTableSettings=n.settings;h.fn.dataTableExt=n.ext;h.fn.DataTable=function(a){return h(this).dataTable(a).api()}; -h.each(n,function(a,b){h.fn.DataTable[a]=b});return h.fn.dataTable}); + DataTables 1.11.5 + ©2008-2021 SpryMedia Ltd - datatables.net/license +*/ var $jscomp=$jscomp||{};$jscomp.scope={},$jscomp.findInternal=function(t,e,n){t instanceof String&&(t=String(t));for(var a=t.length,r=0;r").css({position:"fixed",top:0,left:-1*t(e).scrollLeft(),height:1,width:1,overflow:"hidden"}).append(t("
").css({position:"absolute",top:1,left:1,width:100,overflow:"scroll"}).append(t("
").css({width:"100%",height:10}))).appendTo("body"),o=r.children(),i=o.children();a.barWidth=o[0].offsetWidth-o[0].clientWidth,a.bScrollOversize=100===i[0].offsetWidth&&100!==o[0].clientWidth,a.bScrollbarLeft=1!==Math.round(i.offset().left),a.bBounding=!!r[0].getBoundingClientRect().width,r.remove()}t.extend(n.oBrowser,tB.__browser),n.oScroll.iBarWidth=tB.__browser.barWidth}function c(t,e,n,r,o,i){var l=!1;if(n!==a){var s=n;l=!0}for(;r!==o;)t.hasOwnProperty(r)&&(s=l?e(s,t[r],r,t):t[r],l=!0,r+=i);return s}function f(e,a){var r=tB.defaults.column,o=e.aoColumns.length;r=t.extend({},tB.models.oColumn,r,{nTh:a||n.createElement("th"),sTitle:r.sTitle?r.sTitle:a?a.innerHTML:"",aDataSort:r.aDataSort?r.aDataSort:[o],mData:r.mData?r.mData:o,idx:o}),e.aoColumns.push(r),(r=e.aoPreSearchCols)[o]=t.extend({},tB.models.oSearch,r[o]),d(e,o,t(a).data())}function d(e,n,r){n=e.aoColumns[n];var i=e.oClasses,l=t(n.nTh);if(!n.sWidthOrig){n.sWidthOrig=l.attr("width")||null;var u=(l.attr("style")||"").match(/width:\s*(\d+[pxem%]+)/);u&&(n.sWidthOrig=u[1])}r!==a&&null!==r&&(s(r),o(tB.defaults.column,r,!0),r.mDataProp===a||r.mData||(r.mData=r.mDataProp),r.sType&&(n._sManualType=r.sType),r.className&&!r.sClass&&(r.sClass=r.className),r.sClass&&l.addClass(r.sClass),t.extend(n,r),tA(n,r,"sWidth","sWidthOrig"),r.iDataSort!==a&&(n.aDataSort=[r.iDataSort]),tA(n,r,"aDataSort"));var c=n.mData,f=en(c),d=n.mRender?en(n.mRender):null;r=function(t){return"string"==typeof t&&-1!==t.indexOf("@")},n._bAttrSrc=t.isPlainObject(c)&&(r(c.sort)||r(c.type)||r(c.filter)),n._setter=null,n.fnGetData=function(t,e,n){var r=f(t,e,a,n);return d&&e?d(r,e,t,n):r},n.fnSetData=function(t,e,n){return ea(c)(t,e,n)},"number"!=typeof c&&(e._rowReadObject=!0),e.oFeatures.bSort||(n.bSortable=!1,l.addClass(i.sSortableNone)),e=-1!==t.inArray("asc",n.asSorting),r=-1!==t.inArray("desc",n.asSorting),n.bSortable&&(e||r)?e&&!r?(n.sSortingClass=i.sSortableAsc,n.sSortingClassJUI=i.sSortJUIAscAllowed):!e&&r?(n.sSortingClass=i.sSortableDesc,n.sSortingClassJUI=i.sSortJUIDescAllowed):(n.sSortingClass=i.sSortable,n.sSortingClassJUI=i.sSortJUI):(n.sSortingClass=i.sSortableNone,n.sSortingClassJUI="")}function h(t){if(!1!==t.oFeatures.bAutoWidth){var e=t.aoColumns;th(t);for(var n=0,a=e.length;nd[h])o(u.length+d[h],c);else if("string"==typeof d[h]){var p=0;for(s=u.length;pe&&t[o]--;-1!=r&&n===a&&t.splice(r,1)}function I(t,e,n,r){var o,i=t.aoData[e],l=function(n,a){for(;n.childNodes.length;)n.removeChild(n.firstChild);n.innerHTML=D(t,e,a,"display")};if("dom"!==n&&(n&&"auto"!==n||"dom"!==i.src)){var s=i.anCells;if(s){if(r!==a)l(s[r],r);else for(n=0,o=s.length;n").appendTo(r));var u=0;for(n=s.length;u=e.fnRecordsDisplay()?0:o,e.iInitDisplayStart=-1),r=tP(e,"aoPreDrawCallback","preDraw",[e]),-1!==t.inArray(!1,r))tu(e,!1);else{r=[];var i=0,l=(o=e.asStripeClasses).length,s=e.oLanguage,u="ssp"==tH(e),c=e.aiDisplay,f=e._iDisplayStart,d=e.fnDisplayEnd();if(e.bDrawing=!0,e.bDeferLoading)e.bDeferLoading=!1,e.iDraw++,tu(e,!1);else if(u){if(!e.bDestroying&&!n){M(e);return}}else e.iDraw++;if(0!==c.length)for(n=u?e.aoData.length:d,s=u?0:f;s",{class:l?o[0]:""}).append(t("",{valign:"top",colSpan:b(e),class:e.oClasses.sRowEmpty}).html(i))[0];tP(e,"aoHeaderCallback","header",[t(e.nTHead).children("tr")[0],w(e),f,d,c]),tP(e,"aoFooterCallback","footer",[t(e.nTFoot).children("tr")[0],w(e),f,d,c]),(o=t(e.nTBody)).children().detach(),o.append(t(r)),tP(e,"aoDrawCallback","draw",[e]),e.bSorted=!1,e.bFiltered=!1,e.bDrawing=!1}}function O(t,e){var 
n=t.oFeatures,a=n.bFilter;n.bSort&&tv(t),a?X(t,t.oPreviousSearch):t.aiDisplay=t.aiDisplayMaster.slice(),!0!==e&&(t._iDisplayStart=0),t._drawHold=e,R(t),t._drawHold=!1}function H(e){var n=e.oClasses,a=t(e.nTable);a=t("
").insertBefore(a);var r=e.oFeatures,o=t("
",{id:e.sTableId+"_wrapper",class:n.sWrapper+(e.nTFoot?"":" "+n.sNoFooter)});e.nHolding=a[0],e.nTableWrapper=o[0],e.nTableReinsertBefore=e.nTable.nextSibling;for(var i,l,s,u,c,f,d=e.sDom.split(""),h=0;h")[0],"'"==(u=d[h+1])||'"'==u){for(f=2,c="";d[h+f]!=u;)c+=d[h+f],f++;"H"==c?c=n.sJUIHeader:"F"==c&&(c=n.sJUIFooter),-1!=c.indexOf(".")?(u=c.split("."),s.id=u[0].substr(1,u[0].length-1),s.className=u[1]):"#"==c.charAt(0)?s.id=c.substr(1,c.length-1):s.className=c,h+=f}o.append(s),o=t(s)}else if(">"==l)o=o.parent();else if("l"==l&&r.bPaginate&&r.bLengthChange)i=to(e);else if("f"==l&&r.bFilter)i=V(e);else if("r"==l&&r.bProcessing)i=ts(e);else if("t"==l)i=tc(e);else if("i"==l&&r.bInfo)i=Q(e);else if("p"==l&&r.bPaginate)i=ti(e);else if(0!==tB.ext.feature.length){for(s=tB.ext.feature,f=0,u=s.length;f',u=o.sSearch;u=u.match(/_INPUT_/)?u.replace("_INPUT_",s):u+s,a=t("
",{id:l.f?null:r+"_filter",class:a.sFilter}).append(t("
").addClass(n.sLength);return e.aanFeatures.l||(u[0].id=a+"_length"),u.children().append(e.oLanguage.sLengthMenu.replace("_MENU_",o[0].outerHTML)),t("select",u).val(e._iDisplayLength).on("change.DT",function(n){tr(e,t(this).val()),R(e)}),t(e.nTable).on("length.dt.DT",function(n,a,r){e===a&&t("select",u).val(r)}),u[0]}function ti(e){var n=e.sPaginationType,a=tB.ext.pager[n],r="function"==typeof a,o=function(t){R(t)};n=t("
").addClass(e.oClasses.sPaging+n)[0];var i=e.aanFeatures;return r||a.fnInit(e,n,o),i.p||(n.id=e.sTableId+"_paginate",e.aoDrawCallback.push({fn:function(t){if(r){var e,n=t._iDisplayStart,l=t._iDisplayLength,s=t.fnRecordsDisplay(),u=-1===l;for(s=a(n=u?0:Math.ceil(n/l),l=u?1:Math.ceil(s/l)),u=0,e=i.p.length;uo&&(a=0):"first"==e?a=0:"previous"==e?0>(a=0<=r?a-r:0)&&(a=0):"next"==e?a+r",{id:e.aanFeatures.r?null:e.sTableId+"_processing",class:e.oClasses.sProcessing}).html(e.oLanguage.sProcessing).insertBefore(e.nTable)[0]}function tu(e,n){e.oFeatures.bProcessing&&t(e.aanFeatures.r).css("display",n?"block":"none"),tP(e,null,"processing",[e,n])}function tc(e){var n=t(e.nTable),a=e.oScroll;if(""===a.sX&&""===a.sY)return e.nTable;var r=a.sX,o=a.sY,i=e.oClasses,l=n.children("caption"),s=l.length?l[0]._captionSide:null,u=t(n[0].cloneNode(!1)),c=t(n[0].cloneNode(!1)),f=n.children("tfoot");f.length||(f=null),u=t("
",{class:i.sScrollWrapper}).append(t("
",{class:i.sScrollHead}).css({overflow:"hidden",position:"relative",border:0,width:r?r?t$(r):null:"100%"}).append(t("
",{class:i.sScrollHeadInner}).css({"box-sizing":"content-box",width:a.sXInner||"100%"}).append(u.removeAttr("id").css("margin-left",0).append("top"===s?l:null).append(n.children("thead"))))).append(t("
",{class:i.sScrollBody}).css({position:"relative",overflow:"auto",width:r?t$(r):null}).append(n)),f&&u.append(t("
",{class:i.sScrollFoot}).css({overflow:"hidden",border:0,width:r?r?t$(r):null:"100%"}).append(t("
",{class:i.sScrollFootInner}).append(c.removeAttr("id").css("margin-left",0).append("bottom"===s?l:null).append(n.children("tfoot")))));var d=(n=u.children())[0];i=n[1];var h=f?n[2]:null;return r&&t(i).on("scroll.DT",function(t){t=this.scrollLeft,d.scrollLeft=t,f&&(h.scrollLeft=t)}),t(i).css("max-height",o),a.bCollapse||t(i).css("height",o),e.nScrollHead=d,e.nScrollBody=i,e.nScrollFoot=h,e.aoDrawCallback.push({fn:tf,sName:"scrolling"}),u[0]}function tf(n){var r=n.oScroll,o=r.sX,i=r.sXInner,l=r.sY;r=r.iBarWidth;var s=t(n.nScrollHead),u=s[0].style,c=s.children("div"),f=c[0].style,d=c.children("table"),g=t(c=n.nScrollBody),b=c.style,$=t(n.nScrollFoot).children("div"),m=$.children("table"),v=t(n.nTHead),S=t(n.nTable),y=S[0],D=y.style,T=n.nTFoot?t(n.nTFoot):null,C=n.oBrowser,w=C.bScrollOversize;t3(n.aoColumns,"nTh");var x,_=[],I=[],A=[],L=[],F=function(t){(t=t.style).paddingTop="0",t.paddingBottom="0",t.borderTopWidth="0",t.borderBottomWidth="0",t.height=0},j=c.scrollHeight>c.clientHeight;if(n.scrollBarVis!==j&&n.scrollBarVis!==a)n.scrollBarVis=j,h(n);else{if(n.scrollBarVis=j,S.children("thead, tfoot").remove(),T){var P=T.clone().prependTo(S),R=T.find("tr");P=P.find("tr")}var O=v.clone().prependTo(S);v=v.find("tr"),j=O.find("tr"),O.find("th, td").removeAttr("tabindex"),o||(b.width="100%",s[0].style.width="100%"),t.each(k(n,O),function(t,e){x=p(n,t),e.style.width=n.aoColumns[x].sWidth}),T&&td(function(t){t.style.width=""},P),s=S.outerWidth(),""===o?(D.width="100%",w&&(S.find("tbody").height()>c.offsetHeight||"scroll"==g.css("overflow-y"))&&(D.width=t$(S.outerWidth()-r)),s=S.outerWidth()):""!==i&&(D.width=t$(i),s=S.outerWidth()),td(F,j),td(function(n){var a=e.getComputedStyle?e.getComputedStyle(n).width:t$(t(n).width());A.push(n.innerHTML),_.push(a)},j),td(function(t,e){t.style.width=_[e]},v),t(j).css("height",0),T&&(td(F,P),td(function(e){L.push(e.innerHTML),I.push(t$(t(e).css("width")))},P),td(function(t,e){t.style.width=I[e]},R),t(P).height(0)),td(function(t,e){t.innerHTML='
'+A[e]+"
",t.childNodes[0].style.height="0",t.childNodes[0].style.overflow="hidden",t.style.width=_[e]},j),T&&td(function(t,e){t.innerHTML='
'+L[e]+"
",t.childNodes[0].style.height="0",t.childNodes[0].style.overflow="hidden",t.style.width=I[e]},P),Math.round(S.outerWidth())c.offsetHeight||"scroll"==g.css("overflow-y")?s+r:s,w&&(c.scrollHeight>c.offsetHeight||"scroll"==g.css("overflow-y"))&&(D.width=t$(R-r)),""!==o&&""===i||tI(n,1,"Possible column misalignment",6)):R="100%",b.width=t$(R),u.width=t$(R),T&&(n.nScrollFoot.style.width=t$(R)),!l&&w&&(b.height=t$(y.offsetHeight+r)),o=S.outerWidth(),d[0].style.width=t$(o),f.width=t$(o),i=S.height()>c.clientHeight||"scroll"==g.css("overflow-y"),f[l="padding"+(C.bScrollbarLeft?"Left":"Right")]=i?r+"px":"0px",T&&(m[0].style.width=t$(o),$[0].style.width=t$(o),$[0].style[l]=i?r+"px":"0px"),S.children("colgroup").insertBefore(S.children("thead")),g.trigger("scroll"),(n.bSorted||n.bFiltered)&&!n._drawHold&&(c.scrollTop=0)}}function td(t,e,n){for(var a,r,o=0,i=0,l=e.length;i").appendTo(c.find("tbody"));for(c.find("thead, tfoot").remove(),c.append(t(n.nTHead).clone()).append(t(n.nTFoot).clone()),c.find("tfoot th, tfoot td").css("width",""),d=k(n,c.find("thead")[0]),a=0;a").css({width:y.sWidthOrig,margin:0,padding:0,border:0,height:1}));if(n.aoData.length)for(a=0;a").css(s||l?{position:"absolute",top:0,left:0,height:1,right:0,overflow:"hidden"}:{}).append(c).appendTo(m),s&&u?c.width(u):s?(c.css("width","auto"),c.removeAttr("width"),c.width()").css("width",t$(e)).appendTo(a||n.body))[0].offsetWidth,e.remove(),a):0}function tg(e,n){var a=tb(e,n);if(0>a)return null;var r=e.aoData[a];return r.nTr?r.anCells[n]:t("").html(D(e,a,n,"display"))[0]}function tb(t,e){for(var n,a=-1,r=-1,o=0,i=t.aoData.length;oa&&(a=n.length,r=o);return r}function t$(t){return null===t?"0px":"number"==typeof t?0>t?"0px":t+"px":t.match(/\d$/)?t+"px":t}function tm(e){var n=[],r=e.aoColumns,o=e.aaSortingFixed,i=t.isPlainObject(o),l=[],s=function(e){e.length&&!Array.isArray(e[0])?l.push(e):t.merge(l,e)};for(Array.isArray(o)&&s(o),i&&o.pre&&s(o.pre),s(e.aaSorting),i&&o.post&&s(o.post),e=0;ef?1:0))return"asc"===u.dir?c:-c}return(c=n[t])<(f=n[e])?-1:c>f?1:0}):i.sort(function(t,e){var o,i=l.length,s=r[t]._aSortData,u=r[e]._aSortData;for(o=0;od?1:0})}t.bSorted=!0}function tS(t){var e=t.aoColumns,n=tm(t);t=t.oLanguage.oAria;for(var a=0,r=e.length;a/g,""),s=o.nTh;s.removeAttribute("aria-sort"),o.bSortable&&(0i?i+1:3))}for(i=0,n=o.length;ii?i+1:3))}e.aLastSort=o}function tT(t,e){var n,a=t.aoColumns[e],r=tB.ext.order[a.sSortDataType];r&&(n=r.call(t.oInstance,t,e,g(t,e)));for(var o,i=tB.ext.type.order[a.sType+"-pre"],l=0,s=t.aoData.length;l=i.length?[0,n[1]]:n)})),n.search!==a&&t.extend(e.oPreviousSearch,K(n.search)),n.columns){for(s=0,o=n.columns.length;s=n&&(e=n-a),e-=e%a,(-1===a||0>e)&&(e=0),t._iDisplayStart=e}function tO(e,n){e=e.renderer;var a=tB.ext.renderer[n];return t.isPlainObject(e)&&e[n]?a[e[n]]||a._:"string"==typeof e&&a[e]||a._}function tH(t){return t.oFeatures.bServerSide?"ssp":t.ajax||t.sAjaxSource?"ajax":"dom"}function tN(t,e){var n=eI.numbers_length,a=Math.floor(n/2);return e<=n?t=t4(0,e):t<=a?((t=t4(0,n-2)).push("ellipsis"),t.push(e-1)):(t>=e-1-a?t=t4(e-(n-2),e):((t=t4(t-a+2,t+a-1)).push("ellipsis"),t.push(e-1)),t.splice(0,0,"ellipsis"),t.splice(0,0,0)),t.DT_el="span",t}function tk(e){t.each({num:function(t){return eA(t,e)},"num-fmt":function(t){return eA(t,e,tq)},"html-num":function(t){return eA(t,e,tX)},"html-num-fmt":function(t){return eA(t,e,tX,tq)}},function(t,n){tM.type.order[t+e+"-pre"]=n,t.match(/^html\-/)&&(tM.type.search[t+e]=tM.type.search.html)})}function tE(t){return function(){var 
e=[t_(this[tB.ext.iApiIndex])].concat(Array.prototype.slice.call(arguments));return tB.ext.internal[t].apply(this,e)}}var tM,tW,t9,tB=function(e,n){if(this instanceof tB)return t(e).DataTable(n);n=e,this.$=function(t,e){return this.api(!0).$(t,e)},this._=function(t,e){return this.api(!0).rows(t,e).data()},this.api=function(t){return new ed(t?t_(this[tM.iApiIndex]):this)},this.fnAddData=function(e,n){var r=this.api(!0);return e=Array.isArray(e)&&(Array.isArray(e[0])||t.isPlainObject(e[0]))?r.rows.add(e):r.row.add(e),(n===a||n)&&r.draw(),e.flatten().toArray()},this.fnAdjustColumnSizing=function(t){var e=this.api(!0).columns.adjust(),n=e.settings()[0],r=n.oScroll;t===a||t?e.draw(!1):(""!==r.sX||""!==r.sY)&&tf(n)},this.fnClearTable=function(t){var e=this.api(!0).clear();(t===a||t)&&e.draw()},this.fnClose=function(t){this.api(!0).row(t).child.hide()},this.fnDeleteRow=function(t,e,n){var r=this.api(!0),o=(t=r.rows(t)).settings()[0],i=o.aoData[t[0][0]];return t.remove(),e&&e.call(this,o,i),(n===a||n)&&r.draw(),i},this.fnDestroy=function(t){this.api(!0).destroy(t)},this.fnDraw=function(t){this.api(!0).draw(t)},this.fnFilter=function(t,e,n,r,o,i){o=this.api(!0),null===e||e===a?o.search(t,n,r,i):o.column(e).search(t,n,r,i),o.draw()},this.fnGetData=function(t,e){var n=this.api(!0);if(t!==a){var r=t.nodeName?t.nodeName.toLowerCase():"";return e!==a||"td"==r||"th"==r?n.cell(t,e).data():n.row(t).data()||null}return n.data().toArray()},this.fnGetNodes=function(t){var e=this.api(!0);return t!==a?e.row(t).node():e.rows().nodes().flatten().toArray()},this.fnGetPosition=function(t){var e=this.api(!0),n=t.nodeName.toUpperCase();return"TR"==n?e.row(t).index():"TD"==n||"TH"==n?[(t=e.cell(t).index()).row,t.columnVisible,t.column]:null},this.fnIsOpen=function(t){return this.api(!0).row(t).child.isShown()},this.fnOpen=function(t,e,n){return this.api(!0).row(t).child(e,n).show().child()[0]},this.fnPageChange=function(t,e){t=this.api(!0).page(t),(e===a||e)&&t.draw(!1)},this.fnSetColumnVis=function(t,e,n){t=this.api(!0).column(t).visible(e),(n===a||n)&&t.columns.adjust().draw()},this.fnSettings=function(){return t_(this[tM.iApiIndex])},this.fnSort=function(t){this.api(!0).order(t).draw()},this.fnSortListener=function(t,e,n){this.api(!0).order.listener(t,e,n)},this.fnUpdate=function(t,e,n,r,o){var i=this.api(!0);return n===a||null===n?i.row(e).data(t):i.cell(e,n).data(t),(o===a||o)&&i.columns.adjust(),(r===a||r)&&i.draw(),0},this.fnVersionCheck=tM.fnVersionCheck;var r=this,c=n===a,h=this.length;for(var p in c&&(n={}),this.oApi=this.internal=tM.internal,tB.ext.internal)p&&(this[p]=tE(p));return this.each(function(){var e,p={},g=1").appendTo(D)),x.nTHead=r[0];var o=D.children("tbody");if(0===o.length&&(o=t("").insertAfter(r)),x.nTBody=o[0],0===(r=D.children("tfoot")).length&&0").appendTo(D)),0===r.length||0===r.children().length?D.addClass(_.sNoFooter):0/g,t0=/^\d{2,4}[\.\/\-]\d{1,2}[\.\/\-]\d{1,2}([T ]{1}\d{1,2}[:\.]\d{2}([\.:]\d{2})?)?$/,tJ=/(\/|\.|\*|\+|\?|\||\(|\)|\[|\]|\{|\}|\\|\$|\^|\-)/g,tq=/['\u00A0,$£€¥%\u2009\u202F\u20BD\u20a9\u20BArfkɃΞ]/gi,tY=function(t){return!t||!0===t||"-"===t},tG=function(t){var e=parseInt(t,10);return!isNaN(e)&&isFinite(t)?e:null},tz=function(t,e){return tU[e]||(tU[e]=RegExp(er(e),"g")),"string"==typeof t&&"."!==e?t.replace(/\./g,"").replace(tU[e],"."):t},t1=function(t,e,n){var a="string"==typeof t;return!!tY(t)||(e&&a&&(t=tz(t,e)),n&&a&&(t=t.replace(tq,"")),!isNaN(parseFloat(t))&&isFinite(t))},tZ=function(t,e,n){return!!tY(t)||(tY(t)||"string"==typeof 
t)&&!!t1(t.replace(tX,""),e,n)||null},t3=function(t,e,n){var r=[],o=0,i=t.length;if(n!==a)for(;ot.length))for(var e=t.slice().sort(),n=e[0],a=1,r=e.length;a")[0],ei=eo.textContent!==a,el=/<.*?>/g,es=tB.util.throttle,eu=[],ec=Array.prototype,ef=function(e){var n,a=tB.settings,r=t.map(a,function(t,e){return t.nTable});if(!e)return[];if(e.nTable&&e.oApi)return[e];if(e.nodeName&&"table"===e.nodeName.toLowerCase()){var o=t.inArray(e,r);return -1!==o?[a[o]]:null}return e&&"function"==typeof e.settings?e.settings().toArray():("string"==typeof e?n=t(e):e instanceof t&&(n=e),n)?n.map(function(e){return -1!==(o=t.inArray(this,r))?a[o]:null}).toArray():void 0},ed=function(e,n){if(!(this instanceof ed))return new ed(e,n);var a=[],r=function(t){(t=ef(t))&&a.push.apply(a,t)};if(Array.isArray(e))for(var o=0,i=e.length;ot?new ed(e[t],this[t]):null},filter:function(t){var e=[];if(ec.filter)e=ec.filter.call(this,t,this);else for(var n=0,a=this.length;n").addClass(a),t("td",r).addClass(a).html(n)[0].colSpan=b(e),o.push(r[0]))};i(a,r),n._details&&n._details.detach(),n._details=t(o),n._detailsShow&&n._details.insertAfter(n.nTr)},ey=tB.util.throttle(function(t){tC(t[0])},500),eD=function(e,n){var r=e.context;r.length&&(e=r[0].aoData[n!==a?n:e[0]])&&e._details&&(e._details.remove(),e._detailsShow=a,e._details=a,t(e.nTr).removeClass("dt-hasChild"),ey(r))},e8=function(e,n){var a=e.context;if(a.length&&e.length){var r=a[0].aoData[e[0]];r._details&&((r._detailsShow=n)?(r._details.insertAfter(r.nTr),t(r.nTr).addClass("dt-hasChild")):(r._details.detach(),t(r.nTr).removeClass("dt-hasChild")),tP(a[0],null,"childRow",[n,e.row(e[0])]),eT(a[0]),ey(a))}},eT=function(t){var e=new ed(t),n=t.aoData;e.off("draw.dt.DT_details column-visibility.dt.DT_details destroy.dt.DT_details"),0(l=parseInt(u[1],10))){var c=t.map(r,function(t,e){return t.bVisible?e:null});return[c[c.length+l]]}return[p(e,l)];case"name":return t.map(o,function(t,e){return t===u[1]?e:null});default:return[]}return n.nodeName&&n._DT_CellIndex?[n._DT_CellIndex.column]:(l=t(i).filter(n).map(function(){return t.inArray(this,i)}).toArray()).length||!n.nodeName?l:(l=t(n).closest("*[data-dt-column]")).length?[l.data("dt-column")]:[]},e,a)};tW("columns()",function(e,n){e===a?e="":t.isPlainObject(e)&&(n=e,e=""),n=eb(n);var r=this.iterator("table",function(t){return ex(t,e,n)},1);return r.selector.cols=e,r.selector.opts=n,r}),t9("columns().header()","column().header()",function(t,e){return this.iterator("column",function(t,e){return t.aoColumns[e].nTh},1)}),t9("columns().footer()","column().footer()",function(t,e){return this.iterator("column",function(t,e){return t.aoColumns[e].nTf},1)}),t9("columns().data()","column().data()",function(){return this.iterator("column-rows",ew,1)}),t9("columns().dataSrc()","column().dataSrc()",function(){return this.iterator("column",function(t,e){return t.aoColumns[e].mData},1)}),t9("columns().cache()","column().cache()",function(t){return this.iterator("column-rows",function(e,n,a,r,o){return t2(e.aoData,o,"search"===t?"_aFilterData":"_aSortData",n)},1)}),t9("columns().nodes()","column().nodes()",function(){return this.iterator("column-rows",function(t,e,n,a,r){return t2(t.aoData,r,"anCells",e)},1)}),t9("columns().visible()","column().visible()",function(e,n){var r=this,o=this.iterator("column",function(n,r){if(e===a)return n.aoColumns[r].bVisible;var o,i=n.aoColumns,l=i[r],s=n.aoData;if(e!==a&&l.bVisible!==e){if(e){var u=t.inArray(!0,t3(i,"bVisible"),r+1);for(i=0,o=s.length;ia;return!0},tB.isDataTable=tB.fnIsDataTable=function(e){var 
n=t(e).get(0),a=!1;return e instanceof tB.Api||(t.each(tB.settings,function(e,r){e=r.nScrollHead?t("table",r.nScrollHead)[0]:null;var o=r.nScrollFoot?t("table",r.nScrollFoot)[0]:null;(r.nTable===n||e===n||o===n)&&(a=!0)}),a)},tB.tables=tB.fnTables=function(e){var n=!1;t.isPlainObject(e)&&(n=e.api,e=e.visible);var a=t.map(tB.settings,function(n){if(!e||e&&t(n.nTable).is(":visible"))return n.nTable});return n?new ed(a):a},tB.camelToHungarian=o,tW("$()",function(e,n){return n=t(n=this.rows(n).nodes()),t([].concat(n.filter(e).toArray(),n.find(e).toArray()))}),t.each(["on","one","off"],function(e,n){tW(n+"()",function(){var e=Array.prototype.slice.call(arguments);e[0]=t.map(e[0].split(/\s/),function(t){return t.match(/\.dt\b/)?t:t+".dt"}).join(" ");var a=t(this.tables().nodes());return a[n].apply(a,e),this})}),tW("clear()",function(){return this.iterator("table",function(t){x(t)})}),tW("settings()",function(){return new ed(this.context,this.context)}),tW("init()",function(){var t=this.context;return t.length?t[0].oInit:null}),tW("data()",function(){return this.iterator("table",function(t){return t3(t.aoData,"_aData")}).flatten()}),tW("destroy()",function(n){return n=n||!1,this.iterator("table",function(a){var r=a.nTableWrapper.parentNode,o=a.oClasses,i=a.nTable,l=a.nTBody,s=a.nTHead,u=a.nTFoot,c=t(i);l=t(l);var f,d=t(a.nTableWrapper),h=t.map(a.aoData,function(t){return t.nTr});a.bDestroying=!0,tP(a,"aoDestroyCallback","destroy",[a]),n||new ed(a).columns().visible(!0),d.off(".DT").find(":not(tbody *)").off(".DT"),t(e).off(".DT-"+a.sInstance),i!=s.parentNode&&(c.children("thead").detach(),c.append(s)),u&&i!=u.parentNode&&(c.children("tfoot").detach(),c.append(u)),a.aaSorting=[],a.aaSortingFixed=[],t8(a),t(h).removeClass(a.asStripeClasses.join(" ")),t("th, td",s).removeClass(o.sSortable+" "+o.sSortableAsc+" "+o.sSortableDesc+" "+o.sSortableNone),l.children().detach(),l.append(h),c[s=n?"remove":"detach"](),d[s](),!n&&r&&(r.insertBefore(i,a.nTableReinsertBefore),c.css("width",a.sDestroyWidth).removeClass(o.sTable),(f=a.asDestroyStripes.length)&&l.children().each(function(e){t(this).addClass(a.asDestroyStripes[e%f])})),-1!==(r=t.inArray(a,tB.settings))&&tB.settings.splice(r,1)})}),t.each(["column","row","cell"],function(t,e){tW(e+"s().every()",function(t){var n=this.selector.opts,r=this;return this.iterator(e,function(o,i,l,s,u){t.call(r[e](i,"cell"===e?l:n,"cell"===e?n:a),i,l,s,u)})})}),tW("i18n()",function(e,n,r){var 
o=this.context[0];return(e=en(e)(o.oLanguage))===a&&(e=n),r!==a&&t.isPlainObject(e)&&(e=e[r]!==a?e[r]:e._),e.replace("%d",r)}),tB.version="1.11.5",tB.settings=[],tB.models={},tB.models.oSearch={bCaseInsensitive:!0,sSearch:"",bRegex:!1,bSmart:!0,return:!1},tB.models.oRow={nTr:null,anCells:null,_aData:[],_aSortData:null,_aFilterData:null,_sFilterRow:null,_sRowStripe:"",src:null,idx:-1},tB.models.oColumn={idx:null,aDataSort:null,asSorting:null,bSearchable:null,bSortable:null,bVisible:null,_sManualType:null,_bAttrSrc:!1,fnCreatedCell:null,fnGetData:null,fnSetData:null,mData:null,mRender:null,nTh:null,nTf:null,sClass:null,sContentPadding:null,sDefaultContent:null,sName:null,sSortDataType:"std",sSortingClass:null,sSortingClassJUI:null,sTitle:null,sType:null,sWidth:null,sWidthOrig:null},tB.defaults={aaData:null,aaSorting:[[0,"asc"]],aaSortingFixed:[],ajax:null,aLengthMenu:[10,25,50,100],aoColumns:null,aoColumnDefs:null,aoSearchCols:[],asStripeClasses:null,bAutoWidth:!0,bDeferRender:!1,bDestroy:!1,bFilter:!0,bInfo:!0,bLengthChange:!0,bPaginate:!0,bProcessing:!1,bRetrieve:!1,bScrollCollapse:!1,bServerSide:!1,bSort:!0,bSortMulti:!0,bSortCellsTop:!1,bSortClasses:!0,bStateSave:!1,fnCreatedRow:null,fnDrawCallback:null,fnFooterCallback:null,fnFormatNumber:function(t){return t.toString().replace(/\B(?=(\d{3})+(?!\d))/g,this.oLanguage.sThousands)},fnHeaderCallback:null,fnInfoCallback:null,fnInitComplete:null,fnPreDrawCallback:null,fnRowCallback:null,fnServerData:null,fnServerParams:null,fnStateLoadCallback:function(t){try{return JSON.parse((-1===t.iStateDuration?sessionStorage:localStorage).getItem("DataTables_"+t.sInstance+"_"+location.pathname))}catch(e){return{}}},fnStateLoadParams:null,fnStateLoaded:null,fnStateSaveCallback:function(t,e){try{(-1===t.iStateDuration?sessionStorage:localStorage).setItem("DataTables_"+t.sInstance+"_"+location.pathname,JSON.stringify(e))}catch(n){}},fnStateSaveParams:null,iStateDuration:7200,iDeferLoading:null,iDisplayLength:10,iDisplayStart:0,iTabIndex:0,oClasses:{},oLanguage:{oAria:{sSortAscending:": activate to sort column ascending",sSortDescending:": activate to sort column descending"},oPaginate:{sFirst:"First",sLast:"Last",sNext:"Next",sPrevious:"Previous"},sEmptyTable:"No data available in table",sInfo:"Showing _START_ to _END_ of _TOTAL_ entries",sInfoEmpty:"Showing 0 to 0 of 0 entries",sInfoFiltered:"(filtered from _MAX_ total entries)",sInfoPostFix:"",sDecimal:"",sThousands:",",sLengthMenu:"Show _MENU_ entries",sLoadingRecords:"Loading...",sProcessing:"Processing...",sSearch:"Search:",sSearchPlaceholder:"",sUrl:"",sZeroRecords:"No matching records 
found"},oSearch:t.extend({},tB.models.oSearch),sAjaxDataProp:"data",sAjaxSource:null,sDom:"lfrtip",searchDelay:null,sPaginationType:"simple_numbers",sScrollX:"",sScrollXInner:"",sScrollY:"",sServerMethod:"GET",renderer:null,rowId:"DT_RowId"},r(tB.defaults),tB.defaults.column={aDataSort:null,iDataSort:-1,asSorting:["asc","desc"],bSearchable:!0,bSortable:!0,bVisible:!0,fnCreatedCell:null,mData:null,mRender:null,sCellType:"td",sClass:"",sContentPadding:"",sDefaultContent:null,sName:"",sSortDataType:"std",sTitle:null,sType:null,sWidth:null},r(tB.defaults.column),tB.models.oSettings={oFeatures:{bAutoWidth:null,bDeferRender:null,bFilter:null,bInfo:null,bLengthChange:null,bPaginate:null,bProcessing:null,bServerSide:null,bSort:null,bSortMulti:null,bSortClasses:null,bStateSave:null},oScroll:{bCollapse:null,iBarWidth:0,sX:null,sXInner:null,sY:null},oLanguage:{fnInfoCallback:null},oBrowser:{bScrollOversize:!1,bScrollbarLeft:!1,bBounding:!1,barWidth:0},ajax:null,aanFeatures:[],aoData:[],aiDisplay:[],aiDisplayMaster:[],aIds:{},aoColumns:[],aoHeader:[],aoFooter:[],oPreviousSearch:{},aoPreSearchCols:[],aaSorting:null,aaSortingFixed:[],asStripeClasses:null,asDestroyStripes:[],sDestroyWidth:0,aoRowCallback:[],aoHeaderCallback:[],aoFooterCallback:[],aoDrawCallback:[],aoRowCreatedCallback:[],aoPreDrawCallback:[],aoInitComplete:[],aoStateSaveParams:[],aoStateLoadParams:[],aoStateLoaded:[],sTableId:"",nTable:null,nTHead:null,nTFoot:null,nTBody:null,nTableWrapper:null,bDeferLoading:!1,bInitialised:!1,aoOpenRows:[],sDom:null,searchDelay:null,sPaginationType:"two_button",iStateDuration:0,aoStateSave:[],aoStateLoad:[],oSavedState:null,oLoadedState:null,sAjaxSource:null,sAjaxDataProp:null,jqXHR:null,json:a,oAjaxData:a,fnServerData:null,aoServerParams:[],sServerMethod:null,fnFormatNumber:null,aLengthMenu:null,iDraw:0,bDrawing:!1,iDrawError:-1,_iDisplayLength:10,_iDisplayStart:0,_iRecordsTotal:0,_iRecordsDisplay:0,oClasses:{},bFiltered:!1,bSorted:!1,bSortCellsTop:null,oInit:null,aoDestroyCallback:[],fnRecordsTotal:function(){return"ssp"==tH(this)?1*this._iRecordsTotal:this.aiDisplayMaster.length},fnRecordsDisplay:function(){return"ssp"==tH(this)?1*this._iRecordsDisplay:this.aiDisplay.length},fnDisplayEnd:function(){var t=this._iDisplayLength,e=this._iDisplayStart,n=e+t,a=this.aiDisplay.length,r=this.oFeatures,o=r.bPaginate;return r.bServerSide?!1===o||-1===t?e+a:Math.min(e+t,this._iRecordsDisplay):!o||n>a||-1===t?a:n},oInstance:null,sInstance:null,iTabIndex:0,nScrollHead:null,nScrollFoot:null,aLastSort:[],oPlugins:{},rowIdFn:null,rowId:null},tB.ext=tM={buttons:{},classes:{},builder:"-source-",errMode:"alert",feature:[],search:[],selector:{cell:[],column:[],row:[]},internal:{},legacy:{ajax:null},pager:{},renderer:{pageButton:{},header:{}},order:{},type:{detect:[],search:{},order:{}},_unique:0,fnVersionCheck:tB.fnVersionCheck,iApiIndex:0,oJUIClasses:{},sVersion:tB.version},t.extend(tM,{afnFiltering:tM.search,aTypes:tM.type.detect,ofnSearch:tM.type.search,oSort:tM.type.order,afnSortData:tM.order,aoFeatures:tM.feature,oApi:tM.internal,oStdClasses:tM.classes,oPagination:tM.pager}),t.extend(tB.ext.classes,{sTable:"dataTable",sNoFooter:"no-footer",sPageButton:"paginate_button",sPageButtonActive:"current",sPageButtonDisabled:"disabled",sStripeOdd:"odd",sStripeEven:"even",sRowEmpty:"dataTables_empty",sWrapper:"dataTables_wrapper",sFilter:"dataTables_filter",sInfo:"dataTables_info",sPaging:"dataTables_paginate 
paging_",sLength:"dataTables_length",sProcessing:"dataTables_processing",sSortAsc:"sorting_asc",sSortDesc:"sorting_desc",sSortable:"sorting",sSortableAsc:"sorting_desc_disabled",sSortableDesc:"sorting_asc_disabled",sSortableNone:"sorting_disabled",sSortColumn:"sorting_",sFilterInput:"",sLengthSelect:"",sScrollWrapper:"dataTables_scroll",sScrollHead:"dataTables_scrollHead",sScrollHeadInner:"dataTables_scrollHeadInner",sScrollBody:"dataTables_scrollBody",sScrollFoot:"dataTables_scrollFoot",sScrollFootInner:"dataTables_scrollFootInner",sHeaderTH:"",sFooterTH:"",sSortJUIAsc:"",sSortJUIDesc:"",sSortJUI:"",sSortJUIAscAllowed:"",sSortJUIDescAllowed:"",sSortJUIWrapper:"",sSortIcon:"",sJUIHeader:"",sJUIFooter:""});var eI=tB.ext.pager;t.extend(eI,{simple:function(t,e){return["previous","next"]},full:function(t,e){return["first","previous","next","last"]},numbers:function(t,e){return[tN(t,e)]},simple_numbers:function(t,e){return["previous",tN(t,e),"next"]},full_numbers:function(t,e){return["first","previous",tN(t,e),"next","last"]},first_last_numbers:function(t,e){return["first",tN(t,e),"last"]},_numbers:tN,numbers_length:7}),t.extend(!0,tB.ext.renderer,{pageButton:{_:function(e,r,o,i,l,s){var u,c,f=e.oClasses,d=e.oLanguage.oPaginate,h=e.oLanguage.oAria.paginate||{},p=0,g=function(n,a){var r,i=f.sPageButtonDisabled,b=function(t){tl(e,t.data.action,!0)},$=0;for(r=a.length;$").appendTo(n);g(v,m)}else{switch(u=null,c=m,v=e.iTabIndex,m){case"ellipsis":n.append('');break;case"first":u=d.sFirst,0===l&&(v=-1,c+=" "+i);break;case"previous":u=d.sPrevious,0===l&&(v=-1,c+=" "+i);break;case"next":u=d.sNext,(0===s||l===s-1)&&(v=-1,c+=" "+i);break;case"last":u=d.sLast,(0===s||l===s-1)&&(v=-1,c+=" "+i);break;default:u=e.fnFormatNumber(m+1),c=l===m?f.sPageButtonActive:""}null!==u&&(v=t("",{class:f.sPageButton+" "+c,"aria-controls":e.sTableId,"aria-label":h[m],"data-dt-idx":p,tabindex:v,id:0===o&&"string"==typeof m?e.sTableId+"_"+m:null}).html(u).appendTo(n),tF(v,{action:m},b),p++)}}};try{var b=t(r).find(n.activeElement).data("dt-idx")}catch($){}g(t(r).empty(),i),b!==a&&t(r).find("[data-dt-idx="+b+"]").trigger("focus")}}}),t.extend(tB.ext.type.detect,[function(t,e){return t1(t,e=e.oLanguage.sDecimal)?"num"+e:null},function(t,e){return(!t||t instanceof Date||t0.test(t))&&(null!==(e=Date.parse(t))&&!isNaN(e)||tY(t))?"date":null},function(t,e){return t1(t,e=e.oLanguage.sDecimal,!0)?"num-fmt"+e:null},function(t,e){return tZ(t,e=e.oLanguage.sDecimal)?"html-num"+e:null},function(t,e){return tZ(t,e=e.oLanguage.sDecimal,!0)?"html-num-fmt"+e:null},function(t,e){return tY(t)||"string"==typeof t&&-1!==t.indexOf("<")?"html":null}]),t.extend(tB.ext.type.search,{html:function(t){return tY(t)?t:"string"==typeof t?t.replace(tV," ").replace(tX,""):""},string:function(t){return tY(t)?t:"string"==typeof t?t.replace(tV," "):t}});var eA=function(t,e,n,a){return 0===t||t&&"-"!==t?(e&&(t=tz(t,e)),t.replace&&(n&&(t=t.replace(n,"")),a&&(t=t.replace(a,""))),1*t):-1/0};t.extend(tM.type.order,{"date-pre":function(t){return isNaN(t=Date.parse(t))?-1/0:t},"html-pre":function(t){return tY(t)?"":t.replace?t.replace(/<.*?>/g,"").toLowerCase():t+""},"string-pre":function(t){return tY(t)?"":"string"==typeof t?t.toLowerCase():t.toString?t.toString():""},"string-asc":function(t,e){return te?1:0},"string-desc":function(t,e){return te?-1:0}}),tk(""),t.extend(!0,tB.ext.renderer,{header:{_:function(e,n,a,r){t(e.nTable).on("order.dt.DT",function(t,o,i,l){e===o&&(t=a.idx,n.removeClass(r.sSortAsc+" 
"+r.sSortDesc).addClass("asc"==l[t]?r.sSortAsc:"desc"==l[t]?r.sSortDesc:a.sSortingClass))})},jqueryui:function(e,n,a,r){t("
").addClass(r.sSortJUIWrapper).append(n.contents()).append(t("").addClass(r.sSortIcon+" "+a.sSortingClassJUI)).appendTo(n),t(e.nTable).on("order.dt.DT",function(t,o,i,l){e===o&&(t=a.idx,n.removeClass(r.sSortAsc+" "+r.sSortDesc).addClass("asc"==l[t]?r.sSortAsc:"desc"==l[t]?r.sSortDesc:a.sSortingClass),n.find("span."+r.sSortIcon).removeClass(r.sSortJUIAsc+" "+r.sSortJUIDesc+" "+r.sSortJUI+" "+r.sSortJUIAscAllowed+" "+r.sSortJUIDescAllowed).addClass("asc"==l[t]?r.sSortJUIAsc:"desc"==l[t]?r.sSortJUIDesc:a.sSortingClassJUI))})}}});var eL=function(t){return Array.isArray(t)&&(t=t.join(",")),"string"==typeof t?t.replace(/&/g,"&").replace(//g,">").replace(/"/g,"""):t};return tB.render={number:function(t,e,n,a,r){return{display:function(o){if("number"!=typeof o&&"string"!=typeof o)return o;var i=0>o?"-":"",l=parseFloat(o);return isNaN(l)?eL(o):(o=Math.abs(l=l.toFixed(n)),l=parseInt(o,10),o=n?e+(o-l).toFixed(n).substring(2):"",0===l&&0===parseFloat(o)&&(i=""),i+(a||"")+l.toString().replace(/\B(?=(\d{3})+(?!\d))/g,t)+o+(r||""))}}},text:function(){return{display:eL,filter:eL}}},t.extend(tB.ext.internal,{_fnExternApiFunc:tE,_fnBuildAjax:E,_fnAjaxUpdate:M,_fnAjaxParameters:W,_fnAjaxUpdateDraw:B,_fnAjaxDataSrc:U,_fnAddColumn:f,_fnColumnOptions:d,_fnAdjustColumnSizing:h,_fnVisibleToColumnIndex:p,_fnColumnIndexToVisible:g,_fnVisbleColumns:b,_fnGetColumns:$,_fnColumnTypes:m,_fnApplyColumnDefs:v,_fnHungarianMap:r,_fnCamelToHungarian:o,_fnLanguageCompat:i,_fnBrowserDetect:u,_fnAddData:S,_fnAddTr:y,_fnNodeToDataIndex:function(t,e){return e._DT_RowIndex!==a?e._DT_RowIndex:null},_fnNodeToColumnIndex:function(e,n,a){return t.inArray(a,e.aoData[n].anCells)},_fnGetCellData:D,_fnSetCellData:T,_fnSplitObjNotation:C,_fnGetObjectDataFn:en,_fnSetObjectDataFn:ea,_fnGetDataMaster:w,_fnClearTable:x,_fnDeleteIndex:_,_fnInvalidate:I,_fnGetRowElements:A,_fnCreateTr:L,_fnBuildHead:j,_fnDrawHead:P,_fnDraw:R,_fnReDraw:O,_fnAddOptionsHtml:H,_fnDetectHeader:N,_fnGetUniqueThs:k,_fnFeatureHtmlFilter:V,_fnFilterComplete:X,_fnFilterCustom:J,_fnFilterColumn:q,_fnFilter:Y,_fnFilterCreateSearch:G,_fnEscapeRegex:er,_fnFilterData:z,_fnFeatureHtmlInfo:Q,_fnUpdateInfo:tt,_fnInfoMacros:te,_fnInitialise:tn,_fnInitComplete:ta,_fnLengthChange:tr,_fnFeatureHtmlLength:to,_fnFeatureHtmlPaginate:ti,_fnPageChange:tl,_fnFeatureHtmlProcessing:ts,_fnProcessingDisplay:tu,_fnFeatureHtmlTable:tc,_fnScrollDraw:tf,_fnApplyToChildren:td,_fnCalculateColumnWidths:th,_fnThrottle:es,_fnConvertToWidth:tp,_fnGetWidestNode:tg,_fnGetMaxLenString:tb,_fnStringToCss:t$,_fnSortFlatten:tm,_fnSort:tv,_fnSortAria:tS,_fnSortListener:ty,_fnSortAttachListener:tD,_fnSortingClasses:t8,_fnSortData:tT,_fnSaveState:tC,_fnLoadState:tw,_fnImplementState:tx,_fnSettingsFromNode:t_,_fnLog:tI,_fnMap:tA,_fnBindAction:tF,_fnCallbackReg:tj,_fnCallbackFire:tP,_fnLengthOverflow:tR,_fnRenderer:tO,_fnDataSource:tH,_fnRowAttributes:F,_fnExtend:tL,_fnCalculateEnd:function(){}}),t.fn.dataTable=tB,tB.$=t,t.fn.dataTableSettings=tB.settings,t.fn.dataTableExt=tB.ext,t.fn.DataTable=function(e){return t(this).dataTable(e).api()},t.each(tB,function(e,n){t.fn.DataTable[e]=n}),tB}); \ No newline at end of file diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/moment.min.js b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/moment.min.js index d1204ccedd2..4b9e7a6d6a7 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/moment.min.js +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/moment.min.js @@ -1,5 +1,5 @@ //! moment.js -//! version : 2.29.2 +//! 
version : 2.29.4 //! license : MIT //! momentjs.com -!function(e,t){"object"==typeof exports&&"undefined"!=typeof module?module.exports=t():"function"==typeof define&&define.amd?define(t):e.moment=t()}(this,function(){"use strict";var e;function c(){return e.apply(null,arguments)}function a(e){return e instanceof Array||"[object Array]"===Object.prototype.toString.call(e)}function o(e){return null!=e&&"[object Object]"===Object.prototype.toString.call(e)}function f(e,t){return Object.prototype.hasOwnProperty.call(e,t)}function u(e){if(Object.getOwnPropertyNames)return 0===Object.getOwnPropertyNames(e).length;for(var t in e)if(f(e,t))return;return 1}function l(e){return void 0===e}function h(e){return"number"==typeof e||"[object Number]"===Object.prototype.toString.call(e)}function s(e){return e instanceof Date||"[object Date]"===Object.prototype.toString.call(e)}function i(e,t){for(var n=[],s=e.length,i=0;i>>0,s=0;sFe(e)?(r=e+1,i-Fe(e)):(r=e,i);return{year:r,dayOfYear:i}}function Ie(e,t,n){var s,i,r=Ee(e.year(),t,n),r=Math.floor((e.dayOfYear()-r-1)/7)+1;return r<1?s=r+je(i=e.year()-1,t,n):r>je(e.year(),t,n)?(s=r-je(e.year(),t,n),i=e.year()+1):(i=e.year(),s=r),{week:s,year:i}}function je(e,t,n){var s=Ee(e,t,n),n=Ee(e+1,t,n);return(Fe(e)-s+n)/7}C("w",["ww",2],"wo","week"),C("W",["WW",2],"Wo","isoWeek"),L("week","w"),L("isoWeek","W"),A("week",5),A("isoWeek",5),de("w",ee),de("ww",ee,J),de("W",ee),de("WW",ee,J),ge(["w","ww","W","WW"],function(e,t,n,s){t[s.substr(0,1)]=Z(e)});function Ze(e,t){return e.slice(t,7).concat(e.slice(0,t))}C("d",0,"do","day"),C("dd",0,0,function(e){return this.localeData().weekdaysMin(this,e)}),C("ddd",0,0,function(e){return this.localeData().weekdaysShort(this,e)}),C("dddd",0,0,function(e){return this.localeData().weekdays(this,e)}),C("e",0,0,"weekday"),C("E",0,0,"isoWeekday"),L("day","d"),L("weekday","e"),L("isoWeekday","E"),A("day",11),A("weekday",11),A("isoWeekday",11),de("d",ee),de("e",ee),de("E",ee),de("dd",function(e,t){return t.weekdaysMinRegex(e)}),de("ddd",function(e,t){return t.weekdaysShortRegex(e)}),de("dddd",function(e,t){return t.weekdaysRegex(e)}),ge(["dd","ddd","dddd"],function(e,t,n,s){s=n._locale.weekdaysParse(e,s,n._strict);null!=s?t.d=s:_(n).invalidWeekday=e}),ge(["d","e","E"],function(e,t,n,s){t[s]=Z(e)});var ze="Sunday_Monday_Tuesday_Wednesday_Thursday_Friday_Saturday".split("_"),$e="Sun_Mon_Tue_Wed_Thu_Fri_Sat".split("_"),qe="Su_Mo_Tu_We_Th_Fr_Sa".split("_"),Be=he,Je=he,Qe=he;function Xe(){function e(e,t){return t.length-e.length}for(var t,n,s,i=[],r=[],a=[],o=[],u=0;u<7;u++)s=m([2e3,1]).day(u),t=fe(this.weekdaysMin(s,"")),n=fe(this.weekdaysShort(s,"")),s=fe(this.weekdays(s,"")),i.push(t),r.push(n),a.push(s),o.push(t),o.push(n),o.push(s);i.sort(e),r.sort(e),a.sort(e),o.sort(e),this._weekdaysRegex=new RegExp("^("+o.join("|")+")","i"),this._weekdaysShortRegex=this._weekdaysRegex,this._weekdaysMinRegex=this._weekdaysRegex,this._weekdaysStrictRegex=new RegExp("^("+a.join("|")+")","i"),this._weekdaysShortStrictRegex=new RegExp("^("+r.join("|")+")","i"),this._weekdaysMinStrictRegex=new RegExp("^("+i.join("|")+")","i")}function Ke(){return this.hours()%12||12}function et(e,t){C(e,0,0,function(){return this.localeData().meridiem(this.hours(),this.minutes(),t)})}function tt(e,t){return t._meridiemParse}C("H",["HH",2],0,"hour"),C("h",["hh",2],0,Ke),C("k",["kk",2],0,function(){return 
this.hours()||24}),C("hmm",0,0,function(){return""+Ke.apply(this)+T(this.minutes(),2)}),C("hmmss",0,0,function(){return""+Ke.apply(this)+T(this.minutes(),2)+T(this.seconds(),2)}),C("Hmm",0,0,function(){return""+this.hours()+T(this.minutes(),2)}),C("Hmmss",0,0,function(){return""+this.hours()+T(this.minutes(),2)+T(this.seconds(),2)}),et("a",!0),et("A",!1),L("hour","h"),A("hour",13),de("a",tt),de("A",tt),de("H",ee),de("h",ee),de("k",ee),de("HH",ee,J),de("hh",ee,J),de("kk",ee,J),de("hmm",te),de("hmmss",ne),de("Hmm",te),de("Hmmss",ne),ye(["H","HH"],Me),ye(["k","kk"],function(e,t,n){e=Z(e);t[Me]=24===e?0:e}),ye(["a","A"],function(e,t,n){n._isPm=n._locale.isPM(e),n._meridiem=e}),ye(["h","hh"],function(e,t,n){t[Me]=Z(e),_(n).bigHour=!0}),ye("hmm",function(e,t,n){var s=e.length-2;t[Me]=Z(e.substr(0,s)),t[De]=Z(e.substr(s)),_(n).bigHour=!0}),ye("hmmss",function(e,t,n){var s=e.length-4,i=e.length-2;t[Me]=Z(e.substr(0,s)),t[De]=Z(e.substr(s,2)),t[Se]=Z(e.substr(i)),_(n).bigHour=!0}),ye("Hmm",function(e,t,n){var s=e.length-2;t[Me]=Z(e.substr(0,s)),t[De]=Z(e.substr(s))}),ye("Hmmss",function(e,t,n){var s=e.length-4,i=e.length-2;t[Me]=Z(e.substr(0,s)),t[De]=Z(e.substr(s,2)),t[Se]=Z(e.substr(i))});var nt=z("Hours",!0);var st,it={calendar:{sameDay:"[Today at] LT",nextDay:"[Tomorrow at] LT",nextWeek:"dddd [at] LT",lastDay:"[Yesterday at] LT",lastWeek:"[Last] dddd [at] LT",sameElse:"L"},longDateFormat:{LTS:"h:mm:ss A",LT:"h:mm A",L:"MM/DD/YYYY",LL:"MMMM D, YYYY",LLL:"MMMM D, YYYY h:mm A",LLLL:"dddd, MMMM D, YYYY h:mm A"},invalidDate:"Invalid date",ordinal:"%d",dayOfMonthOrdinalParse:/\d{1,2}/,relativeTime:{future:"in %s",past:"%s ago",s:"a few seconds",ss:"%d seconds",m:"a minute",mm:"%d minutes",h:"an hour",hh:"%d hours",d:"a day",dd:"%d days",w:"a week",ww:"%d weeks",M:"a month",MM:"%d months",y:"a year",yy:"%d years"},months:Te,monthsShort:Ne,week:{dow:0,doy:6},weekdays:ze,weekdaysMin:qe,weekdaysShort:$e,meridiemParse:/[ap]\.?m?\.?/i},rt={},at={};function ot(e){return e&&e.toLowerCase().replace("_","-")}function ut(e){for(var t,n,s,i,r=0;r=t&&function(e,t){for(var n=Math.min(e.length,t.length),s=0;s=t-1)break;t--}r++}return st}function lt(t){var e;if(void 0===rt[t]&&"undefined"!=typeof module&&module&&module.exports&&null!=t.match("^[^/\\\\]*$"))try{e=st._abbr,require("./locale/"+t),ht(e)}catch(e){rt[t]=null}return rt[t]}function ht(e,t){return e&&((t=l(t)?ct(e):dt(e,t))?st=t:"undefined"!=typeof console&&console.warn&&console.warn("Locale "+e+" not found. Did you forget to load it?")),st._abbr}function dt(e,t){if(null===t)return delete rt[e],null;var n,s=it;if(t.abbr=e,null!=rt[e])S("defineLocaleOverride","use moment.updateLocale(localeName, config) to change an existing locale. 
moment.defineLocale(localeName, config) should only be used for creating a new locale See http://momentjs.com/guides/#/warnings/define-locale/ for more info."),s=rt[e]._config;else if(null!=t.parentLocale)if(null!=rt[t.parentLocale])s=rt[t.parentLocale]._config;else{if(null==(n=lt(t.parentLocale)))return at[t.parentLocale]||(at[t.parentLocale]=[]),at[t.parentLocale].push({name:e,config:t}),null;s=n._config}return rt[e]=new b(O(s,t)),at[e]&&at[e].forEach(function(e){dt(e.name,e.config)}),ht(e),rt[e]}function ct(e){var t;if(!(e=e&&e._locale&&e._locale._abbr?e._locale._abbr:e))return st;if(!a(e)){if(t=lt(e))return t;e=[e]}return ut(e)}function ft(e){var t=e._a;return t&&-2===_(e).overflow&&(t=t[ve]<0||11xe(t[pe],t[ve])?ke:t[Me]<0||24je(n,r,a)?_(e)._overflowWeeks=!0:null!=o?_(e)._overflowWeekday=!0:(a=Ae(n,s,i,r,a),e._a[pe]=a.year,e._dayOfYear=a.dayOfYear)}(e),null!=e._dayOfYear&&(s=Yt(e._a[pe],n[pe]),(e._dayOfYear>Fe(s)||0===e._dayOfYear)&&(_(e)._overflowDayOfYear=!0),s=Ge(s,0,e._dayOfYear),e._a[ve]=s.getUTCMonth(),e._a[ke]=s.getUTCDate()),t=0;t<3&&null==e._a[t];++t)e._a[t]=a[t]=n[t];for(;t<7;t++)e._a[t]=a[t]=null==e._a[t]?2===t?1:0:e._a[t];24===e._a[Me]&&0===e._a[De]&&0===e._a[Se]&&0===e._a[Ye]&&(e._nextDay=!0,e._a[Me]=0),e._d=(e._useUTC?Ge:Ve).apply(null,a),s=e._useUTC?e._d.getUTCDay():e._d.getDay(),null!=e._tzm&&e._d.setUTCMinutes(e._d.getUTCMinutes()-e._tzm),e._nextDay&&(e._a[Me]=24),e._w&&void 0!==e._w.d&&e._w.d!==s&&(_(e).weekdayMismatch=!0)}}function bt(e){if(e._f!==c.ISO_8601)if(e._f!==c.RFC_2822){e._a=[],_(e).empty=!0;for(var t,n,s,i,r,a=""+e._i,o=a.length,u=0,l=H(e._f,e._locale).match(N)||[],h=l.length,d=0;de.valueOf():e.valueOf()"}),ie.toJSON=function(){return this.isValid()?this.toISOString():null},ie.toString=function(){return this.clone().locale("en").format("ddd MMM DD YYYY HH:mm:ss [GMT]ZZ")},ie.unix=function(){return Math.floor(this.valueOf()/1e3)},ie.valueOf=function(){return this._d.valueOf()-6e4*(this._offset||0)},ie.creationData=function(){return{input:this._i,format:this._f,locale:this._locale,isUTC:this._isUTC,strict:this._strict}},ie.eraName=function(){for(var e,t=this.localeData().eras(),n=0,s=t.length;nthis.clone().month(0).utcOffset()||this.utcOffset()>this.clone().month(5).utcOffset()},ie.isLocal=function(){return!!this.isValid()&&!this._isUTC},ie.isUtcOffset=function(){return!!this.isValid()&&this._isUTC},ie.isUtc=It,ie.isUTC=It,ie.zoneAbbr=function(){return this._isUTC?"UTC":""},ie.zoneName=function(){return this._isUTC?"Coordinated Universal Time":""},ie.dates=n("dates accessor is deprecated. Use date instead.",ne),ie.months=n("months accessor is deprecated. Use month instead",Ue),ie.years=n("years accessor is deprecated. Use year instead",Le),ie.zone=n("moment().zone is deprecated, use moment().utcOffset instead. http://momentjs.com/guides/#/warnings/zone/",function(e,t){return null!=e?(this.utcOffset(e="string"!=typeof e?-e:e,t),this):-this.utcOffset()}),ie.isDSTShifted=n("isDSTShifted is deprecated. 
See http://momentjs.com/guides/#/warnings/dst-shifted/ for more information",function(){if(!l(this._isDSTShifted))return this._isDSTShifted;var e,t={};return p(t,this),(t=xt(t))._a?(e=(t._isUTC?m:Nt)(t._a),this._isDSTShifted=this.isValid()&&0>>0,s=0;sAe(e)?(r=e+1,t-Ae(e)):(r=e,t);return{year:r,dayOfYear:n}}function qe(e,t,n){var s,i,r=ze(e.year(),t,n),r=Math.floor((e.dayOfYear()-r-1)/7)+1;return r<1?s=r+P(i=e.year()-1,t,n):r>P(e.year(),t,n)?(s=r-P(e.year(),t,n),i=e.year()+1):(i=e.year(),s=r),{week:s,year:i}}function P(e,t,n){var s=ze(e,t,n),t=ze(e+1,t,n);return(Ae(e)-s+t)/7}s("w",["ww",2],"wo","week"),s("W",["WW",2],"Wo","isoWeek"),t("week","w"),t("isoWeek","W"),n("week",5),n("isoWeek",5),k("w",p),k("ww",p,w),k("W",p),k("WW",p,w),Te(["w","ww","W","WW"],function(e,t,n,s){t[s.substr(0,1)]=g(e)});function Be(e,t){return e.slice(t,7).concat(e.slice(0,t))}s("d",0,"do","day"),s("dd",0,0,function(e){return this.localeData().weekdaysMin(this,e)}),s("ddd",0,0,function(e){return this.localeData().weekdaysShort(this,e)}),s("dddd",0,0,function(e){return this.localeData().weekdays(this,e)}),s("e",0,0,"weekday"),s("E",0,0,"isoWeekday"),t("day","d"),t("weekday","e"),t("isoWeekday","E"),n("day",11),n("weekday",11),n("isoWeekday",11),k("d",p),k("e",p),k("E",p),k("dd",function(e,t){return t.weekdaysMinRegex(e)}),k("ddd",function(e,t){return t.weekdaysShortRegex(e)}),k("dddd",function(e,t){return t.weekdaysRegex(e)}),Te(["dd","ddd","dddd"],function(e,t,n,s){s=n._locale.weekdaysParse(e,s,n._strict);null!=s?t.d=s:m(n).invalidWeekday=e}),Te(["d","e","E"],function(e,t,n,s){t[s]=g(e)});var Je="Sunday_Monday_Tuesday_Wednesday_Thursday_Friday_Saturday".split("_"),Qe="Sun_Mon_Tue_Wed_Thu_Fri_Sat".split("_"),Xe="Su_Mo_Tu_We_Th_Fr_Sa".split("_"),Ke=v,et=v,tt=v;function nt(){function e(e,t){return t.length-e.length}for(var t,n,s,i=[],r=[],a=[],o=[],u=0;u<7;u++)s=l([2e3,1]).day(u),t=M(this.weekdaysMin(s,"")),n=M(this.weekdaysShort(s,"")),s=M(this.weekdays(s,"")),i.push(t),r.push(n),a.push(s),o.push(t),o.push(n),o.push(s);i.sort(e),r.sort(e),a.sort(e),o.sort(e),this._weekdaysRegex=new RegExp("^("+o.join("|")+")","i"),this._weekdaysShortRegex=this._weekdaysRegex,this._weekdaysMinRegex=this._weekdaysRegex,this._weekdaysStrictRegex=new RegExp("^("+a.join("|")+")","i"),this._weekdaysShortStrictRegex=new RegExp("^("+r.join("|")+")","i"),this._weekdaysMinStrictRegex=new RegExp("^("+i.join("|")+")","i")}function st(){return this.hours()%12||12}function it(e,t){s(e,0,0,function(){return this.localeData().meridiem(this.hours(),this.minutes(),t)})}function rt(e,t){return t._meridiemParse}s("H",["HH",2],0,"hour"),s("h",["hh",2],0,st),s("k",["kk",2],0,function(){return this.hours()||24}),s("hmm",0,0,function(){return""+st.apply(this)+r(this.minutes(),2)}),s("hmmss",0,0,function(){return""+st.apply(this)+r(this.minutes(),2)+r(this.seconds(),2)}),s("Hmm",0,0,function(){return""+this.hours()+r(this.minutes(),2)}),s("Hmmss",0,0,function(){return""+this.hours()+r(this.minutes(),2)+r(this.seconds(),2)}),it("a",!0),it("A",!1),t("hour","h"),n("hour",13),k("a",rt),k("A",rt),k("H",p),k("h",p),k("k",p),k("HH",p,w),k("hh",p,w),k("kk",p,w),k("hmm",ge),k("hmmss",we),k("Hmm",ge),k("Hmmss",we),D(["H","HH"],x),D(["k","kk"],function(e,t,n){e=g(e);t[x]=24===e?0:e}),D(["a","A"],function(e,t,n){n._isPm=n._locale.isPM(e),n._meridiem=e}),D(["h","hh"],function(e,t,n){t[x]=g(e),m(n).bigHour=!0}),D("hmm",function(e,t,n){var s=e.length-2;t[x]=g(e.substr(0,s)),t[T]=g(e.substr(s)),m(n).bigHour=!0}),D("hmmss",function(e,t,n){var 
s=e.length-4,i=e.length-2;t[x]=g(e.substr(0,s)),t[T]=g(e.substr(s,2)),t[N]=g(e.substr(i)),m(n).bigHour=!0}),D("Hmm",function(e,t,n){var s=e.length-2;t[x]=g(e.substr(0,s)),t[T]=g(e.substr(s))}),D("Hmmss",function(e,t,n){var s=e.length-4,i=e.length-2;t[x]=g(e.substr(0,s)),t[T]=g(e.substr(s,2)),t[N]=g(e.substr(i))});v=de("Hours",!0);var at,ot={calendar:{sameDay:"[Today at] LT",nextDay:"[Tomorrow at] LT",nextWeek:"dddd [at] LT",lastDay:"[Yesterday at] LT",lastWeek:"[Last] dddd [at] LT",sameElse:"L"},longDateFormat:{LTS:"h:mm:ss A",LT:"h:mm A",L:"MM/DD/YYYY",LL:"MMMM D, YYYY",LLL:"MMMM D, YYYY h:mm A",LLLL:"dddd, MMMM D, YYYY h:mm A"},invalidDate:"Invalid date",ordinal:"%d",dayOfMonthOrdinalParse:/\d{1,2}/,relativeTime:{future:"in %s",past:"%s ago",s:"a few seconds",ss:"%d seconds",m:"a minute",mm:"%d minutes",h:"an hour",hh:"%d hours",d:"a day",dd:"%d days",w:"a week",ww:"%d weeks",M:"a month",MM:"%d months",y:"a year",yy:"%d years"},months:Ce,monthsShort:Ue,week:{dow:0,doy:6},weekdays:Je,weekdaysMin:Xe,weekdaysShort:Qe,meridiemParse:/[ap]\.?m?\.?/i},R={},ut={};function lt(e){return e&&e.toLowerCase().replace("_","-")}function ht(e){for(var t,n,s,i,r=0;r=t&&function(e,t){for(var n=Math.min(e.length,t.length),s=0;s=t-1)break;t--}r++}return at}function dt(t){var e;if(void 0===R[t]&&"undefined"!=typeof module&&module&&module.exports&&null!=t.match("^[^/\\\\]*$"))try{e=at._abbr,require("./locale/"+t),ct(e)}catch(e){R[t]=null}return R[t]}function ct(e,t){return e&&((t=o(t)?mt(e):ft(e,t))?at=t:"undefined"!=typeof console&&console.warn&&console.warn("Locale "+e+" not found. Did you forget to load it?")),at._abbr}function ft(e,t){if(null===t)return delete R[e],null;var n,s=ot;if(t.abbr=e,null!=R[e])Q("defineLocaleOverride","use moment.updateLocale(localeName, config) to change an existing locale. 
moment.defineLocale(localeName, config) should only be used for creating a new locale See http://momentjs.com/guides/#/warnings/define-locale/ for more info."),s=R[e]._config;else if(null!=t.parentLocale)if(null!=R[t.parentLocale])s=R[t.parentLocale]._config;else{if(null==(n=dt(t.parentLocale)))return ut[t.parentLocale]||(ut[t.parentLocale]=[]),ut[t.parentLocale].push({name:e,config:t}),null;s=n._config}return R[e]=new K(X(s,t)),ut[e]&&ut[e].forEach(function(e){ft(e.name,e.config)}),ct(e),R[e]}function mt(e){var t;if(!(e=e&&e._locale&&e._locale._abbr?e._locale._abbr:e))return at;if(!a(e)){if(t=dt(e))return t;e=[e]}return ht(e)}function _t(e){var t=e._a;return t&&-2===m(e).overflow&&(t=t[O]<0||11We(t[Y],t[O])?b:t[x]<0||24P(r,u,l)?m(s)._overflowWeeks=!0:null!=h?m(s)._overflowWeekday=!0:(d=$e(r,a,o,u,l),s._a[Y]=d.year,s._dayOfYear=d.dayOfYear)),null!=e._dayOfYear&&(i=bt(e._a[Y],n[Y]),(e._dayOfYear>Ae(i)||0===e._dayOfYear)&&(m(e)._overflowDayOfYear=!0),h=Ze(i,0,e._dayOfYear),e._a[O]=h.getUTCMonth(),e._a[b]=h.getUTCDate()),t=0;t<3&&null==e._a[t];++t)e._a[t]=c[t]=n[t];for(;t<7;t++)e._a[t]=c[t]=null==e._a[t]?2===t?1:0:e._a[t];24===e._a[x]&&0===e._a[T]&&0===e._a[N]&&0===e._a[Ne]&&(e._nextDay=!0,e._a[x]=0),e._d=(e._useUTC?Ze:je).apply(null,c),r=e._useUTC?e._d.getUTCDay():e._d.getDay(),null!=e._tzm&&e._d.setUTCMinutes(e._d.getUTCMinutes()-e._tzm),e._nextDay&&(e._a[x]=24),e._w&&void 0!==e._w.d&&e._w.d!==r&&(m(e).weekdayMismatch=!0)}}function Tt(e){if(e._f===f.ISO_8601)St(e);else if(e._f===f.RFC_2822)Ot(e);else{e._a=[],m(e).empty=!0;for(var t,n,s,i,r,a=""+e._i,o=a.length,u=0,l=ae(e._f,e._locale).match(te)||[],h=l.length,d=0;de.valueOf():e.valueOf()"}),i.toJSON=function(){return this.isValid()?this.toISOString():null},i.toString=function(){return this.clone().locale("en").format("ddd MMM DD YYYY HH:mm:ss [GMT]ZZ")},i.unix=function(){return Math.floor(this.valueOf()/1e3)},i.valueOf=function(){return this._d.valueOf()-6e4*(this._offset||0)},i.creationData=function(){return{input:this._i,format:this._f,locale:this._locale,isUTC:this._isUTC,strict:this._strict}},i.eraName=function(){for(var e,t=this.localeData().eras(),n=0,s=t.length;nthis.clone().month(0).utcOffset()||this.utcOffset()>this.clone().month(5).utcOffset()},i.isLocal=function(){return!!this.isValid()&&!this._isUTC},i.isUtcOffset=function(){return!!this.isValid()&&this._isUTC},i.isUtc=At,i.isUTC=At,i.zoneAbbr=function(){return this._isUTC?"UTC":""},i.zoneName=function(){return this._isUTC?"Coordinated Universal Time":""},i.dates=e("dates accessor is deprecated. Use date instead.",ve),i.months=e("months accessor is deprecated. Use month instead",Ge),i.years=e("years accessor is deprecated. Use year instead",Ie),i.zone=e("moment().zone is deprecated, use moment().utcOffset instead. http://momentjs.com/guides/#/warnings/zone/",function(e,t){return null!=e?(this.utcOffset(e="string"!=typeof e?-e:e,t),this):-this.utcOffset()}),i.isDSTShifted=e("isDSTShifted is deprecated. 
See http://momentjs.com/guides/#/warnings/dst-shifted/ for more information",function(){if(!o(this._isDSTShifted))return this._isDSTShifted;var e,t={};return $(t,this),(t=Nt(t))._a?(e=(t._isUTC?l:W)(t._a),this._isDSTShifted=this.isValid()&&0hdfs://mycluster -* **dfs.ha.nn.not-become-active-in-safemode** - if prevent safe mode namenodes to become active +* **dfs.ha.nn.not-become-active-in-safemode** - if prevent safe mode namenodes to become active or observer Whether allow namenode to become active when it is in safemode, when it is set to true, namenode in safemode will report SERVICE_UNHEALTHY to ZKFC if auto failover is on, or will throw exception to fail the transition to - active if auto failover is off. For example: + active if auto failover is off. If you transition namenode to observer state + when it is in safemode, when this configuration is set to true, namenode will throw exception + to fail the transition to observer. For example: dfs.ha.nn.not-become-active-in-safemode diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md index 6bdd4e1c529..b6b408db8b4 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md +++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md @@ -376,12 +376,14 @@ The order in which you set these configurations is unimportant, but the values y /path/to/journal/node/local/data -* **dfs.ha.nn.not-become-active-in-safemode** - if prevent safe mode namenodes to become active +* **dfs.ha.nn.not-become-active-in-safemode** - if prevent safe mode namenodes to become active or observer Whether allow namenode to become active when it is in safemode, when it is set to true, namenode in safemode will report SERVICE_UNHEALTHY to ZKFC if auto failover is on, or will throw exception to fail the transition to - active if auto failover is off. For example: + active if auto failover is off. If you transition namenode to observer state + when it is in safemode, when this configuration is set to true, namenode will throw exception + to fail the transition to observer. For example: dfs.ha.nn.not-become-active-in-safemode @@ -500,6 +502,16 @@ lag time will be much longer. The relevant configurations are: the oldest data in the cache was at transaction ID 20, a value of 10 would be added to the average. +* **dfs.journalnode.edit-cache-size.fraction** - This fraction refers to the proportion of + the maximum memory of the JVM. Used to calculate the size of the edits cache that is + kept in the JournalNode's memory. This config is an alternative to the + dfs.journalnode.edit-cache-size.bytes. And it is used to serve edits for tailing via + the RPC-based mechanism, and is only enabled when dfs.ha.tail-edits.in-progress is true. + Transactions range in size but are around 200 bytes on average, so the default of 1MB + can store around 5000 transactions. So we can configure a reasonable value based on + the maximum memory. The recommended value is less than 0.9. If we set + dfs.journalnode.edit-cache-size.bytes, this parameter will not take effect. + This feature is primarily useful in conjunction with the Standby/Observer Read feature. Using this feature, read requests can be serviced from non-active NameNodes; thus tailing in-progress edits provides these nodes with the ability to serve requests with data which is much more fresh. 
See the diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md index bd87b909975..3482fe942dd 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md +++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md @@ -24,7 +24,7 @@ The NFS Gateway supports NFSv3 and allows HDFS to be mounted as part of the clie * Users can browse the HDFS file system through their local file system on NFSv3 client compatible operating systems. -* Users can download files from the the HDFS file system on to their +* Users can download files from the HDFS file system on to their local file system. * Users can upload files from their local file system directly to the HDFS file system. @@ -92,7 +92,7 @@ The rest of the NFS gateway configurations are optional for both secure and non- the super-user can do anything in that permissions checks never fail for the super-user. If the following property is configured, the superuser on NFS client can access any file on HDFS. By default, the super user is not configured in the gateway. - Note that, even the the superuser is configured, "nfs.exports.allowed.hosts" still takes effect. + Note that, even the superuser is configured, "nfs.exports.allowed.hosts" still takes effect. For example, the superuser will not have write access to HDFS files through the gateway if the NFS client host is not allowed to have write access in "nfs.exports.allowed.hosts". @@ -154,7 +154,7 @@ It's strongly recommended for the users to update a few configuration properties the super-user can do anything in that permissions checks never fail for the super-user. If the following property is configured, the superuser on NFS client can access any file on HDFS. By default, the super user is not configured in the gateway. - Note that, even the the superuser is configured, "nfs.exports.allowed.hosts" still takes effect. + Note that, even the superuser is configured, "nfs.exports.allowed.hosts" still takes effect. For example, the superuser will not have write access to HDFS files through the gateway if the NFS client host is not allowed to have write access in "nfs.exports.allowed.hosts". diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsUserGuide.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsUserGuide.md index 16970730c92..3aa41b4dd8b 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsUserGuide.md +++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsUserGuide.md @@ -227,7 +227,7 @@ For command usage, see [namenode](./HDFSCommands.html#namenode). Balancer -------- -HDFS data might not always be be placed uniformly across the DataNode. One common reason is addition of new DataNodes to an existing cluster. While placing new blocks (data for a file is stored as a series of blocks), NameNode considers various parameters before choosing the DataNodes to receive these blocks. Some of the considerations are: +HDFS data might not always be placed uniformly across the DataNode. One common reason is addition of new DataNodes to an existing cluster. While placing new blocks (data for a file is stored as a series of blocks), NameNode considers various parameters before choosing the DataNodes to receive these blocks. Some of the considerations are: * Policy to keep one of the replicas of a block on the same node as the node that is writing the block. 
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ObserverNameNode.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ObserverNameNode.md index 00aeb5bd2e0..74026ec8625 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ObserverNameNode.md +++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ObserverNameNode.md @@ -194,6 +194,24 @@ few configurations to your **hdfs-site.xml**: 1048576 +* **dfs.journalnode.edit-cache-size.fraction** - the fraction refers to + the proportion of the maximum memory of the JVM. + + Used to calculate the size of the edits cache that + is kept in the JournalNode's memory. + This config is an alternative to the dfs.journalnode.edit-cache-size.bytes. + And it is used to serve edits for tailing via the RPC-based mechanism, and is only + enabled when dfs.ha.tail-edits.in-progress is true. Transactions range in size but + are around 200 bytes on average, so the default of 1MB can store around 5000 transactions. + So we can configure a reasonable value based on the maximum memory. The recommended value + is less than 0.9. If we set dfs.journalnode.edit-cache-size.bytes, this parameter will + not take effect. + + + dfs.journalnode.edit-cache-size.fraction + 0.5f + + * **dfs.namenode.accesstime.precision** -- whether to enable access time for HDFS file. diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md index f3eb336da61..3b61f3ce75e 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md +++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md @@ -84,11 +84,11 @@ If users want some of their existing cluster (`hdfs://cluster`) data to mount wi Let's consider the following operations to understand where these operations will be delegated based on mount links. - *Op1:* Create a file with the the path `hdfs://cluster/user/fileA`, then physically this file will be created at `hdfs://cluster/user/fileA`. This delegation happened based on the first configuration parameter in above configurations. Here `/user` mapped to `hdfs://cluster/user/`. + *Op1:* Create a file with the path `hdfs://cluster/user/fileA`, then physically this file will be created at `hdfs://cluster/user/fileA`. This delegation happened based on the first configuration parameter in above configurations. Here `/user` mapped to `hdfs://cluster/user/`. - *Op2:* Create a file the the path `hdfs://cluster/data/datafile`, then this file will be created at `o3fs://bucket1.volume1.omhost/data/datafile`. This delegation happened based on second configurations parameter in above configurations. Here `/data` was mapped with `o3fs://bucket1.volume1.omhost/data/`. + *Op2:* Create a file the path `hdfs://cluster/data/datafile`, then this file will be created at `o3fs://bucket1.volume1.omhost/data/datafile`. This delegation happened based on second configurations parameter in above configurations. Here `/data` was mapped with `o3fs://bucket1.volume1.omhost/data/`. - *Op3:* Create a file with the the path `hdfs://cluster/backup/data.zip`, then physically this file will be created at `s3a://bucket1/backup/data.zip`. This delegation happened based on the third configuration parameter in above configurations. Here `/backup` was mapped to `s3a://bucket1/backup/`. + *Op3:* Create a file with the path `hdfs://cluster/backup/data.zip`, then physically this file will be created at `s3a://bucket1/backup/data.zip`. 
This delegation happened based on the third configuration parameter in above configurations. Here `/backup` was mapped to `s3a://bucket1/backup/`. **Example 2:** @@ -114,11 +114,11 @@ If users want some of their existing cluster (`s3a://bucketA/`) data to mount wi ``` Let's consider the following operations to understand to where these operations will be delegated based on mount links. - *Op1:* Create a file with the the path `s3a://bucketA/user/fileA`, then this file will be created physically at `hdfs://cluster/user/fileA`. This delegation happened based on the first configuration parameter in above configurations. Here `/user` mapped to `hdfs://cluster/user`. + *Op1:* Create a file with the path `s3a://bucketA/user/fileA`, then this file will be created physically at `hdfs://cluster/user/fileA`. This delegation happened based on the first configuration parameter in above configurations. Here `/user` mapped to `hdfs://cluster/user`. - *Op2:* Create a file the the path `s3a://bucketA/data/datafile`, then this file will be created at `o3fs://bucket1.volume1.omhost/data/datafile`. This delegation happened based on second configurations parameter in above configurations. Here `/data` was mapped with `o3fs://bucket1.volume1.omhost/data/`. + *Op2:* Create a file the path `s3a://bucketA/data/datafile`, then this file will be created at `o3fs://bucket1.volume1.omhost/data/datafile`. This delegation happened based on second configurations parameter in above configurations. Here `/data` was mapped with `o3fs://bucket1.volume1.omhost/data/`. - *Op3:* Create a file with the the path `s3a://bucketA/salesDB/dbfile`, then physically this file will be created at `s3a://bucketA/salesDB/dbfile`. This delegation happened based on the third configuration parameter in above configurations. Here `/salesDB` was mapped to `s3a://bucket1/salesDB`. + *Op3:* Create a file with the path `s3a://bucketA/salesDB/dbfile`, then physically this file will be created at `s3a://bucketA/salesDB/dbfile`. This delegation happened based on the third configuration parameter in above configurations. Here `/salesDB` was mapped to `s3a://bucket1/salesDB`. Note: In above examples we used create operation only, but the same mechanism applies to any other file system APIs here. 
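For reference, the mount links that the Op1–Op3 walkthrough in Example 2 relies on would look roughly like the sketch below in core-site.xml. This is a minimal illustration only: it assumes the standard `fs.viewfs.mounttable.<mount-table-name>.link.<path>` property pattern with `bucketA` as the mount table name, and the target values are taken from the walkthrough above; the authoritative configuration is the one defined earlier in the document, above this hunk.

```xml
<!-- Sketch of the Example 2 mount links (assumed names/values, for illustration only). -->
<property>
  <name>fs.viewfs.mounttable.bucketA.link./user</name>
  <value>hdfs://cluster/user</value>
</property>
<property>
  <name>fs.viewfs.mounttable.bucketA.link./data</name>
  <value>o3fs://bucket1.volume1.omhost/data</value>
</property>
<property>
  <name>fs.viewfs.mounttable.bucketA.link./salesDB</name>
  <value>s3a://bucket1/salesDB</value>
</property>
```

With links like these, a path such as `s3a://bucketA/user/fileA` resolves against the `/user` link and is physically created under `hdfs://cluster/user`, matching Op1 above.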
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md index bb4ee39be51..46b5613fe72 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md +++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md @@ -517,6 +517,7 @@ See also: [`newlength`](#New_Length), [FileSystem](../../api/org/apache/hadoop/f "replication" : 0, "snapshotEnabled" : true "type" : "DIRECTORY" //enum {FILE, DIRECTORY, SYMLINK} + "ecPolicy" : "RS-6-3-1024k" } } @@ -2311,6 +2312,26 @@ var fileStatusProperties = "description": "The type of the path object.", "enum" : ["FILE", "DIRECTORY", "SYMLINK"], "required" : true + }, + "aclBit": + { + "description": "Has ACLs set or not.", + "type" : "boolean", + }, + "encBit": + { + "description": "Is Encrypted or not.", + "type" : "boolean", + }, + "ecBit": + { + "description": "Is ErasureCoded or not.", + "type" : "boolean", + }, + "ecPolicy": + { + "description": "The namenode of ErasureCodePolicy.", + "type" : "String", } } }; diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java index a00f21ecc94..e54b7332b1e 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java @@ -37,6 +37,7 @@ import java.util.zip.GZIPOutputStream; import java.util.function.Supplier; import org.apache.commons.lang3.RandomStringUtils; +import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy; import org.apache.hadoop.hdfs.protocol.XAttrNotFoundException; import org.apache.hadoop.util.Lists; import org.slf4j.Logger; @@ -3043,6 +3044,96 @@ public class TestDFSShell { assertThat(res, not(0)); } + @Test (timeout = 300000) + public void testAppendToFileWithOptionN() throws Exception { + final int inputFileLength = 1024 * 1024; + File testRoot = new File(TEST_ROOT_DIR, "testAppendToFileWithOptionN"); + testRoot.mkdirs(); + + File file1 = new File(testRoot, "file1"); + createLocalFileWithRandomData(inputFileLength, file1); + + Configuration conf = new HdfsConfiguration(); + try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(6).build()) { + cluster.waitActive(); + FileSystem hdfs = cluster.getFileSystem(); + assertTrue("Not a HDFS: " + hdfs.getUri(), + hdfs instanceof DistributedFileSystem); + + // Run appendToFile with option n by replica policy once, make sure that the target file is + // created and is of the right size and block number is correct. 
+ String dir = "/replica"; + boolean mkdirs = hdfs.mkdirs(new Path(dir)); + assertTrue("Mkdir fail", mkdirs); + Path remoteFile = new Path(dir + "/remoteFile"); + FsShell shell = new FsShell(); + shell.setConf(conf); + String[] argv = new String[] { + "-appendToFile", "-n", file1.toString(), remoteFile.toString() }; + int res = ToolRunner.run(shell, argv); + assertEquals("Run appendToFile command fail", 0, res); + FileStatus fileStatus = hdfs.getFileStatus(remoteFile); + assertEquals("File size should be " + inputFileLength, + inputFileLength, fileStatus.getLen()); + BlockLocation[] fileBlockLocations = + hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen()); + assertEquals("Block Num should be 1", 1, fileBlockLocations.length); + + // Run appendToFile with option n by replica policy again and + // make sure that the target file size has been doubled and block number has been doubled. + res = ToolRunner.run(shell, argv); + assertEquals("Run appendToFile command fail", 0, res); + fileStatus = hdfs.getFileStatus(remoteFile); + assertEquals("File size should be " + inputFileLength * 2, + inputFileLength * 2, fileStatus.getLen()); + fileBlockLocations = hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen()); + assertEquals("Block Num should be 2", 2, fileBlockLocations.length); + + // Before run appendToFile with option n by ec policy, set ec policy for the dir. + dir = "/ecPolicy"; + final String ecPolicyName = "RS-6-3-1024k"; + mkdirs = hdfs.mkdirs(new Path(dir)); + assertTrue("Mkdir fail", mkdirs); + ((DistributedFileSystem) hdfs).setErasureCodingPolicy(new Path(dir), ecPolicyName); + ErasureCodingPolicy erasureCodingPolicy = + ((DistributedFileSystem) hdfs).getErasureCodingPolicy(new Path(dir)); + assertEquals("Set ec policy fail", ecPolicyName, erasureCodingPolicy.getName()); + + // Run appendToFile with option n by ec policy once, make sure that the target file is + // created and is of the right size and block group number is correct. + remoteFile = new Path(dir + "/remoteFile"); + argv = new String[] { + "-appendToFile", "-n", file1.toString(), remoteFile.toString() }; + res = ToolRunner.run(shell, argv); + assertEquals("Run appendToFile command fail", 0, res); + fileStatus = hdfs.getFileStatus(remoteFile); + assertEquals("File size should be " + inputFileLength, + inputFileLength, fileStatus.getLen()); + fileBlockLocations = hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen()); + assertEquals("Block Group Num should be 1", 1, fileBlockLocations.length); + + // Run appendToFile without option n by ec policy again and make sure that + // append on EC file without new block must fail. + argv = new String[] { + "-appendToFile", file1.toString(), remoteFile.toString() }; + res = ToolRunner.run(shell, argv); + assertTrue("Run appendToFile command must fail", res != 0); + + // Run appendToFile with option n by ec policy again and + // make sure that the target file size has been doubled + // and block group number has been doubled. 
+ argv = new String[] { + "-appendToFile", "-n", file1.toString(), remoteFile.toString() }; + res = ToolRunner.run(shell, argv); + assertEquals("Run appendToFile command fail", 0, res); + fileStatus = hdfs.getFileStatus(remoteFile); + assertEquals("File size should be " + inputFileLength * 2, + inputFileLength * 2, fileStatus.getLen()); + fileBlockLocations = hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen()); + assertEquals("Block Group Num should be 2", 2, fileBlockLocations.length); + } + } + @Test (timeout = 30000) public void testSetXAttrPermission() throws Exception { UserGroupInformation user = UserGroupInformation. diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java index c68cb1707c2..206f75eae70 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java @@ -759,7 +759,7 @@ public class TestDecommissionWithStriped { DatanodeInfo extraDn = getDatanodeOutOfTheBlock(blk); DatanodeDescriptor target = bm.getDatanodeManager() .getDatanode(extraDn.getDatanodeUuid()); - dn0.addBlockToBeReplicated(targetBlk, + dn0.addECBlockToBeReplicated(targetBlk, new DatanodeStorageInfo[] {target.getStorageInfos()[0]}); // dn0 replicates in success @@ -883,7 +883,7 @@ public class TestDecommissionWithStriped { .getDatanode(extraDn.getDatanodeUuid()); DatanodeDescriptor dnStartIndexDecommission = bm.getDatanodeManager() .getDatanode(dnLocs[decommNodeIndex].getDatanodeUuid()); - dnStartIndexDecommission.addBlockToBeReplicated(targetBlk, + dnStartIndexDecommission.addECBlockToBeReplicated(targetBlk, new DatanodeStorageInfo[] {target.getStorageInfos()[0]}); // Wait for replication success. 
@@ -972,7 +972,7 @@ public class TestDecommissionWithStriped { DatanodeInfo extraDn = getDatanodeOutOfTheBlock(blk); DatanodeDescriptor target = bm.getDatanodeManager() .getDatanode(extraDn.getDatanodeUuid()); - dn0.addBlockToBeReplicated(targetBlk, + dn0.addECBlockToBeReplicated(targetBlk, new DatanodeStorageInfo[] {target.getStorageInfos()[0]}); // dn0 replicates in success diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java index 1900160ed91..12bc75a9f78 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java @@ -105,6 +105,7 @@ import org.apache.hadoop.util.ToolRunner; import org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.DelegationTokenExtension; import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.CryptoExtension; import org.apache.hadoop.io.Text; +import org.apache.hadoop.util.XMLUtils; import org.junit.After; import org.junit.Assert; import org.junit.Before; @@ -153,7 +154,6 @@ import org.xml.sax.InputSource; import org.xml.sax.helpers.DefaultHandler; import javax.xml.parsers.SAXParser; -import javax.xml.parsers.SAXParserFactory; public class TestEncryptionZones { static final Logger LOG = LoggerFactory.getLogger(TestEncryptionZones.class); @@ -1734,7 +1734,7 @@ public class TestEncryptionZones { PBImageXmlWriter v = new PBImageXmlWriter(new Configuration(), pw); v.visit(new RandomAccessFile(originalFsimage, "r")); final String xml = output.toString(); - SAXParser parser = SAXParserFactory.newInstance().newSAXParser(); + SAXParser parser = XMLUtils.newSecureSAXParserFactory().newSAXParser(); parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler()); } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceWithStriped.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceWithStriped.java new file mode 100644 index 00000000000..2e17b9681b7 --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceWithStriped.java @@ -0,0 +1,267 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hdfs; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.TimeoutException; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileChecksum; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hdfs.protocol.Block; +import org.apache.hadoop.hdfs.protocol.DatanodeInfo; +import org.apache.hadoop.hdfs.protocol.DatanodeInfo.AdminStates; +import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy; +import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType; +import org.apache.hadoop.hdfs.protocol.LocatedBlocks; +import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock; +import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo; +import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped; +import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager; +import org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager; +import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo; +import org.apache.hadoop.hdfs.server.blockmanagement.HostConfigManager; +import org.apache.hadoop.hdfs.server.namenode.FSNamesystem; +import org.apache.hadoop.hdfs.server.namenode.INodeFile; +import org.apache.hadoop.hdfs.server.namenode.NameNode; +import org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter; +import org.apache.hadoop.hdfs.util.HostsFileWriter; +import org.apache.hadoop.test.GenericTestUtils; +import org.junit.After; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * This class tests the in maintenance of datanode with striped blocks. + */ +public class TestMaintenanceWithStriped { + private static final Logger LOG = + LoggerFactory.getLogger(TestMaintenanceWithStriped.class); + + // heartbeat interval in seconds + private static final int HEARTBEAT_INTERVAL = 1; + // block report in msec + private static final int BLOCKREPORT_INTERVAL_MSEC = 1000; + // replication interval + private static final int NAMENODE_REPLICATION_INTERVAL = 1; + + private Configuration conf; + private MiniDFSCluster cluster; + private DistributedFileSystem dfs; + private final ErasureCodingPolicy ecPolicy = + StripedFileTestUtil.getDefaultECPolicy(); + private int numDNs; + private final int cellSize = ecPolicy.getCellSize(); + private final int dataBlocks = ecPolicy.getNumDataUnits(); + private final int parityBlocks = ecPolicy.getNumParityUnits(); + private final int blockSize = cellSize * 4; + private final int blockGroupSize = blockSize * dataBlocks; + private final Path ecDir = new Path("/" + this.getClass().getSimpleName()); + private HostsFileWriter hostsFileWriter; + private boolean useCombinedHostFileManager = true; + + private FSNamesystem fsn; + private BlockManager bm; + + protected Configuration createConfiguration() { + return new HdfsConfiguration(); + } + + @Before + public void setup() throws IOException { + // Set up the hosts/exclude files. 
+ hostsFileWriter = new HostsFileWriter(); + conf = createConfiguration(); + if (useCombinedHostFileManager) { + conf.setClass(DFSConfigKeys.DFS_NAMENODE_HOSTS_PROVIDER_CLASSNAME_KEY, + CombinedHostFileManager.class, HostConfigManager.class); + } + hostsFileWriter.initialize(conf, "temp/admin"); + + + conf.setInt(DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY, + 2000); + conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, HEARTBEAT_INTERVAL); + conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1); + conf.setInt(DFSConfigKeys.DFS_BLOCKREPORT_INTERVAL_MSEC_KEY, + BLOCKREPORT_INTERVAL_MSEC); + conf.setInt(DFSConfigKeys.DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_KEY, + 4); + conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, + NAMENODE_REPLICATION_INTERVAL); + conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize); + conf.setInt( + DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_STRIPED_READ_BUFFER_SIZE_KEY, + cellSize - 1); + conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1); + conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_CONSIDERLOAD_KEY, + false); + + numDNs = dataBlocks + parityBlocks + 5; + cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDNs).build(); + cluster.waitActive(); + dfs = cluster.getFileSystem(0); + fsn = cluster.getNamesystem(); + bm = fsn.getBlockManager(); + + dfs.enableErasureCodingPolicy( + StripedFileTestUtil.getDefaultECPolicy().getName()); + dfs.mkdirs(ecDir); + dfs.setErasureCodingPolicy(ecDir, + StripedFileTestUtil.getDefaultECPolicy().getName()); + } + + @After + public void teardown() throws IOException { + hostsFileWriter.cleanup(); + if (cluster != null) { + cluster.shutdown(); + cluster = null; + } + } + + /** + * test DN maintenance with striped blocks. + * @throws Exception + */ + @Test(timeout = 120000) + public void testInMaintenance() throws Exception { + //1. create EC file + // d0 d1 d2 d3 d4 d5 d6 d7 d8 + final Path ecFile = new Path(ecDir, "testInMaintenance"); + int writeBytes = cellSize * dataBlocks; + writeStripedFile(dfs, ecFile, writeBytes); + Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks()); + FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes); + + final INodeFile fileNode = cluster.getNamesystem().getFSDirectory() + .getINode4Write(ecFile.toString()).asFile(); + BlockInfo firstBlock = fileNode.getBlocks()[0]; + DatanodeStorageInfo[] dnStorageInfos = bm.getStorages(firstBlock); + + //2. maintenance node + // d4 d5 d6 d7 d8 + int maintenanceDNIndex = 4; + int numMaintenance= 5; + List maintenanceNodes = new ArrayList<>(); + + for (int i = maintenanceDNIndex; i < numMaintenance + maintenanceDNIndex; ++i) { + maintenanceNodes.add(dnStorageInfos[i].getDatanodeDescriptor()); + } + + maintenanceNode(0, maintenanceNodes, AdminStates.IN_MAINTENANCE, Long.MAX_VALUE); + + //3. wait for maintenance block to replicate + GenericTestUtils.waitFor( + () -> maintenanceNodes.size() == fsn.getNumInMaintenanceLiveDataNodes(), + 100, 60000); + + //4. check DN status, it should be reconstructed again + LocatedBlocks lbs = cluster.getNameNodeRpc().getBlockLocations( + ecFile.toString(), 0, writeBytes); + LocatedStripedBlock bg = (LocatedStripedBlock) (lbs.get(0)); + + BlockInfoStriped blockInfo = + (BlockInfoStriped)bm.getStoredBlock( + new Block(bg.getBlock().getBlockId())); + + // So far, there are 11 total internal blocks, 6 live (d0 d1 d2 d3 d4' d5') + // and 5 in maintenance (d4 d5 d6 d7 d8) internal blocks. 
+ + assertEquals(6, bm.countNodes(blockInfo).liveReplicas()); + assertEquals(5, bm.countNodes(blockInfo).maintenanceNotForReadReplicas()); + + FileChecksum fileChecksum2 = dfs.getFileChecksum(ecFile, writeBytes); + Assert.assertEquals("Checksum mismatches!", fileChecksum1, fileChecksum2); + } + + + /* Get DFSClient to the namenode */ + private static DFSClient getDfsClient(NameNode nn, Configuration conf) + throws IOException { + return new DFSClient(nn.getNameNodeAddress(), conf); + } + + private byte[] writeStripedFile(DistributedFileSystem fs, Path ecFile, + int writeBytes) throws Exception { + byte[] bytes = StripedFileTestUtil.generateBytes(writeBytes); + DFSTestUtil.writeFile(fs, ecFile, new String(bytes)); + StripedFileTestUtil.waitBlockGroupsReported(fs, ecFile.toString()); + + StripedFileTestUtil.checkData(fs, ecFile, writeBytes, + new ArrayList(), null, blockGroupSize); + return bytes; + } + + /* + * maintenance the DN at index dnIndex or one random node if dnIndex is set + * to -1 and wait for the node to reach the given {@code waitForState}. + */ + private void maintenanceNode(int nnIndex, List maintenancedNodes, + AdminStates waitForState, long maintenanceExpirationInMS) + throws IOException, TimeoutException, InterruptedException { + DFSClient client = getDfsClient(cluster.getNameNode(nnIndex), conf); + DatanodeInfo[] info = client.datanodeReport(DatanodeReportType.LIVE); + + // write nodename into the exclude file. + Map maintenanceNodes = new HashMap<>(); + + for (DatanodeInfo dn : maintenancedNodes) { + boolean nodeExists = false; + for (DatanodeInfo dninfo : info) { + if (dninfo.getDatanodeUuid().equals(dn.getDatanodeUuid())) { + nodeExists = true; + break; + } + } + assertTrue("Datanode: " + dn + " is not LIVE", nodeExists); + maintenanceNodes.put(dn.getName(), maintenanceExpirationInMS); + LOG.info("Maintenance node: " + dn.getName()); + } + // write node names into the json host file. 
+ hostsFileWriter.initOutOfServiceHosts(null, maintenanceNodes); + + refreshNodes(cluster.getNamesystem(nnIndex), conf); + for (DatanodeInfo dn : maintenancedNodes) { + DatanodeInfo ret = NameNodeAdapter + .getDatanode(cluster.getNamesystem(nnIndex), dn); + LOG.info("Waiting for node " + ret + " to change state to " + waitForState + + " current state: " + ret.getAdminState()); + GenericTestUtils.waitFor( + () -> ret.getAdminState() == waitForState, + 100, 60000); + LOG.info("node " + ret + " reached the state " + waitForState); + } + } + + private static void refreshNodes(final FSNamesystem ns, + final Configuration conf) throws IOException { + ns.getBlockManager().getDatanodeManager().refreshNodes(conf); + } + +} diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java index 6e7014c42eb..bb5da24a682 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java @@ -33,6 +33,9 @@ import javax.management.ObjectName; import javax.management.ReflectionException; import javax.management.openmbean.CompositeDataSupport; +import org.apache.hadoop.hdfs.protocol.HdfsConstants; +import org.apache.hadoop.hdfs.server.namenode.NameNode; +import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil; import org.junit.Rule; import org.junit.rules.TemporaryFolder; import org.slf4j.Logger; @@ -720,6 +723,39 @@ public class TestRollingUpgrade { } } + @Test + public void testEditLogTailerRollingUpgrade() throws IOException, InterruptedException { + Configuration conf = new Configuration(); + conf.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1); + conf.setInt(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_PERIOD_KEY, 1); + + HAUtil.setAllowStandbyReads(conf, true); + + MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf) + .nnTopology(MiniDFSNNTopology.simpleHATopology()) + .numDataNodes(0) + .build(); + cluster.waitActive(); + + cluster.transitionToActive(0); + + NameNode nn1 = cluster.getNameNode(0); + NameNode nn2 = cluster.getNameNode(1); + try { + // RU start should trigger rollback image in standbycheckpointer + nn1.getRpcServer().rollingUpgrade(HdfsConstants.RollingUpgradeAction.PREPARE); + HATestUtil.waitForStandbyToCatchUp(nn1, nn2); + Assert.assertTrue(nn2.getNamesystem().isNeedRollbackFsImage()); + + // RU finalize should reset rollback image flag in standbycheckpointer + nn1.getRpcServer().rollingUpgrade(HdfsConstants.RollingUpgradeAction.FINALIZE); + HATestUtil.waitForStandbyToCatchUp(nn1, nn2); + Assert.assertFalse(nn2.getNamesystem().isNeedRollbackFsImage()); + } finally { + cluster.shutdown(); + } + } + /** * In non-HA setup, after rolling upgrade prepare, the Secondary NN should * still be able to do checkpoint diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestNNWithQJM.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestNNWithQJM.java index 4483667e31b..6a340024c29 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestNNWithQJM.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestNNWithQJM.java @@ -33,7 +33,6 @@ import org.apache.hadoop.hdfs.server.namenode.NameNode; import org.apache.hadoop.ipc.RemoteException; import 
org.apache.hadoop.test.GenericTestUtils; import org.apache.hadoop.util.ExitUtil; -import org.apache.hadoop.util.ExitUtil.ExitException; import org.junit.After; import org.junit.Before; import org.junit.Test; @@ -197,10 +196,9 @@ public class TestNNWithQJM { .manageNameDfsDirs(false).format(false).checkExitOnShutdown(false) .build(); fail("New NN with different namespace should have been rejected"); - } catch (ExitException ee) { + } catch (IOException ioe) { GenericTestUtils.assertExceptionContains( - "Unable to start log segment 1: too few journals", ee); - assertTrue("Didn't terminate properly ", ExitUtil.terminateCalled()); + "recoverUnfinalizedSegments failed for too many journals", ioe); } } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/SpyQJournalUtil.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/SpyQJournalUtil.java new file mode 100644 index 00000000000..5816862704a --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/SpyQJournalUtil.java @@ -0,0 +1,108 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs.qjournal.client; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos.GetJournaledEditsResponseProto; +import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo; +import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ListenableFuture; +import org.mockito.Mockito; +import org.mockito.stubbing.Answer; + +import java.io.IOException; +import java.net.InetSocketAddress; +import java.net.URI; +import java.util.List; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Semaphore; + +/** + * One Util class to mock QJM for some UTs not in this package. + */ +public final class SpyQJournalUtil { + + private SpyQJournalUtil() { + } + + /** + * Mock a QuorumJournalManager with input uri, nsInfo and namServiceId. + * @param conf input configuration. + * @param uri input uri. + * @param nsInfo input nameservice info. + * @param nameServiceId input nameservice Id. + * @return one mocked QuorumJournalManager. + * @throws IOException throw IOException. 
+ */ + public static QuorumJournalManager createSpyingQJM(Configuration conf, + URI uri, NamespaceInfo nsInfo, String nameServiceId) throws IOException { + AsyncLogger.Factory spyFactory = new AsyncLogger.Factory() { + @Override + public AsyncLogger createLogger(Configuration conf, NamespaceInfo nsInfo, + String journalId, String nameServiceId, InetSocketAddress addr) { + AsyncLogger logger = new IPCLoggerChannel(conf, nsInfo, journalId, + nameServiceId, addr) { + protected ExecutorService createSingleThreadExecutor() { + // Don't parallelize calls to the quorum in the tests. + // This makes the tests more deterministic. + return new DirectExecutorService(); + } + }; + return Mockito.spy(logger); + } + }; + return new QuorumJournalManager(conf, uri, nsInfo, nameServiceId, spyFactory); + } + + /** + * Mock Journals with different response for getJournaledEdits rpc with the input startTxid. + * 1. First journal with one empty response. + * 2. Second journal with one normal response. + * 3. Third journal with one slow response. + * @param manager input QuorumJournalManager. + * @param startTxid input start txid. + */ + public static void mockJNWithEmptyOrSlowResponse(QuorumJournalManager manager, long startTxid) { + List spies = manager.getLoggerSetForTests().getLoggersForTests(); + Semaphore semaphore = new Semaphore(0); + + // Mock JN0 return an empty response. + Mockito.doAnswer(invocation -> { + semaphore.release(); + return GetJournaledEditsResponseProto.newBuilder().setTxnCount(0).build(); + }).when(spies.get(0)) + .getJournaledEdits(startTxid, QuorumJournalManager.QJM_RPC_MAX_TXNS_DEFAULT); + + // Mock JN1 return a normal response. + spyGetJournaledEdits(spies, 1, startTxid, () -> semaphore.release(1)); + + // Mock JN2 return a slow response + spyGetJournaledEdits(spies, 2, startTxid, () -> semaphore.acquireUninterruptibly(2)); + } + + public static void spyGetJournaledEdits(List spies, + int jnSpyIdx, long fromTxId, Runnable preHook) { + Mockito.doAnswer((Answer>) invocation -> { + preHook.run(); + @SuppressWarnings("unchecked") + ListenableFuture result = + (ListenableFuture) invocation.callRealMethod(); + return result; + }).when(spies.get(jnSpyIdx)).getJournaledEdits(fromTxId, + QuorumJournalManager.QJM_RPC_MAX_TXNS_DEFAULT); + } +} diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java index 5361dbfc1ba..225225efa80 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java @@ -23,6 +23,7 @@ import static org.apache.hadoop.hdfs.qjournal.QJMTestUtil.verifyEdits; import static org.apache.hadoop.hdfs.qjournal.QJMTestUtil.writeSegment; import static org.apache.hadoop.hdfs.qjournal.QJMTestUtil.writeTxns; import static org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit.futureReturns; +import static org.apache.hadoop.hdfs.qjournal.client.SpyQJournalUtil.spyGetJournaledEdits; import static org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit.futureThrows; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; @@ -35,12 +36,10 @@ import java.io.File; import java.io.IOException; import java.net.InetSocketAddress; import java.net.URI; -import 
java.net.URISyntaxException; import java.net.URL; import java.net.UnknownHostException; import java.util.ArrayList; import java.util.List; -import java.util.concurrent.ExecutorService; import java.util.concurrent.Semaphore; import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicInteger; @@ -60,7 +59,6 @@ import org.apache.hadoop.hdfs.DFSConfigKeys; import org.apache.hadoop.hdfs.qjournal.MiniJournalCluster; import org.apache.hadoop.hdfs.qjournal.QJMTestUtil; import org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos.SegmentStateProto; -import org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos.GetJournaledEditsResponseProto; import org.apache.hadoop.hdfs.qjournal.server.JournalFaultInjector; import org.apache.hadoop.hdfs.qjournal.server.JournalNode; import org.apache.hadoop.hdfs.server.namenode.EditLogInputStream; @@ -69,7 +67,6 @@ import org.apache.hadoop.hdfs.server.namenode.FileJournalManager; import org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile; import org.apache.hadoop.hdfs.server.namenode.NNStorage; import org.apache.hadoop.hdfs.server.namenode.NameNodeLayoutVersion; -import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo; import org.apache.hadoop.io.IOUtils; import org.apache.hadoop.ipc.ProtobufRpcEngine2; import org.apache.hadoop.test.GenericTestUtils; @@ -1168,9 +1165,9 @@ public class TestQuorumJournalManager { writeTxns(stm, 21, 20); Semaphore semaphore = new Semaphore(0); - spyGetJournaledEdits(0, 21, () -> semaphore.release(1)); - spyGetJournaledEdits(1, 21, () -> semaphore.release(1)); - spyGetJournaledEdits(2, 21, () -> semaphore.acquireUninterruptibly(2)); + spyGetJournaledEdits(spies, 0, 21, () -> semaphore.release(1)); + spyGetJournaledEdits(spies, 1, 21, () -> semaphore.release(1)); + spyGetJournaledEdits(spies, 2, 21, () -> semaphore.acquireUninterruptibly(2)); List streams = new ArrayList<>(); qjm.selectInputStreams(streams, 21, true, true); @@ -1180,17 +1177,6 @@ public class TestQuorumJournalManager { assertEquals(40, streams.get(0).getLastTxId()); } - private void spyGetJournaledEdits(int jnSpyIdx, long fromTxId, Runnable preHook) { - Mockito.doAnswer((Answer>) invocation -> { - preHook.run(); - @SuppressWarnings("unchecked") - ListenableFuture result = - (ListenableFuture) invocation.callRealMethod(); - return result; - }).when(spies.get(jnSpyIdx)).getJournaledEdits(fromTxId, - QuorumJournalManager.QJM_RPC_MAX_TXNS_DEFAULT); - } - @Test public void testSelectViaRpcAfterJNRestart() throws Exception { EditLogOutputStream stm = @@ -1243,27 +1229,10 @@ public class TestQuorumJournalManager { // expected } } - - private QuorumJournalManager createSpyingQJM() - throws IOException, URISyntaxException { - AsyncLogger.Factory spyFactory = new AsyncLogger.Factory() { - @Override - public AsyncLogger createLogger(Configuration conf, NamespaceInfo nsInfo, - String journalId, String nameServiceId, InetSocketAddress addr) { - AsyncLogger logger = new IPCLoggerChannel(conf, nsInfo, journalId, - nameServiceId, addr) { - protected ExecutorService createSingleThreadExecutor() { - // Don't parallelize calls to the quorum in the tests. - // This makes the tests more deterministic. 
- return new DirectExecutorService(); - } - }; - - return Mockito.spy(logger); - } - }; - return closeLater(new QuorumJournalManager( - conf, cluster.getQuorumJournalURI(JID), FAKE_NSINFO, spyFactory)); + + private QuorumJournalManager createSpyingQJM() throws IOException { + return closeLater(SpyQJournalUtil.createSpyingQJM( + conf, cluster.getQuorumJournalURI(JID), FAKE_NSINFO, null)); } private static void waitForAllPendingCalls(AsyncLoggerSet als) diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournaledEditsCache.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournaledEditsCache.java index 2a178a1547e..82b8b587694 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournaledEditsCache.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournaledEditsCache.java @@ -221,6 +221,27 @@ public class TestJournaledEditsCache { cache.retrieveEdits(-1, 10, new ArrayList<>()); } + @Test + public void testCacheSizeConfigs() { + // Assert the default configs. + Configuration config = new Configuration(); + cache = new JournaledEditsCache(config); + assertEquals((int) (Runtime.getRuntime().maxMemory() * 0.5f), cache.getCapacity()); + + // Set dfs.journalnode.edit-cache-size.bytes. + Configuration config1 = new Configuration(); + config1.setInt(DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_KEY, 1); + config1.setFloat(DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_FRACTION_KEY, 0.1f); + cache = new JournaledEditsCache(config1); + assertEquals(1, cache.getCapacity()); + + // Don't set dfs.journalnode.edit-cache-size.bytes. + Configuration config2 = new Configuration(); + config2.setFloat(DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_FRACTION_KEY, 0.1f); + cache = new JournaledEditsCache(config2); + assertEquals((int) (Runtime.getRuntime().maxMemory() * 0.1f), cache.getCapacity()); + } + private void storeEdits(int startTxn, int endTxn) throws Exception { cache.storeEdits(createTxnData(startTxn, endTxn - startTxn + 1), startTxn, endTxn, NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java index 4fa320ac29e..c25cc88059d 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java @@ -44,7 +44,7 @@ public class BlockManagerTestUtil { public static void setNodeReplicationLimit(final BlockManager blockManager, final int limit) { - blockManager.maxReplicationStreams = limit; + blockManager.setMaxReplicationStreams(limit, false); } /** @return the datanode descriptor for the given the given storageID. 
*/ diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java index 8ebcbfe2e34..c8a94e5ad20 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java @@ -677,8 +677,8 @@ public class TestBlockManager { */ @Test public void testHighestPriReplSrcChosenDespiteMaxReplLimit() throws Exception { - bm.maxReplicationStreams = 0; - bm.replicationStreamsHardLimit = 1; + bm.setMaxReplicationStreams(0, false); + bm.setReplicationStreamsHardLimit(1); long blockId = 42; // arbitrary Block aBlock = new Block(blockId, 0, 0); @@ -735,7 +735,7 @@ public class TestBlockManager { @Test public void testChooseSrcDatanodesWithDupEC() throws Exception { - bm.maxReplicationStreams = 4; + bm.setMaxReplicationStreams(4, false); long blockId = -9223372036854775776L; // real ec block id Block aBlock = new Block(blockId, 0, 0); @@ -895,7 +895,7 @@ public class TestBlockManager { assertNotNull(work); // simulate the 2 nodes reach maxReplicationStreams - for(int i = 0; i < bm.maxReplicationStreams; i++){ + for(int i = 0; i < bm.getMaxReplicationStreams(); i++){ ds3.getDatanodeDescriptor().incrementPendingReplicationWithoutTargets(); ds4.getDatanodeDescriptor().incrementPendingReplicationWithoutTargets(); } @@ -939,7 +939,7 @@ public class TestBlockManager { assertNotNull(work); // simulate the 1 node reaches maxReplicationStreams - for(int i = 0; i < bm.maxReplicationStreams; i++){ + for(int i = 0; i < bm.getMaxReplicationStreams(); i++){ ds2.getDatanodeDescriptor().incrementPendingReplicationWithoutTargets(); } @@ -948,7 +948,7 @@ public class TestBlockManager { assertNotNull(work); // simulate the 1 more node reaches maxReplicationStreams - for(int i = 0; i < bm.maxReplicationStreams; i++){ + for(int i = 0; i < bm.getMaxReplicationStreams(); i++){ ds3.getDatanodeDescriptor().incrementPendingReplicationWithoutTargets(); } @@ -957,10 +957,62 @@ public class TestBlockManager { assertNull(work); } + @Test + public void testSkipReconstructionWithManyBusyNodes3() { + NameNode.initMetrics(new Configuration(), HdfsServerConstants.NamenodeRole.NAMENODE); + long blockId = -9223372036854775776L; // Real ec block id + // RS-3-2 EC policy + ErasureCodingPolicy ecPolicy = + SystemErasureCodingPolicies.getPolicies().get(1); + + // Create an EC block group: 3 data blocks + 2 parity blocks. + Block aBlockGroup = new Block(blockId, ecPolicy.getCellSize() * ecPolicy.getNumDataUnits(), 0); + BlockInfoStriped aBlockInfoStriped = new BlockInfoStriped(aBlockGroup, ecPolicy); + + // Create 4 storageInfo, which means 1 block is missing. + DatanodeStorageInfo ds1 = DFSTestUtil.createDatanodeStorageInfo( + "storage1", "1.1.1.1", "rack1", "host1"); + DatanodeStorageInfo ds2 = DFSTestUtil.createDatanodeStorageInfo( + "storage2", "2.2.2.2", "rack2", "host2"); + DatanodeStorageInfo ds3 = DFSTestUtil.createDatanodeStorageInfo( + "storage3", "3.3.3.3", "rack3", "host3"); + DatanodeStorageInfo ds4 = DFSTestUtil.createDatanodeStorageInfo( + "storage4", "4.4.4.4", "rack4", "host4"); + + // Link block with storage. 
+ aBlockInfoStriped.addStorage(ds1, aBlockGroup); + aBlockInfoStriped.addStorage(ds2, new Block(blockId + 1, 0, 0)); + aBlockInfoStriped.addStorage(ds3, new Block(blockId + 2, 0, 0)); + aBlockInfoStriped.addStorage(ds4, new Block(blockId + 3, 0, 0)); + + addEcBlockToBM(blockId, ecPolicy); + aBlockInfoStriped.setBlockCollectionId(mockINodeId); + + // Reconstruction should be scheduled. + BlockReconstructionWork work = bm.scheduleReconstruction(aBlockInfoStriped, 3); + assertNotNull(work); + + ExtendedBlock dummyBlock = new ExtendedBlock("bpid", 1, 1, 1); + DatanodeDescriptor dummyDD = ds1.getDatanodeDescriptor(); + DatanodeDescriptor[] dummyDDArray = new DatanodeDescriptor[]{dummyDD}; + DatanodeStorageInfo[] dummyDSArray = new DatanodeStorageInfo[]{ds1}; + // Simulate the 2 nodes reach maxReplicationStreams. + for(int i = 0; i < bm.getMaxReplicationStreams(); i++){ //Add some dummy EC reconstruction task. + ds3.getDatanodeDescriptor().addBlockToBeErasureCoded(dummyBlock, dummyDDArray, + dummyDSArray, new byte[0], new byte[0], ecPolicy); + ds4.getDatanodeDescriptor().addBlockToBeErasureCoded(dummyBlock, dummyDDArray, + dummyDSArray, new byte[0], new byte[0], ecPolicy); + } + + // Reconstruction should be skipped since the number of non-busy nodes are not enough. + work = bm.scheduleReconstruction(aBlockInfoStriped, 3); + assertNull(work); + } + @Test public void testFavorDecomUntilHardLimit() throws Exception { - bm.maxReplicationStreams = 0; - bm.replicationStreamsHardLimit = 1; + bm.setMaxReplicationStreams(0, false); + bm.setReplicationStreamsHardLimit(1); long blockId = 42; // arbitrary Block aBlock = new Block(blockId, 0, 0); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java index 35ff36a856b..015a0385a73 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java @@ -967,20 +967,22 @@ public class TestDatanodeManager { * Verify the correctness of pending recovery process. * * @param numReplicationBlocks the number of replication blocks in the queue. - * @param numECBlocks number of EC blocks in the queue. + * @param numEcBlocksToBeReplicated the number of EC blocks to be replicated in the queue. + * @param numBlocksToBeErasureCoded number of EC blocks to be erasure coded in the queue. * @param maxTransfers the maxTransfer value. * @param maxTransfersHardLimit the maxTransfer hard limit value. - * @param numReplicationTasks the number of replication tasks polled from - * the queue. - * @param numECTasks the number of EC tasks polled from the queue. + * @param numReplicationTasks the number of replication tasks polled from the queue. + * @param numECTasksToBeReplicated the number of EC tasks to be replicated polled from the queue. + * @param numECTasksToBeErasureCoded the number of EC tasks to be erasure coded polled from + * the queue. * @param isDecommissioning if the node is in the decommissioning process. 
* * @throws IOException */ private void verifyPendingRecoveryTasks( - int numReplicationBlocks, int numECBlocks, - int maxTransfers, int maxTransfersHardLimit, - int numReplicationTasks, int numECTasks, boolean isDecommissioning) + int numReplicationBlocks, int numEcBlocksToBeReplicated, int numBlocksToBeErasureCoded, + int maxTransfers, int maxTransfersHardLimit, int numReplicationTasks, + int numECTasksToBeReplicated, int numECTasksToBeErasureCoded, boolean isDecommissioning) throws IOException { FSNamesystem fsn = Mockito.mock(FSNamesystem.class); Mockito.when(fsn.hasWriteLock()).thenReturn(true); @@ -1009,13 +1011,25 @@ public class TestDatanodeManager { .thenReturn(tasks); } - if (numECBlocks > 0) { + if (numEcBlocksToBeReplicated > 0) { + Mockito.when(nodeInfo.getNumberOfECBlocksToBeReplicated()) + .thenReturn(numEcBlocksToBeReplicated); + + List ecReplicatedTasks = + Collections.nCopies( + Math.min(numECTasksToBeReplicated, numEcBlocksToBeReplicated), + new BlockTargetPair(null, null)); + Mockito.when(nodeInfo.getECReplicatedCommand(numECTasksToBeReplicated)) + .thenReturn(ecReplicatedTasks); + } + + if (numBlocksToBeErasureCoded > 0) { Mockito.when(nodeInfo.getNumberOfBlocksToBeErasureCoded()) - .thenReturn(numECBlocks); + .thenReturn(numBlocksToBeErasureCoded); List tasks = - Collections.nCopies(numECTasks, null); - Mockito.when(nodeInfo.getErasureCodeCommand(numECTasks)) + Collections.nCopies(numECTasksToBeErasureCoded, null); + Mockito.when(nodeInfo.getErasureCodeCommand(numECTasksToBeErasureCoded)) .thenReturn(tasks); } @@ -1026,42 +1040,43 @@ public class TestDatanodeManager { SlowPeerReports.EMPTY_REPORT, SlowDiskReports.EMPTY_REPORT); long expectedNumCmds = Arrays.stream( - new int[]{numReplicationTasks, numECTasks}) + new int[]{numReplicationTasks + numECTasksToBeReplicated, numECTasksToBeErasureCoded}) .filter(x -> x > 0) .count(); assertEquals(expectedNumCmds, cmds.length); int idx = 0; - if (numReplicationTasks > 0) { + if (numReplicationTasks > 0 || numECTasksToBeReplicated > 0) { assertTrue(cmds[idx] instanceof BlockCommand); BlockCommand cmd = (BlockCommand) cmds[0]; - assertEquals(numReplicationTasks, cmd.getBlocks().length); - assertEquals(numReplicationTasks, cmd.getTargets().length); + assertEquals(numReplicationTasks + numECTasksToBeReplicated, cmd.getBlocks().length); + assertEquals(numReplicationTasks + numECTasksToBeReplicated, cmd.getTargets().length); idx++; } - if (numECTasks > 0) { + if (numECTasksToBeErasureCoded > 0) { assertTrue(cmds[idx] instanceof BlockECReconstructionCommand); BlockECReconstructionCommand cmd = (BlockECReconstructionCommand) cmds[idx]; - assertEquals(numECTasks, cmd.getECTasks().size()); + assertEquals(numECTasksToBeErasureCoded, cmd.getECTasks().size()); } Mockito.verify(nodeInfo).getReplicationCommand(numReplicationTasks); - Mockito.verify(nodeInfo).getErasureCodeCommand(numECTasks); + Mockito.verify(nodeInfo).getECReplicatedCommand(numECTasksToBeReplicated); + Mockito.verify(nodeInfo).getErasureCodeCommand(numECTasksToBeErasureCoded); } @Test public void testPendingRecoveryTasks() throws IOException { // Tasks are slitted according to the ratio between queue lengths. - verifyPendingRecoveryTasks(20, 20, 20, 30, 10, 10, false); - verifyPendingRecoveryTasks(40, 10, 20, 30, 16, 4, false); + verifyPendingRecoveryTasks(20, 0, 20, 20, 30, 10, 0, 10, false); + verifyPendingRecoveryTasks(40, 0, 10, 20, 30, 16, 0, 4, false); // Approximately load tasks if the ratio between queue length is large. 
- verifyPendingRecoveryTasks(400, 1, 20, 30, 20, 1, false); + verifyPendingRecoveryTasks(400, 0, 1, 20, 30, 20, 0, 1, false); // Tasks use dfs.namenode.replication.max-streams-hard-limit for decommissioning node - verifyPendingRecoveryTasks(30, 30, 20, 30, 15, 15, true); + verifyPendingRecoveryTasks(20, 10, 10, 20, 40, 10, 10, 5, true); } @Test diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java index 0487c3f9736..04d2572b392 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java @@ -159,9 +159,9 @@ public class TestUnderReplicatedBlocks { BlockManagerTestUtil.updateState(bm); assertTrue("The number of blocks to be replicated should be less than " - + "or equal to " + bm.replicationStreamsHardLimit, + + "or equal to " + bm.getReplicationStreamsHardLimit(), secondDn.getNumberOfBlocksToBeReplicated() - <= bm.replicationStreamsHardLimit); + <= bm.getReplicationStreamsHardLimit()); DFSTestUtil.verifyClientStats(conf, cluster); } finally { cluster.shutdown(); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestHostRestrictingAuthorizationFilter.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestHostRestrictingAuthorizationFilter.java index 34bc616e540..503c4170c46 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestHostRestrictingAuthorizationFilter.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestHostRestrictingAuthorizationFilter.java @@ -244,14 +244,13 @@ public class TestHostRestrictingAuthorizationFilter { } /** - * Test acceptable behavior to malformed requests - * Case: the request URI does not start with "/webhdfs/v1" + * A request that doesn't access the WebHDFS API should pass through.
*/ @Test - public void testInvalidURI() throws Exception { + public void testNotWebhdfsAPIRequest() throws Exception { HttpServletRequest request = Mockito.mock(HttpServletRequest.class); Mockito.when(request.getMethod()).thenReturn("GET"); - Mockito.when(request.getRequestURI()).thenReturn("/InvalidURI"); + Mockito.when(request.getRequestURI()).thenReturn("/conf"); HttpServletResponse response = Mockito.mock(HttpServletResponse.class); Filter filter = new HostRestrictingAuthorizationFilter(); @@ -260,11 +259,7 @@ public class TestHostRestrictingAuthorizationFilter { FilterConfig fc = new DummyFilterConfig(configs); filter.init(fc); - filter.doFilter(request, response, - (servletRequest, servletResponse) -> {}); - Mockito.verify(response, Mockito.times(1)) - .sendError(Mockito.eq(HttpServletResponse.SC_NOT_FOUND), - Mockito.anyString()); + filter.doFilter(request, response, (servletRequest, servletResponse) -> {}); filter.destroy(); } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeReconfiguration.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeReconfiguration.java index 14e3f63691b..d9578ca02a9 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeReconfiguration.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeReconfiguration.java @@ -45,6 +45,7 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_FILEIO_PROFILING import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_MIN_OUTLIER_DETECTION_DISKS_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_OUTLIERS_REPORT_INTERVAL_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_KEY; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_KEY; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNull; @@ -636,13 +637,15 @@ public class TestDataNodeReconfiguration { String[] slowDisksParameters2 = { DFS_DATANODE_FILEIO_PROFILING_SAMPLING_PERCENTAGE_KEY, DFS_DATANODE_MIN_OUTLIER_DETECTION_DISKS_KEY, - DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_KEY}; + DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_KEY, + DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_KEY}; for (String parameter : slowDisksParameters2) { dn.reconfigureProperty(parameter, "99"); } // Assert diskMetrics. assertEquals(99, dn.getDiskMetrics().getMinOutlierDetectionDisks()); assertEquals(99, dn.getDiskMetrics().getLowThresholdMs()); + assertEquals(99, dn.getDiskMetrics().getMaxSlowDisksToExclude()); // Assert dnConf. assertTrue(dn.getDnConf().diskStatsEnabled); // Assert profilingEventHook. 
@@ -673,12 +676,16 @@ public class TestDataNodeReconfiguration { dn.reconfigureProperty(DFS_DATANODE_FILEIO_PROFILING_SAMPLING_PERCENTAGE_KEY, "1"); dn.reconfigureProperty(DFS_DATANODE_MIN_OUTLIER_DETECTION_DISKS_KEY, null); dn.reconfigureProperty(DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_KEY, null); + dn.reconfigureProperty(DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_KEY, null); assertEquals(String.format("expect %s is not configured", DFS_DATANODE_MIN_OUTLIER_DETECTION_DISKS_KEY), null, dn.getConf().get(DFS_DATANODE_MIN_OUTLIER_DETECTION_DISKS_KEY)); assertEquals(String.format("expect %s is not configured", DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_KEY), null, dn.getConf().get(DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_KEY)); + assertEquals(String.format("expect %s is not configured", + DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_KEY), null, + dn.getConf().get(DFS_DATANODE_MAX_SLOWDISKS_TO_EXCLUDE_KEY)); assertEquals(DFS_DATANODE_MIN_OUTLIER_DETECTION_DISKS_DEFAULT, dn.getDiskMetrics().getSlowDiskDetector().getMinOutlierDetectionNodes()); assertEquals(DFS_DATANODE_SLOWDISK_LOW_THRESHOLD_MS_DEFAULT, diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java index 0805257a287..d6f42f3d020 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java @@ -25,10 +25,13 @@ import java.util.Random; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; +import java.util.concurrent.Semaphore; +import java.util.concurrent.TimeoutException; import java.util.function.Supplier; import org.apache.hadoop.fs.DF; import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager; +import org.apache.hadoop.hdfs.server.datanode.DataNodeFaultInjector; import org.apache.hadoop.hdfs.server.datanode.DataSetLockManager; import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner; import org.apache.hadoop.hdfs.server.datanode.LocalReplica; @@ -659,6 +662,9 @@ public class TestFsDatasetImpl { for (Future f : futureList) { f.get(); } + // Wait for the async deletion task finish. + GenericTestUtils.waitFor(() -> dataset.asyncDiskService.countPendingDeletions() == 0, + 100, 10000); for (String bpid : dataset.volumeMap.getBlockPoolList()) { assertEquals(numBlocks / 2, dataset.volumeMap.size(bpid)); } @@ -1830,4 +1836,93 @@ public class TestFsDatasetImpl { assertEquals(3, metrics.getNativeCopyIoQuantiles().length); } } + + /** + * The block should be in the replicaMap if the async deletion task is pending. + */ + @Test + public void testAysncDiskServiceDeleteReplica() + throws IOException, InterruptedException, TimeoutException { + HdfsConfiguration config = new HdfsConfiguration(); + // Bump up replication interval. 
+ config.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 10); + MiniDFSCluster cluster = new MiniDFSCluster.Builder(config).numDataNodes(3).build(); + DistributedFileSystem fs = cluster.getFileSystem(); + String bpid = cluster.getNamesystem().getBlockPoolId(); + DataNodeFaultInjector oldInjector = DataNodeFaultInjector.get(); + final Semaphore semaphore = new Semaphore(0); + try { + cluster.waitActive(); + final DataNodeFaultInjector injector = new DataNodeFaultInjector() { + @Override + public void delayDeleteReplica() { + // Let's wait for the replica removal process. + try { + semaphore.acquire(1); + } catch (InterruptedException e) { + // ignore. + } + } + }; + DataNodeFaultInjector.set(injector); + + // Create file. + Path path = new Path("/testfile"); + DFSTestUtil.createFile(fs, path, 1024, (short) 3, 0); + DFSTestUtil.waitReplication(fs, path, (short) 3); + LocatedBlock lb = DFSTestUtil.getAllBlocks(fs, path).get(0); + ExtendedBlock extendedBlock = lb.getBlock(); + DatanodeInfo[] loc = lb.getLocations(); + assertEquals(3, loc.length); + + // DN side. + DataNode dn = cluster.getDataNode(loc[0].getIpcPort()); + final FsDatasetImpl ds = (FsDatasetImpl) DataNodeTestUtils.getFSDataset(dn); + List blockList = Lists.newArrayList(extendedBlock.getLocalBlock()); + assertNotNull(ds.getStoredBlock(bpid, extendedBlock.getBlockId())); + ds.invalidate(bpid, blockList.toArray(new Block[0])); + + // Test get blocks and datanodes. + loc = DFSTestUtil.getAllBlocks(fs, path).get(0).getLocations(); + assertEquals(3, loc.length); + List uuids = Lists.newArrayList(); + for (DatanodeInfo datanodeInfo : loc) { + uuids.add(datanodeInfo.getDatanodeUuid()); + } + assertTrue(uuids.contains(dn.getDatanodeUuid())); + + // Verify that the first replica has not yet been deleted from memory. + // Because the namenode still contains this replica, the client will try to read it. + // If this replica were deleted from memory, the client would get a ReplicaNotFoundException. + assertNotNull(ds.getStoredBlock(bpid, extendedBlock.getBlockId())); + + // Make it resume the removeReplicaFromMem method. + semaphore.release(1); + + // Wait for the async deletion task to finish. + GenericTestUtils.waitFor(() -> + ds.asyncDiskService.countPendingDeletions() == 0, 100, 1000); + + // Sleep for two heartbeat times. + Thread.sleep(config.getTimeDuration(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, + DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_DEFAULT, + TimeUnit.SECONDS, TimeUnit.MILLISECONDS) * 2); + + // Test get blocks and datanodes again. + loc = DFSTestUtil.getAllBlocks(fs, path).get(0).getLocations(); + assertEquals(2, loc.length); + uuids = Lists.newArrayList(); + for (DatanodeInfo datanodeInfo : loc) { + uuids.add(datanodeInfo.getDatanodeUuid()); + } + // The namenode does not contain this replica. + assertFalse(uuids.contains(dn.getDatanodeUuid())); + + // This replica has been deleted from the datanode's memory.
+ assertNull(ds.getStoredBlock(bpid, extendedBlock.getBlockId())); + } finally { + cluster.shutdown(); + DataNodeFaultInjector.set(oldInjector); + } + } } \ No newline at end of file diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuthorizationContext.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuthorizationContext.java index 1f52cf33ba1..f9d98d781e4 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuthorizationContext.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuthorizationContext.java @@ -103,7 +103,7 @@ public class TestAuthorizationContext { thenReturn(mockEnforcer); FSPermissionChecker checker = new FSPermissionChecker( - fsOwner, superGroup, ugi, mockINodeAttributeProvider, false); + fsOwner, superGroup, ugi, mockINodeAttributeProvider, false, 0); when(iip.getPathSnapshotId()).thenReturn(snapshotId); when(iip.getINodesArray()).thenReturn(inodes); @@ -128,7 +128,7 @@ public class TestAuthorizationContext { // force it to use the new, checkPermissionWithContext API. FSPermissionChecker checker = new FSPermissionChecker( - fsOwner, superGroup, ugi, mockINodeAttributeProvider, true); + fsOwner, superGroup, ugi, mockINodeAttributeProvider, true, 0); String operationName = "abc"; FSPermissionChecker.setOperationType(operationName); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java index 8008be79d91..89193ca6633 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java @@ -807,12 +807,13 @@ public class TestFSEditLogLoader { } @Test - public void setLoadFSEditLogThrottling() throws Exception { + public void testLoadFSEditLogThrottling() throws Exception { FSNamesystem namesystem = mock(FSNamesystem.class); namesystem.dir = mock(FSDirectory.class); FakeTimer timer = new FakeTimer(); FSEditLogLoader loader = new FSEditLogLoader(namesystem, 0, timer); + FSEditLogLoader.LOAD_EDITS_LOG_HELPER.reset(); LogCapturer capture = LogCapturer.captureLogs(FSImage.LOG); loader.loadFSEdits(getFakeEditLogInputStream(1, 10), 1); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java index 6312e92fd07..f13ed7efdcb 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java @@ -40,6 +40,7 @@ import static org.mockito.Mockito.mock; import java.io.IOException; import java.util.Arrays; +import java.util.function.LongFunction; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; @@ -52,6 +53,7 @@ import org.apache.hadoop.hdfs.server.namenode.FSDirectory.DirOp; import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot; import org.apache.hadoop.security.AccessControlException; import org.apache.hadoop.security.UserGroupInformation; +import 
org.junit.Assert; import org.junit.Before; import org.junit.Test; import org.mockito.invocation.InvocationOnMock; @@ -446,4 +448,29 @@ public class TestFSPermissionChecker { parent.addChild(inodeFile); return inodeFile; } + + @Test + public void testCheckAccessControlEnforcerSlowness() throws Exception { + final long thresholdMs = 10; + final LongFunction checkAccessControlEnforcerSlowness = + elapsedMs -> FSPermissionChecker.checkAccessControlEnforcerSlowness( + elapsedMs, thresholdMs, INodeAttributeProvider.AccessControlEnforcer.class, + false, "/foo", "mkdir", "client"); + + final String m1 = FSPermissionChecker.runCheckPermission( + () -> FSPermissionChecker.LOG.info("Fast runner"), + checkAccessControlEnforcerSlowness); + Assert.assertNull(m1); + + final String m2 = FSPermissionChecker.runCheckPermission(() -> { + FSPermissionChecker.LOG.info("Slow runner"); + try { + Thread.sleep(20); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + throw new IllegalStateException(e); + } + }, checkAccessControlEnforcerSlowness); + Assert.assertNotNull(m2); + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java index 420635e012a..60442c6bd04 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java @@ -2450,6 +2450,68 @@ public class TestFsck { assertTrue(outStr.contains("has 1 CORRUPT blocks")); } + @Test + public void testFsckECBlockIdRedundantInternalBlocks() throws Exception { + final int dataBlocks = StripedFileTestUtil.getDefaultECPolicy().getNumDataUnits(); + final int parityBlocks = StripedFileTestUtil.getDefaultECPolicy().getNumParityUnits(); + final int cellSize = StripedFileTestUtil.getDefaultECPolicy().getCellSize(); + final short groupSize = (short) (dataBlocks + parityBlocks); + final File builderBaseDir = new File(GenericTestUtils.getRandomizedTempPath()); + final Path dirPath = new Path("/ec_dir"); + final Path filePath = new Path(dirPath, "file"); + + conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1); + cluster = new MiniDFSCluster.Builder(conf, builderBaseDir).numDataNodes(groupSize + 1).build(); + cluster.waitActive(); + + DistributedFileSystem fs = cluster.getFileSystem(); + fs.enableErasureCodingPolicy( + StripedFileTestUtil.getDefaultECPolicy().getName()); + + try { + fs.mkdirs(dirPath); + fs.setErasureCodingPolicy(dirPath, StripedFileTestUtil.getDefaultECPolicy().getName()); + DFSTestUtil.createFile(fs, filePath, cellSize * dataBlocks * 2, (short) 1, 0L); + LocatedBlocks blks = fs.getClient().getLocatedBlocks(filePath.toString(), 0); + LocatedStripedBlock block = (LocatedStripedBlock) blks.getLastLocatedBlock(); + Assert.assertEquals(groupSize, block.getLocations().length); + + //general test. + String runFsckResult = runFsck(conf, 0, true, "/", + "-blockId", block.getBlock().getBlockName()); + assertTrue(runFsckResult.contains(block.getBlock().getBlockName())); + assertTrue(runFsckResult.contains("No. of Expected Replica: " + groupSize)); + assertTrue(runFsckResult.contains("No. of live Replica: " + groupSize)); + assertTrue(runFsckResult.contains("No. of redundant Replica: " + 0)); + + // stop a dn. 
+ DatanodeInfo dnToStop = block.getLocations()[0]; + MiniDFSCluster.DataNodeProperties dnProp = cluster.stopDataNode(dnToStop.getXferAddr()); + cluster.setDataNodeDead(dnToStop); + + // wait for reconstruction to happen. + DFSTestUtil.waitForReplication(fs, filePath, groupSize, 15 * 1000); + + // bring the dn back: 10 internal blocks now. + cluster.restartDataNode(dnProp); + cluster.waitActive(); + + blks = fs.getClient().getLocatedBlocks(filePath.toString(), 0); + block = (LocatedStripedBlock) blks.getLastLocatedBlock(); + Assert.assertEquals(groupSize + 1, block.getLocations().length); + + //general test, number of redundant internal block replicas. + runFsckResult = runFsck(conf, 0, true, "/", + "-blockId", block.getBlock().getBlockName()); + assertTrue(runFsckResult.contains(block.getBlock().getBlockName())); + assertTrue(runFsckResult.contains("No. of Expected Replica: " + groupSize)); + assertTrue(runFsckResult.contains("No. of live Replica: " + groupSize)); + assertTrue(runFsckResult.contains("No. of redundant Replica: " + 1)); + } finally { + cluster.shutdown(); + } + } + private void waitForUnrecoverableBlockGroup(Configuration configuration) throws TimeoutException, InterruptedException { GenericTestUtils.waitFor(new Supplier() { diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHAWithInProgressTail.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHAWithInProgressTail.java new file mode 100644 index 00000000000..746503d8452 --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHAWithInProgressTail.java @@ -0,0 +1,121 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hdfs.server.namenode; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.permission.FsPermission; +import org.apache.hadoop.hdfs.DFSConfigKeys; +import org.apache.hadoop.hdfs.DFSTestUtil; +import org.apache.hadoop.hdfs.HAUtil; +import org.apache.hadoop.hdfs.MiniDFSCluster; +import org.apache.hadoop.hdfs.qjournal.MiniJournalCluster; +import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster; +import org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager; +import org.apache.hadoop.hdfs.qjournal.client.SpyQJournalUtil; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; +import org.mockito.Mockito; + +import java.io.IOException; + +import static org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.getFileInfo; +import static org.junit.Assert.assertNotNull; +import static org.mockito.ArgumentMatchers.anyBoolean; +import static org.mockito.Mockito.spy; + +public class TestHAWithInProgressTail { + private MiniQJMHACluster qjmhaCluster; + private MiniDFSCluster cluster; + private MiniJournalCluster jnCluster; + private NameNode nn0; + private NameNode nn1; + + @Before + public void startUp() throws IOException { + Configuration conf = new Configuration(); + conf.setBoolean(DFSConfigKeys.DFS_HA_TAILEDITS_INPROGRESS_KEY, true); + conf.setInt(DFSConfigKeys.DFS_QJOURNAL_SELECT_INPUT_STREAMS_TIMEOUT_KEY, 500); + HAUtil.setAllowStandbyReads(conf, true); + qjmhaCluster = new MiniQJMHACluster.Builder(conf).build(); + cluster = qjmhaCluster.getDfsCluster(); + jnCluster = qjmhaCluster.getJournalCluster(); + + // Get the NameNodes from the cluster for later manual control. + nn0 = cluster.getNameNode(0); + nn1 = cluster.getNameNode(1); + } + + @After + public void tearDown() throws IOException { + if (qjmhaCluster != null) { + qjmhaCluster.shutdown(); + } + } + + + /** + * Test that Standby Node tails multiple segments while catching up + * during the transition to Active. + */ + @Test + public void testFailoverWithAbnormalJN() throws Exception { + cluster.transitionToActive(0); + cluster.waitActive(0); + + // Stop EditlogTailer in Standby NameNode. + cluster.getNameNode(1).getNamesystem().getEditLogTailer().stop(); + + String p = "/testFailoverWhileTailingWithoutCache/"; + nn0.getRpcServer().mkdirs(p + 0, FsPermission.getCachePoolDefault(), true); + + cluster.transitionToStandby(0); + spyFSEditLog(); + cluster.transitionToActive(1); + + // We should be able to read them on nn1.
+ assertNotNull(getFileInfo(nn1, p + 0, true, false, false)); + } + + private void spyFSEditLog() throws IOException { + FSEditLog spyEditLog = spy(nn1.getNamesystem().getFSImage().getEditLog()); + Mockito.doAnswer(invocation -> { + invocation.callRealMethod(); + spyOnJASjournal(spyEditLog.getJournalSet()); + return null; + }).when(spyEditLog).recoverUnclosedStreams(anyBoolean()); + + DFSTestUtil.setEditLogForTesting(nn1.getNamesystem(), spyEditLog); + nn1.getNamesystem().getEditLogTailer().setEditLog(spyEditLog); + } + + private void spyOnJASjournal(JournalSet journalSet) throws IOException { + JournalSet.JournalAndStream jas = journalSet.getAllJournalStreams().get(0); + JournalManager oldManager = jas.getManager(); + oldManager.close(); + + // Create a SpyingQJM + QuorumJournalManager manager = SpyQJournalUtil.createSpyingQJM(nn1.getConf(), + jnCluster.getQuorumJournalURI("ns1"), + nn1.getNamesystem().getNamespaceInfo(), "ns1"); + manager.recoverUnfinalizedSegments(); + jas.setJournalForTests(manager); + + SpyQJournalUtil.mockJNWithEmptyOrSlowResponse(manager, 1); + } +} diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeReconfigure.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeReconfigure.java index d0484298146..5573b1fa107 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeReconfigure.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeReconfigure.java @@ -22,6 +22,8 @@ import java.io.IOException; import java.util.ArrayList; import java.util.List; +import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminBackoffMonitor; +import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminMonitorInterface; import org.junit.Test; import org.junit.Before; import org.junit.After; @@ -62,6 +64,8 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KE import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_BLOCKPLACEMENTPOLICY_EXCLUDE_SLOW_NODES_ENABLED_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_MAX_SLOWPEER_COLLECT_NODES_KEY; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK; import static org.apache.hadoop.fs.CommonConfigurationKeys.IPC_BACKOFF_ENABLE_DEFAULT; public class TestNameNodeReconfigure { @@ -567,6 +571,87 @@ public class TestNameNodeReconfigure { return containReport; } + @Test + public void testReconfigureDecommissionBackoffMonitorParameters() + throws ReconfigurationException, IOException { + Configuration conf = new HdfsConfiguration(); + conf.setClass(DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_MONITOR_CLASS, + DatanodeAdminBackoffMonitor.class, DatanodeAdminMonitorInterface.class); + int defaultPendingRepLimit = 1000; + conf.setInt(DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT, defaultPendingRepLimit); + int defaultBlocksPerLock = 1000; + conf.setInt(DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK, + defaultBlocksPerLock); + + try (MiniDFSCluster newCluster = new MiniDFSCluster.Builder(conf).build()) { + newCluster.waitActive(); + final NameNode nameNode = newCluster.getNameNode(); + 
final DatanodeManager datanodeManager = nameNode.namesystem + .getBlockManager().getDatanodeManager(); + + // verify defaultPendingRepLimit. + assertEquals(datanodeManager.getDatanodeAdminManager().getPendingRepLimit(), + defaultPendingRepLimit); + + // try invalid pendingRepLimit. + try { + nameNode.reconfigureProperty(DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT, + "non-numeric"); + fail("Should not reach here"); + } catch (ReconfigurationException e) { + assertEquals("Could not change property " + + "dfs.namenode.decommission.backoff.monitor.pending.limit from '" + + defaultPendingRepLimit + "' to 'non-numeric'", e.getMessage()); + } + + try { + nameNode.reconfigureProperty(DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT, + "-1"); + fail("Should not reach here"); + } catch (ReconfigurationException e) { + assertEquals("Could not change property " + + "dfs.namenode.decommission.backoff.monitor.pending.limit from '" + + defaultPendingRepLimit + "' to '-1'", e.getMessage()); + } + + // try correct pendingRepLimit. + nameNode.reconfigureProperty(DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT, + "20000"); + assertEquals(datanodeManager.getDatanodeAdminManager().getPendingRepLimit(), 20000); + + // verify defaultBlocksPerLock. + assertEquals(datanodeManager.getDatanodeAdminManager().getBlocksPerLock(), + defaultBlocksPerLock); + + // try invalid blocksPerLock. + try { + nameNode.reconfigureProperty( + DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK, + "non-numeric"); + fail("Should not reach here"); + } catch (ReconfigurationException e) { + assertEquals("Could not change property " + + "dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock from '" + + defaultBlocksPerLock + "' to 'non-numeric'", e.getMessage()); + } + + try { + nameNode.reconfigureProperty( + DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK, "-1"); + fail("Should not reach here"); + } catch (ReconfigurationException e) { + assertEquals("Could not change property " + + "dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock from '" + + defaultBlocksPerLock + "' to '-1'", e.getMessage()); + } + + // try correct blocksPerLock. 
+ nameNode.reconfigureProperty( + DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK, "10000"); + assertEquals(datanodeManager.getDatanodeAdminManager().getBlocksPerLock(), 10000); + } + } + @After public void shutDown() throws IOException { if (cluster != null) { diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRpcServer.java index 2960a7ee6d4..d29e11cffee 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRpcServer.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRpcServer.java @@ -29,26 +29,32 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_RPC_BIND_HOST_KE import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNotEquals; import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertNull; import java.io.IOException; import java.nio.charset.StandardCharsets; import java.security.PrivilegedExceptionAction; +import java.util.EnumSet; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.permission.FsPermission; +import org.apache.hadoop.hdfs.AddBlockFlag; import org.apache.hadoop.hdfs.DFSTestUtil; import org.apache.hadoop.hdfs.DistributedFileSystem; import org.apache.hadoop.hdfs.HdfsConfiguration; import org.apache.hadoop.hdfs.MiniDFSCluster; +import org.apache.hadoop.hdfs.protocol.HdfsFileStatus; import org.apache.hadoop.hdfs.protocol.LocatedBlocks; import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster; import org.apache.hadoop.ipc.CallerContext; +import org.apache.hadoop.ipc.ObserverRetryOnActiveException; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.test.GenericTestUtils; +import org.apache.hadoop.test.LambdaTestUtils; import org.junit.Test; import org.junit.jupiter.api.Timeout; @@ -158,6 +164,43 @@ public class TestNameNodeRpcServer { } } + @Test + @Timeout(30000) + public void testObserverHandleAddBlock() throws Exception { + String baseDir = GenericTestUtils.getRandomizedTempPath(); + Configuration conf = new HdfsConfiguration(); + MiniQJMHACluster.Builder builder = new MiniQJMHACluster.Builder(conf).setNumNameNodes(3); + builder.getDfsBuilder().numDataNodes(3); + try (MiniQJMHACluster qjmhaCluster = builder.baseDir(baseDir).build()) { + MiniDFSCluster dfsCluster = qjmhaCluster.getDfsCluster(); + dfsCluster.waitActive(); + dfsCluster.transitionToActive(0); + dfsCluster.transitionToObserver(2); + + NameNode activeNN = dfsCluster.getNameNode(0); + NameNode observerNN = dfsCluster.getNameNode(2); + + // Stop the editLogTailer of Observer NameNode + observerNN.getNamesystem().getEditLogTailer().stop(); + DistributedFileSystem dfs = dfsCluster.getFileSystem(0); + + Path testPath = new Path("/testObserverHandleAddBlock/file.txt"); + try (FSDataOutputStream ignore = dfs.create(testPath)) { + HdfsFileStatus fileStatus = activeNN.getRpcServer().getFileInfo(testPath.toUri().getPath()); + assertNotNull(fileStatus); + assertNull(observerNN.getRpcServer().getFileInfo(testPath.toUri().getPath())); + + LambdaTestUtils.intercept(ObserverRetryOnActiveException.class, () -> { + observerNN.getRpcServer().addBlock(testPath.toUri().getPath(), + 
dfs.getClient().getClientName(), null, null, + fileStatus.getFileId(), null, EnumSet.noneOf(AddBlockFlag.class)); + }); + } finally { + dfs.delete(testPath, true); + } + } + } + /** * A test to make sure that if an authorized user adds "clientIp:" to their * caller context, it will be used to make locality decisions on the NN. diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java index 16a57c68672..4766c4cecc9 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java @@ -977,4 +977,26 @@ public class TestHASafeMode { () -> miniCluster.transitionToActive(0)); } } + + @Test + public void testTransitionToObserverWhenSafeMode() throws Exception { + Configuration config = new Configuration(); + config.setBoolean(DFS_HA_NN_NOT_BECOME_ACTIVE_IN_SAFEMODE, true); + try (MiniDFSCluster miniCluster = new MiniDFSCluster.Builder(config, + new File(GenericTestUtils.getRandomizedTempPath())) + .nnTopology(MiniDFSNNTopology.simpleHATopology()) + .numDataNodes(1) + .build()) { + miniCluster.waitActive(); + miniCluster.transitionToStandby(0); + miniCluster.transitionToStandby(1); + NameNode namenode0 = miniCluster.getNameNode(0); + NameNode namenode1 = miniCluster.getNameNode(1); + NameNodeAdapter.enterSafeMode(namenode0, false); + NameNodeAdapter.enterSafeMode(namenode1, false); + LambdaTestUtils.intercept(ServiceFailedException.class, + "NameNode still not leave safemode", + () -> miniCluster.transitionToObserver(0)); + } + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNode.java index 60728284e59..8b691a11725 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNode.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNode.java @@ -652,6 +652,29 @@ public class TestObserverNode { } } + @Test + public void testSimpleReadEmptyDirOrFile() throws IOException { + // read empty dir + dfs.mkdirs(new Path("/emptyDir")); + assertSentTo(0); + + dfs.getClient().listPaths("/", new byte[0], true); + assertSentTo(2); + + dfs.getClient().getLocatedFileInfo("/emptyDir", true); + assertSentTo(2); + + // read empty file + dfs.create(new Path("/emptyFile"), (short)1); + assertSentTo(0); + + dfs.getClient().getLocatedFileInfo("/emptyFile", true); + assertSentTo(2); + + dfs.getClient().getBlockLocations("/emptyFile", 0, 1); + assertSentTo(2); + } + private static void assertSentTo(DistributedFileSystem fs, int nnIdx) throws IOException { assertTrue("Request was not sent to the expected namenode " + nnIdx, diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java index 99e4b348f61..9a87365eb2f 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java @@ -43,6 +43,8 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_PLACEMENT_EC_CLASSN import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_REPLICATOR_CLASSNAME_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_BLOCKPLACEMENTPOLICY_EXCLUDE_SLOW_NODES_ENABLED_KEY; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK; import org.apache.commons.io.FileUtils; import org.apache.commons.text.TextStringBuilder; @@ -343,7 +345,7 @@ public class TestDFSAdmin { final List outs = Lists.newArrayList(); final List errs = Lists.newArrayList(); getReconfigurableProperties("datanode", address, outs, errs); - assertEquals(19, outs.size()); + assertEquals(20, outs.size()); assertEquals(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY, outs.get(1)); } @@ -438,7 +440,7 @@ public class TestDFSAdmin { final List outs = Lists.newArrayList(); final List errs = Lists.newArrayList(); getReconfigurableProperties("namenode", address, outs, errs); - assertEquals(20, outs.size()); + assertEquals(22, outs.size()); assertTrue(outs.get(0).contains("Reconfigurable properties:")); assertEquals(DFS_BLOCK_INVALIDATE_LIMIT_KEY, outs.get(1)); assertEquals(DFS_BLOCK_PLACEMENT_EC_CLASSNAME_KEY, outs.get(2)); @@ -449,8 +451,10 @@ public class TestDFSAdmin { assertEquals(DFS_IMAGE_PARALLEL_LOAD_KEY, outs.get(7)); assertEquals(DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_KEY, outs.get(8)); assertEquals(DFS_NAMENODE_BLOCKPLACEMENTPOLICY_EXCLUDE_SLOW_NODES_ENABLED_KEY, outs.get(9)); - assertEquals(DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY, outs.get(10)); - assertEquals(DFS_NAMENODE_MAX_SLOWPEER_COLLECT_NODES_KEY, outs.get(11)); + assertEquals(DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_BLOCKS_PER_LOCK, outs.get(10)); + assertEquals(DFS_NAMENODE_DECOMMISSION_BACKOFF_MONITOR_PENDING_LIMIT, outs.get(11)); + assertEquals(DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY, outs.get(12)); + assertEquals(DFS_NAMENODE_MAX_SLOWPEER_COLLECT_NODES_KEY, outs.get(13)); assertEquals(errs.size(), 0); } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java index aa048f865c2..d0edd175ec1 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java @@ -17,10 +17,12 @@ */ package org.apache.hadoop.hdfs.tools; +import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_HA_NN_NOT_BECOME_ACTIVE_IN_SAFEMODE; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; +import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.File; import java.io.IOException; @@ -70,6 +72,7 @@ public class TestDFSHAAdminMiniCluster { @Before public void setup() throws IOException { conf = new Configuration(); + conf.setBoolean(DFS_HA_NN_NOT_BECOME_ACTIVE_IN_SAFEMODE, true); cluster = new MiniDFSCluster.Builder(conf) .nnTopology(MiniDFSNNTopology.simpleHATopology()).numDataNodes(0) .build(); @@ -161,7 +164,28 @@ public class TestDFSHAAdminMiniCluster { assertEquals(-1, 
runTool("-transitionToActive", "nn1")); assertFalse(nnode1.isActiveState()); } - + + /** + * Tests that a Namenode in safe mode should not be transfer to observer. + */ + @Test + public void testObserverTransitionInSafeMode() throws Exception { + NameNodeAdapter.enterSafeMode(cluster.getNameNode(0), false); + DFSHAAdmin admin = new DFSHAAdmin(); + admin.setConf(conf); + System.setIn(new ByteArrayInputStream("yes\n".getBytes())); + int result = admin.run( + new String[]{"-transitionToObserver", "-forcemanual", "nn1"}); + assertEquals("State transition returned: " + result, -1, result); + + NameNodeAdapter.leaveSafeMode(cluster.getNameNode(0)); + System.setIn(new ByteArrayInputStream("yes\n".getBytes())); + int result1 = admin.run( + new String[]{"-transitionToObserver", "-forcemanual", "nn1"}); + assertEquals("State transition returned: " + result1, 0, result1); + assertFalse(cluster.getNameNode(0).isInSafeMode()); + } + @Test public void testTryFailoverToSafeMode() throws Exception { conf.set(DFSConfigKeys.DFS_HA_FENCE_METHODS_KEY, diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDebugAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDebugAdmin.java index 8dd303d84db..37cd38eedb8 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDebugAdmin.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDebugAdmin.java @@ -194,8 +194,13 @@ public class TestDebugAdmin { cluster.waitActive(); DistributedFileSystem fs = cluster.getFileSystem(); - assertEquals("ret: 1, verifyEC -file Verify HDFS erasure coding on " + - "all block groups of the file.", runCmd(new String[]{"verifyEC"})); + assertEquals("ret: 1, verifyEC -file [-blockId ] " + + "[-skipFailureBlocks] -file Verify HDFS erasure coding on all block groups of the file." + + " -skipFailureBlocks specify will skip any block group failures during verify," + + " and continues verify all block groups of the file," + + " the default is not to skip failure blocks." + + " -blockId specify blk_Id to verify for a specific one block group.", + runCmd(new String[]{"verifyEC"})); assertEquals("ret: 1, File /bar does not exist.", runCmd(new String[]{"verifyEC", "-file", "/bar"})); @@ -270,6 +275,41 @@ public class TestDebugAdmin { "-out", metaFile.getAbsolutePath()}); assertTrue(runCmd(new String[]{"verifyEC", "-file", "/ec/foo_corrupt"}) .contains("Status: ERROR, message: EC compute result not match.")); + + // Specify -blockId. + Path newFile = new Path(ecDir, "foo_new"); + DFSTestUtil.createFile(fs, newFile, (int) k, 6 * m, m, repl, seed); + blocks = DFSTestUtil.getAllBlocks(fs, newFile); + assertEquals(2, blocks.size()); + blockGroup = (LocatedStripedBlock) blocks.get(0); + String blockName = blockGroup.getBlock().getBlockName(); + assertTrue(runCmd(new String[]{"verifyEC", "-file", "/ec/foo_new", "-blockId", blockName}) + .contains("ret: 0, Checking EC block group: " + blockName + "Status: OK")); + + // Specify -verifyAllFailures. + indexedBlocks = StripedBlockUtil.parseStripedBlockGroup(blockGroup, + ecPolicy.getCellSize(), ecPolicy.getNumDataUnits(), ecPolicy.getNumParityUnits()); + // Try corrupt block 0 in block group. 
+ toCorruptLocatedBlock = indexedBlocks[0]; + toCorruptBlock = toCorruptLocatedBlock.getBlock(); + datanode = cluster.getDataNode(toCorruptLocatedBlock.getLocations()[0].getIpcPort()); + blockFile = getBlockFile(datanode.getFSDataset(), + toCorruptBlock.getBlockPoolId(), toCorruptBlock.getLocalBlock()); + metaFile = getMetaFile(datanode.getFSDataset(), + toCorruptBlock.getBlockPoolId(), toCorruptBlock.getLocalBlock()); + metaFile.delete(); + // Write error bytes to block file and re-generate meta checksum. + errorBytes = new byte[1048576]; + new Random(0x12345678L).nextBytes(errorBytes); + FileUtils.writeByteArrayToFile(blockFile, errorBytes); + runCmd(new String[]{"computeMeta", "-block", blockFile.getAbsolutePath(), + "-out", metaFile.getAbsolutePath()}); + // VerifyEC and set skipFailureBlocks. + LocatedStripedBlock blockGroup2 = (LocatedStripedBlock) blocks.get(1); + assertTrue(runCmd(new String[]{"verifyEC", "-file", "/ec/foo_new", "-skipFailureBlocks"}) + .contains("ret: 1, Checking EC block group: " + blockGroup.getBlock().getBlockName() + + "Status: ERROR, message: EC compute result not match." + + "Checking EC block group: " + blockGroup2.getBlock().getBlockName() + "Status: OK")); } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java index 9878469c89e..047750de225 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java @@ -95,6 +95,7 @@ import org.apache.hadoop.security.token.Token; import org.apache.hadoop.test.GenericTestUtils; import org.apache.hadoop.test.LambdaTestUtils; import org.apache.hadoop.util.Lists; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; import org.apache.hadoop.thirdparty.com.google.common.collect.Maps; @@ -565,7 +566,7 @@ public class TestOfflineImageViewer { try (RandomAccessFile r = new RandomAccessFile(originalFsimage, "r")) { v.visit(r); } - SAXParserFactory spf = SAXParserFactory.newInstance(); + SAXParserFactory spf = XMLUtils.newSecureSAXParserFactory(); SAXParser parser = spf.newSAXParser(); final String xml = output.toString(); ECXMLHandler ecxmlHandler = new ECXMLHandler(); @@ -1028,13 +1029,13 @@ public class TestOfflineImageViewer { private void deleteINodeFromXML(File inputFile, File outputFile, List corruptibleIds) throws Exception { - DocumentBuilderFactory docFactory = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory docFactory = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder docBuilder = docFactory.newDocumentBuilder(); Document doc = docBuilder.parse(inputFile); properINodeDelete(corruptibleIds, doc); - TransformerFactory transformerFactory = TransformerFactory.newInstance(); + TransformerFactory transformerFactory = XMLUtils.newSecureTransformerFactory(); Transformer transformer = transformerFactory.newTransformer(); DOMSource source = new DOMSource(doc); StreamResult result = new StreamResult(outputFile); @@ -1370,10 +1371,9 @@ public class TestOfflineImageViewer { v.visit(new RandomAccessFile(originalFsimage, "r")); final String xml = output.toString(); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + 
DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); - InputSource is = new InputSource(); - is.setCharacterStream(new StringReader(xml)); + InputSource is = new InputSource(new StringReader(xml)); Document dom = db.parse(is); NodeList ecSection = dom.getElementsByTagName(ERASURE_CODING_SECTION_NAME); assertEquals(1, ecSection.getLength()); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java index 31dec3f5e5c..3af8e03d898 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java @@ -47,6 +47,8 @@ import org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil; import org.apache.hadoop.hdfs.web.WebHdfsFileSystem; import org.apache.hadoop.net.NetUtils; import org.apache.hadoop.util.Lists; +import org.apache.hadoop.util.XMLUtils; + import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; @@ -221,7 +223,7 @@ public class TestOfflineImageViewerForAcl { PrintStream o = new PrintStream(output); PBImageXmlWriter v = new PBImageXmlWriter(new Configuration(), o); v.visit(new RandomAccessFile(originalFsimage, "r")); - SAXParserFactory spf = SAXParserFactory.newInstance(); + SAXParserFactory spf = XMLUtils.newSecureSAXParserFactory(); SAXParser parser = spf.newSAXParser(); final String xml = output.toString(); parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler()); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtilClient.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtilClient.java new file mode 100644 index 00000000000..85bd1f857e9 --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtilClient.java @@ -0,0 +1,68 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hdfs.web; + +import org.apache.hadoop.fs.BlockLocation; +import org.apache.hadoop.fs.StorageType; +import org.apache.hadoop.util.JsonSerialization; +import org.junit.Test; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.Map; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotNull; + +public class TestJsonUtilClient { + @Test + public void testToStringArray() { + List strList = new ArrayList(Arrays.asList("aaa", "bbb", "ccc")); + + String[] strArr = JsonUtilClient.toStringArray(strList); + assertEquals("Expected 3 items in the array", 3, strArr.length); + assertEquals("aaa", strArr[0]); + assertEquals("bbb", strArr[1]); + assertEquals("ccc", strArr[2]); + } + + @Test + public void testToBlockLocationArray() throws Exception { + BlockLocation blockLocation = new BlockLocation( + new String[] {"127.0.0.1:62870"}, + new String[] {"127.0.0.1"}, + null, + new String[] {"/default-rack/127.0.0.1:62870"}, + null, + new StorageType[] {StorageType.DISK}, + 0, 1, false); + + Map blockLocationsMap = + JsonUtil.toJsonMap(new BlockLocation[] {blockLocation}); + String json = JsonUtil.toJsonString("BlockLocations", blockLocationsMap); + assertNotNull(json); + Map jsonMap = JsonSerialization.mapReader().readValue(json); + + BlockLocation[] deserializedBlockLocations = + JsonUtilClient.toBlockLocationArray(jsonMap); + assertEquals(1, deserializedBlockLocations.length); + assertEquals(blockLocation.toString(), + deserializedBlockLocations[0].toString()); + } +} diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml index e3b3511c0ce..dc69f1b65e5 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml @@ -100,6 +100,39 @@ test-jar test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.jupiter + junit-jupiter-params + test + + + org.mockito + mockito-junit-jupiter + 4.11.0 + test + + + uk.org.webcompere + system-stubs-core + 1.1.0 + test + + + uk.org.webcompere + system-stubs-jupiter + 1.1.0 + test + com.fasterxml.jackson.core jackson-databind diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/speculate/StartEndTimesBase.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/speculate/StartEndTimesBase.java index 8eda00528f1..be35b68a56f 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/speculate/StartEndTimesBase.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/speculate/StartEndTimesBase.java @@ -61,7 +61,7 @@ abstract class StartEndTimesBase implements TaskRuntimeEstimator { = new HashMap(); - private final Map slowTaskRelativeTresholds + private final Map slowTaskRelativeThresholds = new HashMap(); protected final Set doneTasks = new HashSet(); @@ -89,7 +89,7 @@ abstract class StartEndTimesBase implements TaskRuntimeEstimator { final Job job = entry.getValue(); mapperStatistics.put(job, new DataStatistics()); 
reducerStatistics.put(job, new DataStatistics()); - slowTaskRelativeTresholds.put + slowTaskRelativeThresholds.put (job, conf.getFloat(MRJobConfig.SPECULATIVE_SLOWTASK_THRESHOLD,1.0f)); } } @@ -141,7 +141,7 @@ abstract class StartEndTimesBase implements TaskRuntimeEstimator { long result = statistics == null ? Long.MAX_VALUE - : (long)statistics.outlier(slowTaskRelativeTresholds.get(job)); + : (long)statistics.outlier(slowTaskRelativeThresholds.get(job)); return result; } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestLocalContainerLauncher.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestLocalContainerLauncher.java index 94cd5182a58..3a99760aab9 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestLocalContainerLauncher.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestLocalContainerLauncher.java @@ -53,10 +53,11 @@ import org.apache.hadoop.yarn.api.records.Container; import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.event.Event; import org.apache.hadoop.yarn.event.EventHandler; -import org.junit.AfterClass; -import org.junit.Assert; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.mockito.invocation.InvocationOnMock; import org.mockito.stubbing.Answer; import org.slf4j.Logger; @@ -75,7 +76,7 @@ public class TestLocalContainerLauncher { fs.delete(p, true); } - @BeforeClass + @BeforeAll public static void setupTestDirs() throws IOException { testWorkDir = new File("target", TestLocalContainerLauncher.class.getCanonicalName()); @@ -89,7 +90,7 @@ public class TestLocalContainerLauncher { } } - @AfterClass + @AfterAll public static void cleanupTestDirs() throws IOException { if (testWorkDir != null) { delete(testWorkDir); @@ -97,7 +98,8 @@ public class TestLocalContainerLauncher { } @SuppressWarnings("rawtypes") - @Test(timeout=10000) + @Test + @Timeout(10000) public void testKillJob() throws Exception { JobConf conf = new JobConf(); AppContext context = mock(AppContext.class); @@ -198,8 +200,8 @@ public class TestLocalContainerLauncher { final Path mapOut = mrOutputFiles.getOutputFileForWrite(1); conf.set(MRConfig.LOCAL_DIR, localDirs[1].toString()); final Path mapOutIdx = mrOutputFiles.getOutputIndexFileForWrite(1); - Assert.assertNotEquals("Paths must be different!", - mapOut.getParent(), mapOutIdx.getParent()); + Assertions.assertNotEquals(mapOut.getParent(), mapOutIdx.getParent(), + "Paths must be different!"); // make both dirs part of LOCAL_DIR conf.setStrings(MRConfig.LOCAL_DIR, localDirs); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptFinishingMonitor.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptFinishingMonitor.java index 49b986e2259..7389aebbd30 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptFinishingMonitor.java +++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptFinishingMonitor.java @@ -37,8 +37,8 @@ import org.apache.hadoop.yarn.event.Event; import org.apache.hadoop.yarn.event.EventHandler; import org.apache.hadoop.yarn.util.SystemClock; -import org.junit.Test; -import static org.junit.Assert.assertTrue; +import org.junit.jupiter.api.Test; +import static org.junit.jupiter.api.Assertions.assertTrue; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; @@ -87,7 +87,7 @@ public class TestTaskAttemptFinishingMonitor { } taskAttemptFinishingMonitor.stop(); - assertTrue("Finishing attempt didn't time out.", eventHandler.timedOut); + assertTrue(eventHandler.timedOut, "Finishing attempt didn't time out."); } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java index b5a7694e4cc..f57ac802fe5 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java @@ -19,19 +19,18 @@ package org.apache.hadoop.mapred; import java.io.IOException; import java.util.ArrayList; -import java.util.Arrays; import java.util.List; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.atomic.AtomicReference; import java.util.function.Supplier; -import org.junit.After; -import org.junit.Test; -import org.junit.runner.RunWith; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; +import org.junit.jupiter.api.extension.ExtendWith; import org.mockito.ArgumentCaptor; import org.mockito.Captor; import org.mockito.Mock; -import org.mockito.junit.MockitoJUnitRunner; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; @@ -67,14 +66,15 @@ import org.apache.hadoop.yarn.factories.RecordFactory; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; import org.apache.hadoop.yarn.util.ControlledClock; import org.apache.hadoop.yarn.util.SystemClock; +import org.mockito.junit.jupiter.MockitoExtension; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import static org.mockito.Mockito.any; import static org.mockito.Mockito.doReturn; import static org.mockito.Mockito.eq; @@ -87,7 +87,7 @@ import static org.mockito.Mockito.when; /** * Tests the behavior of TaskAttemptListenerImpl. 
*/ -@RunWith(MockitoJUnitRunner.class) +@ExtendWith(MockitoExtension.class) public class TestTaskAttemptListenerImpl { private static final String ATTEMPT1_ID = "attempt_123456789012_0001_m_000001_0"; @@ -172,7 +172,7 @@ public class TestTaskAttemptListenerImpl { } } - @After + @AfterEach public void after() throws IOException { if (listener != null) { listener.close(); @@ -180,7 +180,8 @@ public class TestTaskAttemptListenerImpl { } } - @Test (timeout=5000) + @Test + @Timeout(5000) public void testGetTask() throws IOException { configureMocks(); startListener(false); @@ -189,12 +190,12 @@ public class TestTaskAttemptListenerImpl { //The JVM ID has not been registered yet so we should kill it. JvmContext context = new JvmContext(); - context.jvmId = id; + context.jvmId = id; JvmTask result = listener.getTask(context); assertNotNull(result); assertTrue(result.shouldDie); - // Verify ask after registration but before launch. + // Verify ask after registration but before launch. // Don't kill, should be null. //Now put a task with the ID listener.registerPendingTask(task, wid); @@ -238,7 +239,8 @@ public class TestTaskAttemptListenerImpl { } - @Test (timeout=5000) + @Test + @Timeout(5000) public void testJVMId() { JVMId jvmid = new JVMId("test", 1, true, 2); @@ -247,7 +249,8 @@ public class TestTaskAttemptListenerImpl { assertEquals(0, jvmid.compareTo(jvmid1)); } - @Test (timeout=10000) + @Test + @Timeout(10000) public void testGetMapCompletionEvents() throws IOException { TaskAttemptCompletionEvent[] empty = {}; TaskAttemptCompletionEvent[] taskEvents = { @@ -257,12 +260,6 @@ public class TestTaskAttemptListenerImpl { createTce(3, false, TaskAttemptCompletionEventStatus.FAILED) }; TaskAttemptCompletionEvent[] mapEvents = { taskEvents[0], taskEvents[2] }; Job mockJob = mock(Job.class); - when(mockJob.getTaskAttemptCompletionEvents(0, 100)) - .thenReturn(taskEvents); - when(mockJob.getTaskAttemptCompletionEvents(0, 2)) - .thenReturn(Arrays.copyOfRange(taskEvents, 0, 2)); - when(mockJob.getTaskAttemptCompletionEvents(2, 100)) - .thenReturn(Arrays.copyOfRange(taskEvents, 2, 4)); when(mockJob.getMapAttemptCompletionEvents(0, 100)).thenReturn( TypeConverter.fromYarn(mapEvents)); when(mockJob.getMapAttemptCompletionEvents(0, 2)).thenReturn( @@ -312,7 +309,8 @@ public class TestTaskAttemptListenerImpl { return tce; } - @Test (timeout=10000) + @Test + @Timeout(10000) public void testCommitWindow() throws IOException { SystemClock clock = SystemClock.getInstance(); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestYarnChild.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestYarnChild.java index 8ad62065fa1..daaabf3e863 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestYarnChild.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestYarnChild.java @@ -21,8 +21,8 @@ import java.io.IOException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.ClusterStorageCapacityExceededException; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import static org.mockito.Mockito.*; @@ -36,7 +36,7 @@ public class TestYarnChild { final static private String KILL_LIMIT_EXCEED_CONF_NAME = 
"mapreduce.job.dfs.storage.capacity.kill-limit-exceed"; - @Before + @BeforeEach public void setUp() throws Exception { task = mock(Task.class); umbilical = mock(TaskUmbilicalProtocol.class); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java index 08896b7b2cc..43d3dd89cb9 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java @@ -19,8 +19,8 @@ package org.apache.hadoop.mapreduce.jobhistory; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; @@ -40,7 +40,8 @@ import org.apache.hadoop.mapreduce.TaskType; import org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl; import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent; import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric; -import org.junit.Test; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; public class TestEvents { @@ -50,9 +51,9 @@ public class TestEvents { * * @throws Exception */ - @Test(timeout = 10000) + @Test + @Timeout(10000) public void testTaskAttemptFinishedEvent() throws Exception { - JobID jid = new JobID("001", 1); TaskID tid = new TaskID(jid, TaskType.REDUCE, 2); TaskAttemptID taskAttemptId = new TaskAttemptID(tid, 3); @@ -79,17 +80,18 @@ public class TestEvents { * @throws Exception */ - @Test(timeout = 10000) + @Test + @Timeout(10000) public void testJobPriorityChange() throws Exception { org.apache.hadoop.mapreduce.JobID jid = new JobID("001", 1); JobPriorityChangeEvent test = new JobPriorityChangeEvent(jid, JobPriority.LOW); assertThat(test.getJobId().toString()).isEqualTo(jid.toString()); assertThat(test.getPriority()).isEqualTo(JobPriority.LOW); - } - - @Test(timeout = 10000) + + @Test + @Timeout(10000) public void testJobQueueChange() throws Exception { org.apache.hadoop.mapreduce.JobID jid = new JobID("001", 1); JobQueueChangeEvent test = new JobQueueChangeEvent(jid, @@ -103,14 +105,14 @@ public class TestEvents { * * @throws Exception */ - @Test(timeout = 10000) + @Test + @Timeout(10000) public void testTaskUpdated() throws Exception { JobID jid = new JobID("001", 1); TaskID tid = new TaskID(jid, TaskType.REDUCE, 2); TaskUpdatedEvent test = new TaskUpdatedEvent(tid, 1234L); assertThat(test.getTaskId().toString()).isEqualTo(tid.toString()); assertThat(test.getFinishTime()).isEqualTo(1234L); - } /* @@ -118,9 +120,9 @@ public class TestEvents { * instance of HistoryEvent Different HistoryEvent should have a different * datum. 
*/ - @Test(timeout = 10000) + @Test + @Timeout(10000) public void testEvents() throws Exception { - EventReader reader = new EventReader(new DataInputStream( new ByteArrayInputStream(getEvents()))); HistoryEvent e = reader.getNextEvent(); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java index 8159bc2456c..ccaf3531034 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java @@ -19,9 +19,9 @@ package org.apache.hadoop.mapreduce.jobhistory; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; import static org.mockito.ArgumentMatchers.any; import static org.mockito.Mockito.doNothing; import static org.mockito.Mockito.doReturn; @@ -81,11 +81,12 @@ import org.apache.hadoop.yarn.event.DrainDispatcher; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import org.apache.hadoop.yarn.server.MiniYARNCluster; import org.apache.hadoop.yarn.server.timeline.TimelineStore; -import org.junit.After; -import org.junit.AfterClass; -import org.junit.Assert; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.mockito.Mockito; import com.fasterxml.jackson.databind.JsonNode; @@ -101,7 +102,7 @@ public class TestJobHistoryEventHandler { private static MiniDFSCluster dfsCluster = null; private static String coreSitePath; - @BeforeClass + @BeforeAll public static void setUpClass() throws Exception { coreSitePath = "." 
+ File.separator + "target" + File.separator + "test-classes" + File.separator + "core-site.xml"; @@ -109,17 +110,18 @@ public class TestJobHistoryEventHandler { dfsCluster = new MiniDFSCluster.Builder(conf).build(); } - @AfterClass + @AfterAll public static void cleanUpClass() throws Exception { dfsCluster.shutdown(); } - @After + @AfterEach public void cleanTest() throws Exception { new File(coreSitePath).delete(); } - @Test (timeout=50000) + @Test + @Timeout(50000) public void testFirstFlushOnCompletionEvent() throws Exception { TestParams t = new TestParams(); Configuration conf = new Configuration(); @@ -162,7 +164,8 @@ public class TestJobHistoryEventHandler { } } - @Test (timeout=50000) + @Test + @Timeout(50000) public void testMaxUnflushedCompletionEvents() throws Exception { TestParams t = new TestParams(); Configuration conf = new Configuration(); @@ -207,7 +210,8 @@ public class TestJobHistoryEventHandler { } } - @Test (timeout=50000) + @Test + @Timeout(50000) public void testUnflushedTimer() throws Exception { TestParams t = new TestParams(); Configuration conf = new Configuration(); @@ -232,25 +236,26 @@ public class TestJobHistoryEventHandler { mockWriter = jheh.getEventWriter(); verify(mockWriter).write(any(HistoryEvent.class)); - for (int i = 0 ; i < 100 ; i++) { + for (int i = 0; i < 100; i++) { queueEvent(jheh, new JobHistoryEvent(t.jobId, new TaskFinishedEvent( t.taskID, t.taskAttemptID, 0, TaskType.MAP, "", null, 0))); } handleNextNEvents(jheh, 9); - Assert.assertTrue(jheh.getFlushTimerStatus()); + Assertions.assertTrue(jheh.getFlushTimerStatus()); verify(mockWriter, times(0)).flush(); Thread.sleep(2 * 4 * 1000l); // 4 seconds should be enough. Just be safe. verify(mockWriter).flush(); - Assert.assertFalse(jheh.getFlushTimerStatus()); + Assertions.assertFalse(jheh.getFlushTimerStatus()); } finally { jheh.stop(); verify(mockWriter).close(); } } - @Test (timeout=50000) + @Test + @Timeout(50000) public void testBatchedFlushJobEndMultiplier() throws Exception { TestParams t = new TestParams(); Configuration conf = new Configuration(); @@ -295,7 +300,8 @@ public class TestJobHistoryEventHandler { } // In case of all types of events, process Done files if it's last AM retry - @Test (timeout=50000) + @Test + @Timeout(50000) public void testProcessDoneFilesOnLastAMRetry() throws Exception { TestParams t = new TestParams(true); Configuration conf = new Configuration(); @@ -309,12 +315,12 @@ public class TestJobHistoryEventHandler { try { jheh.start(); handleEvent(jheh, new JobHistoryEvent(t.jobId, new AMStartedEvent( - t.appAttemptId, 200, t.containerId, "nmhost", 3000, 4000, -1))); + t.appAttemptId, 200, t.containerId, "nmhost", 3000, 4000, -1))); verify(jheh, times(0)).processDoneFiles(any(JobId.class)); handleEvent(jheh, new JobHistoryEvent(t.jobId, - new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, - 0, 0, 0, 0, 0, 0, JobStateInternal.ERROR.toString()))); + new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, + 0, 0, 0, 0, 0, 0, JobStateInternal.ERROR.toString()))); verify(jheh, times(1)).processDoneFiles(any(JobId.class)); handleEvent(jheh, new JobHistoryEvent(t.jobId, new JobFinishedEvent( @@ -323,13 +329,13 @@ public class TestJobHistoryEventHandler { verify(jheh, times(2)).processDoneFiles(any(JobId.class)); handleEvent(jheh, new JobHistoryEvent(t.jobId, - new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, - 0, 0, 0, 0, 0, 0, JobStateInternal.FAILED.toString()))); + new 
JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, + 0, 0, 0, 0, 0, 0, JobStateInternal.FAILED.toString()))); verify(jheh, times(3)).processDoneFiles(any(JobId.class)); handleEvent(jheh, new JobHistoryEvent(t.jobId, - new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, - 0, 0, 0, 0, 0, 0, JobStateInternal.KILLED.toString()))); + new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, + 0, 0, 0, 0, 0, 0, JobStateInternal.KILLED.toString()))); verify(jheh, times(4)).processDoneFiles(any(JobId.class)); mockWriter = jheh.getEventWriter(); @@ -341,7 +347,8 @@ public class TestJobHistoryEventHandler { } // Skip processing Done files in case of ERROR, if it's not last AM retry - @Test (timeout=50000) + @Test + @Timeout(50000) public void testProcessDoneFilesNotLastAMRetry() throws Exception { TestParams t = new TestParams(false); Configuration conf = new Configuration(); @@ -354,13 +361,13 @@ public class TestJobHistoryEventHandler { try { jheh.start(); handleEvent(jheh, new JobHistoryEvent(t.jobId, new AMStartedEvent( - t.appAttemptId, 200, t.containerId, "nmhost", 3000, 4000, -1))); + t.appAttemptId, 200, t.containerId, "nmhost", 3000, 4000, -1))); verify(jheh, times(0)).processDoneFiles(t.jobId); // skip processing done files handleEvent(jheh, new JobHistoryEvent(t.jobId, - new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, - 0, 0, 0, 0, 0, 0, JobStateInternal.ERROR.toString()))); + new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, + 0, 0, 0, 0, 0, 0, JobStateInternal.ERROR.toString()))); verify(jheh, times(0)).processDoneFiles(t.jobId); handleEvent(jheh, new JobHistoryEvent(t.jobId, new JobFinishedEvent( @@ -369,13 +376,13 @@ public class TestJobHistoryEventHandler { verify(jheh, times(1)).processDoneFiles(t.jobId); handleEvent(jheh, new JobHistoryEvent(t.jobId, - new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, - 0, 0, 0, 0, 0, 0, JobStateInternal.FAILED.toString()))); + new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, + 0, 0, 0, 0, 0, 0, JobStateInternal.FAILED.toString()))); verify(jheh, times(2)).processDoneFiles(t.jobId); handleEvent(jheh, new JobHistoryEvent(t.jobId, - new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, - 0, 0, 0, 0, 0, 0, JobStateInternal.KILLED.toString()))); + new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, + 0, 0, 0, 0, 0, 0, JobStateInternal.KILLED.toString()))); verify(jheh, times(3)).processDoneFiles(t.jobId); mockWriter = jheh.getEventWriter(); @@ -421,16 +428,15 @@ public class TestJobHistoryEventHandler { // load the job_conf.xml in JHS directory and verify property redaction. 
Path jhsJobConfFile = getJobConfInIntermediateDoneDir(conf, params.jobId); - Assert.assertTrue("The job_conf.xml file is not in the JHS directory", - FileContext.getFileContext(conf).util().exists(jhsJobConfFile)); + Assertions.assertTrue(FileContext.getFileContext(conf).util().exists(jhsJobConfFile), + "The job_conf.xml file is not in the JHS directory"); Configuration jhsJobConf = new Configuration(); try (InputStream input = FileSystem.get(conf).open(jhsJobConfFile)) { jhsJobConf.addResource(input); - Assert.assertEquals( - sensitivePropertyName + " is not redacted in HDFS.", - MRJobConfUtil.REDACTION_REPLACEMENT_VAL, - jhsJobConf.get(sensitivePropertyName)); + Assertions.assertEquals(MRJobConfUtil.REDACTION_REPLACEMENT_VAL, + jhsJobConf.get(sensitivePropertyName), + sensitivePropertyName + " is not redacted in HDFS."); } } finally { jheh.stop(); @@ -456,19 +462,20 @@ public class TestJobHistoryEventHandler { fs.delete(new Path(intermDoneDirPrefix), true); } - @Test (timeout=50000) + @Test + @Timeout(50000) public void testDefaultFsIsUsedForHistory() throws Exception { // Create default configuration pointing to the minicluster Configuration conf = new Configuration(); conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, - dfsCluster.getURI().toString()); + dfsCluster.getURI().toString()); FileOutputStream os = new FileOutputStream(coreSitePath); conf.writeXml(os); os.close(); // simulate execution under a non-default namenode conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, - "file:///"); + "file:///"); TestParams t = new TestParams(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, t.dfsWorkDir); @@ -490,11 +497,11 @@ public class TestJobHistoryEventHandler { // If we got here then event handler worked but we don't know with which // file system. 
Now we check that history stuff was written to minicluster FileSystem dfsFileSystem = dfsCluster.getFileSystem(); - assertTrue("Minicluster contains some history files", - dfsFileSystem.globStatus(new Path(t.dfsWorkDir + "/*")).length != 0); + assertTrue(dfsFileSystem.globStatus(new Path(t.dfsWorkDir + "/*")).length != 0, + "Minicluster contains some history files"); FileSystem localFileSystem = LocalFileSystem.get(conf); - assertFalse("No history directory on non-default file system", - localFileSystem.exists(new Path(t.dfsWorkDir))); + assertFalse(localFileSystem.exists(new Path(t.dfsWorkDir)), + "No history directory on non-default file system"); } finally { jheh.stop(); purgeHdfsHistoryIntermediateDoneDirectory(conf); @@ -509,7 +516,7 @@ public class TestJobHistoryEventHandler { "/mapred/history/done_intermediate"); conf.set(MRJobConfig.USER_NAME, System.getProperty("user.name")); String pathStr = JobHistoryUtils.getHistoryIntermediateDoneDirForUser(conf); - Assert.assertEquals("/mapred/history/done_intermediate/" + + Assertions.assertEquals("/mapred/history/done_intermediate/" + System.getProperty("user.name"), pathStr); // Test fully qualified path @@ -523,13 +530,14 @@ public class TestJobHistoryEventHandler { conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, "file:///"); pathStr = JobHistoryUtils.getHistoryIntermediateDoneDirForUser(conf); - Assert.assertEquals(dfsCluster.getURI().toString() + + Assertions.assertEquals(dfsCluster.getURI().toString() + "/mapred/history/done_intermediate/" + System.getProperty("user.name"), pathStr); } // test AMStartedEvent for submitTime and startTime - @Test (timeout=50000) + @Test + @Timeout(50000) public void testAMStartedEvent() throws Exception { TestParams t = new TestParams(); Configuration conf = new Configuration(); @@ -571,7 +579,8 @@ public class TestJobHistoryEventHandler { // Have JobHistoryEventHandler handle some events and make sure they get // stored to the Timeline store - @Test (timeout=50000) + @Test + @Timeout(50000) public void testTimelineEventHandling() throws Exception { TestParams t = new TestParams(RunningAppContext.class, false); Configuration conf = new YarnConfiguration(); @@ -598,13 +607,13 @@ public class TestJobHistoryEventHandler { jheh.getDispatcher().await(); TimelineEntities entities = ts.getEntities("MAPREDUCE_JOB", null, null, null, null, null, null, null, null, null); - Assert.assertEquals(1, entities.getEntities().size()); + Assertions.assertEquals(1, entities.getEntities().size()); TimelineEntity tEntity = entities.getEntities().get(0); - Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId()); - Assert.assertEquals(1, tEntity.getEvents().size()); - Assert.assertEquals(EventType.AM_STARTED.toString(), + Assertions.assertEquals(t.jobId.toString(), tEntity.getEntityId()); + Assertions.assertEquals(1, tEntity.getEvents().size()); + Assertions.assertEquals(EventType.AM_STARTED.toString(), tEntity.getEvents().get(0).getEventType()); - Assert.assertEquals(currentTime - 10, + Assertions.assertEquals(currentTime - 10, tEntity.getEvents().get(0).getTimestamp()); handleEvent(jheh, new JobHistoryEvent(t.jobId, @@ -615,17 +624,17 @@ public class TestJobHistoryEventHandler { jheh.getDispatcher().await(); entities = ts.getEntities("MAPREDUCE_JOB", null, null, null, null, null, null, null, null, null); - Assert.assertEquals(1, entities.getEntities().size()); + Assertions.assertEquals(1, entities.getEntities().size()); tEntity = entities.getEntities().get(0); - Assert.assertEquals(t.jobId.toString(), 
tEntity.getEntityId()); - Assert.assertEquals(2, tEntity.getEvents().size()); - Assert.assertEquals(EventType.JOB_SUBMITTED.toString(), + Assertions.assertEquals(t.jobId.toString(), tEntity.getEntityId()); + Assertions.assertEquals(2, tEntity.getEvents().size()); + Assertions.assertEquals(EventType.JOB_SUBMITTED.toString(), tEntity.getEvents().get(0).getEventType()); - Assert.assertEquals(EventType.AM_STARTED.toString(), + Assertions.assertEquals(EventType.AM_STARTED.toString(), tEntity.getEvents().get(1).getEventType()); - Assert.assertEquals(currentTime + 10, + Assertions.assertEquals(currentTime + 10, tEntity.getEvents().get(0).getTimestamp()); - Assert.assertEquals(currentTime - 10, + Assertions.assertEquals(currentTime - 10, tEntity.getEvents().get(1).getTimestamp()); handleEvent(jheh, new JobHistoryEvent(t.jobId, @@ -634,80 +643,80 @@ public class TestJobHistoryEventHandler { jheh.getDispatcher().await(); entities = ts.getEntities("MAPREDUCE_JOB", null, null, null, null, null, null, null, null, null); - Assert.assertEquals(1, entities.getEntities().size()); + Assertions.assertEquals(1, entities.getEntities().size()); tEntity = entities.getEntities().get(0); - Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId()); - Assert.assertEquals(3, tEntity.getEvents().size()); - Assert.assertEquals(EventType.JOB_SUBMITTED.toString(), + Assertions.assertEquals(t.jobId.toString(), tEntity.getEntityId()); + Assertions.assertEquals(3, tEntity.getEvents().size()); + Assertions.assertEquals(EventType.JOB_SUBMITTED.toString(), tEntity.getEvents().get(0).getEventType()); - Assert.assertEquals(EventType.AM_STARTED.toString(), + Assertions.assertEquals(EventType.AM_STARTED.toString(), tEntity.getEvents().get(1).getEventType()); - Assert.assertEquals(EventType.JOB_QUEUE_CHANGED.toString(), + Assertions.assertEquals(EventType.JOB_QUEUE_CHANGED.toString(), tEntity.getEvents().get(2).getEventType()); - Assert.assertEquals(currentTime + 10, + Assertions.assertEquals(currentTime + 10, tEntity.getEvents().get(0).getTimestamp()); - Assert.assertEquals(currentTime - 10, + Assertions.assertEquals(currentTime - 10, tEntity.getEvents().get(1).getTimestamp()); - Assert.assertEquals(currentTime - 20, + Assertions.assertEquals(currentTime - 20, tEntity.getEvents().get(2).getTimestamp()); handleEvent(jheh, new JobHistoryEvent(t.jobId, - new JobFinishedEvent(TypeConverter.fromYarn(t.jobId), 0, 0, 0, 0, + new JobFinishedEvent(TypeConverter.fromYarn(t.jobId), 0, 0, 0, 0, 0, 0, 0, new Counters(), new Counters(), new Counters()), currentTime)); jheh.getDispatcher().await(); entities = ts.getEntities("MAPREDUCE_JOB", null, null, null, - null, null, null, null, null, null); - Assert.assertEquals(1, entities.getEntities().size()); + null, null, null, null, null, null); + Assertions.assertEquals(1, entities.getEntities().size()); tEntity = entities.getEntities().get(0); - Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId()); - Assert.assertEquals(4, tEntity.getEvents().size()); - Assert.assertEquals(EventType.JOB_SUBMITTED.toString(), + Assertions.assertEquals(t.jobId.toString(), tEntity.getEntityId()); + Assertions.assertEquals(4, tEntity.getEvents().size()); + Assertions.assertEquals(EventType.JOB_SUBMITTED.toString(), tEntity.getEvents().get(0).getEventType()); - Assert.assertEquals(EventType.JOB_FINISHED.toString(), + Assertions.assertEquals(EventType.JOB_FINISHED.toString(), tEntity.getEvents().get(1).getEventType()); - Assert.assertEquals(EventType.AM_STARTED.toString(), + 
Assertions.assertEquals(EventType.AM_STARTED.toString(), tEntity.getEvents().get(2).getEventType()); - Assert.assertEquals(EventType.JOB_QUEUE_CHANGED.toString(), + Assertions.assertEquals(EventType.JOB_QUEUE_CHANGED.toString(), tEntity.getEvents().get(3).getEventType()); - Assert.assertEquals(currentTime + 10, + Assertions.assertEquals(currentTime + 10, tEntity.getEvents().get(0).getTimestamp()); - Assert.assertEquals(currentTime, + Assertions.assertEquals(currentTime, tEntity.getEvents().get(1).getTimestamp()); - Assert.assertEquals(currentTime - 10, + Assertions.assertEquals(currentTime - 10, tEntity.getEvents().get(2).getTimestamp()); - Assert.assertEquals(currentTime - 20, + Assertions.assertEquals(currentTime - 20, tEntity.getEvents().get(3).getTimestamp()); handleEvent(jheh, new JobHistoryEvent(t.jobId, new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId), 0, 0, 0, 0, 0, 0, 0, JobStateInternal.KILLED.toString()), - currentTime + 20)); + currentTime + 20)); jheh.getDispatcher().await(); entities = ts.getEntities("MAPREDUCE_JOB", null, null, null, null, null, null, null, null, null); - Assert.assertEquals(1, entities.getEntities().size()); + Assertions.assertEquals(1, entities.getEntities().size()); tEntity = entities.getEntities().get(0); - Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId()); - Assert.assertEquals(5, tEntity.getEvents().size()); - Assert.assertEquals(EventType.JOB_KILLED.toString(), + Assertions.assertEquals(t.jobId.toString(), tEntity.getEntityId()); + Assertions.assertEquals(5, tEntity.getEvents().size()); + Assertions.assertEquals(EventType.JOB_KILLED.toString(), tEntity.getEvents().get(0).getEventType()); - Assert.assertEquals(EventType.JOB_SUBMITTED.toString(), + Assertions.assertEquals(EventType.JOB_SUBMITTED.toString(), tEntity.getEvents().get(1).getEventType()); - Assert.assertEquals(EventType.JOB_FINISHED.toString(), + Assertions.assertEquals(EventType.JOB_FINISHED.toString(), tEntity.getEvents().get(2).getEventType()); - Assert.assertEquals(EventType.AM_STARTED.toString(), + Assertions.assertEquals(EventType.AM_STARTED.toString(), tEntity.getEvents().get(3).getEventType()); - Assert.assertEquals(EventType.JOB_QUEUE_CHANGED.toString(), + Assertions.assertEquals(EventType.JOB_QUEUE_CHANGED.toString(), tEntity.getEvents().get(4).getEventType()); - Assert.assertEquals(currentTime + 20, + Assertions.assertEquals(currentTime + 20, tEntity.getEvents().get(0).getTimestamp()); - Assert.assertEquals(currentTime + 10, + Assertions.assertEquals(currentTime + 10, tEntity.getEvents().get(1).getTimestamp()); - Assert.assertEquals(currentTime, + Assertions.assertEquals(currentTime, tEntity.getEvents().get(2).getTimestamp()); - Assert.assertEquals(currentTime - 10, + Assertions.assertEquals(currentTime - 10, tEntity.getEvents().get(3).getTimestamp()); - Assert.assertEquals(currentTime - 20, + Assertions.assertEquals(currentTime - 20, tEntity.getEvents().get(4).getTimestamp()); handleEvent(jheh, new JobHistoryEvent(t.jobId, @@ -715,13 +724,13 @@ public class TestJobHistoryEventHandler { jheh.getDispatcher().await(); entities = ts.getEntities("MAPREDUCE_TASK", null, null, null, null, null, null, null, null, null); - Assert.assertEquals(1, entities.getEntities().size()); + Assertions.assertEquals(1, entities.getEntities().size()); tEntity = entities.getEntities().get(0); - Assert.assertEquals(t.taskID.toString(), tEntity.getEntityId()); - Assert.assertEquals(1, tEntity.getEvents().size()); - Assert.assertEquals(EventType.TASK_STARTED.toString(), + 
Assertions.assertEquals(t.taskID.toString(), tEntity.getEntityId()); + Assertions.assertEquals(1, tEntity.getEvents().size()); + Assertions.assertEquals(EventType.TASK_STARTED.toString(), tEntity.getEvents().get(0).getEventType()); - Assert.assertEquals(TaskType.MAP.toString(), + Assertions.assertEquals(TaskType.MAP.toString(), tEntity.getEvents().get(0).getEventInfo().get("TASK_TYPE")); handleEvent(jheh, new JobHistoryEvent(t.jobId, @@ -729,30 +738,31 @@ public class TestJobHistoryEventHandler { jheh.getDispatcher().await(); entities = ts.getEntities("MAPREDUCE_TASK", null, null, null, null, null, null, null, null, null); - Assert.assertEquals(1, entities.getEntities().size()); + Assertions.assertEquals(1, entities.getEntities().size()); tEntity = entities.getEntities().get(0); - Assert.assertEquals(t.taskID.toString(), tEntity.getEntityId()); - Assert.assertEquals(2, tEntity.getEvents().size()); - Assert.assertEquals(EventType.TASK_STARTED.toString(), + Assertions.assertEquals(t.taskID.toString(), tEntity.getEntityId()); + Assertions.assertEquals(2, tEntity.getEvents().size()); + Assertions.assertEquals(EventType.TASK_STARTED.toString(), tEntity.getEvents().get(1).getEventType()); - Assert.assertEquals(TaskType.REDUCE.toString(), + Assertions.assertEquals(TaskType.REDUCE.toString(), tEntity.getEvents().get(0).getEventInfo().get("TASK_TYPE")); - Assert.assertEquals(TaskType.MAP.toString(), + Assertions.assertEquals(TaskType.MAP.toString(), tEntity.getEvents().get(1).getEventInfo().get("TASK_TYPE")); } } - @Test (timeout=50000) + @Test + @Timeout(50000) public void testCountersToJSON() throws Exception { JobHistoryEventHandler jheh = new JobHistoryEventHandler(null, 0); Counters counters = new Counters(); CounterGroup group1 = counters.addGroup("DOCTORS", - "Incarnations of the Doctor"); + "Incarnations of the Doctor"); group1.addCounter("PETER_CAPALDI", "Peter Capaldi", 12); group1.addCounter("MATT_SMITH", "Matt Smith", 11); group1.addCounter("DAVID_TENNANT", "David Tennant", 10); CounterGroup group2 = counters.addGroup("COMPANIONS", - "Companions of the Doctor"); + "Companions of the Doctor"); group2.addCounter("CLARA_OSWALD", "Clara Oswald", 6); group2.addCounter("RORY_WILLIAMS", "Rory Williams", 5); group2.addCounter("AMY_POND", "Amy Pond", 4); @@ -775,30 +785,31 @@ public class TestJobHistoryEventHandler { + "{\"NAME\":\"MATT_SMITH\",\"DISPLAY_NAME\":\"Matt Smith\",\"VALUE\":" + "11},{\"NAME\":\"PETER_CAPALDI\",\"DISPLAY_NAME\":\"Peter Capaldi\"," + "\"VALUE\":12}]}]"; - Assert.assertEquals(expected, jsonStr); + Assertions.assertEquals(expected, jsonStr); } - @Test (timeout=50000) + @Test + @Timeout(50000) public void testCountersToJSONEmpty() throws Exception { JobHistoryEventHandler jheh = new JobHistoryEventHandler(null, 0); Counters counters = null; JsonNode jsonNode = JobHistoryEventUtils.countersToJSON(counters); String jsonStr = new ObjectMapper().writeValueAsString(jsonNode); String expected = "[]"; - Assert.assertEquals(expected, jsonStr); + Assertions.assertEquals(expected, jsonStr); counters = new Counters(); jsonNode = JobHistoryEventUtils.countersToJSON(counters); jsonStr = new ObjectMapper().writeValueAsString(jsonNode); expected = "[]"; - Assert.assertEquals(expected, jsonStr); + Assertions.assertEquals(expected, jsonStr); counters.addGroup("DOCTORS", "Incarnations of the Doctor"); jsonNode = JobHistoryEventUtils.countersToJSON(counters); jsonStr = new ObjectMapper().writeValueAsString(jsonNode); expected = "[{\"NAME\":\"DOCTORS\",\"DISPLAY_NAME\":\"Incarnations of the 
" + "Doctor\",\"COUNTERS\":[]}]"; - Assert.assertEquals(expected, jsonStr); + Assertions.assertEquals(expected, jsonStr); } private void queueEvent(JHEvenHandlerForTest jheh, JobHistoryEvent event) { @@ -912,8 +923,9 @@ public class TestJobHistoryEventHandler { } jheh.stop(); //Make sure events were handled - assertTrue("handleEvent should've been called only 4 times but was " - + jheh.eventsHandled, jheh.eventsHandled == 4); + assertTrue(jheh.eventsHandled == 4, + "handleEvent should've been called only 4 times but was " + + jheh.eventsHandled); //Create a new jheh because the last stop closed the eventWriter etc. jheh = new JHEventHandlerForSigtermTest(mockedContext, 0); @@ -934,14 +946,15 @@ public class TestJobHistoryEventHandler { } jheh.stop(); //Make sure events were handled, 4 + 1 finish event - assertTrue("handleEvent should've been called only 5 times but was " - + jheh.eventsHandled, jheh.eventsHandled == 5); - assertTrue("Last event handled wasn't JobUnsuccessfulCompletionEvent", - jheh.lastEventHandled.getHistoryEvent() - instanceof JobUnsuccessfulCompletionEvent); + assertTrue(jheh.eventsHandled == 5, "handleEvent should've been called only 5 times but was " + + jheh.eventsHandled); + assertTrue(jheh.lastEventHandled.getHistoryEvent() + instanceof JobUnsuccessfulCompletionEvent, + "Last event handled wasn't JobUnsuccessfulCompletionEvent"); } - @Test (timeout=50000) + @Test + @Timeout(50000) public void testSetTrackingURLAfterHistoryIsWritten() throws Exception { TestParams t = new TestParams(true); Configuration conf = new Configuration(); @@ -972,7 +985,8 @@ public class TestJobHistoryEventHandler { } } - @Test (timeout=50000) + @Test + @Timeout(50000) public void testDontSetTrackingURLIfHistoryWriteFailed() throws Exception { TestParams t = new TestParams(true); Configuration conf = new Configuration(); @@ -1003,7 +1017,8 @@ public class TestJobHistoryEventHandler { jheh.stop(); } } - @Test (timeout=50000) + @Test + @Timeout(50000) public void testDontSetTrackingURLIfHistoryWriteThrows() throws Exception { TestParams t = new TestParams(true); Configuration conf = new Configuration(); @@ -1039,7 +1054,8 @@ public class TestJobHistoryEventHandler { } } - @Test(timeout = 50000) + @Test + @Timeout(50000) public void testJobHistoryFilePermissions() throws Exception { TestParams t = new TestParams(true); Configuration conf = new Configuration(); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobSummary.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobSummary.java index b81f716ebc7..41835d4f3b7 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobSummary.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobSummary.java @@ -19,9 +19,9 @@ package org.apache.hadoop.mapreduce.jobhistory; import org.apache.hadoop.mapreduce.v2.api.records.JobId; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -34,7 +34,7 @@ public class TestJobSummary { LoggerFactory.getLogger(TestJobSummary.class); private JobSummary 
summary = new JobSummary(); - @Before + @BeforeEach public void before() { JobId mockJobId = mock(JobId.class); when(mockJobId.toString()).thenReturn("testJobId"); @@ -64,8 +64,8 @@ public class TestJobSummary { summary.setJobName("aa\rbb\ncc\r\ndd"); String out = summary.getJobSummaryString(); LOG.info("summary: " + out); - Assert.assertFalse(out.contains("\r")); - Assert.assertFalse(out.contains("\n")); - Assert.assertTrue(out.contains("aa\\rbb\\ncc\\r\\ndd")); + Assertions.assertFalse(out.contains("\r")); + Assertions.assertFalse(out.contains("\n")); + Assertions.assertTrue(out.contains("aa\\rbb\\ncc\\r\\ndd")); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/api/records/TestTaskAttemptReport.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/api/records/TestTaskAttemptReport.java index c8d81aea99b..4d4be84a74b 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/api/records/TestTaskAttemptReport.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/api/records/TestTaskAttemptReport.java @@ -24,12 +24,12 @@ import org.apache.hadoop.mapreduce.v2.app.MockJobs; import org.apache.hadoop.mapreduce.v2.proto.MRProtos; import org.apache.hadoop.yarn.util.Records; -import org.junit.Test; +import org.junit.jupiter.api.Test; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotEquals; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestTaskAttemptReport { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/api/records/TestTaskReport.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/api/records/TestTaskReport.java index a9b34eea7cf..bc25ac4e9cd 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/api/records/TestTaskReport.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/api/records/TestTaskReport.java @@ -24,12 +24,12 @@ import org.apache.hadoop.mapreduce.v2.app.MockJobs; import org.apache.hadoop.mapreduce.v2.proto.MRProtos; import org.apache.hadoop.yarn.util.Records; -import org.junit.Test; +import org.junit.jupiter.api.Test; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotEquals; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestTaskReport { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java index 4be80c44a3e..39cf27ae441 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java @@ -98,7 +98,7 @@ import org.apache.hadoop.yarn.state.StateMachine; import org.apache.hadoop.yarn.state.StateMachineFactory; import org.apache.hadoop.yarn.util.Clock; import org.apache.hadoop.yarn.util.SystemClock; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -326,8 +326,8 @@ public class MRApp extends MRAppMaster { iState = job.getInternalState(); } LOG.info("Job {} Internal State is : {}", job.getID(), iState); - Assert.assertEquals("Task Internal state is not correct (timedout)", - finalState, iState); + Assertions.assertEquals(finalState, iState, + "Task Internal state is not correct (timedout)"); } public void waitForInternalState(TaskImpl task, @@ -339,8 +339,8 @@ public class MRApp extends MRAppMaster { iState = task.getInternalState(); } LOG.info("Task {} Internal State is : {}", task.getID(), iState); - Assert.assertEquals("Task Internal state is not correct (timedout)", - finalState, iState); + Assertions.assertEquals(finalState, iState, + "Task Internal state is not correct (timedout)"); } public void waitForInternalState(TaskAttemptImpl attempt, @@ -352,8 +352,8 @@ public class MRApp extends MRAppMaster { iState = attempt.getInternalState(); } LOG.info("TaskAttempt {} Internal State is : {}", attempt.getID(), iState); - Assert.assertEquals("TaskAttempt Internal state is not correct (timedout)", - finalState, iState); + Assertions.assertEquals(finalState, iState, + "TaskAttempt Internal state is not correct (timedout)"); } public void waitForState(TaskAttempt attempt, @@ -367,9 +367,8 @@ public class MRApp extends MRAppMaster { } LOG.info("TaskAttempt {} State is : {}", attempt.getID(), report.getTaskAttemptState()); - Assert.assertEquals("TaskAttempt state is not correct (timedout)", - finalState, - report.getTaskAttemptState()); + Assertions.assertEquals(finalState, report.getTaskAttemptState(), + "TaskAttempt state is not correct (timedout)"); } public void waitForState(Task task, TaskState finalState) throws Exception { @@ -381,8 +380,8 @@ public class MRApp extends MRAppMaster { report = task.getReport(); } LOG.info("Task {} State is : {}", task.getID(), report.getTaskState()); - Assert.assertEquals("Task state is not correct (timedout)", finalState, - report.getTaskState()); + Assertions.assertEquals(finalState, report.getTaskState(), + "Task state is not correct (timedout)"); } public void waitForState(Job job, JobState finalState) throws Exception { @@ -394,14 +393,14 @@ public class MRApp extends MRAppMaster { Thread.sleep(WAIT_FOR_STATE_INTERVAL); } LOG.info("Job {} State is : {}", job.getID(), report.getJobState()); - Assert.assertEquals("Job state is not correct (timedout)", finalState, - job.getState()); + Assertions.assertEquals(finalState, job.getState(), + "Job state is not correct (timedout)"); } public void waitForState(Service.STATE finalState) throws Exception { if (finalState == Service.STATE.STOPPED) { - Assert.assertTrue("Timeout while waiting for MRApp to stop", - waitForServiceToStop(20 * 1000)); + 
Assertions.assertTrue(waitForServiceToStop(20 * 1000), + "Timeout while waiting for MRApp to stop"); } else { int timeoutSecs = 0; while (!finalState.equals(getServiceState()) @@ -409,8 +408,8 @@ public class MRApp extends MRAppMaster { Thread.sleep(WAIT_FOR_STATE_INTERVAL); } LOG.info("MRApp State is : {}", getServiceState()); - Assert.assertEquals("MRApp state is not correct (timedout)", finalState, - getServiceState()); + Assertions.assertEquals(finalState, getServiceState(), + "MRApp state is not correct (timedout)"); } } @@ -419,22 +418,23 @@ public class MRApp extends MRAppMaster { JobReport jobReport = job.getReport(); LOG.info("Job start time :{}", jobReport.getStartTime()); LOG.info("Job finish time :", jobReport.getFinishTime()); - Assert.assertTrue("Job start time is not less than finish time", - jobReport.getStartTime() <= jobReport.getFinishTime()); - Assert.assertTrue("Job finish time is in future", - jobReport.getFinishTime() <= System.currentTimeMillis()); + Assertions.assertTrue(jobReport.getStartTime() <= jobReport.getFinishTime(), + "Job start time is not less than finish time"); + Assertions.assertTrue(jobReport.getFinishTime() <= System.currentTimeMillis(), + "Job finish time is in future"); for (Task task : job.getTasks().values()) { TaskReport taskReport = task.getReport(); LOG.info("Task {} start time : {}", task.getID(), taskReport.getStartTime()); LOG.info("Task {} finish time : {}", task.getID(), taskReport.getFinishTime()); - Assert.assertTrue("Task start time is not less than finish time", - taskReport.getStartTime() <= taskReport.getFinishTime()); + Assertions.assertTrue(taskReport.getStartTime() <= taskReport.getFinishTime(), + "Task start time is not less than finish time"); for (TaskAttempt attempt : task.getAttempts().values()) { TaskAttemptReport attemptReport = attempt.getReport(); - Assert.assertTrue("Attempt start time is not less than finish time", - attemptReport.getStartTime() <= attemptReport.getFinishTime()); + Assertions.assertTrue(attemptReport.getStartTime() <= + attemptReport.getFinishTime(), + "Attempt start time is not less than finish time"); } } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java index efe150fad19..20e1a836f04 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java @@ -56,7 +56,8 @@ import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import org.apache.hadoop.yarn.factories.RecordFactory; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; import org.apache.hadoop.yarn.util.Records; -import org.junit.Test; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.slf4j.event.Level; public class MRAppBenchmark { @@ -196,7 +197,8 @@ public class MRAppBenchmark { } } - @Test(timeout = 60000) + @Test + @Timeout(60000) public void benchmark1() throws Exception { int maps = 100; // Adjust for benchmarking. Start with thousands. 
int reduces = 0; @@ -275,7 +277,8 @@ public class MRAppBenchmark { }); } - @Test(timeout = 60000) + @Test + @Timeout(60000) public void benchmark2() throws Exception { int maps = 100; // Adjust for benchmarking, start with a couple of thousands int reduces = 50; diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestAMInfos.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestAMInfos.java index 4b9015f10c5..085013b774a 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestAMInfos.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestAMInfos.java @@ -21,7 +21,7 @@ package org.apache.hadoop.mapreduce.v2.app; import java.util.Iterator; import java.util.List; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapreduce.MRJobConfig; @@ -33,7 +33,7 @@ import org.apache.hadoop.mapreduce.v2.app.TestRecovery.MRAppWithHistory; import org.apache.hadoop.mapreduce.v2.app.job.Job; import org.apache.hadoop.mapreduce.v2.app.job.Task; import org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt; -import org.junit.Test; +import org.junit.jupiter.api.Test; public class TestAMInfos { @@ -50,7 +50,7 @@ public class TestAMInfos { long am1StartTime = app.getAllAMInfos().get(0).getStartTime(); - Assert.assertEquals("No of tasks not correct", 1, job.getTasks().size()); + Assertions.assertEquals(1, job.getTasks().size(), "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask = it.next(); app.waitForState(mapTask, TaskState.RUNNING); @@ -71,14 +71,14 @@ public class TestAMInfos { conf.setBoolean(MRJobConfig.JOB_UBERTASK_ENABLE, false); job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", 1, job.getTasks().size()); + Assertions.assertEquals(1, job.getTasks().size(), "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask = it.next(); // There should be two AMInfos List amInfos = app.getAllAMInfos(); - Assert.assertEquals(2, amInfos.size()); + Assertions.assertEquals(2, amInfos.size()); AMInfo amInfoOne = amInfos.get(0); - Assert.assertEquals(am1StartTime, amInfoOne.getStartTime()); + Assertions.assertEquals(am1StartTime, amInfoOne.getStartTime()); app.stop(); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestCheckpointPreemptionPolicy.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestCheckpointPreemptionPolicy.java index 59778161f20..fbe8cb18248 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestCheckpointPreemptionPolicy.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestCheckpointPreemptionPolicy.java @@ -22,7 +22,7 @@ import org.apache.hadoop.yarn.api.records.PreemptionMessage; import org.apache.hadoop.yarn.api.records.Priority; import org.apache.hadoop.yarn.util.resource.Resources; -import static 
org.junit.Assert.*; +import static org.junit.jupiter.api.Assertions.*; import static org.mockito.Mockito.*; import java.util.ArrayList; @@ -58,8 +58,8 @@ import org.apache.hadoop.yarn.event.EventHandler; import org.apache.hadoop.yarn.factories.RecordFactory; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.Allocation; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; public class TestCheckpointPreemptionPolicy { @@ -77,7 +77,7 @@ public class TestCheckpointPreemptionPolicy { private int minAlloc = 1024; - @Before + @BeforeEach @SuppressWarnings("rawtypes") // mocked generics public void setup() { ApplicationId appId = ApplicationId.newInstance(200, 1); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java index 3b5cfe221ed..170e39f53eb 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java @@ -24,7 +24,7 @@ import java.util.Iterator; import java.util.Map; import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptFailEvent; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapred.TaskAttemptListenerImpl; @@ -48,7 +48,7 @@ import org.apache.hadoop.mapreduce.v2.app.rm.preemption.AMPreemptionPolicy; import org.apache.hadoop.net.NetUtils; import org.apache.hadoop.yarn.api.records.ContainerId; import org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.ContainerManagementProtocolProxyData; -import org.junit.Test; +import org.junit.jupiter.api.Test; /** * Tests the state machine with respect to Job/Task/TaskAttempt failure @@ -68,20 +68,20 @@ public class TestFail { Job job = app.submit(conf); app.waitForState(job, JobState.SUCCEEDED); Map tasks = job.getTasks(); - Assert.assertEquals("Num tasks is not correct", 1, tasks.size()); + Assertions.assertEquals(1, tasks.size(), "Num tasks is not correct"); Task task = tasks.values().iterator().next(); - Assert.assertEquals("Task state not correct", TaskState.SUCCEEDED, - task.getReport().getTaskState()); + Assertions.assertEquals(TaskState.SUCCEEDED, task.getReport().getTaskState(), + "Task state not correct"); Map attempts = tasks.values().iterator().next().getAttempts(); - Assert.assertEquals("Num attempts is not correct", 2, attempts.size()); + Assertions.assertEquals(2, attempts.size(), "Num attempts is not correct"); //one attempt must be failed //and another must have succeeded Iterator it = attempts.values().iterator(); - Assert.assertEquals("Attempt state not correct", TaskAttemptState.FAILED, - it.next().getReport().getTaskAttemptState()); - Assert.assertEquals("Attempt state not correct", TaskAttemptState.SUCCEEDED, - it.next().getReport().getTaskAttemptState()); + Assertions.assertEquals(TaskAttemptState.FAILED, + it.next().getReport().getTaskAttemptState(), "Attempt state not correct"); + Assertions.assertEquals(TaskAttemptState.SUCCEEDED, + it.next().getReport().getTaskAttemptState(), "Attempt 
state not correct"); } @Test @@ -159,17 +159,17 @@ public class TestFail { Job job = app.submit(conf); app.waitForState(job, JobState.FAILED); Map tasks = job.getTasks(); - Assert.assertEquals("Num tasks is not correct", 1, tasks.size()); + Assertions.assertEquals(1, tasks.size(), "Num tasks is not correct"); Task task = tasks.values().iterator().next(); - Assert.assertEquals("Task state not correct", TaskState.FAILED, - task.getReport().getTaskState()); + Assertions.assertEquals(TaskState.FAILED, + task.getReport().getTaskState(), "Task state not correct"); Map attempts = tasks.values().iterator().next().getAttempts(); - Assert.assertEquals("Num attempts is not correct", maxAttempts, - attempts.size()); + Assertions.assertEquals(maxAttempts, + attempts.size(), "Num attempts is not correct"); for (TaskAttempt attempt : attempts.values()) { - Assert.assertEquals("Attempt state not correct", TaskAttemptState.FAILED, - attempt.getReport().getTaskAttemptState()); + Assertions.assertEquals(TaskAttemptState.FAILED, + attempt.getReport().getTaskAttemptState(), "Attempt state not correct"); } } @@ -185,13 +185,14 @@ public class TestFail { Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); Map tasks = job.getTasks(); - Assert.assertEquals("Num tasks is not correct", 1, tasks.size()); + Assertions.assertEquals(1, tasks.size(), + "Num tasks is not correct"); Task task = tasks.values().iterator().next(); app.waitForState(task, TaskState.SCHEDULED); Map attempts = tasks.values().iterator() .next().getAttempts(); - Assert.assertEquals("Num attempts is not correct", maxAttempts, attempts - .size()); + Assertions.assertEquals(maxAttempts, attempts.size(), + "Num attempts is not correct"); TaskAttempt attempt = attempts.values().iterator().next(); app.waitForInternalState((TaskAttemptImpl) attempt, TaskAttemptStateInternal.ASSIGNED); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFetchFailure.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFetchFailure.java index d2bd0104fff..4fe2237bcf7 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFetchFailure.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFetchFailure.java @@ -19,7 +19,7 @@ package org.apache.hadoop.mapreduce.v2.app; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; import java.util.ArrayList; import java.util.Arrays; @@ -50,8 +50,8 @@ import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType; import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent; import org.apache.hadoop.test.GenericTestUtils; import org.apache.hadoop.yarn.event.EventHandler; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; public class TestFetchFailure { @@ -65,8 +65,8 @@ public class TestFetchFailure { Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("Num tasks not correct", - 2, job.getTasks().size()); + Assertions.assertEquals(2, job.getTasks().size(), + "Num tasks not correct"); Iterator it = 
job.getTasks().values().iterator(); Task mapTask = it.next(); Task reduceTask = it.next(); @@ -97,10 +97,10 @@ public class TestFetchFailure { TaskAttemptCompletionEvent[] events = job.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals("Num completion events not correct", - 1, events.length); - Assert.assertEquals("Event status not correct", - TaskAttemptCompletionEventStatus.SUCCEEDED, events[0].getStatus()); + Assertions.assertEquals(1, events.length, + "Num completion events not correct"); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.SUCCEEDED, + events[0].getStatus(), "Event status not correct"); // wait for reduce to start running app.waitForState(reduceTask, TaskState.RUNNING); @@ -117,11 +117,11 @@ public class TestFetchFailure { app.waitForState(mapTask, TaskState.RUNNING); //map attempt must have become FAILED - Assert.assertEquals("Map TaskAttempt state not correct", - TaskAttemptState.FAILED, mapAttempt1.getState()); + Assertions.assertEquals(TaskAttemptState.FAILED, mapAttempt1.getState(), + "Map TaskAttempt state not correct"); - Assert.assertEquals("Num attempts in Map Task not correct", - 2, mapTask.getAttempts().size()); + Assertions.assertEquals(2, mapTask.getAttempts().size(), + "Num attempts in Map Task not correct"); Iterator atIt = mapTask.getAttempts().values().iterator(); atIt.next(); @@ -144,39 +144,41 @@ public class TestFetchFailure { app.waitForState(job, JobState.SUCCEEDED); //previous completion event now becomes obsolete - Assert.assertEquals("Event status not correct", - TaskAttemptCompletionEventStatus.OBSOLETE, events[0].getStatus()); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.OBSOLETE, + events[0].getStatus(), "Event status not correct"); events = job.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals("Num completion events not correct", - 4, events.length); - Assert.assertEquals("Event map attempt id not correct", - mapAttempt1.getID(), events[0].getAttemptId()); - Assert.assertEquals("Event map attempt id not correct", - mapAttempt1.getID(), events[1].getAttemptId()); - Assert.assertEquals("Event map attempt id not correct", - mapAttempt2.getID(), events[2].getAttemptId()); - Assert.assertEquals("Event redude attempt id not correct", - reduceAttempt.getID(), events[3].getAttemptId()); - Assert.assertEquals("Event status not correct for map attempt1", - TaskAttemptCompletionEventStatus.OBSOLETE, events[0].getStatus()); - Assert.assertEquals("Event status not correct for map attempt1", - TaskAttemptCompletionEventStatus.FAILED, events[1].getStatus()); - Assert.assertEquals("Event status not correct for map attempt2", - TaskAttemptCompletionEventStatus.SUCCEEDED, events[2].getStatus()); - Assert.assertEquals("Event status not correct for reduce attempt1", - TaskAttemptCompletionEventStatus.SUCCEEDED, events[3].getStatus()); + Assertions.assertEquals(4, events.length, + "Num completion events not correct"); + Assertions.assertEquals(mapAttempt1.getID(), events[0].getAttemptId(), + "Event map attempt id not correct"); + Assertions.assertEquals(mapAttempt1.getID(), events[1].getAttemptId(), + "Event map attempt id not correct"); + Assertions.assertEquals(mapAttempt2.getID(), events[2].getAttemptId(), + "Event map attempt id not correct"); + Assertions.assertEquals(reduceAttempt.getID(), events[3].getAttemptId(), + "Event redude attempt id not correct"); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.OBSOLETE, + events[0].getStatus(), "Event status not correct for map attempt1"); + 
Assertions.assertEquals(TaskAttemptCompletionEventStatus.FAILED, + events[1].getStatus(), "Event status not correct for map attempt1"); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.SUCCEEDED, + events[2].getStatus(), "Event status not correct for map attempt2"); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.SUCCEEDED, + events[3].getStatus(), "Event status not correct for reduce attempt1"); TaskCompletionEvent mapEvents[] = job.getMapAttemptCompletionEvents(0, 2); TaskCompletionEvent convertedEvents[] = TypeConverter.fromYarn(events); - Assert.assertEquals("Incorrect number of map events", 2, mapEvents.length); - Assert.assertArrayEquals("Unexpected map events", - Arrays.copyOfRange(convertedEvents, 0, 2), mapEvents); + Assertions.assertEquals(2, mapEvents.length, + "Incorrect number of map events"); + Assertions.assertArrayEquals(Arrays.copyOfRange(convertedEvents, 0, 2), + mapEvents, "Unexpected map events"); mapEvents = job.getMapAttemptCompletionEvents(2, 200); - Assert.assertEquals("Incorrect number of map events", 1, mapEvents.length); - Assert.assertEquals("Unexpected map event", convertedEvents[2], - mapEvents[0]); + Assertions.assertEquals(1, mapEvents.length, + "Incorrect number of map events"); + Assertions.assertEquals(convertedEvents[2], mapEvents[0], + "Unexpected map event"); } /** @@ -197,8 +199,8 @@ public class TestFetchFailure { Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("Num tasks not correct", - 2, job.getTasks().size()); + Assertions.assertEquals(2, job.getTasks().size(), + "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask = it.next(); Task reduceTask = it.next(); @@ -218,10 +220,10 @@ public class TestFetchFailure { TaskAttemptCompletionEvent[] events = job.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals("Num completion events not correct", - 1, events.length); - Assert.assertEquals("Event status not correct", - TaskAttemptCompletionEventStatus.SUCCEEDED, events[0].getStatus()); + Assertions.assertEquals(1, events.length, + "Num completion events not correct"); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.SUCCEEDED, + events[0].getStatus(), "Event status not correct"); // wait for reduce to start running app.waitForState(reduceTask, TaskState.RUNNING); @@ -250,8 +252,8 @@ public class TestFetchFailure { job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("Num tasks not correct", - 2, job.getTasks().size()); + Assertions.assertEquals(2, job.getTasks().size(), + "Num tasks not correct"); it = job.getTasks().values().iterator(); mapTask = it.next(); reduceTask = it.next(); @@ -277,7 +279,8 @@ public class TestFetchFailure { app.waitForState(job, JobState.SUCCEEDED); events = job.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals("Num completion events not correct", 2, events.length); + Assertions.assertEquals(2, events.length, + "Num completion events not correct"); } @Test @@ -290,8 +293,8 @@ public class TestFetchFailure { Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("Num tasks not correct", - 4, job.getTasks().size()); + Assertions.assertEquals(4, job.getTasks().size(), + "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask = it.next(); Task reduceTask = it.next(); @@ -313,10 +316,10 @@ public class TestFetchFailure { 
TaskAttemptCompletionEvent[] events = job.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals("Num completion events not correct", - 1, events.length); - Assert.assertEquals("Event status not correct", - TaskAttemptCompletionEventStatus.SUCCEEDED, events[0].getStatus()); + Assertions.assertEquals(1, events.length, + "Num completion events not correct"); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.SUCCEEDED, events[0].getStatus(), + "Event status not correct"); // wait for reduce to start running app.waitForState(reduceTask, TaskState.RUNNING); @@ -354,16 +357,16 @@ public class TestFetchFailure { app.waitForState(mapTask, TaskState.RUNNING); //map attempt must have become FAILED - Assert.assertEquals("Map TaskAttempt state not correct", - TaskAttemptState.FAILED, mapAttempt1.getState()); + Assertions.assertEquals(TaskAttemptState.FAILED, mapAttempt1.getState(), + "Map TaskAttempt state not correct"); assertThat(mapAttempt1.getDiagnostics().get(0)) .isEqualTo("Too many fetch failures. Failing the attempt. " + "Last failure reported by " + reduceAttempt3.getID().toString() + " from host host3"); - Assert.assertEquals("Num attempts in Map Task not correct", - 2, mapTask.getAttempts().size()); + Assertions.assertEquals(2, mapTask.getAttempts().size(), + "Num attempts in Map Task not correct"); Iterator atIt = mapTask.getAttempts().values().iterator(); atIt.next(); @@ -396,39 +399,40 @@ public class TestFetchFailure { app.waitForState(job, JobState.SUCCEEDED); //previous completion event now becomes obsolete - Assert.assertEquals("Event status not correct", - TaskAttemptCompletionEventStatus.OBSOLETE, events[0].getStatus()); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.OBSOLETE, events[0].getStatus(), + "Event status not correct"); events = job.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals("Num completion events not correct", - 6, events.length); - Assert.assertEquals("Event map attempt id not correct", - mapAttempt1.getID(), events[0].getAttemptId()); - Assert.assertEquals("Event map attempt id not correct", - mapAttempt1.getID(), events[1].getAttemptId()); - Assert.assertEquals("Event map attempt id not correct", - mapAttempt2.getID(), events[2].getAttemptId()); - Assert.assertEquals("Event reduce attempt id not correct", - reduceAttempt.getID(), events[3].getAttemptId()); - Assert.assertEquals("Event status not correct for map attempt1", - TaskAttemptCompletionEventStatus.OBSOLETE, events[0].getStatus()); - Assert.assertEquals("Event status not correct for map attempt1", - TaskAttemptCompletionEventStatus.FAILED, events[1].getStatus()); - Assert.assertEquals("Event status not correct for map attempt2", - TaskAttemptCompletionEventStatus.SUCCEEDED, events[2].getStatus()); - Assert.assertEquals("Event status not correct for reduce attempt1", - TaskAttemptCompletionEventStatus.SUCCEEDED, events[3].getStatus()); + Assertions.assertEquals(6, events.length, + "Num completion events not correct"); + Assertions.assertEquals(mapAttempt1.getID(), events[0].getAttemptId(), + "Event map attempt id not correct"); + Assertions.assertEquals(mapAttempt1.getID(), events[1].getAttemptId(), + "Event map attempt id not correct"); + Assertions.assertEquals(mapAttempt2.getID(), events[2].getAttemptId(), + "Event map attempt id not correct"); + Assertions.assertEquals(reduceAttempt.getID(), events[3].getAttemptId(), + "Event reduce attempt id not correct"); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.OBSOLETE, + events[0].getStatus(), "Event 
status not correct for map attempt1"); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.FAILED, + events[1].getStatus(), "Event status not correct for map attempt1"); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.SUCCEEDED, + events[2].getStatus(), "Event status not correct for map attempt2"); + Assertions.assertEquals(TaskAttemptCompletionEventStatus.SUCCEEDED, + events[3].getStatus(), "Event status not correct for reduce attempt1"); TaskCompletionEvent mapEvents[] = job.getMapAttemptCompletionEvents(0, 2); TaskCompletionEvent convertedEvents[] = TypeConverter.fromYarn(events); - Assert.assertEquals("Incorrect number of map events", 2, mapEvents.length); - Assert.assertArrayEquals("Unexpected map events", - Arrays.copyOfRange(convertedEvents, 0, 2), mapEvents); + Assertions.assertEquals(2, mapEvents.length, + "Incorrect number of map events"); + Assertions.assertArrayEquals(Arrays.copyOfRange(convertedEvents, 0, 2), + mapEvents, "Unexpected map events"); mapEvents = job.getMapAttemptCompletionEvents(2, 200); - Assert.assertEquals("Incorrect number of map events", 1, mapEvents.length); - Assert.assertEquals("Unexpected map event", convertedEvents[2], - mapEvents[0]); + Assertions.assertEquals(1, mapEvents.length, "Incorrect number of map events"); + Assertions.assertEquals(convertedEvents[2], mapEvents[0], + "Unexpected map event"); } private void updateStatus(MRApp app, TaskAttempt attempt, Phase phase) { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java index 1cd625551a6..e7fe432d45b 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java @@ -59,8 +59,8 @@ import org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator; import org.apache.hadoop.mapreduce.v2.app.rm.RMHeartbeatHandler; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; /** * Tests job end notification @@ -74,18 +74,18 @@ public class TestJobEndNotifier extends JobEndNotifier { conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS, "0"); conf.set(MRJobConfig.MR_JOB_END_RETRY_ATTEMPTS, "10"); setConf(conf); - Assert.assertTrue("Expected numTries to be 0, but was " + numTries, - numTries == 0 ); + Assertions.assertTrue(numTries == 0, + "Expected numTries to be 0, but was " + numTries); conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS, "1"); setConf(conf); - Assert.assertTrue("Expected numTries to be 1, but was " + numTries, - numTries == 1 ); + Assertions.assertTrue(numTries == 1, + "Expected numTries to be 1, but was " + numTries); conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS, "20"); setConf(conf); - Assert.assertTrue("Expected numTries to be 11, but was " + numTries, - numTries == 11 ); //11 because number of _retries_ is 10 + Assertions.assertTrue(numTries == 11 , "Expected numTries to be 11, but was " + + numTries); //11 because number of _retries_ is 10 } //Test 
maximum retry interval is capped by @@ -94,53 +94,53 @@ public class TestJobEndNotifier extends JobEndNotifier { conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_MAX_RETRY_INTERVAL, "5000"); conf.set(MRJobConfig.MR_JOB_END_RETRY_INTERVAL, "1000"); setConf(conf); - Assert.assertTrue("Expected waitInterval to be 1000, but was " - + waitInterval, waitInterval == 1000); + Assertions.assertTrue(waitInterval == 1000, + "Expected waitInterval to be 1000, but was " + waitInterval); conf.set(MRJobConfig.MR_JOB_END_RETRY_INTERVAL, "10000"); setConf(conf); - Assert.assertTrue("Expected waitInterval to be 5000, but was " - + waitInterval, waitInterval == 5000); + Assertions.assertTrue(waitInterval == 5000, + "Expected waitInterval to be 5000, but was " + waitInterval); //Test negative numbers are set to default conf.set(MRJobConfig.MR_JOB_END_RETRY_INTERVAL, "-10"); setConf(conf); - Assert.assertTrue("Expected waitInterval to be 5000, but was " - + waitInterval, waitInterval == 5000); + Assertions.assertTrue(waitInterval == 5000, + "Expected waitInterval to be 5000, but was " + waitInterval); } private void testTimeout(Configuration conf) { conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_TIMEOUT, "1000"); setConf(conf); - Assert.assertTrue("Expected timeout to be 1000, but was " - + timeout, timeout == 1000); + Assertions.assertTrue(timeout == 1000, + "Expected timeout to be 1000, but was " + timeout); } private void testProxyConfiguration(Configuration conf) { conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_PROXY, "somehost"); setConf(conf); - Assert.assertTrue("Proxy shouldn't be set because port wasn't specified", - proxyToUse.type() == Proxy.Type.DIRECT); + Assertions.assertTrue(proxyToUse.type() == Proxy.Type.DIRECT, + "Proxy shouldn't be set because port wasn't specified"); conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_PROXY, "somehost:someport"); setConf(conf); - Assert.assertTrue("Proxy shouldn't be set because port wasn't numeric", - proxyToUse.type() == Proxy.Type.DIRECT); + Assertions.assertTrue(proxyToUse.type() == Proxy.Type.DIRECT, + "Proxy shouldn't be set because port wasn't numeric"); conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_PROXY, "somehost:1000"); setConf(conf); - Assert.assertEquals("Proxy should have been set but wasn't ", - "HTTP @ somehost:1000", proxyToUse.toString()); + Assertions.assertEquals("HTTP @ somehost:1000", proxyToUse.toString(), + "Proxy should have been set but wasn't "); conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_PROXY, "socks@somehost:1000"); setConf(conf); - Assert.assertEquals("Proxy should have been socks but wasn't ", - "SOCKS @ somehost:1000", proxyToUse.toString()); + Assertions.assertEquals("SOCKS @ somehost:1000", proxyToUse.toString(), + "Proxy should have been socks but wasn't "); conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_PROXY, "SOCKS@somehost:1000"); setConf(conf); - Assert.assertEquals("Proxy should have been socks but wasn't ", - "SOCKS @ somehost:1000", proxyToUse.toString()); + Assertions.assertEquals("SOCKS @ somehost:1000", proxyToUse.toString(), + "Proxy should have been socks but wasn't "); conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_PROXY, "sfafn@somehost:1000"); setConf(conf); - Assert.assertEquals("Proxy should have been http but wasn't ", - "HTTP @ somehost:1000", proxyToUse.toString()); + Assertions.assertEquals("HTTP @ somehost:1000", proxyToUse.toString(), + "Proxy should have been http but wasn't "); } @@ -181,10 +181,10 @@ public class TestJobEndNotifier extends JobEndNotifier { this.setConf(conf); this.notify(jobReport); long endTime 
= System.currentTimeMillis(); - Assert.assertEquals("Only 1 try was expected but was : " - + this.notificationCount, 1, this.notificationCount); - Assert.assertTrue("Should have taken more than 5 seconds it took " - + (endTime - startTime), endTime - startTime > 5000); + Assertions.assertEquals(1, this.notificationCount, + "Only 1 try was expected but was : " + this.notificationCount); + Assertions.assertTrue(endTime - startTime > 5000, + "Should have taken more than 5 seconds it took " + (endTime - startTime)); conf.set(MRJobConfig.MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS, "3"); conf.set(MRJobConfig.MR_JOB_END_RETRY_ATTEMPTS, "3"); @@ -196,10 +196,10 @@ public class TestJobEndNotifier extends JobEndNotifier { this.setConf(conf); this.notify(jobReport); endTime = System.currentTimeMillis(); - Assert.assertEquals("Only 3 retries were expected but was : " - + this.notificationCount, 3, this.notificationCount); - Assert.assertTrue("Should have taken more than 9 seconds it took " - + (endTime - startTime), endTime - startTime > 9000); + Assertions.assertEquals(3, this.notificationCount, + "Only 3 retries were expected but was : " + this.notificationCount); + Assertions.assertTrue(endTime - startTime > 9000, + "Should have taken more than 9 seconds it took " + (endTime - startTime)); } @@ -222,11 +222,11 @@ public class TestJobEndNotifier extends JobEndNotifier { doThrow(runtimeException).when(app).stop(); } app.shutDownJob(); - Assert.assertTrue(app.isLastAMRetry()); - Assert.assertEquals(1, JobEndServlet.calledTimes); - Assert.assertEquals("jobid=" + job.getID() + "&status=SUCCEEDED", + Assertions.assertTrue(app.isLastAMRetry()); + Assertions.assertEquals(1, JobEndServlet.calledTimes); + Assertions.assertEquals("jobid=" + job.getID() + "&status=SUCCEEDED", JobEndServlet.requestUri.getQuery()); - Assert.assertEquals(JobState.SUCCEEDED.toString(), + Assertions.assertEquals(JobState.SUCCEEDED.toString(), JobEndServlet.foundJobState); server.stop(); } @@ -262,10 +262,10 @@ public class TestJobEndNotifier extends JobEndNotifier { app.shutDownJob(); // Not the last AM attempt. So user should that the job is still running. 
app.waitForState(job, JobState.RUNNING); - Assert.assertFalse(app.isLastAMRetry()); - Assert.assertEquals(0, JobEndServlet.calledTimes); - Assert.assertNull(JobEndServlet.requestUri); - Assert.assertNull(JobEndServlet.foundJobState); + Assertions.assertFalse(app.isLastAMRetry()); + Assertions.assertEquals(0, JobEndServlet.calledTimes); + Assertions.assertNull(JobEndServlet.requestUri); + Assertions.assertNull(JobEndServlet.foundJobState); server.stop(); } @@ -294,11 +294,11 @@ public class TestJobEndNotifier extends JobEndNotifier { // Unregistration fails: isLastAMRetry is recalculated, this is ///reboot will stop service internally, we don't need to shutdown twice app.waitForServiceToStop(10000); - Assert.assertFalse(app.isLastAMRetry()); + Assertions.assertFalse(app.isLastAMRetry()); // Since it's not last retry, JobEndServlet didn't called - Assert.assertEquals(0, JobEndServlet.calledTimes); - Assert.assertNull(JobEndServlet.requestUri); - Assert.assertNull(JobEndServlet.foundJobState); + Assertions.assertEquals(0, JobEndServlet.calledTimes); + Assertions.assertNull(JobEndServlet.requestUri); + Assertions.assertNull(JobEndServlet.foundJobState); server.stop(); } @@ -321,7 +321,7 @@ public class TestJobEndNotifier extends JobEndNotifier { this.notify(jobReport); final URL urlToNotify = CustomNotifier.urlToNotify; - Assert.assertEquals("http://example.com?jobId=mock-Id&jobStatus=SUCCEEDED", + Assertions.assertEquals("http://example.com?jobId=mock-Id&jobStatus=SUCCEEDED", urlToNotify.toString()); } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java index f681cf81650..63dc2f88067 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java @@ -23,7 +23,7 @@ import java.util.Map; import java.util.concurrent.CountDownLatch; import org.apache.hadoop.service.Service; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapreduce.v2.api.records.JobId; @@ -48,7 +48,7 @@ import org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl; import org.apache.hadoop.yarn.event.AsyncDispatcher; import org.apache.hadoop.yarn.event.Dispatcher; import org.apache.hadoop.yarn.event.Event; -import org.junit.Test; +import org.junit.jupiter.api.Test; /** * Tests the state machine with respect to Job/Task/TaskAttempt kill scenarios. 
@@ -83,18 +83,17 @@ public class TestKill { app.waitForState(Service.STATE.STOPPED); Map tasks = job.getTasks(); - Assert.assertEquals("No of tasks is not correct", 1, - tasks.size()); + Assertions.assertEquals(1, tasks.size(), + "No of tasks is not correct"); Task task = tasks.values().iterator().next(); - Assert.assertEquals("Task state not correct", TaskState.KILLED, - task.getReport().getTaskState()); + Assertions.assertEquals(TaskState.KILLED, + task.getReport().getTaskState(), "Task state not correct"); Map attempts = tasks.values().iterator().next().getAttempts(); - Assert.assertEquals("No of attempts is not correct", 1, - attempts.size()); + Assertions.assertEquals(1, attempts.size(), "No of attempts is not correct"); Iterator it = attempts.values().iterator(); - Assert.assertEquals("Attempt state not correct", TaskAttemptState.KILLED, - it.next().getReport().getTaskAttemptState()); + Assertions.assertEquals(TaskAttemptState.KILLED, + it.next().getReport().getTaskAttemptState(), "Attempt state not correct"); } @Test @@ -107,8 +106,8 @@ public class TestKill { //wait and vailidate for Job to become RUNNING app.waitForInternalState((JobImpl) job, JobStateInternal.RUNNING); Map tasks = job.getTasks(); - Assert.assertEquals("No of tasks is not correct", 2, - tasks.size()); + Assertions.assertEquals(2, tasks.size(), + "No of tasks is not correct"); Iterator it = tasks.values().iterator(); Task task1 = it.next(); Task task2 = it.next(); @@ -125,24 +124,24 @@ public class TestKill { //first Task is killed and second is Succeeded //Job is succeeded - - Assert.assertEquals("Task state not correct", TaskState.KILLED, - task1.getReport().getTaskState()); - Assert.assertEquals("Task state not correct", TaskState.SUCCEEDED, - task2.getReport().getTaskState()); + + Assertions.assertEquals(TaskState.KILLED, task1.getReport().getTaskState(), + "Task state not correct"); + Assertions.assertEquals(TaskState.SUCCEEDED, task2.getReport().getTaskState(), + "Task state not correct"); Map attempts = task1.getAttempts(); - Assert.assertEquals("No of attempts is not correct", 1, - attempts.size()); + Assertions.assertEquals(1, attempts.size(), + "No of attempts is not correct"); Iterator iter = attempts.values().iterator(); - Assert.assertEquals("Attempt state not correct", TaskAttemptState.KILLED, - iter.next().getReport().getTaskAttemptState()); + Assertions.assertEquals(TaskAttemptState.KILLED, + iter.next().getReport().getTaskAttemptState(), "Attempt state not correct"); attempts = task2.getAttempts(); - Assert.assertEquals("No of attempts is not correct", 1, - attempts.size()); + Assertions.assertEquals(1, attempts.size(), + "No of attempts is not correct"); iter = attempts.values().iterator(); - Assert.assertEquals("Attempt state not correct", TaskAttemptState.SUCCEEDED, - iter.next().getReport().getTaskAttemptState()); + Assertions.assertEquals(TaskAttemptState.SUCCEEDED, + iter.next().getReport().getTaskAttemptState(), "Attempt state not correct"); } @Test @@ -194,7 +193,8 @@ public class TestKill { Job job = app.submit(new Configuration()); JobId jobId = app.getJobId(); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("Num tasks not correct", 2, job.getTasks().size()); + Assertions.assertEquals(2, job.getTasks().size(), + "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask = it.next(); Task reduceTask = it.next(); @@ -232,7 +232,8 @@ public class TestKill { Job job = app.submit(new Configuration()); JobId jobId = app.getJobId(); 
app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("Num tasks not correct", 2, job.getTasks().size()); + Assertions.assertEquals(2, job.getTasks().size(), + "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask = it.next(); Task reduceTask = it.next(); @@ -280,7 +281,8 @@ public class TestKill { Job job = app.submit(new Configuration()); JobId jobId = app.getJobId(); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("Num tasks not correct", 2, job.getTasks().size()); + Assertions.assertEquals(2, job.getTasks().size(), + "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask = it.next(); Task reduceTask = it.next(); @@ -370,8 +372,8 @@ public class TestKill { //wait and vailidate for Job to become RUNNING app.waitForState(job, JobState.RUNNING); Map tasks = job.getTasks(); - Assert.assertEquals("No of tasks is not correct", 2, - tasks.size()); + Assertions.assertEquals(2, tasks.size(), + "No of tasks is not correct"); Iterator it = tasks.values().iterator(); Task task1 = it.next(); Task task2 = it.next(); @@ -394,26 +396,26 @@ public class TestKill { //first Task will have two attempts 1st is killed, 2nd Succeeds //both Tasks and Job succeeds - Assert.assertEquals("Task state not correct", TaskState.SUCCEEDED, - task1.getReport().getTaskState()); - Assert.assertEquals("Task state not correct", TaskState.SUCCEEDED, - task2.getReport().getTaskState()); + Assertions.assertEquals(TaskState.SUCCEEDED, + task1.getReport().getTaskState(), "Task state not correct"); + Assertions.assertEquals(TaskState.SUCCEEDED, + task2.getReport().getTaskState(), "Task state not correct"); Map attempts = task1.getAttempts(); - Assert.assertEquals("No of attempts is not correct", 2, - attempts.size()); + Assertions.assertEquals(2, attempts.size(), + "No of attempts is not correct"); Iterator iter = attempts.values().iterator(); - Assert.assertEquals("Attempt state not correct", TaskAttemptState.KILLED, - iter.next().getReport().getTaskAttemptState()); - Assert.assertEquals("Attempt state not correct", TaskAttemptState.SUCCEEDED, - iter.next().getReport().getTaskAttemptState()); + Assertions.assertEquals(TaskAttemptState.KILLED, + iter.next().getReport().getTaskAttemptState(), "Attempt state not correct"); + Assertions.assertEquals(TaskAttemptState.SUCCEEDED, + iter.next().getReport().getTaskAttemptState(), "Attempt state not correct"); attempts = task2.getAttempts(); - Assert.assertEquals("No of attempts is not correct", 1, - attempts.size()); + Assertions.assertEquals(1, attempts.size(), + "No of attempts is not correct"); iter = attempts.values().iterator(); - Assert.assertEquals("Attempt state not correct", TaskAttemptState.SUCCEEDED, - iter.next().getReport().getTaskAttemptState()); + Assertions.assertEquals(TaskAttemptState.SUCCEEDED, + iter.next().getReport().getTaskAttemptState(), "Attempt state not correct"); } static class BlockingMRApp extends MRApp { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKillAMPreemptionPolicy.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKillAMPreemptionPolicy.java index 3c3c4c90625..62e016a734b 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKillAMPreemptionPolicy.java +++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKillAMPreemptionPolicy.java @@ -47,7 +47,7 @@ import org.apache.hadoop.yarn.event.Event; import org.apache.hadoop.yarn.event.EventHandler; import org.apache.hadoop.yarn.factories.RecordFactory; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; -import org.junit.Test; +import org.junit.jupiter.api.Test; public class TestKillAMPreemptionPolicy { private final RecordFactory recordFactory = RecordFactoryProvider diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java index 534bcd09408..f4a68a34e74 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java @@ -30,7 +30,7 @@ import java.util.concurrent.atomic.AtomicInteger; import java.util.function.Supplier; import org.apache.hadoop.test.GenericTestUtils; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapreduce.MRJobConfig; @@ -68,7 +68,7 @@ import org.apache.hadoop.yarn.event.AsyncDispatcher; import org.apache.hadoop.yarn.event.Dispatcher; import org.apache.hadoop.yarn.event.EventHandler; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; -import org.junit.Test; +import org.junit.jupiter.api.Test; import org.mockito.Mockito; /** @@ -83,7 +83,7 @@ public class TestMRApp { Job job = app.submit(new Configuration()); app.waitForState(job, JobState.SUCCEEDED); app.verifyCompleted(); - Assert.assertEquals(System.getProperty("user.name"),job.getUserName()); + Assertions.assertEquals(System.getProperty("user.name"),job.getUserName()); } @Test @@ -106,7 +106,7 @@ public class TestMRApp { MRApp app = new MRApp(1, 0, false, this.getClass().getName(), true); Job job = app.submit(new Configuration()); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("Num tasks not correct", 1, job.getTasks().size()); + Assertions.assertEquals(1, job.getTasks().size(), "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task task = it.next(); app.waitForState(task, TaskState.RUNNING); @@ -151,7 +151,7 @@ public class TestMRApp { Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("Num tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); Task mapTask2 = it.next(); @@ -170,8 +170,8 @@ public class TestMRApp { app.waitForState(task2Attempt, TaskAttemptState.RUNNING); // reduces must be in NEW state - Assert.assertEquals("Reduce Task state not correct", - TaskState.NEW, reduceTask.getReport().getTaskState()); + Assertions.assertEquals(TaskState.NEW, + reduceTask.getReport().getTaskState(), "Reduce Task state not correct"); //send the done signal to the 1st map task app.getContext().getEventHandler().handle( @@ -224,7 +224,8 @@ public class TestMRApp { final Job job1 = app.submit(conf); 
app.waitForState(job1, JobState.RUNNING); - Assert.assertEquals("Num tasks not correct", 4, job1.getTasks().size()); + Assertions.assertEquals(4, job1.getTasks().size(), + "Num tasks not correct"); Iterator it = job1.getTasks().values().iterator(); Task mapTask1 = it.next(); Task mapTask2 = it.next(); @@ -239,7 +240,7 @@ public class TestMRApp { .next(); NodeId node1 = task1Attempt.getNodeId(); NodeId node2 = task2Attempt.getNodeId(); - Assert.assertEquals(node1, node2); + Assertions.assertEquals(node1, node2); // send the done signal to the task app.getContext() @@ -271,8 +272,8 @@ public class TestMRApp { TaskAttemptCompletionEvent[] events = job1.getTaskAttemptCompletionEvents (0, 100); - Assert.assertEquals("Expecting 2 completion events for success", 2, - events.length); + Assertions.assertEquals(2, events.length, + "Expecting 2 completion events for success"); // send updated nodes info ArrayList updatedNodes = new ArrayList(); @@ -297,8 +298,8 @@ public class TestMRApp { }, checkIntervalMillis, waitForMillis); events = job1.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals("Expecting 2 more completion events for killed", 4, - events.length); + Assertions.assertEquals(4, events.length, + "Expecting 2 more completion events for killed"); // 2 map task attempts which were killed above should be requested from // container allocator with the previous map task marked as failed. If // this happens allocator will request the container for this mapper from @@ -335,8 +336,8 @@ public class TestMRApp { }, checkIntervalMillis, waitForMillis); events = job1.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals("Expecting 1 more completion events for success", 5, - events.length); + Assertions.assertEquals(5, events.length, + "Expecting 1 more completion events for success"); // Crash the app again. 
app.stop(); @@ -351,7 +352,8 @@ public class TestMRApp { final Job job2 = app.submit(conf); app.waitForState(job2, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", 4, job2.getTasks().size()); + Assertions.assertEquals(4, job2.getTasks().size(), + "No of tasks not correct"); it = job2.getTasks().values().iterator(); mapTask1 = it.next(); mapTask2 = it.next(); @@ -372,9 +374,8 @@ public class TestMRApp { }, checkIntervalMillis, waitForMillis); events = job2.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals( - "Expecting 2 completion events for killed & success of map1", 2, - events.length); + Assertions.assertEquals(2, events.length, + "Expecting 2 completion events for killed & success of map1"); task2Attempt = mapTask2.getAttempts().values().iterator().next(); app.getContext() @@ -394,8 +395,8 @@ public class TestMRApp { }, checkIntervalMillis, waitForMillis); events = job2.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals("Expecting 1 more completion events for success", 3, - events.length); + Assertions.assertEquals(3, events.length, + "Expecting 1 more completion events for success"); app.waitForState(reduceTask1, TaskState.RUNNING); app.waitForState(reduceTask2, TaskState.RUNNING); @@ -433,8 +434,8 @@ public class TestMRApp { } }, checkIntervalMillis, waitForMillis); events = job2.getTaskAttemptCompletionEvents(0, 100); - Assert.assertEquals("Expecting 2 more completion events for reduce success", - 5, events.length); + Assertions.assertEquals(5, events.length, + "Expecting 2 more completion events for reduce success"); // job succeeds app.waitForState(job2, JobState.SUCCEEDED); @@ -472,7 +473,8 @@ public class TestMRApp { MRApp app = new MRApp(1, 0, false, this.getClass().getName(), true); Job job = app.submit(new Configuration()); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("Num tasks not correct", 1, job.getTasks().size()); + Assertions.assertEquals(1, job.getTasks().size(), + "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task task = it.next(); app.waitForState(task, TaskState.RUNNING); @@ -493,7 +495,7 @@ public class TestMRApp { JobImpl job = (JobImpl) app.submit(new Configuration()); app.waitForInternalState(job, JobStateInternal.SUCCEEDED); // AM is not unregistered - Assert.assertEquals(JobState.RUNNING, job.getState()); + Assertions.assertEquals(JobState.RUNNING, job.getState()); // imitate that AM is unregistered app.successfullyUnregistered.set(true); app.waitForState(job, JobState.SUCCEEDED); @@ -505,7 +507,8 @@ public class TestMRApp { MRApp app = new MRApp(1, 0, false, this.getClass().getName(), true); Job job = app.submit(new Configuration()); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("Num tasks not correct", 1, job.getTasks().size()); + Assertions.assertEquals(1, job.getTasks().size(), + "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task task = it.next(); app.waitForState(task, TaskState.RUNNING); @@ -530,7 +533,8 @@ public class TestMRApp { Configuration conf = new Configuration(); Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("Num tasks not correct", 1, job.getTasks().size()); + Assertions.assertEquals(1, job.getTasks().size(), + "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task task = it.next(); app.waitForState(task, TaskState.RUNNING); @@ -624,7 +628,7 @@ public class TestMRApp { (TaskAttemptImpl) taskAttempts.iterator().next(); // Container from RM 
should pass through to the launcher. Container object // should be the same. - Assert.assertTrue(taskAttempt.container + Assertions.assertTrue(taskAttempt.container == containerObtainedByContainerLauncher); } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppComponentDependencies.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppComponentDependencies.java index 9710ec94a69..7e47ec1a49a 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppComponentDependencies.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppComponentDependencies.java @@ -20,7 +20,7 @@ package org.apache.hadoop.mapreduce.v2.app; import java.io.IOException; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent; @@ -35,11 +35,13 @@ import org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.yarn.event.EventHandler; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; -import org.junit.Test; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; public class TestMRAppComponentDependencies { - @Test(timeout = 20000) + @Test + @Timeout(20000) public void testComponentStopOrder() throws Exception { @SuppressWarnings("resource") TestMRApp app = new TestMRApp(1, 1, true, this.getClass().getName(), true); @@ -54,8 +56,8 @@ public class TestMRAppComponentDependencies { } // assert JobHistoryEventHandlerStopped and then clientServiceStopped - Assert.assertEquals(1, app.JobHistoryEventHandlerStopped); - Assert.assertEquals(2, app.clientServiceStopped); + Assertions.assertEquals(1, app.JobHistoryEventHandlerStopped); + Assertions.assertEquals(2, app.clientServiceStopped); } private final class TestMRApp extends MRApp { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java index 06550378ba9..b8e55d9ca06 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java @@ -18,9 +18,9 @@ package org.apache.hadoop.mapreduce.v2.app; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import static org.mockito.ArgumentMatchers.anyBoolean; import static org.mockito.Mockito.doNothing; import static org.mockito.Mockito.doReturn; @@ -40,7 +40,7 @@ import java.util.concurrent.atomic.AtomicLong; import java.util.HashMap; import java.util.Map; -import 
org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.commons.io.FileUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileContext; @@ -84,10 +84,11 @@ import org.apache.hadoop.yarn.api.records.ContainerId; import org.apache.hadoop.yarn.event.EventHandler; import org.apache.hadoop.yarn.security.AMRMTokenIdentifier; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.mockito.ArgumentCaptor; import org.mockito.Mockito; import org.slf4j.event.Level; @@ -104,7 +105,7 @@ public class TestMRAppMaster { static String stagingDir = new Path(testDir, "staging").toString(); private static FileContext localFS = null; - @BeforeClass + @BeforeAll public static void setup() throws AccessControlException, FileNotFoundException, IllegalArgumentException, IOException { //Do not error out if metrics are inited multiple times @@ -116,7 +117,7 @@ public class TestMRAppMaster { new File(testDir.toString()).mkdir(); } - @Before + @BeforeEach public void prepare() throws IOException { File dir = new File(stagingDir); if(dir.exists()) { @@ -125,7 +126,7 @@ public class TestMRAppMaster { dir.mkdirs(); } - @AfterClass + @AfterAll public static void cleanup() throws IOException { localFS.delete(testDir, true); } @@ -226,8 +227,8 @@ public class TestMRAppMaster { "host", -1, -1, System.currentTimeMillis()); MRAppMaster.initAndStartAppMaster(appMaster, conf, userName); appMaster.stop(); - assertTrue("Job launch time should not be negative.", - appMaster.jobLaunchTime.get() >= 0); + assertTrue(appMaster.jobLaunchTime.get() >= 0, + "Job launch time should not be negative."); } @Test @@ -343,7 +344,8 @@ public class TestMRAppMaster { appMaster.stop(); } - @Test (timeout = 30000) + @Test + @Timeout(30000) public void testMRAppMasterMaxAppAttempts() throws IOException, InterruptedException { // No matter what's the maxAppAttempt or attempt id, the isLastRetry always @@ -368,8 +370,8 @@ public class TestMRAppMaster { new MRAppMasterTest(applicationAttemptId, containerId, "host", -1, -1, System.currentTimeMillis(), false, true); MRAppMaster.initAndStartAppMaster(appMaster, conf, userName); - assertEquals("isLastAMRetry is correctly computed.", expectedBools[i], - appMaster.isLastAMRetry()); + assertEquals(expectedBools[i], appMaster.isLastAMRetry(), + "isLastAMRetry is correctly computed."); } } @@ -465,37 +467,37 @@ public class TestMRAppMaster { // Now validate the task credentials Credentials appMasterCreds = appMaster.getCredentials(); - Assert.assertNotNull(appMasterCreds); - Assert.assertEquals(1, appMasterCreds.numberOfSecretKeys()); - Assert.assertEquals(1, appMasterCreds.numberOfTokens()); + Assertions.assertNotNull(appMasterCreds); + Assertions.assertEquals(1, appMasterCreds.numberOfSecretKeys()); + Assertions.assertEquals(1, appMasterCreds.numberOfTokens()); // Validate the tokens - app token should not be present Token usedToken = appMasterCreds.getToken(tokenAlias); - Assert.assertNotNull(usedToken); - Assert.assertEquals(storedToken, usedToken); + Assertions.assertNotNull(usedToken); + Assertions.assertEquals(storedToken, usedToken); // Validate the keys byte[] usedKey = appMasterCreds.getSecretKey(keyAlias); - Assert.assertNotNull(usedKey); - Assert.assertEquals("mySecretKey", new 
String(usedKey)); + Assertions.assertNotNull(usedKey); + Assertions.assertEquals("mySecretKey", new String(usedKey)); // The credentials should also be added to conf so that OuputCommitter can // access it - app token should not be present Credentials confCredentials = conf.getCredentials(); - Assert.assertEquals(1, confCredentials.numberOfSecretKeys()); - Assert.assertEquals(1, confCredentials.numberOfTokens()); - Assert.assertEquals(storedToken, confCredentials.getToken(tokenAlias)); - Assert.assertEquals("mySecretKey", + Assertions.assertEquals(1, confCredentials.numberOfSecretKeys()); + Assertions.assertEquals(1, confCredentials.numberOfTokens()); + Assertions.assertEquals(storedToken, confCredentials.getToken(tokenAlias)); + Assertions.assertEquals("mySecretKey", new String(confCredentials.getSecretKey(keyAlias))); // Verify the AM's ugi - app token should be present Credentials ugiCredentials = appMaster.getUgi().getCredentials(); - Assert.assertEquals(1, ugiCredentials.numberOfSecretKeys()); - Assert.assertEquals(2, ugiCredentials.numberOfTokens()); - Assert.assertEquals(storedToken, ugiCredentials.getToken(tokenAlias)); - Assert.assertEquals(appToken, ugiCredentials.getToken(appTokenService)); - Assert.assertEquals("mySecretKey", + Assertions.assertEquals(1, ugiCredentials.numberOfSecretKeys()); + Assertions.assertEquals(2, ugiCredentials.numberOfTokens()); + Assertions.assertEquals(storedToken, ugiCredentials.getToken(tokenAlias)); + Assertions.assertEquals(appToken, ugiCredentials.getToken(appTokenService)); + Assertions.assertEquals("mySecretKey", new String(ugiCredentials.getSecretKey(keyAlias))); @@ -525,10 +527,10 @@ public class TestMRAppMaster { doNothing().when(appMaster).serviceStop(); // Test normal shutdown. appMaster.shutDownJob(); - Assert.assertTrue("Expected shutDownJob to terminate.", - ExitUtil.terminateCalled()); - Assert.assertEquals("Expected shutDownJob to exit with status code of 0.", - 0, ExitUtil.getFirstExitException().status); + Assertions.assertTrue(ExitUtil.terminateCalled(), + "Expected shutDownJob to terminate."); + Assertions.assertEquals(0, ExitUtil.getFirstExitException().status, + "Expected shutDownJob to exit with status code of 0."); // Test shutdown with exception. 
ExitUtil.resetFirstExitException(); @@ -536,10 +538,10 @@ public class TestMRAppMaster { doThrow(new RuntimeException(msg)) .when(appMaster).notifyIsLastAMRetry(anyBoolean()); appMaster.shutDownJob(); - assertTrue("Expected message from ExitUtil.ExitException to be " + msg, - ExitUtil.getFirstExitException().getMessage().contains(msg)); - Assert.assertEquals("Expected shutDownJob to exit with status code of 1.", - 1, ExitUtil.getFirstExitException().status); + assertTrue(ExitUtil.getFirstExitException().getMessage().contains(msg), + "Expected message from ExitUtil.ExitException to be " + msg); + Assertions.assertEquals(1, ExitUtil.getFirstExitException().status, + "Expected shutDownJob to exit with status code of 1."); } private void verifyFailedStatus(MRAppMasterTest appMaster, diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java index 9906def3ac9..4057ed5a46b 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java @@ -18,7 +18,7 @@ package org.apache.hadoop.mapreduce.v2.app; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.fail; import java.io.IOException; import java.security.PrivilegedExceptionAction; @@ -26,7 +26,7 @@ import java.util.Iterator; import java.util.List; import java.util.concurrent.atomic.AtomicReference; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapreduce.JobACL; @@ -70,7 +70,7 @@ import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.yarn.factories.RecordFactory; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; import org.apache.hadoop.yarn.ipc.YarnRPC; -import org.junit.Test; +import org.junit.jupiter.api.Test; public class TestMRClientService { @@ -82,7 +82,8 @@ public class TestMRClientService { Configuration conf = new Configuration(); Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("Num tasks not correct", 1, job.getTasks().size()); + Assertions.assertEquals(1, job.getTasks().size(), + "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task task = it.next(); app.waitForState(task, TaskState.RUNNING); @@ -116,8 +117,8 @@ public class TestMRClientService { GetCountersRequest gcRequest = recordFactory.newRecordInstance(GetCountersRequest.class); gcRequest.setJobId(job.getID()); - Assert.assertNotNull("Counters is null", - proxy.getCounters(gcRequest).getCounters()); + Assertions.assertNotNull(proxy.getCounters(gcRequest).getCounters(), + "Counters is null"); GetJobReportRequest gjrRequest = recordFactory.newRecordInstance(GetJobReportRequest.class); @@ -131,14 +132,14 @@ public class TestMRClientService { gtaceRequest.setJobId(job.getID()); gtaceRequest.setFromEventId(0); gtaceRequest.setMaxEvents(10); - Assert.assertNotNull("TaskCompletionEvents is null", - proxy.getTaskAttemptCompletionEvents(gtaceRequest).getCompletionEventList()); + 
Assertions.assertNotNull(proxy.getTaskAttemptCompletionEvents(gtaceRequest). + getCompletionEventList(), "TaskCompletionEvents is null"); GetDiagnosticsRequest gdRequest = recordFactory.newRecordInstance(GetDiagnosticsRequest.class); gdRequest.setTaskAttemptId(attempt.getID()); - Assert.assertNotNull("Diagnostics is null", - proxy.getDiagnostics(gdRequest).getDiagnosticsList()); + Assertions.assertNotNull(proxy.getDiagnostics(gdRequest). + getDiagnosticsList(), "Diagnostics is null"); GetTaskAttemptReportRequest gtarRequest = recordFactory.newRecordInstance(GetTaskAttemptReportRequest.class); @@ -151,31 +152,32 @@ public class TestMRClientService { GetTaskReportRequest gtrRequest = recordFactory.newRecordInstance(GetTaskReportRequest.class); gtrRequest.setTaskId(task.getID()); - Assert.assertNotNull("TaskReport is null", - proxy.getTaskReport(gtrRequest).getTaskReport()); + Assertions.assertNotNull(proxy.getTaskReport(gtrRequest).getTaskReport(), + "TaskReport is null"); GetTaskReportsRequest gtreportsRequest = recordFactory.newRecordInstance(GetTaskReportsRequest.class); gtreportsRequest.setJobId(job.getID()); gtreportsRequest.setTaskType(TaskType.MAP); - Assert.assertNotNull("TaskReports for map is null", - proxy.getTaskReports(gtreportsRequest).getTaskReportList()); + Assertions.assertNotNull(proxy.getTaskReports(gtreportsRequest) + .getTaskReportList(), "TaskReports for map is null"); gtreportsRequest = recordFactory.newRecordInstance(GetTaskReportsRequest.class); gtreportsRequest.setJobId(job.getID()); gtreportsRequest.setTaskType(TaskType.REDUCE); - Assert.assertNotNull("TaskReports for reduce is null", - proxy.getTaskReports(gtreportsRequest).getTaskReportList()); + Assertions.assertNotNull(proxy.getTaskReports(gtreportsRequest).getTaskReportList(), + "TaskReports for reduce is null"); List diag = proxy.getDiagnostics(gdRequest).getDiagnosticsList(); - Assert.assertEquals("Num diagnostics not correct", 1 , diag.size()); - Assert.assertEquals("Diag 1 not correct", - diagnostic1, diag.get(0).toString()); + Assertions.assertEquals(1 , diag.size(), + "Num diagnostics not correct"); + Assertions.assertEquals(diagnostic1, diag.get(0).toString(), + "Diag 1 not correct"); TaskReport taskReport = proxy.getTaskReport(gtrRequest).getTaskReport(); - Assert.assertEquals("Num diagnostics not correct", 1, - taskReport.getDiagnosticsCount()); + Assertions.assertEquals(1, taskReport.getDiagnosticsCount(), + "Num diagnostics not correct"); //send the done signal to the task app.getContext().getEventHandler().handle( @@ -207,7 +209,8 @@ public class TestMRClientService { conf.set(MRJobConfig.JOB_ACL_VIEW_JOB, "viewonlyuser"); Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("Num tasks not correct", 1, job.getTasks().size()); + Assertions.assertEquals(1, job.getTasks().size(), + "Num tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task task = it.next(); app.waitForState(task, TaskState.RUNNING); @@ -217,10 +220,10 @@ public class TestMRClientService { UserGroupInformation viewOnlyUser = UserGroupInformation.createUserForTesting( "viewonlyuser", new String[] {}); - Assert.assertTrue("viewonlyuser cannot view job", - job.checkAccess(viewOnlyUser, JobACL.VIEW_JOB)); - Assert.assertFalse("viewonlyuser can modify job", - job.checkAccess(viewOnlyUser, JobACL.MODIFY_JOB)); + Assertions.assertTrue(job.checkAccess(viewOnlyUser, JobACL.VIEW_JOB), + "viewonlyuser cannot view job"); + Assertions.assertFalse(job.checkAccess(viewOnlyUser, 
JobACL.MODIFY_JOB), + "viewonlyuser can modify job"); MRClientProtocol client = viewOnlyUser.doAs( new PrivilegedExceptionAction() { @Override @@ -273,28 +276,28 @@ public class TestMRClientService { } private void verifyJobReport(JobReport jr) { - Assert.assertNotNull("JobReport is null", jr); + Assertions.assertNotNull(jr, "JobReport is null"); List amInfos = jr.getAMInfos(); - Assert.assertEquals(1, amInfos.size()); - Assert.assertEquals(JobState.RUNNING, jr.getJobState()); + Assertions.assertEquals(1, amInfos.size()); + Assertions.assertEquals(JobState.RUNNING, jr.getJobState()); AMInfo amInfo = amInfos.get(0); - Assert.assertEquals(MRApp.NM_HOST, amInfo.getNodeManagerHost()); - Assert.assertEquals(MRApp.NM_PORT, amInfo.getNodeManagerPort()); - Assert.assertEquals(MRApp.NM_HTTP_PORT, amInfo.getNodeManagerHttpPort()); - Assert.assertEquals(1, amInfo.getAppAttemptId().getAttemptId()); - Assert.assertEquals(1, amInfo.getContainerId().getApplicationAttemptId() + Assertions.assertEquals(MRApp.NM_HOST, amInfo.getNodeManagerHost()); + Assertions.assertEquals(MRApp.NM_PORT, amInfo.getNodeManagerPort()); + Assertions.assertEquals(MRApp.NM_HTTP_PORT, amInfo.getNodeManagerHttpPort()); + Assertions.assertEquals(1, amInfo.getAppAttemptId().getAttemptId()); + Assertions.assertEquals(1, amInfo.getContainerId().getApplicationAttemptId() .getAttemptId()); - Assert.assertTrue(amInfo.getStartTime() > 0); - Assert.assertFalse(jr.isUber()); + Assertions.assertTrue(amInfo.getStartTime() > 0); + Assertions.assertFalse(jr.isUber()); } private void verifyTaskAttemptReport(TaskAttemptReport tar) { - Assert.assertEquals(TaskAttemptState.RUNNING, tar.getTaskAttemptState()); - Assert.assertNotNull("TaskAttemptReport is null", tar); - Assert.assertEquals(MRApp.NM_HOST, tar.getNodeManagerHost()); - Assert.assertEquals(MRApp.NM_PORT, tar.getNodeManagerPort()); - Assert.assertEquals(MRApp.NM_HTTP_PORT, tar.getNodeManagerHttpPort()); - Assert.assertEquals(1, tar.getContainerId().getApplicationAttemptId() + Assertions.assertEquals(TaskAttemptState.RUNNING, tar.getTaskAttemptState()); + Assertions.assertNotNull(tar, "TaskAttemptReport is null"); + Assertions.assertEquals(MRApp.NM_HOST, tar.getNodeManagerHost()); + Assertions.assertEquals(MRApp.NM_PORT, tar.getNodeManagerPort()); + Assertions.assertEquals(MRApp.NM_HTTP_PORT, tar.getNodeManagerHttpPort()); + Assertions.assertEquals(1, tar.getContainerId().getApplicationAttemptId() .getAttemptId()); } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRecovery.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRecovery.java index 5a23b58875a..ce8e1e1573e 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRecovery.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRecovery.java @@ -19,9 +19,9 @@ package org.apache.hadoop.mapreduce.v2.app; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; import static 
org.mockito.Mockito.atLeast; import static org.mockito.Mockito.mock; @@ -42,7 +42,7 @@ import java.util.concurrent.TimeoutException; import org.apache.hadoop.mapreduce.util.MRJobConfUtil; import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptFailEvent; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; @@ -107,8 +107,9 @@ import org.apache.hadoop.test.GenericTestUtils; import org.apache.hadoop.yarn.util.Clock; import org.apache.hadoop.yarn.util.SystemClock; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.mockito.ArgumentCaptor; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -126,7 +127,7 @@ public class TestRecovery { private Text val1 = new Text("val1"); private Text val2 = new Text("val2"); - @BeforeClass + @BeforeAll public static void setupClass() throws Exception { // setup the test root directory testRootDir = @@ -158,8 +159,8 @@ public class TestRecovery { app.waitForState(job, JobState.RUNNING); long jobStartTime = job.getReport().getStartTime(); //all maps would be running - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); Task mapTask2 = it.next(); @@ -192,7 +193,7 @@ public class TestRecovery { Thread.sleep(2000); LOG.info("Waiting for next attempt to start"); } - Assert.assertEquals(2, mapTask1.getAttempts().size()); + Assertions.assertEquals(2, mapTask1.getAttempts().size()); Iterator itr = mapTask1.getAttempts().values().iterator(); itr.next(); TaskAttempt task1Attempt2 = itr.next(); @@ -213,7 +214,7 @@ public class TestRecovery { Thread.sleep(2000); LOG.info("Waiting for next attempt to start"); } - Assert.assertEquals(3, mapTask1.getAttempts().size()); + Assertions.assertEquals(3, mapTask1.getAttempts().size()); itr = mapTask1.getAttempts().values().iterator(); itr.next(); itr.next(); @@ -234,7 +235,7 @@ public class TestRecovery { Thread.sleep(2000); LOG.info("Waiting for next attempt to start"); } - Assert.assertEquals(4, mapTask1.getAttempts().size()); + Assertions.assertEquals(4, mapTask1.getAttempts().size()); itr = mapTask1.getAttempts().values().iterator(); itr.next(); itr.next(); @@ -272,8 +273,8 @@ public class TestRecovery { job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = it.next(); mapTask2 = it.next(); @@ -308,29 +309,29 @@ public class TestRecovery { app.waitForState(job, JobState.SUCCEEDED); app.verifyCompleted(); - Assert.assertEquals("Job Start time not correct", - jobStartTime, job.getReport().getStartTime()); - Assert.assertEquals("Task Start time not correct", - task1StartTime, mapTask1.getReport().getStartTime()); - Assert.assertEquals("Task Finish time not correct", - task1FinishTime, mapTask1.getReport().getFinishTime()); - Assert.assertEquals(2, job.getAMInfos().size()); + Assertions.assertEquals(jobStartTime, job.getReport().getStartTime(), + "Job Start time not correct"); + Assertions.assertEquals(task1StartTime, mapTask1.getReport().getStartTime(), + "Task Start time 
not correct"); + Assertions.assertEquals(task1FinishTime, mapTask1.getReport().getFinishTime(), + "Task Finish time not correct"); + Assertions.assertEquals(2, job.getAMInfos().size()); int attemptNum = 1; // Verify AMInfo for (AMInfo amInfo : job.getAMInfos()) { - Assert.assertEquals(attemptNum++, amInfo.getAppAttemptId() + Assertions.assertEquals(attemptNum++, amInfo.getAppAttemptId() .getAttemptId()); - Assert.assertEquals(amInfo.getAppAttemptId(), amInfo.getContainerId() + Assertions.assertEquals(amInfo.getAppAttemptId(), amInfo.getContainerId() .getApplicationAttemptId()); - Assert.assertEquals(MRApp.NM_HOST, amInfo.getNodeManagerHost()); - Assert.assertEquals(MRApp.NM_PORT, amInfo.getNodeManagerPort()); - Assert.assertEquals(MRApp.NM_HTTP_PORT, amInfo.getNodeManagerHttpPort()); + Assertions.assertEquals(MRApp.NM_HOST, amInfo.getNodeManagerHost()); + Assertions.assertEquals(MRApp.NM_PORT, amInfo.getNodeManagerPort()); + Assertions.assertEquals(MRApp.NM_HTTP_PORT, amInfo.getNodeManagerHttpPort()); } long am1StartTimeReal = job.getAMInfos().get(0).getStartTime(); long am2StartTimeReal = job.getAMInfos().get(1).getStartTime(); - Assert.assertTrue(am1StartTimeReal >= am1StartTimeEst + Assertions.assertTrue(am1StartTimeReal >= am1StartTimeEst && am1StartTimeReal <= am2StartTimeEst); - Assert.assertTrue(am2StartTimeReal >= am2StartTimeEst + Assertions.assertTrue(am2StartTimeReal >= am2StartTimeEst && am2StartTimeReal <= System.currentTimeMillis()); // TODO Add verification of additional data from jobHistory - whatever was // available in the failed attempt should be available here @@ -371,7 +372,7 @@ public class TestRecovery { app.waitForState(job, JobState.RUNNING); // all maps would be running - Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); Task mapTask2 = it.next(); @@ -429,7 +430,7 @@ public class TestRecovery { job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = it.next(); mapTask2 = it.next(); @@ -516,7 +517,7 @@ public class TestRecovery { app.waitForState(job, JobState.RUNNING); // all maps would be running - Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); Task mapTask2 = it.next(); @@ -575,7 +576,7 @@ public class TestRecovery { job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = it.next(); mapTask2 = it.next(); @@ -641,8 +642,9 @@ public class TestRecovery { app = new MRAppWithHistory(1, 1, false, this.getClass().getName(), false, ++runCount); Job jobAttempt2 = app.submit(conf); - Assert.assertTrue("Recovery from previous job attempt is processed even " + - "though intermediate data encryption is enabled.", !app.recovered()); + Assertions.assertTrue(!app.recovered(), + "Recovery from previous job attempt is processed even " + + "though intermediate data encryption is enabled."); // The 
map task succeeded from previous job attempt will not be recovered // because the data spill encryption is enabled. @@ -694,7 +696,7 @@ public class TestRecovery { app.waitForState(job, JobState.RUNNING); // all maps would be running - Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); Task mapTask2 = it.next(); @@ -753,7 +755,7 @@ public class TestRecovery { job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = it.next(); mapTask2 = it.next(); @@ -813,8 +815,8 @@ public class TestRecovery { Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); Task mapTask2 = it.next(); @@ -833,8 +835,8 @@ public class TestRecovery { app.waitForState(task2Attempt, TaskAttemptState.RUNNING); // reduces must be in NEW state - Assert.assertEquals("Reduce Task state not correct", - TaskState.RUNNING, reduceTask.getReport().getTaskState()); + Assertions.assertEquals(TaskState.RUNNING, reduceTask.getReport().getTaskState(), + "Reduce Task state not correct"); //send the done signal to the 1st map app.getContext().getEventHandler().handle( @@ -862,8 +864,8 @@ public class TestRecovery { job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = it.next(); mapTask2 = it.next(); @@ -905,8 +907,8 @@ public class TestRecovery { job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = it.next(); mapTask2 = it.next(); @@ -940,8 +942,8 @@ public class TestRecovery { conf.set(FileOutputFormat.OUTDIR, outputDir.toString()); Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); Task reduceTask1 = it.next(); @@ -966,7 +968,7 @@ public class TestRecovery { app.waitForState(mapTask1, TaskState.SUCCEEDED); // Verify the shuffle-port - Assert.assertEquals(5467, task1Attempt1.getShufflePort()); + Assertions.assertEquals(5467, task1Attempt1.getShufflePort()); app.waitForState(reduceTask1, TaskState.RUNNING); TaskAttempt reduce1Attempt1 = reduceTask1.getAttempts().values().iterator().next(); @@ -998,8 +1000,8 @@ public class TestRecovery { conf.setBoolean(MRJobConfig.JOB_UBERTASK_ENABLE, false); job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", - 3, 
job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = it.next(); reduceTask1 = it.next(); @@ -1010,7 +1012,7 @@ public class TestRecovery { // Verify the shuffle-port after recovery task1Attempt1 = mapTask1.getAttempts().values().iterator().next(); - Assert.assertEquals(5467, task1Attempt1.getShufflePort()); + Assertions.assertEquals(5467, task1Attempt1.getShufflePort()); // first reduce will be recovered, no need to send done app.waitForState(reduceTask1, TaskState.SUCCEEDED); @@ -1051,7 +1053,7 @@ public class TestRecovery { conf.set(FileOutputFormat.OUTDIR, outputDir.toString()); Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), "No of tasks not correct"); //stop the app before the job completes. app.stop(); app.close(); @@ -1061,11 +1063,11 @@ public class TestRecovery { ++runCount); job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), "No of tasks not correct"); TestFileOutputCommitter committer = ( TestFileOutputCommitter) app.getCommitter(); - assertTrue("commiter.abortJob() has not been called", - committer.isAbortJobCalled()); + assertTrue(committer.isAbortJobCalled(), + "commiter.abortJob() has not been called"); app.close(); } @@ -1086,7 +1088,8 @@ public class TestRecovery { conf.set(FileOutputFormat.OUTDIR, outputDir.toString()); Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); //stop the app before the job completes. app.stop(); app.close(); @@ -1096,11 +1099,12 @@ public class TestRecovery { ++runCount); job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); TestFileOutputCommitter committer = ( TestFileOutputCommitter) app.getCommitter(); - assertFalse("commiter.abortJob() has been called", - committer.isAbortJobCalled()); + assertFalse(committer.isAbortJobCalled(), + "commiter.abortJob() has been called"); app.close(); } @@ -1116,8 +1120,8 @@ public class TestRecovery { conf.set(FileOutputFormat.OUTDIR, outputDir.toString()); Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); Task mapTask2 = it.next(); @@ -1147,7 +1151,7 @@ public class TestRecovery { app.waitForState(mapTask1, TaskState.SUCCEEDED); // Verify the shuffle-port - Assert.assertEquals(5467, task1Attempt1.getShufflePort()); + Assertions.assertEquals(5467, task1Attempt1.getShufflePort()); //stop the app before the job completes. 
app.stop(); @@ -1164,8 +1168,8 @@ public class TestRecovery { conf.setBoolean(MRJobConfig.JOB_UBERTASK_ENABLE, false); job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = it.next(); mapTask2 = it.next(); @@ -1176,7 +1180,7 @@ public class TestRecovery { // Verify the shuffle-port after recovery task1Attempt1 = mapTask1.getAttempts().values().iterator().next(); - Assert.assertEquals(5467, task1Attempt1.getShufflePort()); + Assertions.assertEquals(5467, task1Attempt1.getShufflePort()); app.waitForState(mapTask2, TaskState.RUNNING); @@ -1197,7 +1201,7 @@ public class TestRecovery { app.waitForState(mapTask2, TaskState.SUCCEEDED); // Verify the shuffle-port - Assert.assertEquals(5467, task2Attempt1.getShufflePort()); + Assertions.assertEquals(5467, task2Attempt1.getShufflePort()); app.waitForState(reduceTask1, TaskState.RUNNING); TaskAttempt reduce1Attempt1 = reduceTask1.getAttempts().values().iterator().next(); @@ -1231,8 +1235,8 @@ public class TestRecovery { conf.set(FileOutputFormat.OUTDIR, outputDir.toString()); Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); Task reduceTask1 = it.next(); @@ -1257,7 +1261,7 @@ public class TestRecovery { app.waitForState(mapTask1, TaskState.SUCCEEDED); // Verify the shuffle-port - Assert.assertEquals(5467, task1Attempt1.getShufflePort()); + Assertions.assertEquals(5467, task1Attempt1.getShufflePort()); app.waitForState(reduceTask1, TaskState.RUNNING); TaskAttempt reduce1Attempt1 = reduceTask1.getAttempts().values().iterator().next(); @@ -1289,8 +1293,8 @@ public class TestRecovery { conf.setBoolean(MRJobConfig.JOB_UBERTASK_ENABLE, false); job = app.submit(conf); app.waitForState(job, JobState.RUNNING); - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = it.next(); reduceTask1 = it.next(); @@ -1301,7 +1305,7 @@ public class TestRecovery { // Verify the shuffle-port after recovery task1Attempt1 = mapTask1.getAttempts().values().iterator().next(); - Assert.assertEquals(5467, task1Attempt1.getShufflePort()); + Assertions.assertEquals(5467, task1Attempt1.getShufflePort()); // first reduce will be recovered, no need to send done app.waitForState(reduceTask1, TaskState.SUCCEEDED); @@ -1351,8 +1355,8 @@ public class TestRecovery { app.waitForState(job, JobState.RUNNING); long jobStartTime = job.getReport().getStartTime(); //all maps would be running - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); @@ -1425,8 +1429,8 @@ public class TestRecovery { job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = 
it.next(); mapTask2 = it.next(); @@ -1462,36 +1466,36 @@ public class TestRecovery { app.waitForState(job, JobState.SUCCEEDED); app.verifyCompleted(); - Assert.assertEquals("Job Start time not correct", - jobStartTime, job.getReport().getStartTime()); - Assert.assertEquals("Task Start time not correct", - task1StartTime, mapTask1.getReport().getStartTime()); - Assert.assertEquals("Task Finish time not correct", - task1FinishTime, mapTask1.getReport().getFinishTime()); - Assert.assertEquals(2, job.getAMInfos().size()); + Assertions.assertEquals(jobStartTime, job.getReport().getStartTime(), + "Job Start time not correct"); + Assertions.assertEquals(task1StartTime, mapTask1.getReport().getStartTime(), + "Task Start time not correct"); + Assertions.assertEquals(task1FinishTime, mapTask1.getReport().getFinishTime(), + "Task Finish time not correct"); + Assertions.assertEquals(2, job.getAMInfos().size()); int attemptNum = 1; // Verify AMInfo for (AMInfo amInfo : job.getAMInfos()) { - Assert.assertEquals(attemptNum++, amInfo.getAppAttemptId() + Assertions.assertEquals(attemptNum++, amInfo.getAppAttemptId() .getAttemptId()); - Assert.assertEquals(amInfo.getAppAttemptId(), amInfo.getContainerId() + Assertions.assertEquals(amInfo.getAppAttemptId(), amInfo.getContainerId() .getApplicationAttemptId()); - Assert.assertEquals(MRApp.NM_HOST, amInfo.getNodeManagerHost()); - Assert.assertEquals(MRApp.NM_PORT, amInfo.getNodeManagerPort()); - Assert.assertEquals(MRApp.NM_HTTP_PORT, amInfo.getNodeManagerHttpPort()); + Assertions.assertEquals(MRApp.NM_HOST, amInfo.getNodeManagerHost()); + Assertions.assertEquals(MRApp.NM_PORT, amInfo.getNodeManagerPort()); + Assertions.assertEquals(MRApp.NM_HTTP_PORT, amInfo.getNodeManagerHttpPort()); } long am1StartTimeReal = job.getAMInfos().get(0).getStartTime(); long am2StartTimeReal = job.getAMInfos().get(1).getStartTime(); - Assert.assertTrue(am1StartTimeReal >= am1StartTimeEst + Assertions.assertTrue(am1StartTimeReal >= am1StartTimeEst && am1StartTimeReal <= am2StartTimeEst); - Assert.assertTrue(am2StartTimeReal >= am2StartTimeEst + Assertions.assertTrue(am2StartTimeReal >= am2StartTimeEst && am2StartTimeReal <= System.currentTimeMillis()); } - @Test(timeout=30000) + @Test + @Timeout(30000) public void testRecoveryWithoutShuffleSecret() throws Exception { - int runCount = 0; MRApp app = new MRAppNoShuffleSecret(2, 1, false, this.getClass().getName(), true, ++runCount); @@ -1503,8 +1507,8 @@ public class TestRecovery { Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); Iterator it = job.getTasks().values().iterator(); Task mapTask1 = it.next(); Task mapTask2 = it.next(); @@ -1550,8 +1554,8 @@ public class TestRecovery { job = app.submit(conf); app.waitForState(job, JobState.RUNNING); //all maps would be running - Assert.assertEquals("No of tasks not correct", - 3, job.getTasks().size()); + Assertions.assertEquals(3, job.getTasks().size(), + "No of tasks not correct"); it = job.getTasks().values().iterator(); mapTask1 = it.next(); mapTask2 = it.next(); @@ -1890,16 +1894,16 @@ public class TestRecovery { ArgumentCaptor arg, List expectedJobHistoryEvents, long expectedMapLaunches, long expectedFailedMaps) { - assertEquals("Final State of Task", finalState, checkTask.getState()); + assertEquals(finalState, checkTask.getState(), "Final State of Task"); Map recoveredAttempts 
= checkTask.getAttempts(); - assertEquals("Expected Number of Task Attempts", - finalAttemptStates.size(), recoveredAttempts.size()); + assertEquals(finalAttemptStates.size(), recoveredAttempts.size(), + "Expected Number of Task Attempts"); for (TaskAttemptID taID : finalAttemptStates.keySet()) { - assertEquals("Expected Task Attempt State", - finalAttemptStates.get(taID), - recoveredAttempts.get(TypeConverter.toYarn(taID)).getState()); + assertEquals(finalAttemptStates.get(taID), + recoveredAttempts.get(TypeConverter.toYarn(taID)).getState(), + "Expected Task Attempt State"); } Iterator ie = arg.getAllValues().iterator(); @@ -1947,12 +1951,12 @@ public class TestRecovery { } } assertTrue(jobTaskEventReceived || (finalState == TaskState.RUNNING)); - assertEquals("Did not process all expected JobHistoryEvents", - 0, expectedJobHistoryEvents.size()); - assertEquals("Expected Map Launches", - expectedMapLaunches, totalLaunchedMaps); - assertEquals("Expected Failed Maps", - expectedFailedMaps, totalFailedMaps); + assertEquals(0, expectedJobHistoryEvents.size(), + "Did not process all expected JobHistoryEvents"); + assertEquals(expectedMapLaunches, totalLaunchedMaps, + "Expected Map Launches"); + assertEquals(expectedFailedMaps, totalFailedMaps, + "Expected Failed Maps"); } private MapTaskImpl getMockMapTask(long clusterTimestamp, EventHandler eh) { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java index 0031598da5b..b45b674bf50 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java @@ -78,8 +78,8 @@ import org.apache.hadoop.yarn.security.client.ClientToAMTokenSecretManager; import org.apache.hadoop.yarn.util.Clock; import org.apache.hadoop.yarn.util.ControlledClock; import org.apache.hadoop.yarn.util.SystemClock; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -152,16 +152,16 @@ public class TestRuntimeEstimators { conf.setDouble(MRJobConfig.SPECULATIVECAP_TOTAL_TASKS, 0.001); conf.setInt(MRJobConfig.SPECULATIVE_MINIMUM_ALLOWED_TASKS, 5); speculator = new DefaultSpeculator(conf, myAppContext, estimator, clock); - Assert.assertEquals("wrong SPECULATIVE_RETRY_AFTER_NO_SPECULATE value", - 500L, speculator.getSoonestRetryAfterNoSpeculate()); - Assert.assertEquals("wrong SPECULATIVE_RETRY_AFTER_SPECULATE value", - 5000L, speculator.getSoonestRetryAfterSpeculate()); + Assertions.assertEquals(500L, speculator.getSoonestRetryAfterNoSpeculate(), + "wrong SPECULATIVE_RETRY_AFTER_NO_SPECULATE value"); + Assertions.assertEquals(5000L, speculator.getSoonestRetryAfterSpeculate(), + "wrong SPECULATIVE_RETRY_AFTER_SPECULATE value"); assertThat(speculator.getProportionRunningTasksSpeculatable()) .isCloseTo(0.1, offset(0.00001)); assertThat(speculator.getProportionTotalTasksSpeculatable()) .isCloseTo(0.001, offset(0.00001)); - Assert.assertEquals("wrong SPECULATIVE_MINIMUM_ALLOWED_TASKS value", - 5, 
speculator.getMinimumAllowedSpeculativeTasks()); + Assertions.assertEquals(5, speculator.getMinimumAllowedSpeculativeTasks(), + "wrong SPECULATIVE_MINIMUM_ALLOWED_TASKS value"); dispatcher.register(Speculator.EventType.class, speculator); @@ -244,8 +244,8 @@ public class TestRuntimeEstimators { } } - Assert.assertEquals("We got the wrong number of successful speculations.", - expectedSpeculations, successfulSpeculations.get()); + Assertions.assertEquals(expectedSpeculations, successfulSpeculations.get(), + "We got the wrong number of successful speculations."); } @Test @@ -279,8 +279,8 @@ public class TestRuntimeEstimators { TaskId taskID = event.getTaskID(); Task task = myJob.getTask(taskID); - Assert.assertEquals - ("Wrong type event", TaskEventType.T_ADD_SPEC_ATTEMPT, event.getType()); + Assertions.assertEquals + (TaskEventType.T_ADD_SPEC_ATTEMPT, event.getType(), "Wrong type event"); System.out.println("SpeculationRequestEventHandler.handle adds a speculation task for " + taskID); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestStagingCleanup.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestStagingCleanup.java index 1f0ce2309e2..81314704d1f 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestStagingCleanup.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestStagingCleanup.java @@ -18,8 +18,8 @@ package org.apache.hadoop.mapreduce.v2.app; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyBoolean; import static org.mockito.Mockito.mock; @@ -61,9 +61,10 @@ import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import org.apache.hadoop.yarn.factories.RecordFactory; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; -import org.junit.After; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; /** @@ -78,7 +79,7 @@ import org.junit.Test; private final static RecordFactory recordFactory = RecordFactoryProvider. 
getRecordFactory(null); - @After + @AfterEach public void tearDown() { conf.setBoolean(MRJobConfig.PRESERVE_FAILED_TASK_FILES, false); } @@ -135,7 +136,7 @@ import org.junit.Test; JobId jobid = recordFactory.newRecordInstance(JobId.class); jobid.setAppId(appId); ContainerAllocator mockAlloc = mock(ContainerAllocator.class); - Assert.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); + Assertions.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); MRAppMaster appMaster = new TestMRApp(attemptId, mockAlloc, JobStateInternal.RUNNING, MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS); appMaster.init(conf); @@ -146,7 +147,8 @@ import org.junit.Test; verify(fs).delete(stagingJobPath, true); } - @Test (timeout = 30000) + @Test + @Timeout(30000) public void testNoDeletionofStagingOnReboot() throws IOException { conf.set(MRJobConfig.MAPREDUCE_JOB_DIR, stagingJobDir); fs = mock(FileSystem.class); @@ -158,7 +160,7 @@ import org.junit.Test; 0); ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance(appId, 1); ContainerAllocator mockAlloc = mock(ContainerAllocator.class); - Assert.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); + Assertions.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); MRAppMaster appMaster = new TestMRApp(attemptId, mockAlloc, JobStateInternal.REBOOT, MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS); appMaster.init(conf); @@ -197,7 +199,8 @@ import org.junit.Test; verify(fs).delete(stagingJobPath, true); } - @Test (timeout = 30000) + @Test + @Timeout(30000) public void testDeletionofStagingOnKill() throws IOException { conf.set(MRJobConfig.MAPREDUCE_JOB_DIR, stagingJobDir); fs = mock(FileSystem.class); @@ -215,7 +218,7 @@ import org.junit.Test; MRAppMaster appMaster = new TestMRApp(attemptId, mockAlloc); appMaster.init(conf); //simulate the process being killed - MRAppMaster.MRAppMasterShutdownHook hook = + MRAppMaster.MRAppMasterShutdownHook hook = new MRAppMaster.MRAppMasterShutdownHook(appMaster); hook.run(); verify(fs, times(0)).delete(stagingJobPath, true); @@ -242,13 +245,14 @@ import org.junit.Test; ContainerAllocator mockAlloc = mock(ContainerAllocator.class); MRAppMaster appMaster = new TestMRApp(attemptId, mockAlloc); //no retry appMaster.init(conf); - assertTrue("appMaster.isLastAMRetry() is false", appMaster.isLastAMRetry()); + assertTrue(appMaster.isLastAMRetry(), + "appMaster.isLastAMRetry() is false"); //simulate the process being killed MRAppMaster.MRAppMasterShutdownHook hook = new MRAppMaster.MRAppMasterShutdownHook(appMaster); hook.run(); - assertTrue("MRAppMaster isn't stopped", - appMaster.isInState(Service.STATE.STOPPED)); + assertTrue(appMaster.isInState(Service.STATE.STOPPED), + "MRAppMaster isn't stopped"); verify(fs).delete(stagingJobPath, true); } @@ -270,7 +274,7 @@ import org.junit.Test; JobId jobid = recordFactory.newRecordInstance(JobId.class); jobid.setAppId(appId); ContainerAllocator mockAlloc = mock(ContainerAllocator.class); - Assert.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); + Assertions.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); MRAppMaster appMaster = new TestMRApp(attemptId, mockAlloc, JobStateInternal.FAILED, MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS); appMaster.init(conf); @@ -298,7 +302,7 @@ import org.junit.Test; JobId jobid = recordFactory.newRecordInstance(JobId.class); jobid.setAppId(appId); ContainerAllocator mockAlloc = mock(ContainerAllocator.class); - Assert.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); + Assertions.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); MRAppMaster 
appMaster = new TestMRApp(attemptId, mockAlloc, JobStateInternal.RUNNING, MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS); appMaster.init(conf); @@ -324,7 +328,7 @@ import org.junit.Test; JobId jobid = recordFactory.newRecordInstance(JobId.class); jobid.setAppId(appId); ContainerAllocator mockAlloc = mock(ContainerAllocator.class); - Assert.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); + Assertions.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); MRAppMaster appMaster = new TestMRApp(attemptId, mockAlloc, JobStateInternal.RUNNING, MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS); appMaster.init(conf); @@ -355,7 +359,7 @@ import org.junit.Test; JobId jobid = recordFactory.newRecordInstance(JobId.class); jobid.setAppId(appId); ContainerAllocator mockAlloc = mock(ContainerAllocator.class); - Assert.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); + Assertions.assertTrue(MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS > 1); MRAppMaster appMaster = new TestMRApp(attemptId, mockAlloc, JobStateInternal.RUNNING, MRJobConfig.DEFAULT_MR_AM_MAX_ATTEMPTS); appMaster.init(conf); @@ -583,7 +587,8 @@ import org.junit.Test; }; } - @Test(timeout=20000) + @Test + @Timeout(20000) public void testStagingCleanupOrder() throws Exception { MRAppTestCleanup app = new MRAppTestCleanup(1, 1, true, this.getClass().getName(), true); @@ -598,7 +603,7 @@ import org.junit.Test; } // assert ContainerAllocatorStopped and then tagingDirCleanedup - Assert.assertEquals(1, app.ContainerAllocatorStopped); - Assert.assertEquals(2, app.stagingDirCleanedup); + Assertions.assertEquals(1, app.ContainerAllocatorStopped); + Assertions.assertEquals(2, app.stagingDirCleanedup); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestTaskHeartbeatHandler.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestTaskHeartbeatHandler.java index f5c30c2a8db..c0ba8d6c265 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestTaskHeartbeatHandler.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestTaskHeartbeatHandler.java @@ -18,7 +18,7 @@ package org.apache.hadoop.mapreduce.v2.app; -import static org.junit.Assert.assertFalse; +import static org.junit.jupiter.api.Assertions.assertFalse; import static org.mockito.ArgumentMatchers.any; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.never; @@ -40,8 +40,8 @@ import org.apache.hadoop.yarn.event.EventHandler; import org.apache.hadoop.yarn.util.Clock; import org.apache.hadoop.yarn.util.ControlledClock; import org.apache.hadoop.yarn.util.SystemClock; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; import java.util.Map; import java.util.concurrent.ConcurrentMap; @@ -214,11 +214,11 @@ public class TestTaskHeartbeatHandler { JobId jobId = MRBuilderUtils.newJobId(appId, 4); TaskId tid = MRBuilderUtils.newTaskId(jobId, 3, TaskType.MAP); final TaskAttemptId taid = MRBuilderUtils.newTaskAttemptId(tid, 2); - Assert.assertFalse(hb.hasRecentlyUnregistered(taid)); + Assertions.assertFalse(hb.hasRecentlyUnregistered(taid)); hb.register(taid); - Assert.assertFalse(hb.hasRecentlyUnregistered(taid)); + Assertions.assertFalse(hb.hasRecentlyUnregistered(taid)); 
hb.unregister(taid); - Assert.assertTrue(hb.hasRecentlyUnregistered(taid)); + Assertions.assertTrue(hb.hasRecentlyUnregistered(taid)); long unregisterTimeout = conf.getLong(MRJobConfig.TASK_EXIT_TIMEOUT, MRJobConfig.TASK_EXIT_TIMEOUT_DEFAULT); clock.setTime(unregisterTimeout + 1); @@ -260,7 +260,7 @@ public class TestTaskHeartbeatHandler { new TaskHeartbeatHandler(null, SystemClock.getInstance(), 1); hb.init(conf); - Assert.assertTrue("The value of the task timeout is incorrect.", - hb.getTaskTimeOut() == expectedTimeout); + Assertions.assertTrue(hb.getTaskTimeOut() == expectedTimeout, + "The value of the task timeout is incorrect."); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/commit/TestCommitterEventHandler.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/commit/TestCommitterEventHandler.java index a3e85aad841..c051504b322 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/commit/TestCommitterEventHandler.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/commit/TestCommitterEventHandler.java @@ -27,7 +27,7 @@ import static org.mockito.Mockito.when; import java.util.concurrent.ConcurrentLinkedQueue; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapreduce.JobContext; @@ -39,7 +39,7 @@ import org.apache.hadoop.mapreduce.v2.app.rm.RMHeartbeatHandler; import org.apache.hadoop.yarn.event.AsyncDispatcher; import org.apache.hadoop.yarn.event.EventHandler; -import static org.junit.Assert.*; +import static org.junit.jupiter.api.Assertions.*; import static org.mockito.Mockito.*; import java.io.File; @@ -62,9 +62,9 @@ import org.apache.hadoop.yarn.event.Event; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import org.apache.hadoop.yarn.util.Clock; import org.apache.hadoop.yarn.util.SystemClock; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; public class TestCommitterEventHandler { public static class WaitForItHandler implements EventHandler { @@ -95,13 +95,13 @@ public class TestCommitterEventHandler { static String stagingDir = "target/test-staging/"; - @BeforeClass + @BeforeAll public static void setup() { File dir = new File(stagingDir); stagingDir = dir.getAbsolutePath(); } - @Before + @BeforeEach public void cleanup() throws IOException { File dir = new File(stagingDir); if(dir.exists()) { @@ -146,11 +146,11 @@ public class TestCommitterEventHandler { Thread.sleep(10); timeToWaitMs -= 10; } - Assert.assertEquals("committer did not register a heartbeat callback", - 1, rmhh.getNumCallbacks()); + Assertions.assertEquals(1, rmhh.getNumCallbacks(), + "committer did not register a heartbeat callback"); verify(committer, never()).commitJob(any(JobContext.class)); - Assert.assertEquals("committer should not have committed", - 0, jeh.numCommitCompletedEvents); + Assertions.assertEquals(0, jeh.numCommitCompletedEvents, + "committer should not have committed"); // set a fresh heartbeat and verify commit completes rmhh.setLastHeartbeatTime(clock.getTime()); @@ -159,8 +159,8 @@ public class 
TestCommitterEventHandler { Thread.sleep(10); timeToWaitMs -= 10; } - Assert.assertEquals("committer did not complete commit after RM hearbeat", - 1, jeh.numCommitCompletedEvents); + Assertions.assertEquals(1, jeh.numCommitCompletedEvents, + "committer did not complete commit after RM hearbeat"); verify(committer, times(1)).commitJob(any()); //Clean up so we can try to commit again (Don't do this at home) @@ -174,8 +174,8 @@ public class TestCommitterEventHandler { Thread.sleep(10); timeToWaitMs -= 10; } - Assert.assertEquals("committer did not commit", - 2, jeh.numCommitCompletedEvents); + Assertions.assertEquals(2, jeh.numCommitCompletedEvents, + "committer did not commit"); verify(committer, times(2)).commitJob(any()); ceh.stop(); @@ -262,9 +262,9 @@ public class TestCommitterEventHandler { assertNotNull(e); assertTrue(e instanceof JobCommitCompletedEvent); FileSystem fs = FileSystem.get(conf); - assertTrue(startCommitFile.toString(), fs.exists(startCommitFile)); - assertTrue(endCommitSuccessFile.toString(), fs.exists(endCommitSuccessFile)); - assertFalse(endCommitFailureFile.toString(), fs.exists(endCommitFailureFile)); + assertTrue(fs.exists(startCommitFile), startCommitFile.toString()); + assertTrue(fs.exists(endCommitSuccessFile), endCommitSuccessFile.toString()); + assertFalse(fs.exists(endCommitFailureFile), endCommitFailureFile.toString()); verify(mockCommitter).commitJob(any(JobContext.class)); } finally { handler.stop(); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java index 5f378e4f9c3..5f827e46d95 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java @@ -105,10 +105,11 @@ import org.apache.hadoop.yarn.state.StateMachine; import org.apache.hadoop.yarn.state.StateMachineFactory; import org.apache.hadoop.yarn.util.Records; import org.apache.hadoop.yarn.util.SystemClock; -import org.junit.Assert; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.mockito.Mockito; @@ -120,13 +121,13 @@ public class TestJobImpl { static String stagingDir = "target/test-staging/"; - @BeforeClass + @BeforeAll public static void setup() { File dir = new File(stagingDir); stagingDir = dir.getAbsolutePath(); } - @Before + @BeforeEach public void cleanup() throws IOException { File dir = new File(stagingDir); if(dir.exists()) { @@ -169,13 +170,14 @@ public class TestJobImpl { dispatcher.stop(); commitHandler.stop(); try { - Assert.assertTrue(jseHandler.getAssertValue()); + Assertions.assertTrue(jseHandler.getAssertValue()); } catch (InterruptedException e) { - Assert.fail("Workflow related attributes are not tested properly"); + Assertions.fail("Workflow related attributes are not tested properly"); } } - @Test(timeout=20000) + @Test + @Timeout(20000) public void testCommitJobFailsJob() throws Exception { Configuration conf = new 
Configuration(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir); @@ -200,7 +202,8 @@ public class TestJobImpl { commitHandler.stop(); } - @Test(timeout=20000) + @Test + @Timeout(20000) public void testCheckJobCompleteSuccess() throws Exception { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir); @@ -239,7 +242,7 @@ public class TestJobImpl { JobEventType.JOB_TASK_ATTEMPT_COMPLETED)); assertJobState(job, JobStateInternal.SUCCEEDED); - job.handle(new JobEvent(job.getID(), + job.handle(new JobEvent(job.getID(), JobEventType.JOB_MAP_TASK_RESCHEDULED)); assertJobState(job, JobStateInternal.SUCCEEDED); @@ -247,13 +250,14 @@ public class TestJobImpl { JobEventType.JOB_TASK_COMPLETED)); dispatcher.await(); assertJobState(job, JobStateInternal.SUCCEEDED); - + dispatcher.stop(); commitHandler.stop(); } - @Test(timeout=20000) - public void testRebootedDuringSetup() throws Exception{ + @Test + @Timeout(20000) + public void testRebootedDuringSetup() throws Exception { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir); AsyncDispatcher dispatcher = new AsyncDispatcher(); @@ -289,13 +293,14 @@ public class TestJobImpl { assertJobState(job, JobStateInternal.REBOOT); // return the external state as RUNNING since otherwise JobClient will // exit when it polls the AM for job state - Assert.assertEquals(JobState.RUNNING, job.getState()); + Assertions.assertEquals(JobState.RUNNING, job.getState()); dispatcher.stop(); commitHandler.stop(); } - @Test(timeout=20000) + @Test + @Timeout(20000) public void testRebootedDuringCommit() throws Exception { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir); @@ -321,15 +326,16 @@ public class TestJobImpl { job.handle(new JobEvent(job.getID(), JobEventType.JOB_AM_REBOOT)); assertJobState(job, JobStateInternal.REBOOT); // return the external state as ERROR since this is last retry. 
- Assert.assertEquals(JobState.RUNNING, job.getState()); + Assertions.assertEquals(JobState.RUNNING, job.getState()); when(mockContext.hasSuccessfullyUnregistered()).thenReturn(true); - Assert.assertEquals(JobState.ERROR, job.getState()); + Assertions.assertEquals(JobState.ERROR, job.getState()); dispatcher.stop(); commitHandler.stop(); } - @Test(timeout=20000) + @Test + @Timeout(20000) public void testKilledDuringSetup() throws Exception { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir); @@ -366,7 +372,8 @@ public class TestJobImpl { commitHandler.stop(); } - @Test(timeout=20000) + @Test + @Timeout(20000) public void testKilledDuringCommit() throws Exception { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir); @@ -423,7 +430,8 @@ public class TestJobImpl { dispatcher.stop(); } - @Test (timeout=10000) + @Test + @Timeout(10000) public void testFailAbortDoesntHang() throws IOException { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir); @@ -461,7 +469,8 @@ public class TestJobImpl { dispatcher.stop(); } - @Test(timeout=20000) + @Test + @Timeout(20000) public void testKilledDuringFailAbort() throws Exception { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir); @@ -503,7 +512,8 @@ public class TestJobImpl { commitHandler.stop(); } - @Test(timeout=20000) + @Test + @Timeout(20000) public void testKilledDuringKillAbort() throws Exception { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir); @@ -546,7 +556,8 @@ public class TestJobImpl { commitHandler.stop(); } - @Test(timeout=20000) + @Test + @Timeout(20000) public void testUnusableNodeTransition() throws Exception { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir); @@ -599,7 +610,7 @@ public class TestJobImpl { job.handle(new JobTaskAttemptCompletedEvent(tce)); // complete the task itself job.handle(new JobTaskEvent(taskId, TaskState.SUCCEEDED)); - Assert.assertEquals(JobState.RUNNING, job.getState()); + Assertions.assertEquals(JobState.RUNNING, job.getState()); } } @@ -699,13 +710,13 @@ public class TestJobImpl { * much value. Instead, we validate the T_KILL events. 
*/ if (killMappers) { - Assert.assertEquals("Number of killed events", 2, killedEvents.size()); - Assert.assertEquals("AttemptID", "task_1234567890000_0001_m_000000", - killedEvents.get(0).getTaskID().toString()); - Assert.assertEquals("AttemptID", "task_1234567890000_0001_m_000001", - killedEvents.get(1).getTaskID().toString()); + Assertions.assertEquals(2, killedEvents.size(), "Number of killed events"); + Assertions.assertEquals("task_1234567890000_0001_m_000000", + killedEvents.get(0).getTaskID().toString(), "AttemptID"); + Assertions.assertEquals("task_1234567890000_0001_m_000001", + killedEvents.get(1).getTaskID().toString(), "AttemptID"); } else { - Assert.assertEquals("Number of killed events", 0, killedEvents.size()); + Assertions.assertEquals(0, killedEvents.size(), "Number of killed events"); } } @@ -738,8 +749,8 @@ public class TestJobImpl { // Verify access JobImpl job1 = new JobImpl(jobId, null, conf1, null, null, null, null, null, null, null, null, true, user1, 0, null, null, null, null); - Assert.assertTrue(job1.checkAccess(ugi1, JobACL.VIEW_JOB)); - Assert.assertFalse(job1.checkAccess(ugi2, JobACL.VIEW_JOB)); + Assertions.assertTrue(job1.checkAccess(ugi1, JobACL.VIEW_JOB)); + Assertions.assertFalse(job1.checkAccess(ugi2, JobACL.VIEW_JOB)); // Setup configuration access to the user1 (owner) and user2 Configuration conf2 = new Configuration(); @@ -749,8 +760,8 @@ public class TestJobImpl { // Verify access JobImpl job2 = new JobImpl(jobId, null, conf2, null, null, null, null, null, null, null, null, true, user1, 0, null, null, null, null); - Assert.assertTrue(job2.checkAccess(ugi1, JobACL.VIEW_JOB)); - Assert.assertTrue(job2.checkAccess(ugi2, JobACL.VIEW_JOB)); + Assertions.assertTrue(job2.checkAccess(ugi1, JobACL.VIEW_JOB)); + Assertions.assertTrue(job2.checkAccess(ugi2, JobACL.VIEW_JOB)); // Setup configuration access with security enabled and access to all Configuration conf3 = new Configuration(); @@ -760,8 +771,8 @@ public class TestJobImpl { // Verify access JobImpl job3 = new JobImpl(jobId, null, conf3, null, null, null, null, null, null, null, null, true, user1, 0, null, null, null, null); - Assert.assertTrue(job3.checkAccess(ugi1, JobACL.VIEW_JOB)); - Assert.assertTrue(job3.checkAccess(ugi2, JobACL.VIEW_JOB)); + Assertions.assertTrue(job3.checkAccess(ugi1, JobACL.VIEW_JOB)); + Assertions.assertTrue(job3.checkAccess(ugi2, JobACL.VIEW_JOB)); // Setup configuration access without security enabled Configuration conf4 = new Configuration(); @@ -771,8 +782,8 @@ public class TestJobImpl { // Verify access JobImpl job4 = new JobImpl(jobId, null, conf4, null, null, null, null, null, null, null, null, true, user1, 0, null, null, null, null); - Assert.assertTrue(job4.checkAccess(ugi1, JobACL.VIEW_JOB)); - Assert.assertTrue(job4.checkAccess(ugi2, JobACL.VIEW_JOB)); + Assertions.assertTrue(job4.checkAccess(ugi1, JobACL.VIEW_JOB)); + Assertions.assertTrue(job4.checkAccess(ugi2, JobACL.VIEW_JOB)); // Setup configuration access without security enabled Configuration conf5 = new Configuration(); @@ -782,8 +793,8 @@ public class TestJobImpl { // Verify access JobImpl job5 = new JobImpl(jobId, null, conf5, null, null, null, null, null, null, null, null, true, user1, 0, null, null, null, null); - Assert.assertTrue(job5.checkAccess(ugi1, null)); - Assert.assertTrue(job5.checkAccess(ugi2, null)); + Assertions.assertTrue(job5.checkAccess(ugi1, null)); + Assertions.assertTrue(job5.checkAccess(ugi2, null)); } @Test @@ -804,8 +815,8 @@ public class TestJobImpl { mrAppMetrics, null, true, 
null, 0, null, mockContext, null, null); job.handle(diagUpdateEvent); String diagnostics = job.getReport().getDiagnostics(); - Assert.assertNotNull(diagnostics); - Assert.assertTrue(diagnostics.contains(diagMsg)); + Assertions.assertNotNull(diagnostics); + Assertions.assertTrue(diagnostics.contains(diagMsg)); job = new JobImpl(jobId, Records .newRecord(ApplicationAttemptId.class), new Configuration(), @@ -816,8 +827,8 @@ public class TestJobImpl { job.handle(new JobEvent(jobId, JobEventType.JOB_KILL)); job.handle(diagUpdateEvent); diagnostics = job.getReport().getDiagnostics(); - Assert.assertNotNull(diagnostics); - Assert.assertTrue(diagnostics.contains(diagMsg)); + Assertions.assertNotNull(diagnostics); + Assertions.assertTrue(diagnostics.contains(diagMsg)); } @Test @@ -826,13 +837,13 @@ public class TestJobImpl { // with default values, no of maps is 2 Configuration conf = new Configuration(); boolean isUber = testUberDecision(conf); - Assert.assertFalse(isUber); + Assertions.assertFalse(isUber); // enable uber mode, no of maps is 2 conf = new Configuration(); conf.setBoolean(MRJobConfig.JOB_UBERTASK_ENABLE, true); isUber = testUberDecision(conf); - Assert.assertTrue(isUber); + Assertions.assertTrue(isUber); // enable uber mode, no of maps is 2, no of reduces is 1 and uber task max // reduces is 0 @@ -841,7 +852,7 @@ public class TestJobImpl { conf.setInt(MRJobConfig.JOB_UBERTASK_MAXREDUCES, 0); conf.setInt(MRJobConfig.NUM_REDUCES, 1); isUber = testUberDecision(conf); - Assert.assertFalse(isUber); + Assertions.assertFalse(isUber); // enable uber mode, no of maps is 2, no of reduces is 1 and uber task max // reduces is 1 @@ -850,14 +861,14 @@ public class TestJobImpl { conf.setInt(MRJobConfig.JOB_UBERTASK_MAXREDUCES, 1); conf.setInt(MRJobConfig.NUM_REDUCES, 1); isUber = testUberDecision(conf); - Assert.assertTrue(isUber); + Assertions.assertTrue(isUber); // enable uber mode, no of maps is 2 and uber task max maps is 0 conf = new Configuration(); conf.setBoolean(MRJobConfig.JOB_UBERTASK_ENABLE, true); conf.setInt(MRJobConfig.JOB_UBERTASK_MAXMAPS, 1); isUber = testUberDecision(conf); - Assert.assertFalse(isUber); + Assertions.assertFalse(isUber); // enable uber mode of 0 reducer no matter how much memory assigned to reducer conf = new Configuration(); @@ -866,7 +877,7 @@ public class TestJobImpl { conf.setInt(MRJobConfig.REDUCE_MEMORY_MB, 2048); conf.setInt(MRJobConfig.REDUCE_CPU_VCORES, 10); isUber = testUberDecision(conf); - Assert.assertTrue(isUber); + Assertions.assertTrue(isUber); } private boolean testUberDecision(Configuration conf) { @@ -931,9 +942,9 @@ public class TestJobImpl { assertJobState(job, JobStateInternal.FAILED); job.handle(new JobEvent(jobId, JobEventType.JOB_TASK_ATTEMPT_FETCH_FAILURE)); assertJobState(job, JobStateInternal.FAILED); - Assert.assertEquals(JobState.RUNNING, job.getState()); + Assertions.assertEquals(JobState.RUNNING, job.getState()); when(mockContext.hasSuccessfullyUnregistered()).thenReturn(true); - Assert.assertEquals(JobState.FAILED, job.getState()); + Assertions.assertEquals(JobState.FAILED, job.getState()); dispatcher.stop(); commitHandler.stop(); @@ -960,12 +971,12 @@ public class TestJobImpl { JobEvent mockJobEvent = mock(JobEvent.class); JobStateInternal jobSI = initTransition.transition(job, mockJobEvent); - Assert.assertTrue("When init fails, return value from InitTransition.transition should equal NEW.", - jobSI.equals(JobStateInternal.NEW)); - Assert.assertTrue("Job diagnostics should contain YarnRuntimeException", - 
job.getDiagnostics().toString().contains("YarnRuntimeException")); - Assert.assertTrue("Job diagnostics should contain " + EXCEPTIONMSG, - job.getDiagnostics().toString().contains(EXCEPTIONMSG)); + Assertions.assertTrue(jobSI.equals(JobStateInternal.NEW), + "When init fails, return value from InitTransition.transition should equal NEW."); + Assertions.assertTrue(job.getDiagnostics().toString().contains("YarnRuntimeException"), + "Job diagnostics should contain YarnRuntimeException"); + Assertions.assertTrue(job.getDiagnostics().toString().contains(EXCEPTIONMSG), + "Job diagnostics should contain " + EXCEPTIONMSG); } @Test @@ -986,7 +997,7 @@ public class TestJobImpl { assertJobState(job, JobStateInternal.SETUP); // Update priority of job to 5, and it will be updated job.setJobPriority(submittedPriority); - Assert.assertEquals(submittedPriority, job.getReport().getJobPriority()); + Assertions.assertEquals(submittedPriority, job.getReport().getJobPriority()); job.handle(new JobSetupCompletedEvent(jobId)); assertJobState(job, JobStateInternal.RUNNING); @@ -996,10 +1007,10 @@ public class TestJobImpl { job.setJobPriority(updatedPriority); assertJobState(job, JobStateInternal.RUNNING); Priority jobPriority = job.getReport().getJobPriority(); - Assert.assertNotNull(jobPriority); + Assertions.assertNotNull(jobPriority); // Verify whether changed priority is same as what is set in Job. - Assert.assertEquals(updatedPriority, jobPriority); + Assertions.assertEquals(updatedPriority, jobPriority); } @Test @@ -1013,14 +1024,14 @@ public class TestJobImpl { filePolicies.put("file1", true); filePolicies.put("jar1", true); Job.setFileSharedCacheUploadPolicies(config, filePolicies); - Assert.assertEquals( + Assertions.assertEquals( 2, Job.getArchiveSharedCacheUploadPolicies(config).size()); - Assert.assertEquals( + Assertions.assertEquals( 2, Job.getFileSharedCacheUploadPolicies(config).size()); JobImpl.cleanupSharedCacheUploadPolicies(config); - Assert.assertEquals( + Assertions.assertEquals( 0, Job.getArchiveSharedCacheUploadPolicies(config).size()); - Assert.assertEquals( + Assertions.assertEquals( 0, Job.getFileSharedCacheUploadPolicies(config).size()); } @@ -1088,14 +1099,14 @@ public class TestJobImpl { job.handle(new JobTaskEvent( MRBuilderUtils.newTaskId(job.getID(), 1, TaskType.MAP), TaskState.SUCCEEDED)); - Assert.assertEquals(JobState.RUNNING, job.getState()); + Assertions.assertEquals(JobState.RUNNING, job.getState()); } int numReduces = job.getTotalReduces(); for (int i = 0; i < numReduces; ++i) { job.handle(new JobTaskEvent( MRBuilderUtils.newTaskId(job.getID(), 1, TaskType.MAP), TaskState.SUCCEEDED)); - Assert.assertEquals(JobState.RUNNING, job.getState()); + Assertions.assertEquals(JobState.RUNNING, job.getState()); } } @@ -1109,7 +1120,7 @@ public class TestJobImpl { break; } } - Assert.assertEquals(state, job.getInternalState()); + Assertions.assertEquals(state, job.getInternalState()); } private void createSpiedMapTasks(Map diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java index f00ff281f30..5e3dfcca7cb 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java +++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java @@ -22,7 +22,7 @@ import java.util.ArrayList; import java.util.Map; import org.apache.hadoop.mapreduce.TaskType; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapred.JobConf; @@ -37,7 +37,8 @@ import org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherEvent; import org.apache.hadoop.mapreduce.v2.app.launcher.ContainerRemoteLaunchEvent; import org.apache.hadoop.mapreduce.v2.util.MRApps; import org.apache.hadoop.yarn.api.records.ContainerLaunchContext; -import org.junit.Test; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -46,9 +47,9 @@ public class TestMapReduceChildJVM { private static final Logger LOG = LoggerFactory.getLogger(TestMapReduceChildJVM.class); - @Test (timeout = 30000) + @Test + @Timeout(30000) public void testCommandLine() throws Exception { - MyMRApp app = new MyMRApp(1, 0, true, this.getClass().getName(), true); Configuration conf = new Configuration(); conf.setBoolean(MRConfig.MAPREDUCE_APP_SUBMISSION_CROSS_PLATFORM, true); @@ -56,7 +57,7 @@ public class TestMapReduceChildJVM { app.waitForState(job, JobState.SUCCEEDED); app.verifyCompleted(); - Assert.assertEquals( + Assertions.assertEquals( "[" + MRApps.crossPlatformify("JAVA_HOME") + "/bin/java" + " -Djava.net.preferIPv4Stack=true" + " -Dhadoop.metrics.log.level=WARN " + @@ -71,24 +72,26 @@ public class TestMapReduceChildJVM { " 0" + " 1>/stdout" + " 2>/stderr ]", app.launchCmdList.get(0)); - - Assert.assertTrue("HADOOP_ROOT_LOGGER not set for job", - app.cmdEnvironment.containsKey("HADOOP_ROOT_LOGGER")); - Assert.assertEquals("INFO,console", + + Assertions.assertTrue(app.cmdEnvironment.containsKey("HADOOP_ROOT_LOGGER"), + "HADOOP_ROOT_LOGGER not set for job"); + Assertions.assertEquals("INFO,console", app.cmdEnvironment.get("HADOOP_ROOT_LOGGER")); - Assert.assertTrue("HADOOP_CLIENT_OPTS not set for job", - app.cmdEnvironment.containsKey("HADOOP_CLIENT_OPTS")); - Assert.assertEquals("", app.cmdEnvironment.get("HADOOP_CLIENT_OPTS")); + Assertions.assertTrue(app.cmdEnvironment.containsKey("HADOOP_CLIENT_OPTS"), + "HADOOP_CLIENT_OPTS not set for job"); + Assertions.assertEquals("", app.cmdEnvironment.get("HADOOP_CLIENT_OPTS")); } - @Test (timeout = 30000) + @Test + @Timeout(30000) public void testReduceCommandLineWithSeparateShuffle() throws Exception { final Configuration conf = new Configuration(); conf.setBoolean(MRJobConfig.REDUCE_SEPARATE_SHUFFLE_LOG, true); testReduceCommandLine(conf); } - @Test (timeout = 30000) + @Test + @Timeout(30000) public void testReduceCommandLineWithSeparateCRLAShuffle() throws Exception { final Configuration conf = new Configuration(); conf.setBoolean(MRJobConfig.REDUCE_SEPARATE_SHUFFLE_LOG, true); @@ -97,7 +100,8 @@ public class TestMapReduceChildJVM { testReduceCommandLine(conf); } - @Test (timeout = 30000) + @Test + @Timeout(30000) public void testReduceCommandLine() throws Exception { final Configuration conf = new Configuration(); testReduceCommandLine(conf); @@ -119,7 +123,7 @@ public class TestMapReduceChildJVM { ? 
"shuffleCRLA" : "shuffleCLA"; - Assert.assertEquals( + Assertions.assertEquals( "[" + MRApps.crossPlatformify("JAVA_HOME") + "/bin/java" + " -Djava.net.preferIPv4Stack=true" + " -Dhadoop.metrics.log.level=WARN " + @@ -139,16 +143,17 @@ public class TestMapReduceChildJVM { " 1>/stdout" + " 2>/stderr ]", app.launchCmdList.get(0)); - Assert.assertTrue("HADOOP_ROOT_LOGGER not set for job", - app.cmdEnvironment.containsKey("HADOOP_ROOT_LOGGER")); - Assert.assertEquals("INFO,console", + Assertions.assertTrue(app.cmdEnvironment.containsKey("HADOOP_ROOT_LOGGER"), + "HADOOP_ROOT_LOGGER not set for job"); + Assertions.assertEquals("INFO,console", app.cmdEnvironment.get("HADOOP_ROOT_LOGGER")); - Assert.assertTrue("HADOOP_CLIENT_OPTS not set for job", - app.cmdEnvironment.containsKey("HADOOP_CLIENT_OPTS")); - Assert.assertEquals("", app.cmdEnvironment.get("HADOOP_CLIENT_OPTS")); + Assertions.assertTrue(app.cmdEnvironment.containsKey("HADOOP_CLIENT_OPTS"), + "HADOOP_CLIENT_OPTS not set for job"); + Assertions.assertEquals("", app.cmdEnvironment.get("HADOOP_CLIENT_OPTS")); } - @Test (timeout = 30000) + @Test + @Timeout(30000) public void testCommandLineWithLog4JConifg() throws Exception { MyMRApp app = new MyMRApp(1, 0, true, this.getClass().getName(), true); @@ -161,7 +166,7 @@ public class TestMapReduceChildJVM { app.waitForState(job, JobState.SUCCEEDED); app.verifyCompleted(); - Assert.assertEquals( + Assertions.assertEquals( "[" + MRApps.crossPlatformify("JAVA_HOME") + "/bin/java" + " -Djava.net.preferIPv4Stack=true" + " -Dhadoop.metrics.log.level=WARN " + @@ -203,10 +208,10 @@ public class TestMapReduceChildJVM { MRJobConfig.DEFAULT_HEAP_MEMORY_MB_RATIO); // Verify map and reduce java opts are not set by default - Assert.assertNull("Default map java opts!", - conf.get(MRJobConfig.MAP_JAVA_OPTS)); - Assert.assertNull("Default reduce java opts!", - conf.get(MRJobConfig.REDUCE_JAVA_OPTS)); + Assertions.assertNull(conf.get(MRJobConfig.MAP_JAVA_OPTS), + "Default map java opts!"); + Assertions.assertNull(conf.get(MRJobConfig.REDUCE_JAVA_OPTS), + "Default reduce java opts!"); // Set the memory-mbs and java-opts if (mapMb > 0) { conf.setInt(MRJobConfig.MAP_MEMORY_MB, mapMb); @@ -242,8 +247,8 @@ public class TestMapReduceChildJVM { : MRJobConfig.REDUCE_JAVA_OPTS); heapMb = JobConf.parseMaximumHeapSizeMB(javaOpts); } - Assert.assertEquals("Incorrect heapsize in the command opts", - heapMb, JobConf.parseMaximumHeapSizeMB(cmd)); + Assertions.assertEquals(heapMb, JobConf.parseMaximumHeapSizeMB(cmd), + "Incorrect heapsize in the command opts"); } } @@ -288,13 +293,13 @@ public class TestMapReduceChildJVM { app.waitForState(job, JobState.SUCCEEDED); app.verifyCompleted(); - Assert.assertTrue("HADOOP_ROOT_LOGGER not set for job", - app.cmdEnvironment.containsKey("HADOOP_ROOT_LOGGER")); - Assert.assertEquals("WARN,console", + Assertions.assertTrue(app.cmdEnvironment.containsKey("HADOOP_ROOT_LOGGER"), + "HADOOP_ROOT_LOGGER not set for job"); + Assertions.assertEquals("WARN,console", app.cmdEnvironment.get("HADOOP_ROOT_LOGGER")); - Assert.assertTrue("HADOOP_CLIENT_OPTS not set for job", - app.cmdEnvironment.containsKey("HADOOP_CLIENT_OPTS")); - Assert.assertEquals("test", app.cmdEnvironment.get("HADOOP_CLIENT_OPTS")); + Assertions.assertTrue(app.cmdEnvironment.containsKey("HADOOP_CLIENT_OPTS"), + "HADOOP_CLIENT_OPTS not set for job"); + Assertions.assertEquals("test", app.cmdEnvironment.get("HADOOP_CLIENT_OPTS")); // Try one more. 
app = new MyMRApp(1, 0, true, this.getClass().getName(), true); @@ -304,9 +309,9 @@ public class TestMapReduceChildJVM { app.waitForState(job, JobState.SUCCEEDED); app.verifyCompleted(); - Assert.assertTrue("HADOOP_ROOT_LOGGER not set for job", - app.cmdEnvironment.containsKey("HADOOP_ROOT_LOGGER")); - Assert.assertEquals("trace", + Assertions.assertTrue(app.cmdEnvironment.containsKey("HADOOP_ROOT_LOGGER"), + "HADOOP_ROOT_LOGGER not set for job"); + Assertions.assertEquals("trace", app.cmdEnvironment.get("HADOOP_ROOT_LOGGER")); // Try one using the mapreduce.task.env.var=value syntax @@ -318,9 +323,9 @@ public class TestMapReduceChildJVM { app.waitForState(job, JobState.SUCCEEDED); app.verifyCompleted(); - Assert.assertTrue("HADOOP_ROOT_LOGGER not set for job", - app.cmdEnvironment.containsKey("HADOOP_ROOT_LOGGER")); - Assert.assertEquals("DEBUG,console", + Assertions.assertTrue(app.cmdEnvironment.containsKey("HADOOP_ROOT_LOGGER"), + "HADOOP_ROOT_LOGGER not set for job"); + Assertions.assertEquals("DEBUG,console", app.cmdEnvironment.get("HADOOP_ROOT_LOGGER")); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestShuffleProvider.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestShuffleProvider.java index f44ff81079b..64803a7a111 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestShuffleProvider.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestShuffleProvider.java @@ -53,8 +53,8 @@ import org.apache.hadoop.yarn.server.api.AuxiliaryService; import org.apache.hadoop.yarn.server.api.ApplicationInitializationContext; import org.apache.hadoop.yarn.server.api.ApplicationTerminationContext; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.Test; -import org.junit.Assert; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Assertions; public class TestShuffleProvider { @@ -110,9 +110,12 @@ public class TestShuffleProvider { credentials); Map serviceDataMap = launchCtx.getServiceData(); - Assert.assertNotNull("TestShuffleHandler1 is missing", serviceDataMap.get(TestShuffleHandler1.MAPREDUCE_TEST_SHUFFLE_SERVICEID)); - Assert.assertNotNull("TestShuffleHandler2 is missing", serviceDataMap.get(TestShuffleHandler2.MAPREDUCE_TEST_SHUFFLE_SERVICEID)); - Assert.assertTrue("mismatch number of services in map", serviceDataMap.size() == 3); // 2 that we entered + 1 for the built-in shuffle-provider + Assertions.assertNotNull(serviceDataMap.get(TestShuffleHandler1.MAPREDUCE_TEST_SHUFFLE_SERVICEID), + "TestShuffleHandler1 is missing"); + Assertions.assertNotNull(serviceDataMap.get(TestShuffleHandler2.MAPREDUCE_TEST_SHUFFLE_SERVICEID), + "TestShuffleHandler2 is missing"); + Assertions.assertTrue(serviceDataMap.size() == 3, + "mismatch number of services in map"); // 2 that we entered + 1 for the built-in shuffle-provider } static public class StubbedFS extends RawLocalFileSystem { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java index 15682eeefc6..cc9b4206f7c 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java @@ -20,9 +20,10 @@ package org.apache.hadoop.mapreduce.v2.app.job.impl; import static org.apache.hadoop.test.GenericTestUtils.waitFor; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.spy; import static org.mockito.Mockito.times; @@ -41,10 +42,10 @@ import java.util.concurrent.CopyOnWriteArrayList; import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptFailEvent; import org.apache.hadoop.yarn.util.resource.CustomResourceTypesConfigurationProvider; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.BeforeClass; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.BeforeAll; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; @@ -111,7 +112,7 @@ import org.apache.log4j.AppenderSkeleton; import org.apache.log4j.Level; import org.apache.log4j.Logger; import org.apache.log4j.spi.LoggingEvent; -import org.junit.Test; +import org.junit.jupiter.api.Test; import org.mockito.ArgumentCaptor; import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableList; @@ -151,17 +152,17 @@ public class TestTaskAttempt{ } } - @BeforeClass + @BeforeAll public static void setupBeforeClass() { ResourceUtils.resetResourceTypes(new Configuration()); } - @Before + @BeforeEach public void before() { TaskAttemptImpl.RESOURCE_REQUEST_CACHE.clear(); } - @After + @AfterEach public void tearDown() { ResourceUtils.resetResourceTypes(new Configuration()); } @@ -289,7 +290,7 @@ public class TestTaskAttempt{ ArgumentCaptor arg = ArgumentCaptor.forClass(Event.class); verify(eventHandler, times(2)).handle(arg.capture()); if (!(arg.getAllValues().get(1) instanceof ContainerRequestEvent)) { - Assert.fail("Second Event not of type ContainerRequestEvent"); + Assertions.fail("Second Event not of type ContainerRequestEvent"); } ContainerRequestEvent cre = (ContainerRequestEvent) arg.getAllValues().get(1); @@ -323,7 +324,7 @@ public class TestTaskAttempt{ ArgumentCaptor arg = ArgumentCaptor.forClass(Event.class); verify(eventHandler, times(2)).handle(arg.capture()); if (!(arg.getAllValues().get(1) instanceof ContainerRequestEvent)) { - Assert.fail("Second Event not of type ContainerRequestEvent"); + Assertions.fail("Second Event not of type ContainerRequestEvent"); } Map expected = new HashMap(); expected.put("host1", true); @@ -361,16 +362,16 @@ public class 
TestTaskAttempt{ Job job = app.submit(conf); app.waitForState(job, JobState.RUNNING); Map tasks = job.getTasks(); - Assert.assertEquals("Num tasks is not correct", 2, tasks.size()); + Assertions.assertEquals(2, tasks.size(), "Num tasks is not correct"); Iterator taskIter = tasks.values().iterator(); Task mTask = taskIter.next(); app.waitForState(mTask, TaskState.RUNNING); Task rTask = taskIter.next(); app.waitForState(rTask, TaskState.RUNNING); Map mAttempts = mTask.getAttempts(); - Assert.assertEquals("Num attempts is not correct", 1, mAttempts.size()); + Assertions.assertEquals(1, mAttempts.size(), "Num attempts is not correct"); Map rAttempts = rTask.getAttempts(); - Assert.assertEquals("Num attempts is not correct", 1, rAttempts.size()); + Assertions.assertEquals(1, rAttempts.size(), "Num attempts is not correct"); TaskAttempt mta = mAttempts.values().iterator().next(); TaskAttempt rta = rAttempts.values().iterator().next(); app.waitForState(mta, TaskAttemptState.RUNNING); @@ -392,21 +393,21 @@ public class TestTaskAttempt{ int memoryMb = (int) containerResource.getMemorySize(); int vcores = containerResource.getVirtualCores(); - Assert.assertEquals((int) Math.ceil((float) memoryMb / minContainerSize), + Assertions.assertEquals((int) Math.ceil((float) memoryMb / minContainerSize), counters.findCounter(JobCounter.SLOTS_MILLIS_MAPS).getValue()); - Assert.assertEquals((int) Math.ceil((float) memoryMb / minContainerSize), + Assertions.assertEquals((int) Math.ceil((float) memoryMb / minContainerSize), counters.findCounter(JobCounter.SLOTS_MILLIS_REDUCES).getValue()); - Assert.assertEquals(1, + Assertions.assertEquals(1, counters.findCounter(JobCounter.MILLIS_MAPS).getValue()); - Assert.assertEquals(1, + Assertions.assertEquals(1, counters.findCounter(JobCounter.MILLIS_REDUCES).getValue()); - Assert.assertEquals(memoryMb, + Assertions.assertEquals(memoryMb, counters.findCounter(JobCounter.MB_MILLIS_MAPS).getValue()); - Assert.assertEquals(memoryMb, + Assertions.assertEquals(memoryMb, counters.findCounter(JobCounter.MB_MILLIS_REDUCES).getValue()); - Assert.assertEquals(vcores, + Assertions.assertEquals(vcores, counters.findCounter(JobCounter.VCORES_MILLIS_MAPS).getValue()); - Assert.assertEquals(vcores, + Assertions.assertEquals(vcores, counters.findCounter(JobCounter.VCORES_MILLIS_REDUCES).getValue()); } @@ -452,23 +453,25 @@ public class TestTaskAttempt{ app.waitForState(job, JobState.FAILED); Map tasks = job.getTasks(); - Assert.assertEquals("Num tasks is not correct", 1, tasks.size()); + Assertions.assertEquals(1, tasks.size(), + "Num tasks is not correct"); Task task = tasks.values().iterator().next(); - Assert.assertEquals("Task state not correct", TaskState.FAILED, task - .getReport().getTaskState()); + Assertions.assertEquals(TaskState.FAILED, task.getReport().getTaskState(), + "Task state not correct"); Map attempts = tasks.values().iterator().next() .getAttempts(); - Assert.assertEquals("Num attempts is not correct", 4, attempts.size()); + Assertions.assertEquals(4, attempts.size(), + "Num attempts is not correct"); Iterator it = attempts.values().iterator(); TaskAttemptReport report = it.next().getReport(); - Assert.assertEquals("Attempt state not correct", TaskAttemptState.FAILED, - report.getTaskAttemptState()); - Assert.assertEquals("Diagnostic Information is not Correct", - "Test Diagnostic Event", report.getDiagnosticInfo()); + Assertions.assertEquals(TaskAttemptState.FAILED, report.getTaskAttemptState(), + "Attempt state not correct"); + Assertions.assertEquals("Test 
Diagnostic Event", report.getDiagnosticInfo(), + "Diagnostic Information is not Correct"); report = it.next().getReport(); - Assert.assertEquals("Attempt state not correct", TaskAttemptState.FAILED, - report.getTaskAttemptState()); + Assertions.assertEquals(TaskAttemptState.FAILED, report.getTaskAttemptState(), + "Attempt state not correct "); } private void testTaskAttemptAssignedFailHistory @@ -477,8 +480,8 @@ public class TestTaskAttempt{ Job job = app.submit(conf); app.waitForState(job, JobState.FAILED); Map tasks = job.getTasks(); - Assert.assertTrue("No Ta Started JH Event", app.getTaStartJHEvent()); - Assert.assertTrue("No Ta Failed JH Event", app.getTaFailedJHEvent()); + Assertions.assertTrue(app.getTaStartJHEvent(), "No Ta Started JH Event"); + Assertions.assertTrue(app.getTaFailedJHEvent(), "No Ta Failed JH Event"); } private void testTaskAttemptAssignedKilledHistory @@ -518,8 +521,8 @@ public class TestTaskAttempt{ if (event.getType() == org.apache.hadoop.mapreduce.jobhistory.EventType.MAP_ATTEMPT_FAILED) { TaskAttemptUnsuccessfulCompletion datum = (TaskAttemptUnsuccessfulCompletion) event .getHistoryEvent().getDatum(); - Assert.assertEquals("Diagnostic Information is not Correct", - "Test Diagnostic Event", datum.get(8).toString()); + Assertions.assertEquals("Test Diagnostic Event", datum.get(8).toString(), + "Diagnostic Information is not Correct"); } } }; @@ -638,8 +641,8 @@ public class TestTaskAttempt{ taImpl.handle(new TaskAttemptEvent(attemptId, TaskAttemptEventType.TA_CONTAINER_LAUNCH_FAILED)); assertFalse(eventHandler.internalError); - assertEquals("Task attempt is not assigned on the local node", - Locality.NODE_LOCAL, taImpl.getLocality()); + assertEquals(Locality.NODE_LOCAL, taImpl.getLocality(), + "Task attempt is not assigned on the local node"); } @Test @@ -695,10 +698,10 @@ public class TestTaskAttempt{ .isEqualTo(TaskAttemptState.RUNNING); taImpl.handle(new TaskAttemptEvent(attemptId, TaskAttemptEventType.TA_CONTAINER_CLEANED)); - assertFalse("InternalError occurred trying to handle TA_CONTAINER_CLEANED", - eventHandler.internalError); - assertEquals("Task attempt is not assigned on the local rack", - Locality.RACK_LOCAL, taImpl.getLocality()); + assertFalse(eventHandler.internalError, + "InternalError occurred trying to handle TA_CONTAINER_CLEANED"); + assertEquals(Locality.RACK_LOCAL, taImpl.getLocality(), + "Task attempt is not assigned on the local rack"); } @Test @@ -757,10 +760,10 @@ public class TestTaskAttempt{ .isEqualTo(TaskAttemptState.COMMIT_PENDING); taImpl.handle(new TaskAttemptEvent(attemptId, TaskAttemptEventType.TA_CONTAINER_CLEANED)); - assertFalse("InternalError occurred trying to handle TA_CONTAINER_CLEANED", - eventHandler.internalError); - assertEquals("Task attempt is assigned locally", Locality.OFF_SWITCH, - taImpl.getLocality()); + assertFalse(eventHandler.internalError, + "InternalError occurred trying to handle TA_CONTAINER_CLEANED"); + assertEquals(Locality.OFF_SWITCH,taImpl.getLocality(), + "Task attempt is assigned locally"); } @Test @@ -832,8 +835,8 @@ public class TestTaskAttempt{ assertThat(taImpl.getState()) .withFailMessage("Task attempt is not in FAILED state, still") .isEqualTo(TaskAttemptState.FAILED); - assertFalse("InternalError occurred trying to handle TA_CONTAINER_CLEANED", - eventHandler.internalError); + assertFalse(eventHandler.internalError, + "InternalError occurred trying to handle TA_CONTAINER_CLEANED"); } @@ -883,16 +886,15 @@ public class TestTaskAttempt{ TaskAttemptEventType.TA_SCHEDULE)); taImpl.handle(new 
TaskAttemptDiagnosticsUpdateEvent(attemptId, "Task got killed")); - assertFalse( - "InternalError occurred trying to handle TA_DIAGNOSTICS_UPDATE on assigned task", - eventHandler.internalError); + assertFalse(eventHandler.internalError, + "InternalError occurred trying to handle TA_DIAGNOSTICS_UPDATE on assigned task"); try { taImpl.handle(new TaskAttemptEvent(attemptId, TaskAttemptEventType.TA_KILL)); - Assert.assertTrue("No exception on UNASSIGNED STATE KILL event", true); + Assertions.assertTrue(true, "No exception on UNASSIGNED STATE KILL event"); } catch (Exception e) { - Assert.assertFalse( - "Exception not expected for UNASSIGNED STATE KILL event", true); + Assertions.assertFalse(true, + "Exception not expected for UNASSIGNED STATE KILL event"); } } @@ -962,8 +964,8 @@ public class TestTaskAttempt{ assertThat(taImpl.getState()) .withFailMessage("Task attempt is not in KILLED state, still") .isEqualTo(TaskAttemptState.KILLED); - assertFalse("InternalError occurred trying to handle TA_CONTAINER_CLEANED", - eventHandler.internalError); + assertFalse(eventHandler.internalError, + "InternalError occurred trying to handle TA_CONTAINER_CLEANED"); } @Test @@ -1009,9 +1011,8 @@ public class TestTaskAttempt{ when(container.getNodeHttpAddress()).thenReturn("localhost:0"); taImpl.handle(new TaskAttemptDiagnosticsUpdateEvent(attemptId, "Task got killed")); - assertFalse( - "InternalError occurred trying to handle TA_DIAGNOSTICS_UPDATE on assigned task", - eventHandler.internalError); + assertFalse(eventHandler.internalError, + "InternalError occurred trying to handle TA_DIAGNOSTICS_UPDATE on assigned task"); } @Test @@ -1072,8 +1073,8 @@ public class TestTaskAttempt{ .withFailMessage("Task attempt is not in SUCCEEDED state") .isEqualTo(TaskAttemptState.SUCCEEDED); - assertTrue("Task Attempt finish time is not greater than 0", - taImpl.getFinishTime() > 0); + assertTrue(taImpl.getFinishTime() > 0, + "Task Attempt finish time is not greater than 0"); Long finishTime = taImpl.getFinishTime(); Thread.sleep(5); @@ -1084,9 +1085,9 @@ public class TestTaskAttempt{ .withFailMessage("Task attempt is not in FAILED state") .isEqualTo(TaskAttemptState.FAILED); - assertEquals("After TA_TOO_MANY_FETCH_FAILURE," - + " Task attempt finish time is not the same ", - finishTime, Long.valueOf(taImpl.getFinishTime())); + assertEquals(finishTime, Long.valueOf(taImpl.getFinishTime()), + "After TA_TOO_MANY_FETCH_FAILURE," + + " Task attempt finish time is not the same "); } private void containerKillBeforeAssignment(boolean scheduleAttempt) @@ -1114,7 +1115,7 @@ public class TestTaskAttempt{ assertThat(taImpl.getInternalState()) .withFailMessage("Task attempt's internal state is not KILLED") .isEqualTo(TaskAttemptStateInternal.KILLED); - assertFalse("InternalError occurred", eventHandler.internalError); + assertFalse(eventHandler.internalError, "InternalError occurred"); TaskEvent event = eventHandler.lastTaskEvent; assertEquals(TaskEventType.T_ATTEMPT_KILLED, event.getType()); // In NEW state, new map attempt should not be rescheduled. 
@@ -1238,8 +1239,8 @@ public class TestTaskAttempt{ .isEqualTo(TaskAttemptState.RUNNING); taImpl.handle(new TaskAttemptEvent(attemptId, TaskAttemptEventType.TA_KILL)); - assertFalse("InternalError occurred trying to handle TA_KILL", - eventHandler.internalError); + assertFalse(eventHandler.internalError, + "InternalError occurred trying to handle TA_KILL"); assertThat(taImpl.getInternalState()) .withFailMessage("Task should be in KILL_CONTAINER_CLEANUP state") .isEqualTo(TaskAttemptStateInternal.KILL_CONTAINER_CLEANUP); @@ -1301,8 +1302,8 @@ public class TestTaskAttempt{ .isEqualTo(TaskAttemptStateInternal.COMMIT_PENDING); taImpl.handle(new TaskAttemptEvent(attemptId, TaskAttemptEventType.TA_KILL)); - assertFalse("InternalError occurred trying to handle TA_KILL", - eventHandler.internalError); + assertFalse(eventHandler.internalError, + "InternalError occurred trying to handle TA_KILL"); assertThat(taImpl.getInternalState()) .withFailMessage("Task should be in KILL_CONTAINER_CLEANUP state") .isEqualTo(TaskAttemptStateInternal.KILL_CONTAINER_CLEANUP); @@ -1348,7 +1349,7 @@ public class TestTaskAttempt{ .withFailMessage("Task attempt is not in KILLED state") .isEqualTo(TaskAttemptState.KILLED); - assertFalse("InternalError occurred", eventHandler.internalError); + assertFalse(eventHandler.internalError, "InternalError occurred"); } @Test @@ -1359,32 +1360,30 @@ public class TestTaskAttempt{ taImpl.handle(new TaskAttemptEvent(taImpl.getID(), TaskAttemptEventType.TA_DONE)); - assertEquals("Task attempt is not in SUCCEEDED state", - TaskAttemptState.SUCCEEDED, taImpl.getState()); - assertEquals("Task attempt's internal state is not " + - "SUCCESS_FINISHING_CONTAINER", - TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER, - taImpl.getInternalState()); + assertEquals(TaskAttemptState.SUCCEEDED, taImpl.getState(), + "Task attempt is not in SUCCEEDED state"); + assertEquals(TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER, + taImpl.getInternalState(), "Task attempt's internal state is not " + + "SUCCESS_FINISHING_CONTAINER"); // If the map only task is killed when it is in SUCCESS_FINISHING_CONTAINER // state, the state will move to SUCCESS_CONTAINER_CLEANUP taImpl.handle(new TaskAttemptEvent(taImpl.getID(), TaskAttemptEventType.TA_KILL)); - assertEquals("Task attempt is not in SUCCEEDED state", - TaskAttemptState.SUCCEEDED, taImpl.getState()); - assertEquals("Task attempt's internal state is not " + - "SUCCESS_CONTAINER_CLEANUP", - TaskAttemptStateInternal.SUCCESS_CONTAINER_CLEANUP, - taImpl.getInternalState()); + assertEquals(TaskAttemptState.SUCCEEDED, taImpl.getState(), + "Task attempt is not in SUCCEEDED state"); + assertEquals(TaskAttemptStateInternal.SUCCESS_CONTAINER_CLEANUP, + taImpl.getInternalState(), "Task attempt's internal state is not " + + "SUCCESS_CONTAINER_CLEANUP"); taImpl.handle(new TaskAttemptEvent(taImpl.getID(), TaskAttemptEventType.TA_CONTAINER_CLEANED)); - assertEquals("Task attempt is not in SUCCEEDED state", - TaskAttemptState.SUCCEEDED, taImpl.getState()); - assertEquals("Task attempt's internal state is not SUCCEEDED state", - TaskAttemptStateInternal.SUCCEEDED, taImpl.getInternalState()); + assertEquals(TaskAttemptState.SUCCEEDED, taImpl.getState(), + "Task attempt is not in SUCCEEDED state"); + assertEquals(TaskAttemptStateInternal.SUCCEEDED, taImpl.getInternalState(), + "Task attempt's internal state is not SUCCEEDED state"); - assertFalse("InternalError occurred", eventHandler.internalError); + assertFalse(eventHandler.internalError, "InternalError occurred"); 
} @Test @@ -1414,7 +1413,7 @@ public class TestTaskAttempt{ assertThat(taImpl.getInternalState()) .withFailMessage("Task attempt's internal state is not KILLED") .isEqualTo(TaskAttemptStateInternal.KILLED); - assertFalse("InternalError occurred", eventHandler.internalError); + assertFalse(eventHandler.internalError, "InternalError occurred"); TaskEvent event = eventHandler.lastTaskEvent; assertEquals(TaskEventType.T_ATTEMPT_KILLED, event.getType()); // Send an attempt killed event to TaskImpl forwarding the same reschedule @@ -1430,22 +1429,21 @@ public class TestTaskAttempt{ taImpl.handle(new TaskAttemptEvent(taImpl.getID(), TaskAttemptEventType.TA_DONE)); - assertEquals("Task attempt is not in SUCCEEDED state", - TaskAttemptState.SUCCEEDED, taImpl.getState()); - assertEquals("Task attempt's internal state is not " + - "SUCCESS_FINISHING_CONTAINER", - TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER, - taImpl.getInternalState()); + assertEquals(TaskAttemptState.SUCCEEDED, taImpl.getState(), + "Task attempt is not in SUCCEEDED state"); + assertEquals(TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER, + taImpl.getInternalState(), "Task attempt's internal state is not " + + "SUCCESS_FINISHING_CONTAINER"); taImpl.handle(new TaskAttemptEvent(taImpl.getID(), TaskAttemptEventType.TA_CONTAINER_CLEANED)); // Succeeded taImpl.handle(new TaskAttemptKillEvent(taImpl.getID(),"", true)); - assertEquals("Task attempt is not in SUCCEEDED state", - TaskAttemptState.SUCCEEDED, taImpl.getState()); - assertEquals("Task attempt's internal state is not SUCCEEDED", - TaskAttemptStateInternal.SUCCEEDED, taImpl.getInternalState()); - assertFalse("InternalError occurred", eventHandler.internalError); + assertEquals(TaskAttemptState.SUCCEEDED, taImpl.getState(), + "Task attempt is not in SUCCEEDED state"); + assertEquals(TaskAttemptStateInternal.SUCCEEDED, taImpl.getInternalState(), + "Task attempt's internal state is not SUCCEEDED"); + assertFalse(eventHandler.internalError, "InternalError occurred"); TaskEvent event = eventHandler.lastTaskEvent; assertEquals(TaskEventType.T_ATTEMPT_SUCCEEDED, event.getType()); } @@ -1498,7 +1496,7 @@ public class TestTaskAttempt{ .withFailMessage("Task attempt is not in FAILED state") .isEqualTo(TaskAttemptState.FAILED); - assertFalse("InternalError occurred", eventHandler.internalError); + assertFalse(eventHandler.internalError, "InternalError occurred"); } @Test @@ -1531,7 +1529,7 @@ public class TestTaskAttempt{ .withFailMessage("Task attempt is not in FAILED state") .isEqualTo(TaskAttemptState.FAILED); - assertFalse("InternalError occurred", eventHandler.internalError); + assertFalse(eventHandler.internalError, "InternalError occurred"); } @Test @@ -1561,7 +1559,7 @@ public class TestTaskAttempt{ "SUCCESS_FINISHING_CONTAINER") .isEqualTo(TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER); - assertFalse("InternalError occurred", eventHandler.internalError); + assertFalse(eventHandler.internalError, "InternalError occurred"); } @Test @@ -1592,7 +1590,7 @@ public class TestTaskAttempt{ "SUCCESS_CONTAINER_CLEANUP") .isEqualTo(TaskAttemptStateInternal.SUCCESS_CONTAINER_CLEANUP); - assertFalse("InternalError occurred", eventHandler.internalError); + assertFalse(eventHandler.internalError, "InternalError occurred"); } @Test @@ -1619,7 +1617,7 @@ public class TestTaskAttempt{ "FAIL_CONTAINER_CLEANUP") .isEqualTo(TaskAttemptStateInternal.FAIL_CONTAINER_CLEANUP); - assertFalse("InternalError occurred", eventHandler.internalError); + assertFalse(eventHandler.internalError, 
"InternalError occurred"); } @Test @@ -1636,8 +1634,8 @@ public class TestTaskAttempt{ ResourceInformation resourceInfo = getResourceInfoFromContainerRequest(taImpl, eventHandler). getResourceInformation(CUSTOM_RESOURCE_NAME); - assertEquals("Expecting the default unit (G)", - "G", resourceInfo.getUnits()); + assertEquals("G", resourceInfo.getUnits(), + "Expecting the default unit (G)"); assertEquals(7L, resourceInfo.getValue()); } @@ -1654,8 +1652,8 @@ public class TestTaskAttempt{ ResourceInformation resourceInfo = getResourceInfoFromContainerRequest(taImpl, eventHandler). getResourceInformation(CUSTOM_RESOURCE_NAME); - assertEquals("Expecting the specified unit (m)", - "m", resourceInfo.getUnits()); + assertEquals("m", resourceInfo.getUnits(), + "Expecting the specified unit (m)"); assertEquals(3L, resourceInfo.getValue()); } @@ -1752,18 +1750,20 @@ public class TestTaskAttempt{ } } - @Test(expected=IllegalArgumentException.class) + @Test public void testReducerMemoryRequestMultipleName() { - EventHandler eventHandler = mock(EventHandler.class); - Clock clock = SystemClock.getInstance(); - JobConf jobConf = new JobConf(); - for (String memoryName : ImmutableList.of( - MRJobConfig.RESOURCE_TYPE_NAME_MEMORY, - MRJobConfig.RESOURCE_TYPE_ALTERNATIVE_NAME_MEMORY)) { - jobConf.set(MRJobConfig.REDUCE_RESOURCE_TYPE_PREFIX + memoryName, - "3Gi"); - } - createReduceTaskAttemptImplForTest(eventHandler, clock, jobConf); + assertThrows(IllegalArgumentException.class, () -> { + EventHandler eventHandler = mock(EventHandler.class); + Clock clock = SystemClock.getInstance(); + JobConf jobConf = new JobConf(); + for (String memoryName : ImmutableList.of( + MRJobConfig.RESOURCE_TYPE_NAME_MEMORY, + MRJobConfig.RESOURCE_TYPE_ALTERNATIVE_NAME_MEMORY)) { + jobConf.set(MRJobConfig.REDUCE_RESOURCE_TYPE_PREFIX + memoryName, + "3Gi"); + } + createReduceTaskAttemptImplForTest(eventHandler, clock, jobConf); + }); } @Test @@ -1853,21 +1853,24 @@ public class TestTaskAttempt{ containerRequestEvents.add((ContainerRequestEvent) e); } } - assertEquals("Expected one ContainerRequestEvent after scheduling " - + "task attempt", 1, containerRequestEvents.size()); + assertEquals(1, containerRequestEvents.size(), + "Expected one ContainerRequestEvent after scheduling " + + "task attempt"); return containerRequestEvents.get(0).getCapability(); } - @Test(expected=IllegalArgumentException.class) + @Test public void testReducerCustomResourceTypeWithInvalidUnit() { - initResourceTypes(); - EventHandler eventHandler = mock(EventHandler.class); - Clock clock = SystemClock.getInstance(); - JobConf jobConf = new JobConf(); - jobConf.set(MRJobConfig.REDUCE_RESOURCE_TYPE_PREFIX - + CUSTOM_RESOURCE_NAME, "3z"); - createReduceTaskAttemptImplForTest(eventHandler, clock, jobConf); + assertThrows(IllegalArgumentException.class, () -> { + initResourceTypes(); + EventHandler eventHandler = mock(EventHandler.class); + Clock clock = SystemClock.getInstance(); + JobConf jobConf = new JobConf(); + jobConf.set(MRJobConfig.REDUCE_RESOURCE_TYPE_PREFIX + + CUSTOM_RESOURCE_NAME, "3z"); + createReduceTaskAttemptImplForTest(eventHandler, clock, jobConf); + }); } @Test @@ -1882,22 +1885,19 @@ public class TestTaskAttempt{ // move in two steps to the desired state (cannot get there directly) taImpl.handle(new TaskAttemptEvent(taImpl.getID(), TaskAttemptEventType.TA_DONE)); - assertEquals("Task attempt's internal state is not " + - "SUCCESS_FINISHING_CONTAINER", - TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER, - taImpl.getInternalState()); + 
assertEquals(TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER, + taImpl.getInternalState(), "Task attempt's internal state is not " + + "SUCCESS_FINISHING_CONTAINER"); taImpl.handle(new TaskAttemptEvent(taImpl.getID(), TaskAttemptEventType.TA_TIMED_OUT)); - assertEquals("Task attempt's internal state is not " + - "SUCCESS_CONTAINER_CLEANUP", - TaskAttemptStateInternal.SUCCESS_CONTAINER_CLEANUP, - taImpl.getInternalState()); + assertEquals(TaskAttemptStateInternal.SUCCESS_CONTAINER_CLEANUP, + taImpl.getInternalState(), "Task attempt's internal state is not " + + "SUCCESS_CONTAINER_CLEANUP"); taImpl.handle(new TaskAttemptKillEvent(mapTAId, "", true)); - assertEquals("Task attempt is not in KILLED state", - TaskAttemptState.KILLED, - taImpl.getState()); + assertEquals(TaskAttemptState.KILLED, + taImpl.getState(), "Task attempt is not in KILLED state"); } @Test @@ -1912,24 +1912,21 @@ public class TestTaskAttempt{ // move in two steps to the desired state (cannot get there directly) taImpl.handle(new TaskAttemptEvent(taImpl.getID(), TaskAttemptEventType.TA_DONE)); - assertEquals("Task attempt's internal state is not " + - "SUCCESS_FINISHING_CONTAINER", - TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER, - taImpl.getInternalState()); + assertEquals(TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER, + taImpl.getInternalState(), "Task attempt's internal state is not " + + "SUCCESS_FINISHING_CONTAINER"); taImpl.handle(new TaskAttemptEvent(taImpl.getID(), TaskAttemptEventType.TA_TIMED_OUT)); - assertEquals("Task attempt's internal state is not " + - "SUCCESS_CONTAINER_CLEANUP", - TaskAttemptStateInternal.SUCCESS_CONTAINER_CLEANUP, - taImpl.getInternalState()); + assertEquals(TaskAttemptStateInternal.SUCCESS_CONTAINER_CLEANUP, + taImpl.getInternalState(), "Task attempt's internal state is not " + + "SUCCESS_CONTAINER_CLEANUP"); taImpl.handle(new TaskAttemptTooManyFetchFailureEvent(taImpl.getID(), reduceTAId, "Host")); - assertEquals("Task attempt is not in FAILED state", - TaskAttemptState.FAILED, - taImpl.getState()); - assertFalse("InternalError occurred", eventHandler.internalError); + assertEquals(TaskAttemptState.FAILED, + taImpl.getState(), "Task attempt is not in FAILED state"); + assertFalse(eventHandler.internalError, "InternalError occurred"); } private void initResourceTypes() { @@ -1951,17 +1948,15 @@ public class TestTaskAttempt{ taImpl.handle(new TaskAttemptEvent(taImpl.getID(), TaskAttemptEventType.TA_DONE)); - assertEquals("Task attempt's internal state is not " + - "SUCCESS_FINISHING_CONTAINER", - TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER, - taImpl.getInternalState()); + assertEquals(TaskAttemptStateInternal.SUCCESS_FINISHING_CONTAINER, + taImpl.getInternalState(), "Task attempt's internal state is not " + + "SUCCESS_FINISHING_CONTAINER"); taImpl.handle(new TaskAttemptTooManyFetchFailureEvent(taImpl.getID(), reduceTAId, "Host")); - assertEquals("Task attempt is not in FAILED state", - TaskAttemptState.FAILED, - taImpl.getState()); - assertFalse("InternalError occurred", eventHandler.internalError); + assertEquals(TaskAttemptState.FAILED, + taImpl.getState(), "Task attempt is not in FAILED state"); + assertFalse(eventHandler.internalError, "InternalError occurred"); } private void setupTaskAttemptFinishingMonitor( diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java index 585b949d7f9..3939e2e5153 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java @@ -27,8 +27,8 @@ import java.util.Arrays; import java.util.HashMap; import java.util.Map; -import org.junit.After; -import org.junit.Assert; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.fs.FileStatus; @@ -58,12 +58,12 @@ import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ContainerLaunchContext; import org.apache.hadoop.yarn.event.EventHandler; import org.apache.hadoop.yarn.util.SystemClock; -import org.junit.Test; +import org.junit.jupiter.api.Test; @SuppressWarnings({"rawtypes"}) public class TestTaskAttemptContainerRequest { - @After + @AfterEach public void cleanup() { UserGroupInformation.reset(); } @@ -114,7 +114,8 @@ public class TestTaskAttemptContainerRequest { mock(WrappedJvmID.class), taListener, credentials); - Assert.assertEquals("ACLs mismatch", acls, launchCtx.getApplicationACLs()); + Assertions.assertEquals(acls, launchCtx.getApplicationACLs(), + "ACLs mismatch"); Credentials launchCredentials = new Credentials(); DataInputByteBuffer dibb = new DataInputByteBuffer(); @@ -125,17 +126,18 @@ public class TestTaskAttemptContainerRequest { for (Token token : credentials.getAllTokens()) { Token launchToken = launchCredentials.getToken(token.getService()); - Assert.assertNotNull("Token " + token.getService() + " is missing", - launchToken); - Assert.assertEquals("Token " + token.getService() + " mismatch", - token, launchToken); + Assertions.assertNotNull(launchToken, + "Token " + token.getService() + " is missing"); + Assertions.assertEquals(token, launchToken, + "Token " + token.getService() + " mismatch"); } // verify the secret key is in the launch context - Assert.assertNotNull("Secret key missing", - launchCredentials.getSecretKey(SECRET_KEY_ALIAS)); - Assert.assertTrue("Secret key mismatch", Arrays.equals(SECRET_KEY, - launchCredentials.getSecretKey(SECRET_KEY_ALIAS))); + Assertions.assertNotNull(launchCredentials.getSecretKey(SECRET_KEY_ALIAS), + "Secret key missing"); + Assertions.assertTrue(Arrays.equals(SECRET_KEY, + launchCredentials.getSecretKey(SECRET_KEY_ALIAS)), + "Secret key mismatch"); } static public class StubbedFS extends RawLocalFileSystem { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskImpl.java index 1225c4308cc..8cad334d124 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskImpl.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskImpl.java @@ -17,10 +17,10 @@ */ package 
org.apache.hadoop.mapreduce.v2.app.job.impl; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; @@ -65,9 +65,9 @@ import org.apache.hadoop.yarn.event.InlineDispatcher; import org.apache.hadoop.yarn.util.Clock; import org.apache.hadoop.yarn.util.Records; import org.apache.hadoop.yarn.util.SystemClock; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -234,7 +234,7 @@ public class TestTaskImpl { } - @Before + @BeforeEach @SuppressWarnings("unchecked") public void setup() { dispatcher = new InlineDispatcher(); @@ -273,7 +273,7 @@ public class TestTaskImpl { startCount, metrics, appContext, taskType); } - @After + @AfterEach public void teardown() { taskAttempts.clear(); } @@ -510,7 +510,7 @@ public class TestTaskImpl { assertTaskScheduledState(); } - @Test + @Test public void testTaskProgress() { LOG.info("--- START: testTaskProgress ---"); mockTask = createMockTask(TaskType.MAP); @@ -587,10 +587,10 @@ public class TestTaskImpl { mockTask.handle(new TaskTAttemptEvent(getLastAttempt().getAttemptId(), TaskEventType.T_ATTEMPT_SUCCEEDED)); - assertFalse("First attempt should not commit", - mockTask.canCommit(taskAttempts.get(0).getAttemptId())); - assertTrue("Second attempt should commit", - mockTask.canCommit(getLastAttempt().getAttemptId())); + assertFalse(mockTask.canCommit(taskAttempts.get(0).getAttemptId()), + "First attempt should not commit"); + assertTrue(mockTask.canCommit(getLastAttempt().getAttemptId()), + "Second attempt should commit"); assertTaskSucceededState(); } @@ -879,7 +879,8 @@ public class TestTaskImpl { baseAttempt.setProgress(1.0f); Counters taskCounters = mockTask.getCounters(); - assertEquals("wrong counters for task", specAttemptCounters, taskCounters); + assertEquals(specAttemptCounters, taskCounters, + "wrong counters for task"); } public static class MockTaskAttemptEventHandler implements EventHandler { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java index dda93b682b3..3d8f2b849b8 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java @@ -44,7 +44,7 @@ import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationRequest; import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationResponse; import org.apache.hadoop.yarn.api.protocolrecords.RestartContainerResponse; import org.apache.hadoop.yarn.api.protocolrecords.RollbackResponse; -import 
org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.ipc.Server; @@ -93,7 +93,8 @@ import org.apache.hadoop.yarn.security.ContainerTokenIdentifier; import org.apache.hadoop.yarn.server.api.records.MasterKey; import org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM; import org.apache.hadoop.yarn.util.Records; -import org.junit.Test; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -107,9 +108,9 @@ public class TestContainerLauncher { static final Logger LOG = LoggerFactory.getLogger(TestContainerLauncher.class); - @Test (timeout = 10000) + @Test + @Timeout(10000) public void testPoolSize() throws InterruptedException { - ApplicationId appId = ApplicationId.newInstance(12345, 67); ApplicationAttemptId appAttemptId = ApplicationAttemptId.newInstance( appId, 3); @@ -127,10 +128,10 @@ public class TestContainerLauncher { // No events yet assertThat(containerLauncher.initialPoolSize).isEqualTo( MRJobConfig.DEFAULT_MR_AM_CONTAINERLAUNCHER_THREADPOOL_INITIAL_SIZE); - Assert.assertEquals(0, threadPool.getPoolSize()); - Assert.assertEquals(containerLauncher.initialPoolSize, + Assertions.assertEquals(0, threadPool.getPoolSize()); + Assertions.assertEquals(containerLauncher.initialPoolSize, threadPool.getCorePoolSize()); - Assert.assertNull(containerLauncher.foundErrors); + Assertions.assertNull(containerLauncher.foundErrors); containerLauncher.expectedCorePoolSize = containerLauncher.initialPoolSize; for (int i = 0; i < 10; i++) { @@ -141,8 +142,8 @@ public class TestContainerLauncher { ContainerLauncher.EventType.CONTAINER_REMOTE_LAUNCH)); } waitForEvents(containerLauncher, 10); - Assert.assertEquals(10, threadPool.getPoolSize()); - Assert.assertNull(containerLauncher.foundErrors); + Assertions.assertEquals(10, threadPool.getPoolSize()); + Assertions.assertNull(containerLauncher.foundErrors); // Same set of hosts, so no change containerLauncher.finishEventHandling = true; @@ -153,7 +154,7 @@ public class TestContainerLauncher { + ". 
Timeout is " + timeOut); Thread.sleep(1000); } - Assert.assertEquals(10, containerLauncher.numEventsProcessed.get()); + Assertions.assertEquals(10, containerLauncher.numEventsProcessed.get()); containerLauncher.finishEventHandling = false; for (int i = 0; i < 10; i++) { ContainerId containerId = ContainerId.newContainerId(appAttemptId, @@ -165,8 +166,8 @@ public class TestContainerLauncher { ContainerLauncher.EventType.CONTAINER_REMOTE_LAUNCH)); } waitForEvents(containerLauncher, 20); - Assert.assertEquals(10, threadPool.getPoolSize()); - Assert.assertNull(containerLauncher.foundErrors); + Assertions.assertEquals(10, threadPool.getPoolSize()); + Assertions.assertNull(containerLauncher.foundErrors); // Different hosts, there should be an increase in core-thread-pool size to // 21(11hosts+10buffer) @@ -179,8 +180,8 @@ public class TestContainerLauncher { containerId, "host11:1234", null, ContainerLauncher.EventType.CONTAINER_REMOTE_LAUNCH)); waitForEvents(containerLauncher, 21); - Assert.assertEquals(11, threadPool.getPoolSize()); - Assert.assertNull(containerLauncher.foundErrors); + Assertions.assertEquals(11, threadPool.getPoolSize()); + Assertions.assertNull(containerLauncher.foundErrors); containerLauncher.stop(); @@ -194,7 +195,8 @@ public class TestContainerLauncher { assertThat(containerLauncher.initialPoolSize).isEqualTo(20); } - @Test(timeout = 5000) + @Test + @Timeout(5000) public void testPoolLimits() throws InterruptedException { ApplicationId appId = ApplicationId.newInstance(12345, 67); ApplicationAttemptId appAttemptId = ApplicationAttemptId.newInstance( @@ -222,8 +224,8 @@ public class TestContainerLauncher { ContainerLauncher.EventType.CONTAINER_REMOTE_LAUNCH)); } waitForEvents(containerLauncher, 10); - Assert.assertEquals(10, threadPool.getPoolSize()); - Assert.assertNull(containerLauncher.foundErrors); + Assertions.assertEquals(10, threadPool.getPoolSize()); + Assertions.assertNull(containerLauncher.foundErrors); // 4 more different hosts, but thread pool size should be capped at 12 containerLauncher.expectedCorePoolSize = 12 ; @@ -233,14 +235,14 @@ public class TestContainerLauncher { ContainerLauncher.EventType.CONTAINER_REMOTE_LAUNCH)); } waitForEvents(containerLauncher, 12); - Assert.assertEquals(12, threadPool.getPoolSize()); - Assert.assertNull(containerLauncher.foundErrors); + Assertions.assertEquals(12, threadPool.getPoolSize()); + Assertions.assertNull(containerLauncher.foundErrors); // Make some threads ideal so that remaining events are also done. containerLauncher.finishEventHandling = true; waitForEvents(containerLauncher, 14); - Assert.assertEquals(12, threadPool.getPoolSize()); - Assert.assertNull(containerLauncher.foundErrors); + Assertions.assertEquals(12, threadPool.getPoolSize()); + Assertions.assertNull(containerLauncher.foundErrors); containerLauncher.stop(); } @@ -254,13 +256,13 @@ public class TestContainerLauncher { + ". 
It is now " + containerLauncher.numEventsProcessing.get()); Thread.sleep(1000); } - Assert.assertEquals(expectedNumEvents, + Assertions.assertEquals(expectedNumEvents, containerLauncher.numEventsProcessing.get()); } - @Test(timeout = 15000) + @Test + @Timeout(15000) public void testSlowNM() throws Exception { - conf = new Configuration(); int maxAttempts = 1; conf.setInt(MRJobConfig.MAP_MAX_ATTEMPTS, maxAttempts); @@ -290,15 +292,16 @@ public class TestContainerLauncher { app.waitForState(job, JobState.RUNNING); Map tasks = job.getTasks(); - Assert.assertEquals("Num tasks is not correct", 1, tasks.size()); + Assertions.assertEquals(1, tasks.size(), + "Num tasks is not correct"); Task task = tasks.values().iterator().next(); app.waitForState(task, TaskState.SCHEDULED); Map attempts = tasks.values().iterator() .next().getAttempts(); - Assert.assertEquals("Num attempts is not correct", maxAttempts, - attempts.size()); + Assertions.assertEquals(maxAttempts, attempts.size(), + "Num attempts is not correct"); TaskAttempt attempt = attempts.values().iterator().next(); app.waitForInternalState((TaskAttemptImpl) attempt, @@ -309,9 +312,9 @@ public class TestContainerLauncher { String diagnostics = attempt.getDiagnostics().toString(); LOG.info("attempt.getDiagnostics: " + diagnostics); - Assert.assertTrue(diagnostics.contains("Container launch failed for " + Assertions.assertTrue(diagnostics.contains("Container launch failed for " + "container_0_0000_01_000000 : ")); - Assert + Assertions .assertTrue(diagnostics .contains("java.net.SocketTimeoutException: 3000 millis timeout while waiting for channel")); @@ -440,7 +443,7 @@ public class TestContainerLauncher { MRApp.newContainerTokenIdentifier(request.getContainerToken()); // Validate that the container is what RM is giving. 
- Assert.assertEquals(MRApp.NM_HOST + ":" + MRApp.NM_PORT, + Assertions.assertEquals(MRApp.NM_HOST + ":" + MRApp.NM_PORT, containerTokenIdentifier.getNmHostAddress()); StartContainersResponse response = recordFactory diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java index 88ba8943ceb..136eda213f4 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java @@ -79,8 +79,9 @@ import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.factories.RecordFactory; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; import org.apache.hadoop.yarn.security.ContainerTokenIdentifier; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.mockito.ArgumentCaptor; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -94,7 +95,7 @@ public class TestContainerLauncherImpl { private Map serviceResponse = new HashMap(); - @Before + @BeforeEach public void setup() throws IOException { serviceResponse.clear(); serviceResponse.put(ShuffleHandler.MAPREDUCE_SHUFFLE_SERVICEID, @@ -168,7 +169,8 @@ public class TestContainerLauncherImpl { return MRBuilderUtils.newTaskAttemptId(tID, id); } - @Test(timeout = 5000) + @Test + @Timeout(5000) public void testHandle() throws Exception { LOG.info("STARTING testHandle"); AppContext mockContext = mock(AppContext.class); @@ -226,7 +228,8 @@ public class TestContainerLauncherImpl { } } - @Test(timeout = 5000) + @Test + @Timeout(5000) public void testOutOfOrder() throws Exception { LOG.info("STARTING testOutOfOrder"); AppContext mockContext = mock(AppContext.class); @@ -300,7 +303,8 @@ public class TestContainerLauncherImpl { } } - @Test(timeout = 5000) + @Test + @Timeout(5000) public void testMyShutdown() throws Exception { LOG.info("in test Shutdown"); @@ -352,7 +356,8 @@ public class TestContainerLauncherImpl { } @SuppressWarnings({ "rawtypes", "unchecked" }) - @Test(timeout = 5000) + @Test + @Timeout(5000) public void testContainerCleaned() throws Exception { LOG.info("STARTING testContainerCleaned"); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/local/TestLocalContainerAllocator.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/local/TestLocalContainerAllocator.java index de4977205b0..b5bf4b6e2ff 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/local/TestLocalContainerAllocator.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/local/TestLocalContainerAllocator.java @@ -69,8 +69,8 @@ import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import 
org.apache.hadoop.yarn.ipc.RPCUtil; import org.apache.hadoop.yarn.security.AMRMTokenIdentifier; import org.apache.hadoop.yarn.util.resource.Resources; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; import org.mockito.ArgumentCaptor; public class TestLocalContainerAllocator { @@ -90,7 +90,7 @@ public class TestLocalContainerAllocator { lca.start(); try { lca.heartbeat(); - Assert.fail("heartbeat was supposed to throw"); + Assertions.fail("heartbeat was supposed to throw"); } catch (YarnException e) { // YarnException is expected } finally { @@ -104,7 +104,7 @@ public class TestLocalContainerAllocator { lca.start(); try { lca.heartbeat(); - Assert.fail("heartbeat was supposed to throw"); + Assertions.fail("heartbeat was supposed to throw"); } catch (YarnRuntimeException e) { // YarnRuntimeException is expected } finally { @@ -172,14 +172,13 @@ public class TestLocalContainerAllocator { } } - Assert.assertEquals("too many AMRM tokens", 1, tokenCount); - Assert.assertArrayEquals("token identifier not updated", - newToken.getIdentifier(), ugiToken.getIdentifier()); - Assert.assertArrayEquals("token password not updated", - newToken.getPassword(), ugiToken.getPassword()); - Assert.assertEquals("AMRM token service not updated", - new Text(ClientRMProxy.getAMRMTokenService(conf)), - ugiToken.getService()); + Assertions.assertEquals(1, tokenCount, "too many AMRM tokens"); + Assertions.assertArrayEquals(newToken.getIdentifier(), ugiToken.getIdentifier(), + "token identifier not updated"); + Assertions.assertArrayEquals(newToken.getPassword(), ugiToken.getPassword(), + "token password not updated"); + Assertions.assertEquals(new Text(ClientRMProxy.getAMRMTokenService(conf)), + ugiToken.getService(), "AMRM token service not updated"); } @Test @@ -202,7 +201,7 @@ public class TestLocalContainerAllocator { verify(eventHandler, times(1)).handle(containerAssignedCaptor.capture()); Container container = containerAssignedCaptor.getValue().getContainer(); Resource containerResource = container.getResource(); - Assert.assertNotNull(containerResource); + Assertions.assertNotNull(containerResource); assertThat(containerResource.getMemorySize()).isEqualTo(0); assertThat(containerResource.getVirtualCores()).isEqualTo(0); } @@ -282,8 +281,8 @@ public class TestLocalContainerAllocator { @Override public AllocateResponse allocate(AllocateRequest request) throws YarnException, IOException { - Assert.assertEquals("response ID mismatch", - responseId, request.getResponseId()); + Assertions.assertEquals(responseId, request.getResponseId(), + "response ID mismatch"); ++responseId; org.apache.hadoop.yarn.api.records.Token yarnToken = null; if (amToken != null) { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/metrics/TestMRAppMetrics.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/metrics/TestMRAppMetrics.java index 3fd4cb028a5..eaa06b65810 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/metrics/TestMRAppMetrics.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/metrics/TestMRAppMetrics.java @@ -25,19 +25,20 @@ import org.apache.hadoop.metrics2.MetricsRecordBuilder; import static 
org.apache.hadoop.test.MetricsAsserts.*; import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; -import org.junit.After; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Test; import static org.mockito.Mockito.*; public class TestMRAppMetrics { - @After + @AfterEach public void tearDown() { DefaultMetricsSystem.shutdown(); } - @Test public void testNames() { + @Test + public void testNames() { Job job = mock(Job.class); Task mapTask = mock(Task.class); when(mapTask.getType()).thenReturn(TaskType.MAP); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMCommunicator.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMCommunicator.java index 52db7b5f770..43154339e37 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMCommunicator.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMCommunicator.java @@ -23,7 +23,8 @@ import org.apache.hadoop.mapreduce.v2.app.client.ClientService; import org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.AllocatorRunnable; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import org.apache.hadoop.yarn.util.Clock; -import org.junit.Test; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.mockito.stubbing.Answer; import static org.mockito.Mockito.doThrow; @@ -45,7 +46,8 @@ public class TestRMCommunicator { } } - @Test(timeout = 2000) + @Test + @Timeout(2000) public void testRMContainerAllocatorExceptionIsHandled() throws Exception { ClientService mockClientService = mock(ClientService.class); AppContext mockContext = mock(AppContext.class); @@ -66,7 +68,8 @@ public class TestRMCommunicator { testRunnable.run(); } - @Test(timeout = 2000) + @Test + @Timeout(2000) public void testRMContainerAllocatorYarnRuntimeExceptionIsHandled() throws Exception { ClientService mockClientService = mock(ClientService.class); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java index 1578ef0aba4..fe2f3072141 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java @@ -21,6 +21,7 @@ package org.apache.hadoop.mapreduce.v2.app.rm; import static org.apache.hadoop.mapreduce.v2.app.rm.ContainerRequestCreator.createRequest; import static org.assertj.core.api.Assertions.assertThat; import static org.junit.Assert.assertEquals; +import static org.junit.jupiter.api.Assertions.assertThrows; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyFloat; import static org.mockito.ArgumentMatchers.anyInt; @@ -147,13 +148,15 @@ import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.Clock; import 
org.apache.hadoop.yarn.util.ControlledClock; import org.apache.hadoop.yarn.util.Records; +import org.apache.hadoop.yarn.util.resource.Resources; import org.apache.hadoop.yarn.util.SystemClock; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import java.util.function.Supplier; +import org.junit.jupiter.api.Timeout; import org.mockito.InOrder; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -166,7 +169,7 @@ public class TestRMContainerAllocator { static final RecordFactory recordFactory = RecordFactoryProvider .getRecordFactory(null); - @Before + @BeforeEach public void setup() { MyContainerAllocator.getJobUpdatedNodeEvents().clear(); MyContainerAllocator.getTaskAttemptKillEvents().clear(); @@ -175,7 +178,7 @@ public class TestRMContainerAllocator { UserGroupInformation.setLoginUser(null); } - @After + @AfterEach public void tearDown() { DefaultMetricsSystem.shutdown(); } @@ -230,8 +233,8 @@ public class TestRMContainerAllocator { // as nodes are not added, no allocations List assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); - Assert.assertEquals(4, rm.getMyFifoScheduler().lastAsk.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); + Assertions.assertEquals(4, rm.getMyFifoScheduler().lastAsk.size()); // send another request with different resource and priority ContainerRequestEvent event3 = ContainerRequestCreator.createRequest(jobId, @@ -242,8 +245,8 @@ public class TestRMContainerAllocator { // as nodes are not added, no allocations assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); - Assert.assertEquals(3, rm.getMyFifoScheduler().lastAsk.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); + Assertions.assertEquals(3, rm.getMyFifoScheduler().lastAsk.size()); // update resources in scheduler nodeManager1.nodeHeartbeat(true); // Node heartbeat @@ -253,14 +256,14 @@ public class TestRMContainerAllocator { assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(0, rm.getMyFifoScheduler().lastAsk.size()); + Assertions.assertEquals(0, rm.getMyFifoScheduler().lastAsk.size()); checkAssignments(new ContainerRequestEvent[] {event1, event2, event3}, assigned, false); // check that the assigned container requests are cancelled allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(5, rm.getMyFifoScheduler().lastAsk.size()); + Assertions.assertEquals(5, rm.getMyFifoScheduler().lastAsk.size()); } @Test @@ -322,7 +325,7 @@ public class TestRMContainerAllocator { // as nodes are not added, no allocations List assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); // update resources in scheduler // Node heartbeat from rack-local first. 
This makes node h3 the first in the @@ -341,7 +344,7 @@ public class TestRMContainerAllocator { for(TaskAttemptContainerAssignedEvent event : assigned) { if(event.getTaskAttemptID().equals(event3.getAttemptID())) { assigned.remove(event); - Assert.assertEquals("h3", event.getContainer().getNodeId().getHost()); + Assertions.assertEquals("h3", event.getContainer().getNodeId().getHost()); break; } } @@ -401,7 +404,7 @@ public class TestRMContainerAllocator { // as nodes are not added, no allocations List assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); // update resources in scheduler nodeManager1.nodeHeartbeat(true); // Node heartbeat @@ -415,7 +418,8 @@ public class TestRMContainerAllocator { assigned, false); } - @Test(timeout = 30000) + @Test + @Timeout(30000) public void testReducerRampdownDiagnostics() throws Exception { LOG.info("Running tesReducerRampdownDiagnostics"); @@ -466,11 +470,12 @@ public class TestRMContainerAllocator { } final String killEventMessage = allocator.getTaskAttemptKillEvents().get(0) .getMessage(); - Assert.assertTrue("No reducer rampDown preemption message", - killEventMessage.contains(RMContainerAllocator.RAMPDOWN_DIAGNOSTIC)); + Assertions.assertTrue(killEventMessage.contains(RMContainerAllocator.RAMPDOWN_DIAGNOSTIC), + "No reducer rampDown preemption message"); } - @Test(timeout = 30000) + @Test + @Timeout(30000) public void testPreemptReducers() throws Exception { LOG.info("Running testPreemptReducers"); @@ -498,8 +503,8 @@ public class TestRMContainerAllocator { 0, 0, 0, 0, 0, 0, "jobfile", null, false, "")); MyContainerAllocator allocator = new MyContainerAllocator(rm, conf, appAttemptId, mockJob, SystemClock.getInstance()); - allocator.setMapResourceRequest(BuilderUtils.newResource(1024, 1)); - allocator.setReduceResourceRequest(BuilderUtils.newResource(1024, 1)); + allocator.setMapResourceRequest(Resources.createResource(1024)); + allocator.setReduceResourceRequest(Resources.createResource(1024)); RMContainerAllocator.AssignedRequests assignedRequests = allocator.getAssignedRequests(); RMContainerAllocator.ScheduledRequests scheduledRequests = @@ -513,11 +518,12 @@ public class TestRMContainerAllocator { mock(Container.class)); allocator.preemptReducesIfNeeded(); - Assert.assertEquals("The reducer is not preempted", - 1, assignedRequests.preemptionWaitingReduces.size()); + Assertions.assertEquals(1, assignedRequests.preemptionWaitingReduces.size(), + "The reducer is not preempted"); } - @Test(timeout = 30000) + @Test + @Timeout(30000) public void testNonAggressivelyPreemptReducers() throws Exception { LOG.info("Running testNonAggressivelyPreemptReducers"); @@ -552,8 +558,8 @@ public class TestRMContainerAllocator { clock.setTime(1); MyContainerAllocator allocator = new MyContainerAllocator(rm, conf, appAttemptId, mockJob, clock); - allocator.setMapResourceRequest(BuilderUtils.newResource(1024, 1)); - allocator.setReduceResourceRequest(BuilderUtils.newResource(1024, 1)); + allocator.setMapResourceRequest(Resources.createResource(1024)); + allocator.setReduceResourceRequest(Resources.createResource(1024)); RMContainerAllocator.AssignedRequests assignedRequests = allocator.getAssignedRequests(); RMContainerAllocator.ScheduledRequests scheduledRequests = @@ -570,16 +576,17 @@ public class TestRMContainerAllocator { clock.setTime(clock.getTime() + 1); allocator.preemptReducesIfNeeded(); - 
Assert.assertEquals("The reducer is aggressively preeempted", 0, - assignedRequests.preemptionWaitingReduces.size()); + Assertions.assertEquals(0, assignedRequests.preemptionWaitingReduces.size(), + "The reducer is aggressively preeempted"); clock.setTime(clock.getTime() + (preemptThreshold) * 1000); allocator.preemptReducesIfNeeded(); - Assert.assertEquals("The reducer is not preeempted", 1, - assignedRequests.preemptionWaitingReduces.size()); + Assertions.assertEquals(1, assignedRequests.preemptionWaitingReduces.size(), + "The reducer is not preeempted"); } - @Test(timeout = 30000) + @Test + @Timeout(30000) public void testUnconditionalPreemptReducers() throws Exception { LOG.info("Running testForcePreemptReducers"); @@ -616,8 +623,8 @@ public class TestRMContainerAllocator { clock.setTime(1); MyContainerAllocator allocator = new MyContainerAllocator(rm, conf, appAttemptId, mockJob, clock); - allocator.setMapResourceRequest(BuilderUtils.newResource(1024, 1)); - allocator.setReduceResourceRequest(BuilderUtils.newResource(1024, 1)); + allocator.setMapResourceRequest(Resources.createResource(1024)); + allocator.setReduceResourceRequest(Resources.createResource(1024)); RMContainerAllocator.AssignedRequests assignedRequests = allocator.getAssignedRequests(); RMContainerAllocator.ScheduledRequests scheduledRequests = @@ -634,18 +641,19 @@ public class TestRMContainerAllocator { clock.setTime(clock.getTime() + 1); allocator.preemptReducesIfNeeded(); - Assert.assertEquals("The reducer is preeempted too soon", 0, - assignedRequests.preemptionWaitingReduces.size()); + Assertions.assertEquals(0, assignedRequests.preemptionWaitingReduces.size(), + "The reducer is preeempted too soon"); clock.setTime(clock.getTime() + 1000 * forcePreemptThresholdSecs); allocator.preemptReducesIfNeeded(); - Assert.assertEquals("The reducer is not preeempted", 1, - assignedRequests.preemptionWaitingReduces.size()); + Assertions.assertEquals(1, assignedRequests.preemptionWaitingReduces.size(), + "The reducer is not preeempted"); } - @Test(timeout = 30000) + @Test + @Timeout(30000) public void testExcessReduceContainerAssign() throws Exception { - final Configuration conf = new Configuration(); + final Configuration conf = new Configuration(); conf.setFloat(MRJobConfig.COMPLETED_MAPS_FOR_REDUCE_SLOWSTART, 0.0f); final MyResourceManager2 rm = new MyResourceManager2(conf); rm.start(); @@ -742,7 +750,7 @@ public class TestRMContainerAllocator { allocator.schedule(); // verify all of the host-specific asks were sent plus one for the // default rack and one for the ANY request - Assert.assertEquals(3, mockScheduler.lastAsk.size()); + Assertions.assertEquals(3, mockScheduler.lastAsk.size()); // verify ResourceRequest sent for MAP have appropriate node // label expression as per the configuration validateLabelsRequests(mockScheduler.lastAsk.get(0), false); @@ -753,7 +761,7 @@ public class TestRMContainerAllocator { ContainerId cid0 = mockScheduler.assignContainer("map", false); allocator.schedule(); // default rack and one for the ANY request - Assert.assertEquals(3, mockScheduler.lastAsk.size()); + Assertions.assertEquals(3, mockScheduler.lastAsk.size()); validateLabelsRequests(mockScheduler.lastAsk.get(0), true); validateLabelsRequests(mockScheduler.lastAsk.get(1), true); validateLabelsRequests(mockScheduler.lastAsk.get(2), true); @@ -768,14 +776,14 @@ public class TestRMContainerAllocator { case "map": case "reduce": case NetworkTopology.DEFAULT_RACK: - Assert.assertNull(resourceRequest.getNodeLabelExpression()); + 
Assertions.assertNull(resourceRequest.getNodeLabelExpression()); break; case "*": - Assert.assertEquals(isReduce ? "ReduceNodes" : "MapNodes", + Assertions.assertEquals(isReduce ? "ReduceNodes" : "MapNodes", resourceRequest.getNodeLabelExpression()); break; default: - Assert.fail("Invalid resource location " + Assertions.fail("Invalid resource location " + resourceRequest.getResourceName()); } } @@ -929,7 +937,7 @@ public class TestRMContainerAllocator { // as nodes are not added, no allocations List assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); // update resources in scheduler nodeManager1.nodeHeartbeat(true); // Node heartbeat @@ -944,8 +952,8 @@ public class TestRMContainerAllocator { // validate that no container is assigned to h1 as it doesn't have 2048 for (TaskAttemptContainerAssignedEvent assig : assigned) { - Assert.assertFalse("Assigned count not correct", "h1".equals(assig - .getContainer().getNodeId().getHost())); + Assertions.assertFalse("h1".equals(assig.getContainer().getNodeId().getHost()), + "Assigned count not correct"); } } @@ -1036,7 +1044,7 @@ public class TestRMContainerAllocator { }; }; - Assert.assertEquals(0.0, rmApp.getProgress(), 0.0); + Assertions.assertEquals(0.0, rmApp.getProgress(), 0.0); mrApp.submit(conf); Job job = mrApp.getContext().getAllJobs().entrySet().iterator().next() @@ -1075,23 +1083,23 @@ public class TestRMContainerAllocator { allocator.schedule(); // Send heartbeat rm.drainEvents(); - Assert.assertEquals(0.05f, job.getProgress(), 0.001f); - Assert.assertEquals(0.05f, rmApp.getProgress(), 0.001f); + Assertions.assertEquals(0.05f, job.getProgress(), 0.001f); + Assertions.assertEquals(0.05f, rmApp.getProgress(), 0.001f); // Finish off 1 map. Iterator it = job.getTasks().values().iterator(); finishNextNTasks(rmDispatcher, amNodeManager, mrApp, it, 1); allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(0.095f, job.getProgress(), 0.001f); - Assert.assertEquals(0.095f, rmApp.getProgress(), 0.001f); + Assertions.assertEquals(0.095f, job.getProgress(), 0.001f); + Assertions.assertEquals(0.095f, rmApp.getProgress(), 0.001f); // Finish off 7 more so that map-progress is 80% finishNextNTasks(rmDispatcher, amNodeManager, mrApp, it, 7); allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(0.41f, job.getProgress(), 0.001f); - Assert.assertEquals(0.41f, rmApp.getProgress(), 0.001f); + Assertions.assertEquals(0.41f, job.getProgress(), 0.001f); + Assertions.assertEquals(0.41f, rmApp.getProgress(), 0.001f); // Finish off the 2 remaining maps finishNextNTasks(rmDispatcher, amNodeManager, mrApp, it, 2); @@ -1115,16 +1123,16 @@ public class TestRMContainerAllocator { allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(0.59f, job.getProgress(), 0.001f); - Assert.assertEquals(0.59f, rmApp.getProgress(), 0.001f); + Assertions.assertEquals(0.59f, job.getProgress(), 0.001f); + Assertions.assertEquals(0.59f, rmApp.getProgress(), 0.001f); // Finish off the remaining 8 reduces. 
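// Illustrative sketch, not part of the surrounding hunks: the delta-based
// overloads used for the progress checks keep the JUnit 4 argument order
// (expected, actual, delta) in org.junit.jupiter.api.Assertions; only the
// optional failure message moves to the last position. The method name and
// values below are placeholders.
import static org.junit.jupiter.api.Assertions.assertEquals;

class ProgressAssertionSketch {
  void verifyProgress(float jobProgress, double appProgress) {
    assertEquals(0.59f, jobProgress, 0.001f);                      // same order as JUnit 4
    assertEquals(0.95, appProgress, 0.0, "progress not reported"); // message now last
  }
}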
finishNextNTasks(rmDispatcher, amNodeManager, mrApp, it, 8); allocator.schedule(); rm.drainEvents(); // Remaining is JobCleanup - Assert.assertEquals(0.95f, job.getProgress(), 0.001f); - Assert.assertEquals(0.95f, rmApp.getProgress(), 0.001f); + Assertions.assertEquals(0.95f, job.getProgress(), 0.001f); + Assertions.assertEquals(0.95f, rmApp.getProgress(), 0.001f); } private void finishNextNTasks(DrainDispatcher rmDispatcher, MockNM node, @@ -1188,7 +1196,7 @@ public class TestRMContainerAllocator { }; }; - Assert.assertEquals(0.0, rmApp.getProgress(), 0.0); + Assertions.assertEquals(0.0, rmApp.getProgress(), 0.0); mrApp.submit(conf); Job job = mrApp.getContext().getAllJobs().entrySet().iterator().next() @@ -1223,8 +1231,8 @@ public class TestRMContainerAllocator { allocator.schedule(); // Send heartbeat rm.drainEvents(); - Assert.assertEquals(0.05f, job.getProgress(), 0.001f); - Assert.assertEquals(0.05f, rmApp.getProgress(), 0.001f); + Assertions.assertEquals(0.05f, job.getProgress(), 0.001f); + Assertions.assertEquals(0.05f, rmApp.getProgress(), 0.001f); Iterator it = job.getTasks().values().iterator(); @@ -1232,22 +1240,22 @@ public class TestRMContainerAllocator { finishNextNTasks(rmDispatcher, amNodeManager, mrApp, it, 1); allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(0.14f, job.getProgress(), 0.001f); - Assert.assertEquals(0.14f, rmApp.getProgress(), 0.001f); + Assertions.assertEquals(0.14f, job.getProgress(), 0.001f); + Assertions.assertEquals(0.14f, rmApp.getProgress(), 0.001f); // Finish off 5 more map so that map-progress is 60% finishNextNTasks(rmDispatcher, amNodeManager, mrApp, it, 5); allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(0.59f, job.getProgress(), 0.001f); - Assert.assertEquals(0.59f, rmApp.getProgress(), 0.001f); + Assertions.assertEquals(0.59f, job.getProgress(), 0.001f); + Assertions.assertEquals(0.59f, rmApp.getProgress(), 0.001f); // Finish off remaining map so that map-progress is 100% finishNextNTasks(rmDispatcher, amNodeManager, mrApp, it, 4); allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(0.95f, job.getProgress(), 0.001f); - Assert.assertEquals(0.95f, rmApp.getProgress(), 0.001f); + Assertions.assertEquals(0.95f, job.getProgress(), 0.001f); + Assertions.assertEquals(0.95f, rmApp.getProgress(), 0.001f); } @Test @@ -1298,17 +1306,17 @@ public class TestRMContainerAllocator { nm1.nodeHeartbeat(true); rm.drainEvents(); - Assert.assertEquals(1, allocator.getJobUpdatedNodeEvents().size()); - Assert.assertEquals(3, allocator.getJobUpdatedNodeEvents().get(0).getUpdatedNodes().size()); + Assertions.assertEquals(1, allocator.getJobUpdatedNodeEvents().size()); + Assertions.assertEquals(3, allocator.getJobUpdatedNodeEvents().get(0).getUpdatedNodes().size()); allocator.getJobUpdatedNodeEvents().clear(); // get the assignment assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(1, assigned.size()); - Assert.assertEquals(nm1.getNodeId(), assigned.get(0).getContainer().getNodeId()); + Assertions.assertEquals(1, assigned.size()); + Assertions.assertEquals(nm1.getNodeId(), assigned.get(0).getContainer().getNodeId()); // no updated nodes reported - Assert.assertTrue(allocator.getJobUpdatedNodeEvents().isEmpty()); - Assert.assertTrue(allocator.getTaskAttemptKillEvents().isEmpty()); + Assertions.assertTrue(allocator.getJobUpdatedNodeEvents().isEmpty()); + Assertions.assertTrue(allocator.getTaskAttemptKillEvents().isEmpty()); // mark nodes bad nm1.nodeHeartbeat(false); @@ -1318,23 +1326,23 @@ public class 
TestRMContainerAllocator { // schedule response returns updated nodes assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(0, assigned.size()); + Assertions.assertEquals(0, assigned.size()); // updated nodes are reported - Assert.assertEquals(1, allocator.getJobUpdatedNodeEvents().size()); - Assert.assertEquals(1, allocator.getTaskAttemptKillEvents().size()); - Assert.assertEquals(2, + Assertions.assertEquals(1, allocator.getJobUpdatedNodeEvents().size()); + Assertions.assertEquals(1, allocator.getTaskAttemptKillEvents().size()); + Assertions.assertEquals(2, allocator.getJobUpdatedNodeEvents().get(0).getUpdatedNodes().size()); - Assert.assertEquals(attemptId, + Assertions.assertEquals(attemptId, allocator.getTaskAttemptKillEvents().get(0).getTaskAttemptID()); allocator.getJobUpdatedNodeEvents().clear(); allocator.getTaskAttemptKillEvents().clear(); assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals(0, assigned.size()); + Assertions.assertEquals(0, assigned.size()); // no updated nodes reported - Assert.assertTrue(allocator.getJobUpdatedNodeEvents().isEmpty()); - Assert.assertTrue(allocator.getTaskAttemptKillEvents().isEmpty()); + Assertions.assertTrue(allocator.getJobUpdatedNodeEvents().isEmpty()); + Assertions.assertTrue(allocator.getTaskAttemptKillEvents().isEmpty()); } @Test @@ -1403,7 +1411,7 @@ public class TestRMContainerAllocator { // as nodes are not added, no allocations List assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); // Send events to blacklist nodes h1 and h2 ContainerFailedEvent f1 = createFailEvent(jobId, 1, "h1", false); @@ -1417,9 +1425,9 @@ public class TestRMContainerAllocator { rm.drainEvents(); assigned = allocator.schedule(); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); rm.drainEvents(); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); assertBlacklistAdditionsAndRemovals(2, 0, rm); // mark h1/h2 as bad nodes @@ -1430,7 +1438,7 @@ public class TestRMContainerAllocator { assigned = allocator.schedule(); rm.drainEvents(); assertBlacklistAdditionsAndRemovals(0, 0, rm); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); nodeManager3.nodeHeartbeat(true); // Node heartbeat rm.drainEvents(); @@ -1438,12 +1446,12 @@ public class TestRMContainerAllocator { rm.drainEvents(); assertBlacklistAdditionsAndRemovals(0, 0, rm); - Assert.assertTrue("No of assignments must be 3", assigned.size() == 3); + Assertions.assertTrue(assigned.size() == 3, "No of assignments must be 3"); // validate that all containers are assigned to h3 for (TaskAttemptContainerAssignedEvent assig : assigned) { - Assert.assertTrue("Assigned container host not correct", "h3".equals(assig - .getContainer().getNodeId().getHost())); + Assertions.assertTrue("h3".equals(assig.getContainer().getNodeId().getHost()), + "Assigned container host not correct"); } } @@ -1488,7 +1496,8 @@ public class TestRMContainerAllocator { assigned = getContainerOnHost(jobId, 1, 1024, new String[] {"h1"}, nodeManagers[0], allocator, 0, 0, 0, 0, rm); - Assert.assertEquals("No of assignments must be 1", 1, assigned.size()); + 
Assertions.assertEquals(1, assigned.size(), + "No of assignments must be 1"); LOG.info("Failing container _1 on H1 (Node should be blacklisted and" + " ignore blacklisting enabled"); @@ -1503,47 +1512,51 @@ public class TestRMContainerAllocator { assigned = getContainerOnHost(jobId, 2, 1024, new String[] {"h1"}, nodeManagers[0], allocator, 1, 0, 0, 1, rm); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), + "No of assignments must be 0"); // Known=1, blacklisted=1, ignore should be true - assign 1 assigned = getContainerOnHost(jobId, 2, 1024, new String[] {"h1"}, nodeManagers[0], allocator, 0, 0, 0, 0, rm); - Assert.assertEquals("No of assignments must be 1", 1, assigned.size()); + Assertions.assertEquals(1, assigned.size(), + "No of assignments must be 1"); nodeManagers[nmNum] = registerNodeManager(nmNum++, rm); // Known=2, blacklisted=1, ignore should be true - assign 1 anyway. assigned = getContainerOnHost(jobId, 3, 1024, new String[] {"h2"}, nodeManagers[1], allocator, 0, 0, 0, 0, rm); - Assert.assertEquals("No of assignments must be 1", 1, assigned.size()); + Assertions.assertEquals(1, assigned.size(), + "No of assignments must be 1"); nodeManagers[nmNum] = registerNodeManager(nmNum++, rm); // Known=3, blacklisted=1, ignore should be true - assign 1 anyway. assigned = getContainerOnHost(jobId, 4, 1024, new String[] {"h3"}, nodeManagers[2], allocator, 0, 0, 0, 0, rm); - Assert.assertEquals("No of assignments must be 1", 1, assigned.size()); + Assertions.assertEquals(1, assigned.size(), + "No of assignments must be 1"); // Known=3, blacklisted=1, ignore should be true - assign 1 assigned = getContainerOnHost(jobId, 5, 1024, new String[] {"h1"}, nodeManagers[0], allocator, 0, 0, 0, 0, rm); - Assert.assertEquals("No of assignments must be 1", 1, assigned.size()); + Assertions.assertEquals(1, assigned.size(), "No of assignments must be 1"); nodeManagers[nmNum] = registerNodeManager(nmNum++, rm); // Known=4, blacklisted=1, ignore should be false - assign 1 anyway assigned = getContainerOnHost(jobId, 6, 1024, new String[] {"h4"}, nodeManagers[3], allocator, 0, 0, 1, 0, rm); - Assert.assertEquals("No of assignments must be 1", 1, assigned.size()); + Assertions.assertEquals(1, assigned.size(), "No of assignments must be 1"); // Test blacklisting re-enabled. // Known=4, blacklisted=1, ignore should be false - no assignment on h1 assigned = getContainerOnHost(jobId, 7, 1024, new String[] {"h1"}, nodeManagers[0], allocator, 0, 0, 0, 0, rm); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); // RMContainerRequestor would have created a replacement request. // Blacklist h2 @@ -1556,20 +1569,20 @@ public class TestRMContainerAllocator { assigned = getContainerOnHost(jobId, 8, 1024, new String[] {"h1"}, nodeManagers[0], allocator, 1, 0, 0, 2, rm); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), "No of assignments must be 0"); // Known=4, blacklisted=2, ignore should be true. Should assign 2 // containers. assigned = getContainerOnHost(jobId, 8, 1024, new String[] {"h1"}, nodeManagers[0], allocator, 0, 0, 0, 0, rm); - Assert.assertEquals("No of assignments must be 2", 2, assigned.size()); + Assertions.assertEquals(2, assigned.size(), "No of assignments must be 2"); // Known=4, blacklisted=2, ignore should be true. 
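// Illustrative sketch, not part of the surrounding hunks: JUnit 4's Assert
// takes the failure message as the first parameter, while JUnit 5's Assertions
// takes it as the last one, which is why every message-bearing assertion in
// these hunks is reordered. The list parameter below is a placeholder.
import java.util.List;
import org.junit.jupiter.api.Assertions;

class MessageOrderSketch {
  void verifyAssignments(List<?> assigned) {
    // JUnit 4: Assert.assertEquals("No of assignments must be 1", 1, assigned.size());
    Assertions.assertEquals(1, assigned.size(), "No of assignments must be 1");
    // JUnit 4: Assert.assertFalse("no containers were assigned", assigned.isEmpty());
    Assertions.assertFalse(assigned.isEmpty(), "no containers were assigned");
  }
}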
assigned = getContainerOnHost(jobId, 9, 1024, new String[] {"h2"}, nodeManagers[1], allocator, 0, 0, 0, 0, rm); - Assert.assertEquals("No of assignments must be 1", 1, assigned.size()); + Assertions.assertEquals(1, assigned.size(), "No of assignments must be 1"); // Test blacklist while ignore blacklisting enabled ContainerFailedEvent f3 = createFailEvent(jobId, 4, "h3", false); @@ -1580,7 +1593,7 @@ public class TestRMContainerAllocator { assigned = getContainerOnHost(jobId, 10, 1024, new String[] {"h3"}, nodeManagers[2], allocator, 0, 0, 0, 0, rm); - Assert.assertEquals("No of assignments must be 1", 1, assigned.size()); + Assertions.assertEquals(1, assigned.size(), "No of assignments must be 1"); // Assign on 5 more nodes - to re-enable blacklisting for (int i = 0; i < 5; i++) { @@ -1589,14 +1602,15 @@ public class TestRMContainerAllocator { getContainerOnHost(jobId, 11 + i, 1024, new String[] {String.valueOf(5 + i)}, nodeManagers[4 + i], allocator, 0, 0, (i == 4 ? 3 : 0), 0, rm); - Assert.assertEquals("No of assignments must be 1", 1, assigned.size()); + Assertions.assertEquals(1, assigned.size(), "No of assignments must be 1"); } // Test h3 (blacklisted while ignoring blacklisting) is blacklisted. assigned = getContainerOnHost(jobId, 20, 1024, new String[] {"h3"}, nodeManagers[2], allocator, 0, 0, 0, 0, rm); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), + "No of assignments must be 0"); } private MockNM registerNodeManager(int i, MyResourceManager rm) @@ -1623,7 +1637,8 @@ public class TestRMContainerAllocator { rm.drainEvents(); assertBlacklistAdditionsAndRemovals( expectedAdditions1, expectedRemovals1, rm); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), + "No of assignments must be 0"); // Heartbeat from the required nodeManager mockNM.nodeHeartbeat(true); @@ -1688,7 +1703,8 @@ public class TestRMContainerAllocator { // as nodes are not added, no allocations List assigned = allocator.schedule(); rm.drainEvents(); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), + "No of assignments must be 0"); LOG.info("h1 Heartbeat (To actually schedule the containers)"); // update resources in scheduler @@ -1699,7 +1715,8 @@ public class TestRMContainerAllocator { assigned = allocator.schedule(); rm.drainEvents(); assertBlacklistAdditionsAndRemovals(0, 0, rm); - Assert.assertEquals("No of assignments must be 1", 1, assigned.size()); + Assertions.assertEquals(1, assigned.size(), + "No of assignments must be 1"); LOG.info("Failing container _1 on H1 (should blacklist the node)"); // Send events to blacklist nodes h1 and h2 @@ -1717,7 +1734,8 @@ public class TestRMContainerAllocator { assigned = allocator.schedule(); rm.drainEvents(); assertBlacklistAdditionsAndRemovals(1, 0, rm); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), + "No of assignments must be 0"); // send another request with different resource and priority ContainerRequestEvent event3 = @@ -1738,7 +1756,8 @@ public class TestRMContainerAllocator { assigned = allocator.schedule(); rm.drainEvents(); assertBlacklistAdditionsAndRemovals(0, 0, rm); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), + "No of assignments must be 0"); //RMContainerAllocator gets assigned a p:5 on a 
blacklisted node. @@ -1747,7 +1766,8 @@ public class TestRMContainerAllocator { assigned = allocator.schedule(); rm.drainEvents(); assertBlacklistAdditionsAndRemovals(0, 0, rm); - Assert.assertEquals("No of assignments must be 0", 0, assigned.size()); + Assertions.assertEquals(0, assigned.size(), + "No of assignments must be 0"); //Hearbeat from H3 to schedule on this host. LOG.info("h3 Heartbeat (To re-schedule the containers)"); @@ -1766,27 +1786,29 @@ public class TestRMContainerAllocator { " with priority " + assig.getContainer().getPriority()); } - Assert.assertEquals("No of assignments must be 2", 2, assigned.size()); + Assertions.assertEquals(2, assigned.size(), + "No of assignments must be 2"); // validate that all containers are assigned to h3 for (TaskAttemptContainerAssignedEvent assig : assigned) { - Assert.assertEquals("Assigned container " + assig.getContainer().getId() - + " host not correct", "h3", assig.getContainer().getNodeId().getHost()); + Assertions.assertEquals("h3", assig.getContainer().getNodeId().getHost(), + "Assigned container " + assig.getContainer().getId() + + " host not correct"); } } private static void assertBlacklistAdditionsAndRemovals( int expectedAdditions, int expectedRemovals, MyResourceManager rm) { - Assert.assertEquals(expectedAdditions, + Assertions.assertEquals(expectedAdditions, rm.getMyFifoScheduler().lastBlacklistAdditions.size()); - Assert.assertEquals(expectedRemovals, + Assertions.assertEquals(expectedRemovals, rm.getMyFifoScheduler().lastBlacklistRemovals.size()); } private static void assertAsksAndReleases(int expectedAsk, int expectedRelease, MyResourceManager rm) { - Assert.assertEquals(expectedAsk, rm.getMyFifoScheduler().lastAsk.size()); - Assert.assertEquals(expectedRelease, + Assertions.assertEquals(expectedAsk, rm.getMyFifoScheduler().lastAsk.size()); + Assertions.assertEquals(expectedRelease, rm.getMyFifoScheduler().lastRelease.size()); } @@ -1929,17 +1951,17 @@ public class TestRMContainerAllocator { private void checkAssignments(ContainerRequestEvent[] requests, List assignments, boolean checkHostMatch) { - Assert.assertNotNull("Container not assigned", assignments); - Assert.assertEquals("Assigned count not correct", requests.length, - assignments.size()); + Assertions.assertNotNull(assignments, "Container not assigned"); + Assertions.assertEquals(requests.length, assignments.size(), + "Assigned count not correct"); // check for uniqueness of containerIDs Set containerIds = new HashSet(); for (TaskAttemptContainerAssignedEvent assigned : assignments) { containerIds.add(assigned.getContainer().getId()); } - Assert.assertEquals("Assigned containers must be different", assignments - .size(), containerIds.size()); + Assertions.assertEquals(assignments.size(), containerIds.size(), + "Assigned containers must be different"); // check for all assignment for (ContainerRequestEvent req : requests) { @@ -1956,14 +1978,14 @@ public class TestRMContainerAllocator { private void checkAssignment(ContainerRequestEvent request, TaskAttemptContainerAssignedEvent assigned, boolean checkHostMatch) { - Assert.assertNotNull("Nothing assigned to attempt " - + request.getAttemptID(), assigned); - Assert.assertEquals("assigned to wrong attempt", request.getAttemptID(), - assigned.getTaskAttemptID()); + Assertions.assertNotNull(assigned, "Nothing assigned to attempt " + + request.getAttemptID()); + Assertions.assertEquals(request.getAttemptID(), assigned.getTaskAttemptID(), + "assigned to wrong attempt"); if (checkHostMatch) { - Assert.assertTrue("Not 
assigned to requested host", Arrays.asList( - request.getHosts()).contains( - assigned.getContainer().getNodeId().getHost())); + Assertions.assertTrue(Arrays.asList(request.getHosts()).contains( + assigned.getContainer().getNodeId().getHost()), + "Not assigned to requested host"); } } @@ -2184,8 +2206,8 @@ public class TestRMContainerAllocator { int scheduledReduces = 0; int assignedMaps = 2; int assignedReduces = 0; - Resource mapResourceReqt = BuilderUtils.newResource(1024, 1); - Resource reduceResourceReqt = BuilderUtils.newResource(2 * 1024, 1); + Resource mapResourceReqt = Resources.createResource(1024); + Resource reduceResourceReqt = Resources.createResource(2 * 1024); int numPendingReduces = 4; float maxReduceRampupLimit = 0.5f; float reduceSlowStart = 0.2f; @@ -2220,7 +2242,7 @@ public class TestRMContainerAllocator { verify(allocator, never()).scheduleAllReduces(); succeededMaps = 3; - doReturn(BuilderUtils.newResource(0, 0)).when(allocator).getResourceLimit(); + doReturn(Resources.createResource(0)).when(allocator).getResourceLimit(); allocator.scheduleReduces( totalMaps, succeededMaps, scheduledMaps, scheduledReduces, @@ -2231,7 +2253,7 @@ public class TestRMContainerAllocator { verify(allocator, times(1)).setIsReduceStarted(true); // Test reduce ramp-up - doReturn(BuilderUtils.newResource(100 * 1024, 100 * 1)).when(allocator) + doReturn(Resources.createResource(100 * 1024, 100 * 1)).when(allocator) .getResourceLimit(); allocator.scheduleReduces( totalMaps, succeededMaps, @@ -2245,7 +2267,7 @@ public class TestRMContainerAllocator { // Test reduce ramp-down scheduledReduces = 3; - doReturn(BuilderUtils.newResource(10 * 1024, 10 * 1)).when(allocator) + doReturn(Resources.createResource(10 * 1024, 10 * 1)).when(allocator) .getResourceLimit(); allocator.scheduleReduces( totalMaps, succeededMaps, @@ -2261,7 +2283,7 @@ public class TestRMContainerAllocator { // should be invoked twice. 
scheduledMaps = 2; assignedReduces = 2; - doReturn(BuilderUtils.newResource(10 * 1024, 10 * 1)).when(allocator) + doReturn(Resources.createResource(10 * 1024, 10 * 1)).when(allocator) .getResourceLimit(); allocator.scheduleReduces( totalMaps, succeededMaps, @@ -2279,7 +2301,7 @@ public class TestRMContainerAllocator { // Test ramp-down when enough memory but not enough cpu resource scheduledMaps = 10; assignedReduces = 0; - doReturn(BuilderUtils.newResource(100 * 1024, 5 * 1)).when(allocator) + doReturn(Resources.createResource(100 * 1024, 5 * 1)).when(allocator) .getResourceLimit(); allocator.scheduleReduces(totalMaps, succeededMaps, scheduledMaps, scheduledReduces, assignedMaps, assignedReduces, mapResourceReqt, @@ -2288,7 +2310,7 @@ public class TestRMContainerAllocator { verify(allocator, times(3)).rampDownReduces(anyInt()); // Test ramp-down when enough cpu but not enough memory resource - doReturn(BuilderUtils.newResource(10 * 1024, 100 * 1)).when(allocator) + doReturn(Resources.createResource(10 * 1024, 100 * 1)).when(allocator) .getResourceLimit(); allocator.scheduleReduces(totalMaps, succeededMaps, scheduledMaps, scheduledReduces, assignedMaps, assignedReduces, mapResourceReqt, @@ -2350,13 +2372,13 @@ public class TestRMContainerAllocator { allocator.recalculatedReduceSchedule = false; allocator.schedule(); - Assert.assertFalse("Unexpected recalculate of reduce schedule", - allocator.recalculatedReduceSchedule); + Assertions.assertFalse(allocator.recalculatedReduceSchedule, + "Unexpected recalculate of reduce schedule"); doReturn(1).when(job).getCompletedMaps(); allocator.schedule(); - Assert.assertTrue("Expected recalculate of reduce schedule", - allocator.recalculatedReduceSchedule); + Assertions.assertTrue(allocator.recalculatedReduceSchedule, + "Expected recalculate of reduce schedule"); } @Test @@ -2394,14 +2416,14 @@ public class TestRMContainerAllocator { Thread.sleep(10); timeToWaitMs -= 10; } - Assert.assertEquals(5, allocator.getLastHeartbeatTime()); + Assertions.assertEquals(5, allocator.getLastHeartbeatTime()); clock.setTime(7); timeToWaitMs = 5000; while (allocator.getLastHeartbeatTime() != 7 && timeToWaitMs > 0) { Thread.sleep(10); timeToWaitMs -= 10; } - Assert.assertEquals(7, allocator.getLastHeartbeatTime()); + Assertions.assertEquals(7, allocator.getLastHeartbeatTime()); final AtomicBoolean callbackCalled = new AtomicBoolean(false); allocator.runOnNextHeartbeat(new Runnable() { @@ -2416,8 +2438,8 @@ public class TestRMContainerAllocator { Thread.sleep(10); timeToWaitMs -= 10; } - Assert.assertEquals(8, allocator.getLastHeartbeatTime()); - Assert.assertTrue(callbackCalled.get()); + Assertions.assertEquals(8, allocator.getLastHeartbeatTime()); + Assertions.assertTrue(callbackCalled.get()); } @Test @@ -2445,12 +2467,12 @@ public class TestRMContainerAllocator { TaskAttemptEvent event = allocator.createContainerFinishedEvent(status, attemptId); - Assert.assertEquals(TaskAttemptEventType.TA_CONTAINER_COMPLETED, + Assertions.assertEquals(TaskAttemptEventType.TA_CONTAINER_COMPLETED, event.getType()); TaskAttemptEvent abortedEvent = allocator.createContainerFinishedEvent( abortedStatus, attemptId); - Assert.assertEquals(TaskAttemptEventType.TA_KILL, abortedEvent.getType()); + Assertions.assertEquals(TaskAttemptEventType.TA_KILL, abortedEvent.getType()); // PREEMPTED ContainerId containerId2 = @@ -2463,12 +2485,12 @@ public class TestRMContainerAllocator { TaskAttemptEvent event2 = allocator.createContainerFinishedEvent(status2, attemptId); - 
Assert.assertEquals(TaskAttemptEventType.TA_CONTAINER_COMPLETED, + Assertions.assertEquals(TaskAttemptEventType.TA_CONTAINER_COMPLETED, event2.getType()); TaskAttemptEvent abortedEvent2 = allocator.createContainerFinishedEvent( preemptedStatus, attemptId); - Assert.assertEquals(TaskAttemptEventType.TA_KILL, abortedEvent2.getType()); + Assertions.assertEquals(TaskAttemptEventType.TA_KILL, abortedEvent2.getType()); // KILLED_BY_CONTAINER_SCHEDULER ContainerId containerId3 = @@ -2482,12 +2504,12 @@ public class TestRMContainerAllocator { TaskAttemptEvent event3 = allocator.createContainerFinishedEvent(status3, attemptId); - Assert.assertEquals(TaskAttemptEventType.TA_CONTAINER_COMPLETED, + Assertions.assertEquals(TaskAttemptEventType.TA_CONTAINER_COMPLETED, event3.getType()); TaskAttemptEvent abortedEvent3 = allocator.createContainerFinishedEvent( killedByContainerSchedulerStatus, attemptId); - Assert.assertEquals(TaskAttemptEventType.TA_KILL, abortedEvent3.getType()); + Assertions.assertEquals(TaskAttemptEventType.TA_KILL, abortedEvent3.getType()); } @Test @@ -2528,9 +2550,9 @@ public class TestRMContainerAllocator { MyContainerAllocator allocator = (MyContainerAllocator) mrApp.getContainerAllocator(); amDispatcher.await(); - Assert.assertTrue(allocator.isApplicationMasterRegistered()); + Assertions.assertTrue(allocator.isApplicationMasterRegistered()); mrApp.stop(); - Assert.assertTrue(allocator.isUnregistered()); + Assertions.assertTrue(allocator.isUnregistered()); } // Step-1 : AM send allocate request for 2 ContainerRequests and 1 @@ -2610,8 +2632,8 @@ public class TestRMContainerAllocator { List assignedContainers = allocator.schedule(); rm1.drainEvents(); - Assert.assertEquals("No of assignments must be 0", 0, - assignedContainers.size()); + Assertions.assertEquals(0, assignedContainers.size(), + "No of assignments must be 0"); // Why ask is 3, not 4? --> ask from blacklisted node h2 is removed assertAsksAndReleases(3, 0, rm1); assertBlacklistAdditionsAndRemovals(1, 0, rm1); @@ -2622,14 +2644,14 @@ public class TestRMContainerAllocator { // Step-2 : 2 containers are allocated by RM. 
assignedContainers = allocator.schedule(); rm1.drainEvents(); - Assert.assertEquals("No of assignments must be 2", 2, - assignedContainers.size()); + Assertions.assertEquals(2, assignedContainers.size(), + "No of assignments must be 2"); assertAsksAndReleases(0, 0, rm1); assertBlacklistAdditionsAndRemovals(0, 0, rm1); assignedContainers = allocator.schedule(); - Assert.assertEquals("No of assignments must be 0", 0, - assignedContainers.size()); + Assertions.assertEquals(0, assignedContainers.size(), + "No of assignments must be 0"); assertAsksAndReleases(3, 0, rm1); assertBlacklistAdditionsAndRemovals(0, 0, rm1); @@ -2648,8 +2670,8 @@ public class TestRMContainerAllocator { allocator.sendDeallocate(deallocate1); assignedContainers = allocator.schedule(); - Assert.assertEquals("No of assignments must be 0", 0, - assignedContainers.size()); + Assertions.assertEquals(0, assignedContainers.size(), + "No of assignments must be 0"); assertAsksAndReleases(3, 1, rm1); assertBlacklistAdditionsAndRemovals(0, 0, rm1); @@ -2661,7 +2683,7 @@ public class TestRMContainerAllocator { // NM should be rebooted on heartbeat, even first heartbeat for nm2 NodeHeartbeatResponse hbResponse = nm1.nodeHeartbeat(true); - Assert.assertEquals(NodeAction.RESYNC, hbResponse.getNodeAction()); + Assertions.assertEquals(NodeAction.RESYNC, hbResponse.getNodeAction()); // new NM to represent NM re-register nm1 = new MockNM("h1:1234", 10240, rm2.getResourceTrackerService()); @@ -2714,12 +2736,12 @@ public class TestRMContainerAllocator { assignedContainers = allocator.schedule(); rm2.drainEvents(); - Assert.assertEquals("Number of container should be 3", 3, - assignedContainers.size()); + Assertions.assertEquals(3, assignedContainers.size(), + "Number of container should be 3"); for (TaskAttemptContainerAssignedEvent assig : assignedContainers) { - Assert.assertTrue("Assigned count not correct", - "h1".equals(assig.getContainer().getNodeId().getHost())); + Assertions.assertTrue("h1".equals(assig.getContainer().getNodeId().getHost()), + "Assigned count not correct"); } rm1.stop(); @@ -2763,7 +2785,7 @@ public class TestRMContainerAllocator { allocator.sendRequests(Arrays.asList(mapRequestEvt)); allocator.schedule(); - Assert.assertEquals(0, mockScheduler.lastAnyAskMap); + Assertions.assertEquals(0, mockScheduler.lastAnyAskMap); } @Test @@ -2806,7 +2828,7 @@ public class TestRMContainerAllocator { allocator.scheduleAllReduces(); allocator.schedule(); - Assert.assertEquals(0, mockScheduler.lastAnyAskReduce); + Assertions.assertEquals(0, mockScheduler.lastAnyAskReduce); } @Test @@ -2841,19 +2863,20 @@ public class TestRMContainerAllocator { allocator.jobEvents.clear(); try { allocator.schedule(); - Assert.fail("Should Have Exception"); + Assertions.fail("Should Have Exception"); } catch (RMContainerAllocationException e) { - Assert.assertTrue(e.getMessage().contains("Could not contact RM after")); + Assertions.assertTrue(e.getMessage().contains("Could not contact RM after")); } rm1.drainEvents(); - Assert.assertEquals("Should Have 1 Job Event", 1, - allocator.jobEvents.size()); + Assertions.assertEquals(1, allocator.jobEvents.size(), + "Should Have 1 Job Event"); JobEvent event = allocator.jobEvents.get(0); - Assert.assertTrue("Should Reboot", - event.getType().equals(JobEventType.JOB_AM_REBOOT)); + Assertions.assertTrue(event.getType().equals(JobEventType.JOB_AM_REBOOT), + "Should Reboot"); } - @Test(timeout=60000) + @Test + @Timeout(60000) public void testAMRMTokenUpdate() throws Exception { LOG.info("Running 
testAMRMTokenUpdate"); @@ -2891,7 +2914,7 @@ public class TestRMContainerAllocator { final Token oldToken = rm.getRMContext().getRMApps() .get(appId).getRMAppAttempt(appAttemptId).getAMRMToken(); - Assert.assertNotNull("app should have a token", oldToken); + Assertions.assertNotNull(oldToken, "app should have a token"); UserGroupInformation testUgi = UserGroupInformation.createUserForTesting( "someuser", new String[0]); Token newToken = testUgi.doAs( @@ -2906,7 +2929,7 @@ public class TestRMContainerAllocator { long startTime = Time.monotonicNow(); while (currentToken == oldToken) { if (Time.monotonicNow() - startTime > 20000) { - Assert.fail("Took to long to see AMRM token change"); + Assertions.fail("Took to long to see AMRM token change"); } Thread.sleep(100); allocator.schedule(); @@ -2929,13 +2952,13 @@ public class TestRMContainerAllocator { } } - Assert.assertEquals("too many AMRM tokens", 1, tokenCount); - Assert.assertArrayEquals("token identifier not updated", - newToken.getIdentifier(), ugiToken.getIdentifier()); - Assert.assertArrayEquals("token password not updated", - newToken.getPassword(), ugiToken.getPassword()); - Assert.assertEquals("AMRM token service not updated", - new Text(rmAddr), ugiToken.getService()); + Assertions.assertEquals(1, tokenCount, "too many AMRM tokens"); + Assertions.assertArrayEquals(newToken.getIdentifier(), ugiToken.getIdentifier(), + "token identifier not updated"); + Assertions.assertArrayEquals(newToken.getPassword(), ugiToken.getPassword(), + "token password not updated"); + Assertions.assertEquals(new Text(rmAddr), ugiToken.getService(), + "AMRM token service not updated"); } @Test @@ -2975,7 +2998,7 @@ public class TestRMContainerAllocator { @Override protected void setRequestLimit(Priority priority, Resource capability, int limit) { - Assert.fail("setRequestLimit() should not be invoked"); + Assertions.fail("setRequestLimit() should not be invoked"); } }; @@ -3057,22 +3080,22 @@ public class TestRMContainerAllocator { // verify all of the host-specific asks were sent plus one for the // default rack and one for the ANY request - Assert.assertEquals(reqMapEvents.length + 2, mockScheduler.lastAsk.size()); + Assertions.assertEquals(reqMapEvents.length + 2, mockScheduler.lastAsk.size()); // verify AM is only asking for the map limit overall - Assert.assertEquals(MAP_LIMIT, mockScheduler.lastAnyAskMap); + Assertions.assertEquals(MAP_LIMIT, mockScheduler.lastAnyAskMap); // assign a map task and verify we do not ask for any more maps ContainerId cid0 = mockScheduler.assignContainer("h0", false); allocator.schedule(); allocator.schedule(); - Assert.assertEquals(2, mockScheduler.lastAnyAskMap); + Assertions.assertEquals(2, mockScheduler.lastAnyAskMap); // complete the map task and verify that we ask for one more mockScheduler.completeContainer(cid0); allocator.schedule(); allocator.schedule(); - Assert.assertEquals(3, mockScheduler.lastAnyAskMap); + Assertions.assertEquals(3, mockScheduler.lastAnyAskMap); // assign three more maps and verify we ask for no more maps ContainerId cid1 = mockScheduler.assignContainer("h1", false); @@ -3080,7 +3103,7 @@ public class TestRMContainerAllocator { ContainerId cid3 = mockScheduler.assignContainer("h3", false); allocator.schedule(); allocator.schedule(); - Assert.assertEquals(0, mockScheduler.lastAnyAskMap); + Assertions.assertEquals(0, mockScheduler.lastAnyAskMap); // complete two containers and verify we only asked for one more // since at that point all maps should be scheduled/completed @@ -3088,7 +3111,7 @@ 
public class TestRMContainerAllocator { mockScheduler.completeContainer(cid3); allocator.schedule(); allocator.schedule(); - Assert.assertEquals(1, mockScheduler.lastAnyAskMap); + Assertions.assertEquals(1, mockScheduler.lastAnyAskMap); // allocate the last container and complete the first one // and verify there are no more map asks. @@ -3096,76 +3119,77 @@ public class TestRMContainerAllocator { ContainerId cid4 = mockScheduler.assignContainer("h4", false); allocator.schedule(); allocator.schedule(); - Assert.assertEquals(0, mockScheduler.lastAnyAskMap); + Assertions.assertEquals(0, mockScheduler.lastAnyAskMap); // complete the last map mockScheduler.completeContainer(cid4); allocator.schedule(); allocator.schedule(); - Assert.assertEquals(0, mockScheduler.lastAnyAskMap); + Assertions.assertEquals(0, mockScheduler.lastAnyAskMap); // verify only reduce limit being requested - Assert.assertEquals(REDUCE_LIMIT, mockScheduler.lastAnyAskReduce); + Assertions.assertEquals(REDUCE_LIMIT, mockScheduler.lastAnyAskReduce); // assign a reducer and verify ask goes to zero cid0 = mockScheduler.assignContainer("h0", true); allocator.schedule(); allocator.schedule(); - Assert.assertEquals(0, mockScheduler.lastAnyAskReduce); + Assertions.assertEquals(0, mockScheduler.lastAnyAskReduce); // complete the reducer and verify we ask for another mockScheduler.completeContainer(cid0); allocator.schedule(); allocator.schedule(); - Assert.assertEquals(1, mockScheduler.lastAnyAskReduce); + Assertions.assertEquals(1, mockScheduler.lastAnyAskReduce); // assign a reducer and verify ask goes to zero cid0 = mockScheduler.assignContainer("h0", true); allocator.schedule(); allocator.schedule(); - Assert.assertEquals(0, mockScheduler.lastAnyAskReduce); + Assertions.assertEquals(0, mockScheduler.lastAnyAskReduce); // complete the reducer and verify no more reducers mockScheduler.completeContainer(cid0); allocator.schedule(); allocator.schedule(); - Assert.assertEquals(0, mockScheduler.lastAnyAskReduce); + Assertions.assertEquals(0, mockScheduler.lastAnyAskReduce); allocator.close(); } - @Test(expected = RMContainerAllocationException.class) + @Test public void testAttemptNotFoundCausesRMCommunicatorException() throws Exception { + assertThrows(RMContainerAllocationException.class, () -> { + Configuration conf = new Configuration(); + MyResourceManager rm = new MyResourceManager(conf); + rm.start(); - Configuration conf = new Configuration(); - MyResourceManager rm = new MyResourceManager(conf); - rm.start(); + // Submit the application + RMApp app = MockRMAppSubmitter.submitWithMemory(1024, rm); + rm.drainEvents(); - // Submit the application - RMApp app = MockRMAppSubmitter.submitWithMemory(1024, rm); - rm.drainEvents(); + MockNM amNodeManager = rm.registerNode("amNM:1234", 2048); + amNodeManager.nodeHeartbeat(true); + rm.drainEvents(); - MockNM amNodeManager = rm.registerNode("amNM:1234", 2048); - amNodeManager.nodeHeartbeat(true); - rm.drainEvents(); + ApplicationAttemptId appAttemptId = app.getCurrentAppAttempt() + .getAppAttemptId(); + rm.sendAMLaunched(appAttemptId); + rm.drainEvents(); - ApplicationAttemptId appAttemptId = app.getCurrentAppAttempt() - .getAppAttemptId(); - rm.sendAMLaunched(appAttemptId); - rm.drainEvents(); + JobId jobId = MRBuilderUtils.newJobId(appAttemptId.getApplicationId(), 0); + Job mockJob = mock(Job.class); + when(mockJob.getReport()).thenReturn( + MRBuilderUtils.newJobReport(jobId, "job", "user", JobState.RUNNING, 0, + 0, 0, 0, 0, 0, 0, "jobfile", null, false, "")); + MyContainerAllocator 
allocator = new MyContainerAllocator(rm, conf, + appAttemptId, mockJob); - JobId jobId = MRBuilderUtils.newJobId(appAttemptId.getApplicationId(), 0); - Job mockJob = mock(Job.class); - when(mockJob.getReport()).thenReturn( - MRBuilderUtils.newJobReport(jobId, "job", "user", JobState.RUNNING, 0, - 0, 0, 0, 0, 0, 0, "jobfile", null, false, "")); - MyContainerAllocator allocator = new MyContainerAllocator(rm, conf, - appAttemptId, mockJob); - - // Now kill the application - rm.killApp(app.getApplicationId()); - rm.waitForState(app.getApplicationId(), RMAppState.KILLED); - allocator.schedule(); + // Now kill the application + rm.killApp(app.getApplicationId()); + rm.waitForState(app.getApplicationId(), RMAppState.KILLED); + allocator.schedule(); + }); } @Test @@ -3246,29 +3270,29 @@ public class TestRMContainerAllocator { rm.drainEvents(); // One map is assigned. - Assert.assertEquals(1, allocator.getAssignedRequests().maps.size()); + Assertions.assertEquals(1, allocator.getAssignedRequests().maps.size()); // Send deallocate request for map so that no maps are assigned after this. ContainerAllocatorEvent deallocate = createDeallocateEvent(jobId, 1, false); allocator.sendDeallocate(deallocate); // Now one reducer should be scheduled and one should be pending. - Assert.assertEquals(1, allocator.getScheduledRequests().reduces.size()); - Assert.assertEquals(1, allocator.getNumOfPendingReduces()); + Assertions.assertEquals(1, allocator.getScheduledRequests().reduces.size()); + Assertions.assertEquals(1, allocator.getNumOfPendingReduces()); // No map should be assigned and one should be scheduled. - Assert.assertEquals(1, allocator.getScheduledRequests().maps.size()); - Assert.assertEquals(0, allocator.getAssignedRequests().maps.size()); + Assertions.assertEquals(1, allocator.getScheduledRequests().maps.size()); + Assertions.assertEquals(0, allocator.getAssignedRequests().maps.size()); - Assert.assertEquals(6, allocator.getAsk().size()); + Assertions.assertEquals(6, allocator.getAsk().size()); for (ResourceRequest req : allocator.getAsk()) { boolean isReduce = req.getPriority().equals(RMContainerAllocator.PRIORITY_REDUCE); if (isReduce) { // 1 reducer each asked on h2, * and default-rack - Assert.assertTrue((req.getResourceName().equals("*") || + Assertions.assertTrue((req.getResourceName().equals("*") || req.getResourceName().equals("/default-rack") || req.getResourceName().equals("h2")) && req.getNumContainers() == 1); } else { //map // 0 mappers asked on h1 and 1 each on * and default-rack - Assert.assertTrue(((req.getResourceName().equals("*") || + Assertions.assertTrue(((req.getResourceName().equals("*") || req.getResourceName().equals("/default-rack")) && req.getNumContainers() == 1) || (req.getResourceName().equals("h1") && req.getNumContainers() == 0)); @@ -3281,17 +3305,17 @@ public class TestRMContainerAllocator { // After allocate response from scheduler, all scheduled reduces are ramped // down and move to pending. 3 asks are also updated with 0 containers to // indicate ramping down of reduces to scheduler. 
- Assert.assertEquals(0, allocator.getScheduledRequests().reduces.size()); - Assert.assertEquals(2, allocator.getNumOfPendingReduces()); - Assert.assertEquals(3, allocator.getAsk().size()); + Assertions.assertEquals(0, allocator.getScheduledRequests().reduces.size()); + Assertions.assertEquals(2, allocator.getNumOfPendingReduces()); + Assertions.assertEquals(3, allocator.getAsk().size()); for (ResourceRequest req : allocator.getAsk()) { - Assert.assertEquals( + Assertions.assertEquals( RMContainerAllocator.PRIORITY_REDUCE, req.getPriority()); - Assert.assertTrue(req.getResourceName().equals("*") || + Assertions.assertTrue(req.getResourceName().equals("*") || req.getResourceName().equals("/default-rack") || req.getResourceName().equals("h2")); - Assert.assertEquals(Resource.newInstance(1024, 1), req.getCapability()); - Assert.assertEquals(0, req.getNumContainers()); + Assertions.assertEquals(Resource.newInstance(1024, 1), req.getCapability()); + Assertions.assertEquals(0, req.getNumContainers()); } } @@ -3416,29 +3440,29 @@ public class TestRMContainerAllocator { rm.drainEvents(); // One map is assigned. - Assert.assertEquals(1, allocator.getAssignedRequests().maps.size()); + Assertions.assertEquals(1, allocator.getAssignedRequests().maps.size()); // Send deallocate request for map so that no maps are assigned after this. ContainerAllocatorEvent deallocate = createDeallocateEvent(jobId, 1, false); allocator.sendDeallocate(deallocate); // Now one reducer should be scheduled and one should be pending. - Assert.assertEquals(1, allocator.getScheduledRequests().reduces.size()); - Assert.assertEquals(1, allocator.getNumOfPendingReduces()); + Assertions.assertEquals(1, allocator.getScheduledRequests().reduces.size()); + Assertions.assertEquals(1, allocator.getNumOfPendingReduces()); // No map should be assigned and one should be scheduled. - Assert.assertEquals(1, allocator.getScheduledRequests().maps.size()); - Assert.assertEquals(0, allocator.getAssignedRequests().maps.size()); + Assertions.assertEquals(1, allocator.getScheduledRequests().maps.size()); + Assertions.assertEquals(0, allocator.getAssignedRequests().maps.size()); - Assert.assertEquals(6, allocator.getAsk().size()); + Assertions.assertEquals(6, allocator.getAsk().size()); for (ResourceRequest req : allocator.getAsk()) { boolean isReduce = req.getPriority().equals(RMContainerAllocator.PRIORITY_REDUCE); if (isReduce) { // 1 reducer each asked on h2, * and default-rack - Assert.assertTrue((req.getResourceName().equals("*") || + Assertions.assertTrue((req.getResourceName().equals("*") || req.getResourceName().equals("/default-rack") || req.getResourceName().equals("h2")) && req.getNumContainers() == 1); } else { //map // 0 mappers asked on h1 and 1 each on * and default-rack - Assert.assertTrue(((req.getResourceName().equals("*") || + Assertions.assertTrue(((req.getResourceName().equals("*") || req.getResourceName().equals("/default-rack")) && req.getNumContainers() == 1) || (req.getResourceName().equals("h1") && req.getNumContainers() == 0)); @@ -3454,17 +3478,17 @@ public class TestRMContainerAllocator { // After allocate response from scheduler, all scheduled reduces are ramped // down and move to pending. 3 asks are also updated with 0 containers to // indicate ramping down of reduces to scheduler. 
- Assert.assertEquals(0, allocator.getScheduledRequests().reduces.size()); - Assert.assertEquals(2, allocator.getNumOfPendingReduces()); - Assert.assertEquals(3, allocator.getAsk().size()); + Assertions.assertEquals(0, allocator.getScheduledRequests().reduces.size()); + Assertions.assertEquals(2, allocator.getNumOfPendingReduces()); + Assertions.assertEquals(3, allocator.getAsk().size()); for (ResourceRequest req : allocator.getAsk()) { - Assert.assertEquals( + Assertions.assertEquals( RMContainerAllocator.PRIORITY_REDUCE, req.getPriority()); - Assert.assertTrue(req.getResourceName().equals("*") || + Assertions.assertTrue(req.getResourceName().equals("*") || req.getResourceName().equals("/default-rack") || req.getResourceName().equals("h2")); - Assert.assertEquals(Resource.newInstance(1024, 1), req.getCapability()); - Assert.assertEquals(0, req.getNumContainers()); + Assertions.assertEquals(Resource.newInstance(1024, 1), req.getCapability()); + Assertions.assertEquals(0, req.getNumContainers()); } } @@ -3552,14 +3576,14 @@ public class TestRMContainerAllocator { rm.drainEvents(); // Two maps are assigned. - Assert.assertEquals(2, allocator.getAssignedRequests().maps.size()); + Assertions.assertEquals(2, allocator.getAssignedRequests().maps.size()); // Send deallocate request for map so that no maps are assigned after this. ContainerAllocatorEvent deallocate1 = createDeallocateEvent(jobId, 1, false); allocator.sendDeallocate(deallocate1); ContainerAllocatorEvent deallocate2 = createDeallocateEvent(jobId, 2, false); allocator.sendDeallocate(deallocate2); // No map should be assigned. - Assert.assertEquals(0, allocator.getAssignedRequests().maps.size()); + Assertions.assertEquals(0, allocator.getAssignedRequests().maps.size()); nodeManager.nodeHeartbeat(true); rm.drainEvents(); @@ -3583,18 +3607,18 @@ public class TestRMContainerAllocator { allocator.schedule(); rm.drainEvents(); // One reducer is assigned and one map is scheduled - Assert.assertEquals(1, allocator.getScheduledRequests().maps.size()); - Assert.assertEquals(1, allocator.getAssignedRequests().reduces.size()); + Assertions.assertEquals(1, allocator.getScheduledRequests().maps.size()); + Assertions.assertEquals(1, allocator.getAssignedRequests().reduces.size()); // Headroom enough to run a mapper if headroom is taken as it is but wont be // enough if scheduled reducers resources are deducted. 
rm.getMyFifoScheduler().forceResourceLimit(Resource.newInstance(1260, 2)); allocator.schedule(); rm.drainEvents(); // After allocate response, the one assigned reducer is preempted and killed - Assert.assertEquals(1, MyContainerAllocator.getTaskAttemptKillEvents().size()); - Assert.assertEquals(RMContainerAllocator.RAMPDOWN_DIAGNOSTIC, + Assertions.assertEquals(1, MyContainerAllocator.getTaskAttemptKillEvents().size()); + Assertions.assertEquals(RMContainerAllocator.RAMPDOWN_DIAGNOSTIC, MyContainerAllocator.getTaskAttemptKillEvents().get(0).getMessage()); - Assert.assertEquals(1, allocator.getNumOfPendingReduces()); + Assertions.assertEquals(1, allocator.getNumOfPendingReduces()); } private static class MockScheduler implements ApplicationMasterProtocol { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestResourceCalculatorUtils.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestResourceCalculatorUtils.java index cab8f544416..27cd3678535 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestResourceCalculatorUtils.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestResourceCalculatorUtils.java @@ -19,8 +19,8 @@ package org.apache.hadoop.mapreduce.v2.app.rm; import org.apache.hadoop.yarn.api.records.Resource; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; import java.util.EnumSet; @@ -59,17 +59,17 @@ public class TestResourceCalculatorUtils { Resource nonZeroResource, int expectedNumberOfContainersForMemoryOnly, int expectedNumberOfContainersOverall) { - Assert.assertEquals("Incorrect number of available containers for Memory", - expectedNumberOfContainersForMemoryOnly, + Assertions.assertEquals(expectedNumberOfContainersForMemoryOnly, ResourceCalculatorUtils.computeAvailableContainers( clusterAvailableResources, nonZeroResource, - EnumSet.of(SchedulerResourceTypes.MEMORY))); + EnumSet.of(SchedulerResourceTypes.MEMORY)), + "Incorrect number of available containers for Memory"); - Assert.assertEquals("Incorrect number of available containers overall", - expectedNumberOfContainersOverall, + Assertions.assertEquals(expectedNumberOfContainersOverall, ResourceCalculatorUtils.computeAvailableContainers( clusterAvailableResources, nonZeroResource, EnumSet.of(SchedulerResourceTypes.CPU, - SchedulerResourceTypes.MEMORY))); + SchedulerResourceTypes.MEMORY)), + "Incorrect number of available containers overall"); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/speculate/TestDataStatistics.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/speculate/TestDataStatistics.java index d5b817c4828..3ac360ef53f 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/speculate/TestDataStatistics.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/speculate/TestDataStatistics.java @@ -18,8 +18,8 @@ package 
org.apache.hadoop.mapreduce.v2.app.speculate; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; public class TestDataStatistics { @@ -28,21 +28,21 @@ public class TestDataStatistics { @Test public void testEmptyDataStatistics() throws Exception { DataStatistics statistics = new DataStatistics(); - Assert.assertEquals(0, statistics.count(), TOL); - Assert.assertEquals(0, statistics.mean(), TOL); - Assert.assertEquals(0, statistics.var(), TOL); - Assert.assertEquals(0, statistics.std(), TOL); - Assert.assertEquals(0, statistics.outlier(1.0f), TOL); + Assertions.assertEquals(0, statistics.count(), TOL); + Assertions.assertEquals(0, statistics.mean(), TOL); + Assertions.assertEquals(0, statistics.var(), TOL); + Assertions.assertEquals(0, statistics.std(), TOL); + Assertions.assertEquals(0, statistics.outlier(1.0f), TOL); } @Test public void testSingleEntryDataStatistics() throws Exception { DataStatistics statistics = new DataStatistics(17.29); - Assert.assertEquals(1, statistics.count(), TOL); - Assert.assertEquals(17.29, statistics.mean(), TOL); - Assert.assertEquals(0, statistics.var(), TOL); - Assert.assertEquals(0, statistics.std(), TOL); - Assert.assertEquals(17.29, statistics.outlier(1.0f), TOL); + Assertions.assertEquals(1, statistics.count(), TOL); + Assertions.assertEquals(17.29, statistics.mean(), TOL); + Assertions.assertEquals(0, statistics.var(), TOL); + Assertions.assertEquals(0, statistics.std(), TOL); + Assertions.assertEquals(17.29, statistics.outlier(1.0f), TOL); } @Test @@ -50,24 +50,24 @@ public class TestDataStatistics { DataStatistics statistics = new DataStatistics(); statistics.add(17); statistics.add(29); - Assert.assertEquals(2, statistics.count(), TOL); - Assert.assertEquals(23.0, statistics.mean(), TOL); - Assert.assertEquals(36.0, statistics.var(), TOL); - Assert.assertEquals(6.0, statistics.std(), TOL); - Assert.assertEquals(29.0, statistics.outlier(1.0f), TOL); + Assertions.assertEquals(2, statistics.count(), TOL); + Assertions.assertEquals(23.0, statistics.mean(), TOL); + Assertions.assertEquals(36.0, statistics.var(), TOL); + Assertions.assertEquals(6.0, statistics.std(), TOL); + Assertions.assertEquals(29.0, statistics.outlier(1.0f), TOL); } @Test public void testUpdateStatistics() throws Exception { DataStatistics statistics = new DataStatistics(17); statistics.add(29); - Assert.assertEquals(2, statistics.count(), TOL); - Assert.assertEquals(23.0, statistics.mean(), TOL); - Assert.assertEquals(36.0, statistics.var(), TOL); + Assertions.assertEquals(2, statistics.count(), TOL); + Assertions.assertEquals(23.0, statistics.mean(), TOL); + Assertions.assertEquals(36.0, statistics.var(), TOL); statistics.updateStatistics(17, 29); - Assert.assertEquals(2, statistics.count(), TOL); - Assert.assertEquals(29.0, statistics.mean(), TOL); - Assert.assertEquals(0.0, statistics.var(), TOL); + Assertions.assertEquals(2, statistics.count(), TOL); + Assertions.assertEquals(29.0, statistics.mean(), TOL); + Assertions.assertEquals(0.0, statistics.var(), TOL); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/speculate/forecast/TestSimpleExponentialForecast.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/speculate/forecast/TestSimpleExponentialForecast.java index b669df765ba..5324e0cff7e 100644 --- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/speculate/forecast/TestSimpleExponentialForecast.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/speculate/forecast/TestSimpleExponentialForecast.java @@ -21,8 +21,8 @@ package org.apache.hadoop.mapreduce.v2.app.speculate.forecast; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.yarn.util.ControlledClock; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; /** * Testing the statistical model of simple exponential estimator. @@ -100,21 +100,21 @@ public class TestSimpleExponentialForecast { @Test public void testSimpleExponentialForecastLinearInc() throws Exception { int res = incTestSimpleExponentialForecast(); - Assert.assertEquals("We got the wrong estimate from simple exponential.", - res, 0); + Assertions.assertEquals(res, 0, + "We got the wrong estimate from simple exponential."); } @Test public void testSimpleExponentialForecastLinearDec() throws Exception { int res = decTestSimpleExponentialForecast(); - Assert.assertEquals("We got the wrong estimate from simple exponential.", - res, 0); + Assertions.assertEquals(res, 0, + "We got the wrong estimate from simple exponential."); } @Test public void testSimpleExponentialForecastZeros() throws Exception { int res = zeroTestSimpleExponentialForecast(); - Assert.assertEquals("We got the wrong estimate from simple exponential.", - res, 0); + Assertions.assertEquals(res, 0, + "We got the wrong estimate from simple exponential."); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java index adb6a573670..4b8ed0163d5 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java @@ -19,7 +19,7 @@ package org.apache.hadoop.mapreduce.v2.app.webapp; import static org.apache.hadoop.mapreduce.v2.app.webapp.AMParams.APP_ID; -import static org.junit.Assert.assertEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; import java.io.ByteArrayOutputStream; import java.io.File; @@ -39,7 +39,7 @@ import javax.net.ssl.SSLException; import org.apache.hadoop.mapreduce.MRJobConfig; import org.apache.hadoop.security.ssl.KeyStoreTestUtil; -import org.junit.Assert; +import org.junit.jupiter.api.Assertions; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.http.HttpConfig.Policy; @@ -65,14 +65,17 @@ import org.apache.hadoop.yarn.webapp.WebApps; import org.apache.hadoop.yarn.webapp.test.WebAppTests; import org.apache.hadoop.yarn.webapp.util.WebAppUtils; import org.apache.http.HttpStatus; -import org.junit.After; -import org.junit.Rule; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Test; import org.apache.hadoop.thirdparty.com.google.common.net.HttpHeaders; import com.google.inject.Injector; -import 
org.junit.contrib.java.lang.system.EnvironmentVariables; +import org.junit.jupiter.api.extension.ExtendWith; +import uk.org.webcompere.systemstubs.environment.EnvironmentVariables; +import uk.org.webcompere.systemstubs.jupiter.SystemStub; +import uk.org.webcompere.systemstubs.jupiter.SystemStubsExtension; +@ExtendWith(SystemStubsExtension.class) public class TestAMWebApp { private static final File TEST_DIR = new File( @@ -80,12 +83,13 @@ public class TestAMWebApp { System.getProperty("java.io.tmpdir")), TestAMWebApp.class.getName()); - @After + @AfterEach public void tearDown() { TEST_DIR.delete(); } - @Test public void testAppControllerIndex() { + @Test + public void testAppControllerIndex() { AppContext ctx = new MockAppContext(0, 1, 1, 1); Injector injector = WebAppTests.createMockInjector(AppContext.class, ctx); AppController controller = injector.getInstance(AppController.class); @@ -93,25 +97,29 @@ public class TestAMWebApp { assertEquals(ctx.getApplicationID().toString(), controller.get(APP_ID,"")); } - @Test public void testAppView() { + @Test + public void testAppView() { WebAppTests.testPage(AppView.class, AppContext.class, new MockAppContext(0, 1, 1, 1)); } - @Test public void testJobView() { + @Test + public void testJobView() { AppContext appContext = new MockAppContext(0, 1, 1, 1); Map params = getJobParams(appContext); WebAppTests.testPage(JobPage.class, AppContext.class, appContext, params); } - @Test public void testTasksView() { + @Test + public void testTasksView() { AppContext appContext = new MockAppContext(0, 1, 1, 1); Map params = getTaskParams(appContext); WebAppTests.testPage(TasksPage.class, AppContext.class, appContext, params); } - @Test public void testTaskView() { + @Test + public void testTaskView() { AppContext appContext = new MockAppContext(0, 1, 1, 1); Map params = getTaskParams(appContext); App app = new App(appContext); @@ -138,19 +146,22 @@ public class TestAMWebApp { return params; } - @Test public void testConfView() { + @Test + public void testConfView() { WebAppTests.testPage(JobConfPage.class, AppContext.class, new MockAppContext(0, 1, 1, 1)); } - @Test public void testCountersView() { + @Test + public void testCountersView() { AppContext appContext = new MockAppContext(0, 1, 1, 1); Map params = getJobParams(appContext); WebAppTests.testPage(CountersPage.class, AppContext.class, appContext, params); } - @Test public void testSingleCounterView() { + @Test + public void testSingleCounterView() { AppContext appContext = new MockAppContext(0, 1, 1, 1); Job job = appContext.getAllJobs().values().iterator().next(); // add a failed task to the job without any counters @@ -165,14 +176,16 @@ public class TestAMWebApp { appContext, params); } - @Test public void testTaskCountersView() { + @Test + public void testTaskCountersView() { AppContext appContext = new MockAppContext(0, 1, 1, 1); Map params = getTaskParams(appContext); WebAppTests.testPage(CountersPage.class, AppContext.class, appContext, params); } - @Test public void testSingleTaskCounterView() { + @Test + public void testSingleTaskCounterView() { AppContext appContext = new MockAppContext(0, 1, 1, 2); Map params = getTaskParams(appContext); params.put(AMParams.COUNTER_GROUP, @@ -213,7 +226,7 @@ public class TestAMWebApp { InputStream in = conn.getInputStream(); ByteArrayOutputStream out = new ByteArrayOutputStream(); IOUtils.copyBytes(in, out, 1024); - Assert.assertTrue(out.toString().contains("MapReduce Application")); + Assertions.assertTrue(out.toString().contains("MapReduce Application")); 
// https:// is not accessible. URL httpsUrl = new URL("https://" + hostPort); @@ -221,7 +234,7 @@ public class TestAMWebApp { HttpURLConnection httpsConn = (HttpURLConnection) httpsUrl.openConnection(); httpsConn.getInputStream(); - Assert.fail("https:// is not accessible, expected to fail"); + Assertions.fail("https:// is not accessible, expected to fail"); } catch (SSLException e) { // expected } @@ -230,9 +243,8 @@ public class TestAMWebApp { app.verifyCompleted(); } - @Rule - public final EnvironmentVariables environmentVariables - = new EnvironmentVariables(); + @SystemStub + public EnvironmentVariables environmentVariables; @Test public void testMRWebAppSSLEnabled() throws Exception { @@ -270,7 +282,7 @@ public class TestAMWebApp { InputStream in = httpsConn.getInputStream(); ByteArrayOutputStream out = new ByteArrayOutputStream(); IOUtils.copyBytes(in, out, 1024); - Assert.assertTrue(out.toString().contains("MapReduce Application")); + Assertions.assertTrue(out.toString().contains("MapReduce Application")); // http:// is not accessible. URL httpUrl = new URL("http://" + hostPort); @@ -278,7 +290,7 @@ public class TestAMWebApp { HttpURLConnection httpConn = (HttpURLConnection) httpUrl.openConnection(); httpConn.getResponseCode(); - Assert.fail("http:// is not accessible, expected to fail"); + Assertions.fail("http:// is not accessible, expected to fail"); } catch (SocketException e) { // expected } @@ -337,7 +349,7 @@ public class TestAMWebApp { InputStream in = httpsConn.getInputStream(); ByteArrayOutputStream out = new ByteArrayOutputStream(); IOUtils.copyBytes(in, out, 1024); - Assert.assertTrue(out.toString().contains("MapReduce Application")); + Assertions.assertTrue(out.toString().contains("MapReduce Application")); // Try with wrong client cert KeyPair otherClientKeyPair = KeyStoreTestUtil.generateKeyPair("RSA"); @@ -349,7 +361,7 @@ public class TestAMWebApp { HttpURLConnection httpConn = (HttpURLConnection) httpsUrl.openConnection(); httpConn.getResponseCode(); - Assert.fail("Wrong client certificate, expected to fail"); + Assertions.fail("Wrong client certificate, expected to fail"); } catch (SSLException e) { // expected } @@ -404,9 +416,9 @@ public class TestAMWebApp { String expectedURL = scheme + conf.get(YarnConfiguration.PROXY_ADDRESS) + ProxyUriUtils.getPath(app.getAppID(), "/mapreduce", true); - Assert.assertEquals(expectedURL, + Assertions.assertEquals(expectedURL, conn.getHeaderField(HttpHeaders.LOCATION)); - Assert.assertEquals(HttpStatus.SC_MOVED_TEMPORARILY, + Assertions.assertEquals(HttpStatus.SC_MOVED_TEMPORARILY, conn.getResponseCode()); app.waitForState(job, JobState.SUCCEEDED); app.verifyCompleted(); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServices.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServices.java index 5def1d91494..8c9a2d3fa0c 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServices.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServices.java @@ -19,9 +19,9 @@ package org.apache.hadoop.mapreduce.v2.app.webapp; import static org.apache.hadoop.yarn.webapp.WebServicesTestUtils.assertResponseStatusCode; -import static 
org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import java.io.StringReader; import java.util.Set; @@ -35,6 +35,7 @@ import org.apache.hadoop.http.JettyUtils; import org.apache.hadoop.mapreduce.v2.app.AppContext; import org.apache.hadoop.mapreduce.v2.app.MockAppContext; import org.apache.hadoop.util.Sets; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; import org.apache.hadoop.yarn.webapp.JerseyTestBase; @@ -42,8 +43,8 @@ import org.apache.hadoop.yarn.webapp.WebServicesTestUtils; import org.codehaus.jettison.json.JSONArray; import org.codehaus.jettison.json.JSONException; import org.codehaus.jettison.json.JSONObject; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList; @@ -92,7 +93,7 @@ public class TestAMWebServices extends JerseyTestBase { Guice.createInjector(new WebServletModule())); } - @Before + @BeforeEach @Override public void setUp() throws Exception { super.setUp(); @@ -116,7 +117,7 @@ public class TestAMWebServices extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); verifyAMInfo(json.getJSONObject("info"), appContext); } @@ -128,7 +129,7 @@ public class TestAMWebServices extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); verifyAMInfo(json.getJSONObject("info"), appContext); } @@ -140,7 +141,7 @@ public class TestAMWebServices extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); verifyAMInfo(json.getJSONObject("info"), appContext); } @@ -164,7 +165,7 @@ public class TestAMWebServices extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); verifyAMInfo(json.getJSONObject("info"), appContext); } @@ -177,7 +178,7 @@ public class TestAMWebServices extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); verifyAMInfo(json.getJSONObject("info"), appContext); } @@ -189,7 +190,7 @@ public class TestAMWebServices 
extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); verifyAMInfo(json.getJSONObject("info"), appContext); } @@ -263,7 +264,7 @@ public class TestAMWebServices extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); verifyBlacklistedNodesInfo(json, appContext); } @@ -281,7 +282,7 @@ public class TestAMWebServices extends JerseyTestBase { public void verifyAMInfo(JSONObject info, AppContext ctx) throws JSONException { - assertEquals("incorrect number of elements", 5, info.length()); + assertEquals(5, info.length(), "incorrect number of elements"); verifyAMInfoGeneric(ctx, info.getString("appId"), info.getString("user"), info.getString("name"), info.getLong("startedOn"), @@ -290,13 +291,13 @@ public class TestAMWebServices extends JerseyTestBase { public void verifyAMInfoXML(String xml, AppContext ctx) throws JSONException, Exception { - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); Document dom = db.parse(is); NodeList nodes = dom.getElementsByTagName("info"); - assertEquals("incorrect number of elements", 1, nodes.getLength()); + assertEquals(1, nodes.getLength(), "incorrect number of elements"); for (int i = 0; i < nodes.getLength(); i++) { Element element = (Element) nodes.item(i); @@ -319,8 +320,8 @@ public class TestAMWebServices extends JerseyTestBase { WebServicesTestUtils.checkStringMatch("name", ctx.getApplicationName(), name); - assertEquals("startedOn incorrect", ctx.getStartTime(), startedOn); - assertTrue("elapsedTime not greater then 0", (elapsedTime > 0)); + assertEquals(ctx.getStartTime(), startedOn, "startedOn incorrect"); + assertTrue((elapsedTime > 0), "elapsedTime not greater then 0"); } @@ -335,17 +336,17 @@ public class TestAMWebServices extends JerseyTestBase { public void verifyBlacklistedNodesInfoXML(String xml, AppContext ctx) throws JSONException, Exception { - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); Document dom = db.parse(is); NodeList infonodes = dom.getElementsByTagName("blacklistednodesinfo"); - assertEquals("incorrect number of elements", 1, infonodes.getLength()); + assertEquals(1, infonodes.getLength(), "incorrect number of elements"); NodeList nodes = dom.getElementsByTagName("blacklistedNodes"); Set blacklistedNodes = ctx.getBlacklistedNodes(); - assertEquals("incorrect number of elements", blacklistedNodes.size(), - nodes.getLength()); + assertEquals(blacklistedNodes.size(), + nodes.getLength(), "incorrect number of elements"); for (int i = 0; i < nodes.getLength(); i++) { Element element = (Element) nodes.item(i); assertTrue( diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempt.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempt.java index f20ac6ff1b8..fb5c8dbeb46 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempt.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempt.java @@ -18,7 +18,7 @@ package org.apache.hadoop.mapreduce.v2.app.webapp; -import static org.junit.Assert.assertEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; import java.io.StringReader; import java.util.Enumeration; @@ -44,13 +44,14 @@ import org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt; import org.apache.hadoop.mapreduce.v2.util.MRApps; import org.apache.hadoop.security.authentication.server.AuthenticationFilter; import org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; import org.apache.hadoop.yarn.webapp.JerseyTestBase; import org.apache.hadoop.yarn.webapp.WebServicesTestUtils; import org.codehaus.jettison.json.JSONObject; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList; @@ -117,7 +118,7 @@ public class TestAMWebServicesAttempt extends JerseyTestBase { Guice.createInjector(new WebServletModule())); } - @Before + @BeforeEach @Override public void setUp() throws Exception { super.setUp(); @@ -156,7 +157,7 @@ public class TestAMWebServicesAttempt extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); assertEquals(att.getState().toString(), json.get("state")); } } @@ -185,7 +186,7 @@ public class TestAMWebServicesAttempt extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -225,7 +226,8 @@ public class TestAMWebServicesAttempt extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), + "incorrect number of elements"); assertEquals(TaskAttemptState.KILLED.toString(), json.get("state")); } } @@ -259,7 +261,7 @@ public class TestAMWebServicesAttempt extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, 
response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java index 32d054ff5c5..d534759319f 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java @@ -19,11 +19,11 @@ package org.apache.hadoop.mapreduce.v2.app.webapp; import static org.apache.hadoop.yarn.webapp.WebServicesTestUtils.assertResponseStatusCode; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import java.io.StringReader; import java.util.List; @@ -44,6 +44,7 @@ import org.apache.hadoop.mapreduce.v2.app.job.Job; import org.apache.hadoop.mapreduce.v2.app.job.Task; import org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt; import org.apache.hadoop.mapreduce.v2.util.MRApps; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; import org.apache.hadoop.yarn.webapp.JerseyTestBase; @@ -51,8 +52,8 @@ import org.apache.hadoop.yarn.webapp.WebServicesTestUtils; import org.codehaus.jettison.json.JSONArray; import org.codehaus.jettison.json.JSONException; import org.codehaus.jettison.json.JSONObject; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList; @@ -100,7 +101,7 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { Guice.createInjector(new WebServletModule())); } - @Before + @BeforeEach @Override public void setUp() throws Exception { super.setUp(); @@ -192,13 +193,13 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); Document dom = db.parse(is); NodeList attempts = dom.getElementsByTagName("taskAttempts"); - assertEquals("incorrect number of elements", 1, 
attempts.getLength()); + assertEquals(1, attempts.getLength(), "incorrect number of elements"); NodeList nodes = dom.getElementsByTagName("taskAttempt"); verifyAMTaskAttemptsXML(nodes, task); @@ -228,7 +229,7 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("taskAttempt"); verifyAMTaskAttempt(info, att, task.getType()); } @@ -258,7 +259,7 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("taskAttempt"); verifyAMTaskAttempt(info, att, task.getType()); } @@ -287,7 +288,7 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("taskAttempt"); verifyAMTaskAttempt(info, att, task.getType()); } @@ -316,7 +317,7 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -390,7 +391,7 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { + JettyUtils.UTF_8, response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -433,9 +434,9 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { public void verifyAMTaskAttempt(JSONObject info, TaskAttempt att, TaskType ttype) throws JSONException { if (ttype == TaskType.REDUCE) { - assertEquals("incorrect number of elements", 17, info.length()); + assertEquals(17, info.length(), "incorrect number of elements"); } else { - assertEquals("incorrect number of elements", 12, info.length()); + assertEquals(12, info.length(), "incorrect number of elements"); } verifyTaskAttemptGeneric(att, ttype, info.getString("id"), @@ -454,9 +455,9 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { public void verifyAMTaskAttempts(JSONObject json, Task task) throws JSONException { - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject 
attempts = json.getJSONObject("taskAttempts"); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONArray arr = attempts.getJSONArray("taskAttempt"); for (TaskAttempt att : task.getAttempts().values()) { TaskAttemptId id = att.getID(); @@ -470,13 +471,13 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { verifyAMTaskAttempt(info, att, task.getType()); } } - assertTrue("task attempt with id: " + attid - + " not in web service output", found); + assertTrue(found, "task attempt with id: " + attid + + " not in web service output"); } } public void verifyAMTaskAttemptsXML(NodeList nodes, Task task) { - assertEquals("incorrect number of elements", 1, nodes.getLength()); + assertEquals(1, nodes.getLength(), "incorrect number of elements"); for (TaskAttempt att : task.getAttempts().values()) { TaskAttemptId id = att.getID(); @@ -484,15 +485,14 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { Boolean found = false; for (int i = 0; i < nodes.getLength(); i++) { Element element = (Element) nodes.item(i); - assertFalse("task attempt should not contain any attributes, it can lead to incorrect JSON marshaling", - element.hasAttributes()); + assertFalse(element.hasAttributes(), "task attempt should not contain any attributes, it can lead to incorrect JSON marshaling"); if (attid.matches(WebServicesTestUtils.getXmlString(element, "id"))) { found = true; verifyAMTaskAttemptXML(element, att, task.getType()); } } - assertTrue("task with id: " + attid + " not in web service output", found); + assertTrue(found, "task with id: " + attid + " not in web service output"); } } @@ -527,26 +527,26 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { ta.getAssignedContainerID().toString(), assignedContainerId); - assertEquals("startTime wrong", ta.getLaunchTime(), startTime); - assertEquals("finishTime wrong", ta.getFinishTime(), finishTime); - assertEquals("elapsedTime wrong", finishTime - startTime, elapsedTime); - assertEquals("progress wrong", ta.getProgress() * 100, progress, 1e-3f); + assertEquals(ta.getLaunchTime(), startTime, "startTime wrong"); + assertEquals(ta.getFinishTime(), finishTime, "finishTime wrong"); + assertEquals(finishTime - startTime, elapsedTime, "elapsedTime wrong"); + assertEquals(ta.getProgress() * 100, progress, 1e-3f, "progress wrong"); } public void verifyReduceTaskAttemptGeneric(TaskAttempt ta, long shuffleFinishTime, long mergeFinishTime, long elapsedShuffleTime, long elapsedMergeTime, long elapsedReduceTime) { - assertEquals("shuffleFinishTime wrong", ta.getShuffleFinishTime(), - shuffleFinishTime); - assertEquals("mergeFinishTime wrong", ta.getSortFinishTime(), - mergeFinishTime); - assertEquals("elapsedShuffleTime wrong", - ta.getShuffleFinishTime() - ta.getLaunchTime(), elapsedShuffleTime); - assertEquals("elapsedMergeTime wrong", - ta.getSortFinishTime() - ta.getShuffleFinishTime(), elapsedMergeTime); - assertEquals("elapsedReduceTime wrong", - ta.getFinishTime() - ta.getSortFinishTime(), elapsedReduceTime); + assertEquals(ta.getShuffleFinishTime(), + shuffleFinishTime, "shuffleFinishTime wrong"); + assertEquals(ta.getSortFinishTime(), + mergeFinishTime, "mergeFinishTime wrong"); + assertEquals(ta.getShuffleFinishTime() - ta.getLaunchTime(), elapsedShuffleTime, + "elapsedShuffleTime wrong"); + assertEquals(ta.getSortFinishTime() - ta.getShuffleFinishTime(), elapsedMergeTime, + "elapsedMergeTime wrong"); + assertEquals(ta.getFinishTime() - 
ta.getSortFinishTime(), elapsedReduceTime, + "elapsedReduceTime wrong"); } @Test @@ -571,7 +571,7 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("jobTaskAttemptCounters"); verifyAMJobTaskAttemptCounters(info, att); } @@ -600,7 +600,7 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -616,7 +616,7 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { public void verifyAMJobTaskAttemptCounters(JSONObject info, TaskAttempt att) throws JSONException { - assertEquals("incorrect number of elements", 2, info.length()); + assertEquals(2, info.length(), "incorrect number of elements"); WebServicesTestUtils.checkStringMatch("id", MRApps.toString(att.getID()), info.getString("id")); @@ -627,15 +627,15 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { for (int i = 0; i < counterGroups.length(); i++) { JSONObject counterGroup = counterGroups.getJSONObject(i); String name = counterGroup.getString("counterGroupName"); - assertTrue("name not set", (name != null && !name.isEmpty())); + assertTrue((name != null && !name.isEmpty()), "name not set"); JSONArray counters = counterGroup.getJSONArray("counter"); for (int j = 0; j < counters.length(); j++) { JSONObject counter = counters.getJSONObject(j); String counterName = counter.getString("name"); - assertTrue("name not set", - (counterName != null && !counterName.isEmpty())); + assertTrue((counterName != null && !counterName.isEmpty()), + "name not set"); long value = counter.getLong("value"); - assertTrue("value >= 0", value >= 0); + assertTrue(value >= 0, "value >= 0"); } } } @@ -653,20 +653,19 @@ public class TestAMWebServicesAttempts extends JerseyTestBase { for (int j = 0; j < groups.getLength(); j++) { Element counters = (Element) groups.item(j); - assertNotNull("should have counters in the web service info", counters); + assertNotNull(counters, "should have counters in the web service info"); String name = WebServicesTestUtils.getXmlString(counters, "counterGroupName"); - assertTrue("name not set", (name != null && !name.isEmpty())); + assertTrue((name != null && !name.isEmpty()), "name not set"); NodeList counterArr = counters.getElementsByTagName("counter"); for (int z = 0; z < counterArr.getLength(); z++) { Element counter = (Element) counterArr.item(z); String counterName = WebServicesTestUtils.getXmlString(counter, "name"); - assertTrue("counter name not set", - (counterName != null && !counterName.isEmpty())); + assertTrue((counterName != null && !counterName.isEmpty()), "counter name not set"); long value = WebServicesTestUtils.getXmlLong(counter, "value"); - assertTrue("value not >= 0", value >= 0); + assertTrue(value >= 0, "value not >= 0"); } } diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobConf.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobConf.java index ee7bb0e3c27..5d147339de2 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobConf.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobConf.java @@ -18,10 +18,10 @@ package org.apache.hadoop.mapreduce.v2.app.webapp; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import java.io.File; import java.io.IOException; @@ -44,6 +44,7 @@ import org.apache.hadoop.mapreduce.v2.app.AppContext; import org.apache.hadoop.mapreduce.v2.app.MockAppContext; import org.apache.hadoop.mapreduce.v2.app.job.Job; import org.apache.hadoop.mapreduce.v2.util.MRApps; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; import org.apache.hadoop.yarn.webapp.JerseyTestBase; @@ -51,9 +52,9 @@ import org.apache.hadoop.yarn.webapp.WebServicesTestUtils; import org.codehaus.jettison.json.JSONArray; import org.codehaus.jettison.json.JSONException; import org.codehaus.jettison.json.JSONObject; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeEach; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList; @@ -125,7 +126,7 @@ public class TestAMWebServicesJobConf extends JerseyTestBase { Guice.createInjector(new WebServletModule())); } - @Before + @BeforeEach @Override public void setUp() throws Exception { super.setUp(); @@ -134,7 +135,7 @@ public class TestAMWebServicesJobConf extends JerseyTestBase { Guice.createInjector(new WebServletModule())); } - @AfterClass + @AfterAll static public void stop() { FileUtil.fullyDelete(testConfDir); } @@ -160,7 +161,7 @@ public class TestAMWebServicesJobConf extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("conf"); verifyAMJobConf(info, jobsMap.get(id)); } @@ -179,7 +180,7 @@ public class TestAMWebServicesJobConf extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("conf"); verifyAMJobConf(info, jobsMap.get(id)); } 
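The hunks in this patch apply two mechanical JUnit 5 conversions: the failure message moves from the first argument of the org.junit.Assert methods to the last argument of the org.junit.jupiter.api.Assertions methods, and @Test(expected = ...) is replaced by Assertions.assertThrows around the code that is expected to fail. A minimal sketch of both patterns, using a hypothetical test class name that is not part of this patch:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    public class MigrationSketchTest {

      @Test
      public void messageMovesToLastArgument() {
        // JUnit 4: Assert.assertEquals("incorrect number of elements", 1, actual);
        // JUnit 5: expected and actual stay first, the message becomes the trailing argument.
        int actual = 1;
        assertEquals(1, actual, "incorrect number of elements");
      }

      @Test
      public void expectedExceptionBecomesAssertThrows() {
        // JUnit 4: @Test(expected = IllegalStateException.class) on the method.
        // JUnit 5: wrap the failing code in an Executable passed to assertThrows.
        assertThrows(IllegalStateException.class, () -> {
          throw new IllegalStateException("expected");
        });
      }
    }
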
@@ -197,7 +198,7 @@ public class TestAMWebServicesJobConf extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("conf"); verifyAMJobConf(info, jobsMap.get(id)); } @@ -216,7 +217,7 @@ public class TestAMWebServicesJobConf extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -228,7 +229,7 @@ public class TestAMWebServicesJobConf extends JerseyTestBase { public void verifyAMJobConf(JSONObject info, Job job) throws JSONException { - assertEquals("incorrect number of elements", 2, info.length()); + assertEquals(2, info.length(), "incorrect number of elements"); WebServicesTestUtils.checkStringMatch("path", job.getConfFile().toString(), info.getString("path")); @@ -239,14 +240,14 @@ public class TestAMWebServicesJobConf extends JerseyTestBase { JSONObject prop = properties.getJSONObject(i); String name = prop.getString("name"); String value = prop.getString("value"); - assertTrue("name not set", (name != null && !name.isEmpty())); - assertTrue("value not set", (value != null && !value.isEmpty())); + assertTrue((name != null && !name.isEmpty()), "name not set"); + assertTrue((value != null && !value.isEmpty()), "value not set"); } } public void verifyAMJobConfXML(NodeList nodes, Job job) { - assertEquals("incorrect number of elements", 1, nodes.getLength()); + assertEquals(1, nodes.getLength(), "incorrect number of elements"); for (int i = 0; i < nodes.getLength(); i++) { Element element = (Element) nodes.item(i); @@ -259,11 +260,11 @@ public class TestAMWebServicesJobConf extends JerseyTestBase { for (int j = 0; j < properties.getLength(); j++) { Element property = (Element) properties.item(j); - assertNotNull("should have counters in the web service info", property); + assertNotNull(property, "should have counters in the web service info"); String name = WebServicesTestUtils.getXmlString(property, "name"); String value = WebServicesTestUtils.getXmlString(property, "value"); - assertTrue("name not set", (name != null && !name.isEmpty())); - assertTrue("name not set", (value != null && !value.isEmpty())); + assertTrue((name != null && !name.isEmpty()), "name not set"); + assertTrue((value != null && !value.isEmpty()), "name not set"); } } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobs.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobs.java index cc57134d236..1ff4bc475b4 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobs.java +++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobs.java @@ -20,10 +20,10 @@ package org.apache.hadoop.mapreduce.v2.app.webapp; import static org.apache.hadoop.yarn.util.StringHelper.ujoin; import static org.apache.hadoop.yarn.webapp.WebServicesTestUtils.assertResponseStatusCode; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import java.io.StringReader; import java.util.List; @@ -44,6 +44,7 @@ import org.apache.hadoop.mapreduce.v2.app.MockAppContext; import org.apache.hadoop.mapreduce.v2.app.job.Job; import org.apache.hadoop.mapreduce.v2.util.MRApps; import org.apache.hadoop.security.authorize.AccessControlList; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.util.Times; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; @@ -53,8 +54,8 @@ import org.apache.hadoop.yarn.webapp.WebServicesTestUtils; import org.codehaus.jettison.json.JSONArray; import org.codehaus.jettison.json.JSONException; import org.codehaus.jettison.json.JSONObject; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList; @@ -102,7 +103,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { Guice.createInjector(new WebServletModule())); } - @Before + @BeforeEach @Override public void setUp() throws Exception { super.setUp(); @@ -127,7 +128,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject jobs = json.getJSONObject("jobs"); JSONArray arr = jobs.getJSONArray("job"); JSONObject info = arr.getJSONObject(0); @@ -145,7 +146,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject jobs = json.getJSONObject("jobs"); JSONArray arr = jobs.getJSONArray("job"); JSONObject info = arr.getJSONObject(0); @@ -162,7 +163,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject jobs = json.getJSONObject("jobs"); JSONArray arr = jobs.getJSONArray("job"); JSONObject info = arr.getJSONObject(0); @@ -180,15 +181,15 @@ public class TestAMWebServicesJobs extends JerseyTestBase { 
assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); Document dom = db.parse(is); NodeList jobs = dom.getElementsByTagName("jobs"); - assertEquals("incorrect number of elements", 1, jobs.getLength()); + assertEquals(1, jobs.getLength(), "incorrect number of elements"); NodeList job = dom.getElementsByTagName("job"); - assertEquals("incorrect number of elements", 1, job.getLength()); + assertEquals(1, job.getLength(), "incorrect number of elements"); verifyAMJobXML(job, appContext); } @@ -206,7 +207,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("job"); verifyAMJob(info, jobsMap.get(id)); } @@ -226,7 +227,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("job"); verifyAMJob(info, jobsMap.get(id)); } @@ -244,7 +245,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("job"); verifyAMJob(info, jobsMap.get(id)); } @@ -266,7 +267,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -294,7 +295,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -318,7 +319,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, 
exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -342,7 +343,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { response.getType().toString()); String msg = response.getEntity(String.class); System.out.println(msg); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(msg)); @@ -382,7 +383,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -411,7 +412,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -424,7 +425,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { public void verifyAMJob(JSONObject info, Job job) throws JSONException { - assertEquals("incorrect number of elements", 31, info.length()); + assertEquals(31, info.length(), "incorrect number of elements"); // everyone access fields verifyAMJobGeneric(job, info.getString("id"), info.getString("user"), @@ -475,8 +476,8 @@ public class TestAMWebServicesJobs extends JerseyTestBase { } else { fail("should have acls in the web service info"); } - assertTrue("acl: " + expectName + " not found in webservice output", - found); + assertTrue(found, + "acl: " + expectName + " not found in webservice output"); } } @@ -484,14 +485,14 @@ public class TestAMWebServicesJobs extends JerseyTestBase { public void verifyAMJobXML(NodeList nodes, AppContext appContext) { - assertEquals("incorrect number of elements", 1, nodes.getLength()); + assertEquals(1, nodes.getLength(), "incorrect number of elements"); for (int i = 0; i < nodes.getLength(); i++) { Element element = (Element) nodes.item(i); Job job = appContext.getJob(MRApps.toJobID(WebServicesTestUtils .getXmlString(element, "id"))); - assertNotNull("Job not found - output incorrect", job); + assertNotNull(job, "Job not found - output incorrect"); verifyAMJobGeneric(job, WebServicesTestUtils.getXmlString(element, "id"), WebServicesTestUtils.getXmlString(element, "user"), @@ -550,8 +551,8 @@ public class TestAMWebServicesJobs extends JerseyTestBase { } else { fail("should have acls in the web service info"); } - assertTrue("acl: " + expectName + " not found in webservice output", - found); + assertTrue(found, + "acl: " + expectName + " not found in webservice output"); } } } @@ -571,21 +572,21 @@ public class TestAMWebServicesJobs extends JerseyTestBase { 
WebServicesTestUtils.checkStringMatch("state", job.getState().toString(), state); - assertEquals("startTime incorrect", report.getStartTime(), startTime); - assertEquals("finishTime incorrect", report.getFinishTime(), finishTime); - assertEquals("elapsedTime incorrect", - Times.elapsed(report.getStartTime(), report.getFinishTime()), - elapsedTime); - assertEquals("mapsTotal incorrect", job.getTotalMaps(), mapsTotal); - assertEquals("mapsCompleted incorrect", job.getCompletedMaps(), - mapsCompleted); - assertEquals("reducesTotal incorrect", job.getTotalReduces(), reducesTotal); - assertEquals("reducesCompleted incorrect", job.getCompletedReduces(), - reducesCompleted); - assertEquals("mapProgress incorrect", report.getMapProgress() * 100, - mapProgress, 0); - assertEquals("reduceProgress incorrect", report.getReduceProgress() * 100, - reduceProgress, 0); + assertEquals(report.getStartTime(), startTime, "startTime incorrect"); + assertEquals(report.getFinishTime(), finishTime, "finishTime incorrect"); + assertEquals(Times.elapsed(report.getStartTime(), report.getFinishTime()), + elapsedTime, "elapsedTime incorrect"); + assertEquals(job.getTotalMaps(), mapsTotal, "mapsTotal incorrect"); + assertEquals(job.getCompletedMaps(), mapsCompleted, + "mapsCompleted incorrect"); + assertEquals(job.getTotalReduces(), reducesTotal, + "reducesTotal incorrect"); + assertEquals(job.getCompletedReduces(), reducesCompleted, + "reducesCompleted incorrect"); + assertEquals(report.getMapProgress() * 100, mapProgress, 0, + "mapProgress incorrect"); + assertEquals(report.getReduceProgress() * 100, reduceProgress, 0, + "reduceProgress incorrect"); } public void verifyAMJobGenericSecure(Job job, int mapsPending, @@ -608,28 +609,27 @@ public class TestAMWebServicesJobs extends JerseyTestBase { WebServicesTestUtils.checkStringMatch("diagnostics", diagString, diagnostics); - assertEquals("isUber incorrect", job.isUber(), uberized); + assertEquals(job.isUber(), uberized, "isUber incorrect"); // unfortunately the following fields are all calculated in JobInfo // so not easily accessible without doing all the calculations again. // For now just make sure they are present. 
- assertTrue("mapsPending not >= 0", mapsPending >= 0); - assertTrue("mapsRunning not >= 0", mapsRunning >= 0); - assertTrue("reducesPending not >= 0", reducesPending >= 0); - assertTrue("reducesRunning not >= 0", reducesRunning >= 0); + assertTrue(mapsPending >= 0, "mapsPending not >= 0"); + assertTrue(mapsRunning >= 0, "mapsRunning not >= 0"); + assertTrue(reducesPending >= 0, "reducesPending not >= 0"); + assertTrue(reducesRunning >= 0, "reducesRunning not >= 0"); - assertTrue("newReduceAttempts not >= 0", newReduceAttempts >= 0); - assertTrue("runningReduceAttempts not >= 0", runningReduceAttempts >= 0); - assertTrue("failedReduceAttempts not >= 0", failedReduceAttempts >= 0); - assertTrue("killedReduceAttempts not >= 0", killedReduceAttempts >= 0); - assertTrue("successfulReduceAttempts not >= 0", - successfulReduceAttempts >= 0); + assertTrue(newReduceAttempts >= 0, "newReduceAttempts not >= 0"); + assertTrue(runningReduceAttempts >= 0, "runningReduceAttempts not >= 0"); + assertTrue(failedReduceAttempts >= 0, "failedReduceAttempts not >= 0"); + assertTrue(killedReduceAttempts >= 0, "killedReduceAttempts not >= 0"); + assertTrue(successfulReduceAttempts >= 0, "successfulReduceAttempts not >= 0"); - assertTrue("newMapAttempts not >= 0", newMapAttempts >= 0); - assertTrue("runningMapAttempts not >= 0", runningMapAttempts >= 0); - assertTrue("failedMapAttempts not >= 0", failedMapAttempts >= 0); - assertTrue("killedMapAttempts not >= 0", killedMapAttempts >= 0); - assertTrue("successfulMapAttempts not >= 0", successfulMapAttempts >= 0); + assertTrue(newMapAttempts >= 0, "newMapAttempts not >= 0"); + assertTrue(runningMapAttempts >= 0, "runningMapAttempts not >= 0"); + assertTrue(failedMapAttempts >= 0, "failedMapAttempts not >= 0"); + assertTrue(killedMapAttempts >= 0, "killedMapAttempts not >= 0"); + assertTrue(successfulMapAttempts >= 0, "successfulMapAttempts not >= 0"); } @@ -646,7 +646,8 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), + "incorrect number of elements"); JSONObject info = json.getJSONObject("jobCounters"); verifyAMJobCounters(info, jobsMap.get(id)); } @@ -665,7 +666,8 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), + "incorrect number of elements"); JSONObject info = json.getJSONObject("jobCounters"); verifyAMJobCounters(info, jobsMap.get(id)); } @@ -683,7 +685,8 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), + "incorrect number of elements"); JSONObject info = json.getJSONObject("jobCounters"); verifyAMJobCounters(info, jobsMap.get(id)); } @@ -702,7 +705,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = 
response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -715,7 +718,8 @@ public class TestAMWebServicesJobs extends JerseyTestBase { public void verifyAMJobCounters(JSONObject info, Job job) throws JSONException { - assertEquals("incorrect number of elements", 2, info.length()); + assertEquals(2, info.length(), + "incorrect number of elements"); WebServicesTestUtils.checkStringMatch("id", MRApps.toString(job.getID()), info.getString("id")); @@ -725,22 +729,22 @@ public class TestAMWebServicesJobs extends JerseyTestBase { for (int i = 0; i < counterGroups.length(); i++) { JSONObject counterGroup = counterGroups.getJSONObject(i); String name = counterGroup.getString("counterGroupName"); - assertTrue("name not set", (name != null && !name.isEmpty())); + assertTrue((name != null && !name.isEmpty()), "name not set"); JSONArray counters = counterGroup.getJSONArray("counter"); for (int j = 0; j < counters.length(); j++) { JSONObject counter = counters.getJSONObject(j); String counterName = counter.getString("name"); - assertTrue("counter name not set", - (counterName != null && !counterName.isEmpty())); + assertTrue((counterName != null && !counterName.isEmpty()), + "counter name not set"); long mapValue = counter.getLong("mapCounterValue"); - assertTrue("mapCounterValue >= 0", mapValue >= 0); + assertTrue(mapValue >= 0, "mapCounterValue >= 0"); long reduceValue = counter.getLong("reduceCounterValue"); - assertTrue("reduceCounterValue >= 0", reduceValue >= 0); + assertTrue(reduceValue >= 0, "reduceCounterValue >= 0"); long totalValue = counter.getLong("totalCounterValue"); - assertTrue("totalCounterValue >= 0", totalValue >= 0); + assertTrue(totalValue >= 0, "totalCounterValue >= 0"); } } @@ -751,7 +755,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { for (int i = 0; i < nodes.getLength(); i++) { Element element = (Element) nodes.item(i); - assertNotNull("Job not found - output incorrect", job); + assertNotNull(job, "Job not found - output incorrect"); WebServicesTestUtils.checkStringMatch("id", MRApps.toString(job.getID()), WebServicesTestUtils.getXmlString(element, "id")); @@ -761,29 +765,30 @@ public class TestAMWebServicesJobs extends JerseyTestBase { for (int j = 0; j < groups.getLength(); j++) { Element counters = (Element) groups.item(j); - assertNotNull("should have counters in the web service info", counters); + assertNotNull(counters, + "should have counters in the web service info"); String name = WebServicesTestUtils.getXmlString(counters, "counterGroupName"); - assertTrue("name not set", (name != null && !name.isEmpty())); + assertTrue((name != null && !name.isEmpty()), "name not set"); NodeList counterArr = counters.getElementsByTagName("counter"); for (int z = 0; z < counterArr.getLength(); z++) { Element counter = (Element) counterArr.item(z); String counterName = WebServicesTestUtils.getXmlString(counter, "name"); - assertTrue("counter name not set", - (counterName != null && !counterName.isEmpty())); + assertTrue((counterName != null && !counterName.isEmpty()), + "counter name not set"); long mapValue = WebServicesTestUtils.getXmlLong(counter, "mapCounterValue"); - assertTrue("mapCounterValue not >= 0", mapValue >= 0); + assertTrue(mapValue >= 0, "mapCounterValue not >= 0"); long reduceValue = 
WebServicesTestUtils.getXmlLong(counter, "reduceCounterValue"); - assertTrue("reduceCounterValue >= 0", reduceValue >= 0); + assertTrue(reduceValue >= 0, "reduceCounterValue >= 0"); long totalValue = WebServicesTestUtils.getXmlLong(counter, "totalCounterValue"); - assertTrue("totalCounterValue >= 0", totalValue >= 0); + assertTrue(totalValue >= 0, "totalCounterValue >= 0"); } } } @@ -802,7 +807,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("jobAttempts"); verifyJobAttempts(info, jobsMap.get(id)); } @@ -821,7 +826,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("jobAttempts"); verifyJobAttempts(info, jobsMap.get(id)); } @@ -840,7 +845,7 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("jobAttempts"); verifyJobAttempts(info, jobsMap.get(id)); } @@ -859,13 +864,14 @@ public class TestAMWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); Document dom = db.parse(is); NodeList attempts = dom.getElementsByTagName("jobAttempts"); - assertEquals("incorrect number of elements", 1, attempts.getLength()); + assertEquals(1, attempts.getLength(), + "incorrect number of elements"); NodeList info = dom.getElementsByTagName("jobAttempt"); verifyJobAttemptsXML(info, jobsMap.get(id)); } @@ -875,7 +881,8 @@ public class TestAMWebServicesJobs extends JerseyTestBase { throws JSONException { JSONArray attempts = info.getJSONArray("jobAttempt"); - assertEquals("incorrect number of elements", 2, attempts.length()); + assertEquals(2, attempts.length(), + "incorrect number of elements"); for (int i = 0; i < attempts.length(); i++) { JSONObject attempt = attempts.getJSONObject(i); verifyJobAttemptsGeneric(job, attempt.getString("nodeHttpAddress"), @@ -887,7 +894,8 @@ public class TestAMWebServicesJobs extends JerseyTestBase { public void verifyJobAttemptsXML(NodeList nodes, Job job) { - assertEquals("incorrect number of elements", 2, nodes.getLength()); + assertEquals(2, nodes.getLength(), + "incorrect number of elements"); for (int i = 0; i < nodes.getLength(); i++) { Element element = (Element) nodes.item(i); verifyJobAttemptsGeneric(job, @@ -913,17 +921,17 @@ public class TestAMWebServicesJobs 
extends JerseyTestBase { + nmHttpPort, nodeHttpAddress); WebServicesTestUtils.checkStringMatch("nodeId", NodeId.newInstance(nmHost, nmPort).toString(), nodeId); - assertTrue("startime not greater than 0", startTime > 0); + assertTrue(startTime > 0, "startime not greater than 0"); WebServicesTestUtils.checkStringMatch("containerId", amInfo .getContainerId().toString(), containerId); String localLogsLink =ujoin("node", "containerlogs", containerId, job.getUserName()); - assertTrue("logsLink", logsLink.contains(localLogsLink)); + assertTrue(logsLink.contains(localLogsLink), "logsLink"); } } - assertTrue("attempt: " + id + " was not found", attemptFound); + assertTrue(attemptFound, "attempt: " + id + " was not found"); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesTasks.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesTasks.java index ab4d818f338..211d81801d6 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesTasks.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesTasks.java @@ -19,10 +19,10 @@ package org.apache.hadoop.mapreduce.v2.app.webapp; import static org.apache.hadoop.yarn.webapp.WebServicesTestUtils.assertResponseStatusCode; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import java.io.StringReader; import java.util.Map; @@ -42,6 +42,7 @@ import org.apache.hadoop.mapreduce.v2.app.MockAppContext; import org.apache.hadoop.mapreduce.v2.app.job.Job; import org.apache.hadoop.mapreduce.v2.app.job.Task; import org.apache.hadoop.mapreduce.v2.util.MRApps; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; import org.apache.hadoop.yarn.webapp.JerseyTestBase; @@ -49,8 +50,8 @@ import org.apache.hadoop.yarn.webapp.WebServicesTestUtils; import org.codehaus.jettison.json.JSONArray; import org.codehaus.jettison.json.JSONException; import org.codehaus.jettison.json.JSONObject; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList; @@ -98,7 +99,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { Guice.createInjector(new WebServletModule())); } - @Before + @BeforeEach @Override public void setUp() throws Exception { super.setUp(); @@ -126,10 +127,10 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject tasks = 
json.getJSONObject("tasks"); JSONArray arr = tasks.getJSONArray("task"); - assertEquals("incorrect number of elements", 2, arr.length()); + assertEquals(2, arr.length(), "incorrect number of elements"); verifyAMTask(arr, jobsMap.get(id), null); } @@ -146,10 +147,10 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject tasks = json.getJSONObject("tasks"); JSONArray arr = tasks.getJSONArray("task"); - assertEquals("incorrect number of elements", 2, arr.length()); + assertEquals(2, arr.length(), "incorrect number of elements"); verifyAMTask(arr, jobsMap.get(id), null); } @@ -167,10 +168,10 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject tasks = json.getJSONObject("tasks"); JSONArray arr = tasks.getJSONArray("task"); - assertEquals("incorrect number of elements", 2, arr.length()); + assertEquals(2, arr.length(), "incorrect number of elements"); verifyAMTask(arr, jobsMap.get(id), null); } @@ -189,13 +190,13 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); Document dom = db.parse(is); NodeList tasks = dom.getElementsByTagName("tasks"); - assertEquals("incorrect number of elements", 1, tasks.getLength()); + assertEquals(1, tasks.getLength(), "incorrect number of elements"); NodeList task = dom.getElementsByTagName("task"); verifyAMTaskXML(task, jobsMap.get(id)); } @@ -214,10 +215,10 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject tasks = json.getJSONObject("tasks"); JSONArray arr = tasks.getJSONArray("task"); - assertEquals("incorrect number of elements", 1, arr.length()); + assertEquals(1, arr.length(), "incorrect number of elements"); verifyAMTask(arr, jobsMap.get(id), type); } } @@ -235,10 +236,10 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject tasks = json.getJSONObject("tasks"); JSONArray arr = tasks.getJSONArray("task"); - assertEquals("incorrect number of elements", 1, arr.length()); + assertEquals(1, arr.length(), 
"incorrect number of elements"); verifyAMTask(arr, jobsMap.get(id), type); } } @@ -264,7 +265,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -293,7 +294,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("task"); verifyAMSingleTask(info, task); } @@ -315,7 +316,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("task"); verifyAMSingleTask(info, task); } @@ -337,7 +338,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("task"); verifyAMSingleTask(info, task); } @@ -362,7 +363,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -397,7 +398,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -430,7 +431,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -465,7 +466,7 
@@ public class TestAMWebServicesTasks extends JerseyTestBase { response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -500,7 +501,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { response.getType().toString()); JSONObject msg = response.getEntity(JSONObject.class); JSONObject exception = msg.getJSONObject("RemoteException"); - assertEquals("incorrect number of elements", 3, exception.length()); + assertEquals(3, exception.length(), "incorrect number of elements"); String message = exception.getString("message"); String type = exception.getString("exception"); String classname = exception.getString("javaClassName"); @@ -533,7 +534,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -549,7 +550,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { public void verifyAMSingleTask(JSONObject info, Task task) throws JSONException { - assertEquals("incorrect number of elements", 9, info.length()); + assertEquals(9, info.length(), "incorrect number of elements"); verifyTaskGeneric(task, info.getString("id"), info.getString("state"), info.getString("type"), info.getString("successfulAttempt"), @@ -573,7 +574,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { verifyAMSingleTask(info, task); } } - assertTrue("task with id: " + tid + " not in web service output", found); + assertTrue(found, "task with id: " + tid + " not in web service output"); } } } @@ -592,12 +593,12 @@ public class TestAMWebServicesTasks extends JerseyTestBase { WebServicesTestUtils.checkStringMatch("state", report.getTaskState() .toString(), state); // not easily checked without duplicating logic, just make sure its here - assertNotNull("successfulAttempt null", successfulAttempt); - assertEquals("startTime wrong", report.getStartTime(), startTime); - assertEquals("finishTime wrong", report.getFinishTime(), finishTime); - assertEquals("elapsedTime wrong", finishTime - startTime, elapsedTime); - assertEquals("progress wrong", report.getProgress() * 100, progress, 1e-3f); - assertEquals("status wrong", report.getStatus(), status); + assertNotNull(successfulAttempt, "successfulAttempt null"); + assertEquals(report.getStartTime(), startTime, "startTime wrong"); + assertEquals(report.getFinishTime(), finishTime, "finishTime wrong"); + assertEquals(finishTime - startTime, elapsedTime, "elapsedTime wrong"); + assertEquals(report.getProgress() * 100, progress, 1e-3f, "progress wrong"); + assertEquals(report.getStatus(), status, "status wrong"); } public void verifyAMSingleTaskXML(Element element, Task task) { @@ -614,7 +615,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { public void verifyAMTaskXML(NodeList nodes, Job job) { - assertEquals("incorrect number of 
elements", 2, nodes.getLength()); + assertEquals(2, nodes.getLength(), "incorrect number of elements"); for (Task task : job.getTasks().values()) { TaskId id = task.getID(); @@ -628,7 +629,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { verifyAMSingleTaskXML(element, task); } } - assertTrue("task with id: " + tid + " not in web service output", found); + assertTrue(found, "task with id: " + tid + " not in web service output"); } } @@ -647,7 +648,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("jobTaskCounters"); verifyAMJobTaskCounters(info, task); } @@ -669,7 +670,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("jobTaskCounters"); verifyAMJobTaskCounters(info, task); } @@ -691,7 +692,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); JSONObject json = response.getEntity(JSONObject.class); - assertEquals("incorrect number of elements", 1, json.length()); + assertEquals(1, json.length(), "incorrect number of elements"); JSONObject info = json.getJSONObject("jobTaskCounters"); verifyAMJobTaskCounters(info, task); } @@ -713,7 +714,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -727,7 +728,7 @@ public class TestAMWebServicesTasks extends JerseyTestBase { public void verifyAMJobTaskCounters(JSONObject info, Task task) throws JSONException { - assertEquals("incorrect number of elements", 2, info.length()); + assertEquals(2, info.length(), "incorrect number of elements"); WebServicesTestUtils.checkStringMatch("id", MRApps.toString(task.getID()), info.getString("id")); @@ -737,15 +738,14 @@ public class TestAMWebServicesTasks extends JerseyTestBase { for (int i = 0; i < counterGroups.length(); i++) { JSONObject counterGroup = counterGroups.getJSONObject(i); String name = counterGroup.getString("counterGroupName"); - assertTrue("name not set", (name != null && !name.isEmpty())); + assertTrue((name != null && !name.isEmpty()), "name not set"); JSONArray counters = counterGroup.getJSONArray("counter"); for (int j = 0; j < counters.length(); j++) { JSONObject counter = counters.getJSONObject(j); String counterName = counter.getString("name"); - assertTrue("name not set", - (counterName != null && !counterName.isEmpty())); + assertTrue((counterName != null && !counterName.isEmpty()), "name not set"); long value = counter.getLong("value"); - 
assertTrue("value >= 0", value >= 0); + assertTrue(value >= 0, "value >= 0"); } } } @@ -764,20 +764,20 @@ public class TestAMWebServicesTasks extends JerseyTestBase { for (int j = 0; j < groups.getLength(); j++) { Element counters = (Element) groups.item(j); - assertNotNull("should have counters in the web service info", counters); + assertNotNull(counters, "should have counters in the web service info"); String name = WebServicesTestUtils.getXmlString(counters, "counterGroupName"); - assertTrue("name not set", (name != null && !name.isEmpty())); + assertTrue((name != null && !name.isEmpty()), "name not set"); NodeList counterArr = counters.getElementsByTagName("counter"); for (int z = 0; z < counterArr.getLength(); z++) { Element counter = (Element) counterArr.item(z); String counterName = WebServicesTestUtils.getXmlString(counter, "name"); - assertTrue("counter name not set", - (counterName != null && !counterName.isEmpty())); + assertTrue((counterName != null && !counterName.isEmpty()), + "counter name not set"); long value = WebServicesTestUtils.getXmlLong(counter, "value"); - assertTrue("value not >= 0", value >= 0); + assertTrue(value >= 0, "value not >= 0"); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAppController.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAppController.java index ba5c4301214..d8376e1b51a 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAppController.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAppController.java @@ -37,8 +37,8 @@ import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.webapp.Controller.RequestContext; import org.apache.hadoop.yarn.webapp.MimeType; import org.apache.hadoop.yarn.webapp.ResponseInfo; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import static org.junit.Assert.*; public class TestAppController { @@ -48,7 +48,7 @@ public class TestAppController { private Job job; private static final String taskId = "task_01_01_m_01"; - @Before + @BeforeEach public void setUp() throws IOException { AppContext context = mock(AppContext.class); when(context.getApplicationID()).thenReturn( diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestBlocks.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestBlocks.java index 82b8a37dbea..24fb901c958 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestBlocks.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestBlocks.java @@ -26,7 +26,7 @@ import java.util.Map; import org.apache.hadoop.mapreduce.MRJobConfig; import org.apache.hadoop.mapreduce.util.MRJobConfUtil; import org.apache.hadoop.yarn.webapp.View; -import org.junit.Test; +import org.junit.jupiter.api.Test; import org.apache.hadoop.conf.Configuration; import 
org.apache.hadoop.fs.Path; import org.apache.hadoop.mapreduce.v2.api.records.JobId; @@ -52,7 +52,7 @@ import org.apache.hadoop.yarn.webapp.view.HtmlBlock; import org.apache.hadoop.yarn.webapp.view.HtmlBlock.Block; import static org.mockito.Mockito.*; -import static org.junit.Assert.*; +import static org.junit.jupiter.api.Assertions.*; public class TestBlocks { private ByteArrayOutputStream data = new ByteArrayOutputStream(); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IndexCache.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IndexCache.java index 0e24bbe5330..70328ea0075 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IndexCache.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IndexCache.java @@ -23,7 +23,7 @@ import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.atomic.AtomicInteger; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.mapreduce.server.tasktracker.TTConfig; +import org.apache.hadoop.mapreduce.MRJobConfig; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -43,7 +43,7 @@ class IndexCache { public IndexCache(JobConf conf) { this.conf = conf; totalMemoryAllowed = - conf.getInt(TTConfig.TT_INDEX_CACHE, 10) * 1024 * 1024; + conf.getInt(MRJobConfig.SHUFFLE_INDEX_CACHE, 10) * 1024 * 1024; LOG.info("IndexCache created with max memory = " + totalMemoryAllowed); } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueConfigurationParser.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueConfigurationParser.java index cbc8e526f0d..f6d9ce59160 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueConfigurationParser.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueConfigurationParser.java @@ -21,7 +21,10 @@ import org.apache.hadoop.io.IOUtils; import org.apache.hadoop.mapreduce.MRConfig; import org.apache.hadoop.mapreduce.QueueState; import org.apache.hadoop.security.authorize.AccessControlList; +import org.apache.hadoop.util.XMLUtils; + import static org.apache.hadoop.mapred.QueueManager.toFullPropertyName; + import org.xml.sax.SAXException; import org.w3c.dom.Document; import org.w3c.dom.Element; @@ -88,7 +91,7 @@ class QueueConfigurationParser { static final String VALUE_TAG = "value"; /** - * Default constructor for DeperacatedQueueConfigurationParser + * Default constructor for QueueConfigurationParser. 
*/ QueueConfigurationParser() { @@ -158,8 +161,9 @@ class QueueConfigurationParser { */ protected Queue loadResource(InputStream resourceInput) throws ParserConfigurationException, SAXException, IOException { - DocumentBuilderFactory docBuilderFactory - = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory docBuilderFactory = + XMLUtils.newSecureDocumentBuilderFactory(); + //ignore all comments inside the xml file docBuilderFactory.setIgnoringComments(true); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java index 3ef6601fbfe..a214420df80 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java @@ -17,15 +17,39 @@ */ package org.apache.hadoop.mapred.lib; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.StringTokenizer; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.ThreadFactory; +import java.util.concurrent.atomic.AtomicBoolean; + +import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.mapred.*; +import org.apache.hadoop.mapred.FileOutputFormat; +import org.apache.hadoop.mapred.JobConf; +import org.apache.hadoop.mapred.OutputCollector; +import org.apache.hadoop.mapred.OutputFormat; +import org.apache.hadoop.mapred.RecordWriter; +import org.apache.hadoop.mapred.Reporter; +import org.apache.hadoop.mapreduce.MRConfig; import org.apache.hadoop.util.Progressable; -import java.io.IOException; -import java.util.*; - /** * The MultipleOutputs class simplifies writing to additional outputs other * than the job default output via the OutputCollector passed to @@ -132,6 +156,7 @@ public class MultipleOutputs { * Counters group used by the counters of MultipleOutputs. */ private static final String COUNTERS_GROUP = MultipleOutputs.class.getName(); + private static final Logger LOG = LoggerFactory.getLogger(MultipleOutputs.class); /** * Checks if a named output is alreadyDefined or not. @@ -381,6 +406,11 @@ public class MultipleOutputs { private Map recordWriters; private boolean countersEnabled; + @VisibleForTesting + synchronized void setRecordWriters(Map recordWriters) { + this.recordWriters = recordWriters; + } + /** * Creates and initializes multiple named outputs support, it should be * instantiated in the Mapper/Reducer configure method. @@ -528,8 +558,41 @@ public class MultipleOutputs { * could not be closed properly. 
*/ public void close() throws IOException { + int nThreads = conf.getInt(MRConfig.MULTIPLE_OUTPUTS_CLOSE_THREAD_COUNT, + MRConfig.DEFAULT_MULTIPLE_OUTPUTS_CLOSE_THREAD_COUNT); + AtomicBoolean encounteredException = new AtomicBoolean(false); + ThreadFactory threadFactory = new ThreadFactoryBuilder().setNameFormat("MultipleOutputs-close") + .setUncaughtExceptionHandler(((t, e) -> { + LOG.error("Thread " + t + " failed unexpectedly", e); + encounteredException.set(true); + })).build(); + ExecutorService executorService = Executors.newFixedThreadPool(nThreads, threadFactory); + + List> callableList = new ArrayList<>(recordWriters.size()); + for (RecordWriter writer : recordWriters.values()) { - writer.close(null); + callableList.add(() -> { + try { + writer.close(null); + } catch (IOException e) { + LOG.error("Error while closing MultipleOutput file", e); + encounteredException.set(true); + } + return null; + }); + } + try { + executorService.invokeAll(callableList); + } catch (InterruptedException e) { + LOG.warn("Closing is Interrupted"); + Thread.currentThread().interrupt(); + } finally { + executorService.shutdown(); + } + + if (encounteredException.get()) { + throw new IOException( + "One or more threads encountered exception during close. See prior errors."); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java index d0b13f30fe0..8d2277b3e39 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java @@ -159,7 +159,7 @@ public class JobSubmissionFiles { fs.setPermission(stagingArea, JOB_DIR_PERMISSION); } } catch (FileNotFoundException e) { - fs.mkdirs(stagingArea, new FsPermission(JOB_DIR_PERMISSION)); + FileSystem.mkdirs(fs, stagingArea, new FsPermission(JOB_DIR_PERMISSION)); } return stagingArea; } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRConfig.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRConfig.java index b4d91491e1c..8671eb30b99 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRConfig.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRConfig.java @@ -131,5 +131,7 @@ public interface MRConfig { String MASTER_WEBAPP_UI_ACTIONS_ENABLED = "mapreduce.webapp.ui-actions.enabled"; boolean DEFAULT_MASTER_WEBAPP_UI_ACTIONS_ENABLED = true; + String MULTIPLE_OUTPUTS_CLOSE_THREAD_COUNT = "mapreduce.multiple-outputs-close-threads"; + int DEFAULT_MULTIPLE_OUTPUTS_CLOSE_THREAD_COUNT = 10; } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java index 4523565f300..02fa4eca636 100644 --- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java @@ -577,6 +577,8 @@ public interface MRJobConfig { public static final String MAX_SHUFFLE_FETCH_HOST_FAILURES = "mapreduce.reduce.shuffle.max-host-failures"; public static final int DEFAULT_MAX_SHUFFLE_FETCH_HOST_FAILURES = 5; + public static final String SHUFFLE_INDEX_CACHE = "mapreduce.reduce.shuffle.indexcache.mb"; + public static final String REDUCE_SKIP_INCR_PROC_COUNT = "mapreduce.reduce.skip.proc-count.auto-incr"; public static final String REDUCE_SKIP_MAXGROUPS = "mapreduce.reduce.skip.maxgroups"; diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java index a4d8acbbbfe..3c36dfb8bba 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java @@ -19,14 +19,23 @@ package org.apache.hadoop.mapreduce.lib.output; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapreduce.*; -import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs; import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl; import org.apache.hadoop.util.ReflectionUtils; import java.io.IOException; import java.util.*; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.ThreadFactory; +import java.util.concurrent.atomic.AtomicBoolean; + +import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; /** * The MultipleOutputs class simplifies writing output data @@ -191,6 +200,8 @@ public class MultipleOutputs { * Counters group used by the counters of MultipleOutputs. */ private static final String COUNTERS_GROUP = MultipleOutputs.class.getName(); + private static final Logger LOG = + LoggerFactory.getLogger(org.apache.hadoop.mapred.lib.MultipleOutputs.class); /** * Cache for the taskContexts @@ -345,6 +356,11 @@ public class MultipleOutputs { return job.getConfiguration().getBoolean(COUNTERS_ENABLED, false); } + @VisibleForTesting + synchronized void setRecordWriters(Map> recordWriters) { + this.recordWriters = recordWriters; + } + /** * Wraps RecordWriter to increment counters. 
*/ @@ -568,8 +584,43 @@ public class MultipleOutputs { */ @SuppressWarnings("unchecked") public void close() throws IOException, InterruptedException { + Configuration conf = context.getConfiguration(); + int nThreads = conf.getInt(MRConfig.MULTIPLE_OUTPUTS_CLOSE_THREAD_COUNT, + MRConfig.DEFAULT_MULTIPLE_OUTPUTS_CLOSE_THREAD_COUNT); + AtomicBoolean encounteredException = new AtomicBoolean(false); + ThreadFactory threadFactory = new ThreadFactoryBuilder().setNameFormat("MultipleOutputs-close") + .setUncaughtExceptionHandler(((t, e) -> { + LOG.error("Thread " + t + " failed unexpectedly", e); + encounteredException.set(true); + })).build(); + ExecutorService executorService = Executors.newFixedThreadPool(nThreads, threadFactory); + + List> callableList = new ArrayList<>(recordWriters.size()); + for (RecordWriter writer : recordWriters.values()) { - writer.close(context); + callableList.add(() -> { + try { + writer.close(context); + } catch (IOException e) { + LOG.error("Error while closing MultipleOutput file", e); + encounteredException.set(true); + } + return null; + }); + } + try { + executorService.invokeAll(callableList); + } catch (InterruptedException e) { + LOG.warn("Closing is Interrupted"); + Thread.currentThread().interrupt(); + } finally { + executorService.shutdown(); + } + + if (encounteredException.get()) { + throw new IOException( + "One or more threads encountered exception during close. See prior errors."); } } } + diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/tasktracker/TTConfig.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/tasktracker/TTConfig.java index f75ad05d295..c0e91804dcb 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/tasktracker/TTConfig.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/server/tasktracker/TTConfig.java @@ -29,6 +29,12 @@ import org.apache.hadoop.mapreduce.MRConfig; @InterfaceStability.Evolving public interface TTConfig extends MRConfig { + /** + * @deprecated Use + * {@link org.apache.hadoop.mapreduce.MRJobConfig#SHUFFLE_INDEX_CACHE} + * instead + */ + @Deprecated public static final String TT_INDEX_CACHE = "mapreduce.tasktracker.indexcache.mb"; public static final String TT_MAP_SLOTS = diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java index 1da5b2f5d3f..e013d017b15 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java @@ -53,14 +53,15 @@ import org.slf4j.LoggerFactory; import org.apache.hadoop.classification.VisibleForTesting; -class Fetcher extends Thread { +@VisibleForTesting +public class Fetcher extends Thread { private static final Logger LOG = LoggerFactory.getLogger(Fetcher.class); - /** Number of ms before timing out a copy */ + /** Number of ms before timing out a copy. 
*/ private static final int DEFAULT_STALLED_COPY_TIMEOUT = 3 * 60 * 1000; - /** Basic/unit connection timeout (in milliseconds) */ + /** Basic/unit connection timeout (in milliseconds). */ private final static int UNIT_CONNECT_TIMEOUT = 60 * 1000; /* Default read timeout (in milliseconds) */ @@ -72,10 +73,12 @@ class Fetcher extends Thread { private static final String FETCH_RETRY_AFTER_HEADER = "Retry-After"; protected final Reporter reporter; - private enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP, + @VisibleForTesting + public enum ShuffleErrors{IO_ERROR, WRONG_LENGTH, BAD_ID, WRONG_MAP, CONNECTION, WRONG_REDUCE} - - private final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors"; + + @VisibleForTesting + public final static String SHUFFLE_ERR_GRP_NAME = "Shuffle Errors"; private final JobConf jobConf; private final Counters.Counter connectionErrs; private final Counters.Counter ioErrs; @@ -83,8 +86,8 @@ class Fetcher extends Thread { private final Counters.Counter badIdErrs; private final Counters.Counter wrongMapErrs; private final Counters.Counter wrongReduceErrs; - protected final MergeManager merger; - protected final ShuffleSchedulerImpl scheduler; + protected final MergeManager merger; + protected final ShuffleSchedulerImpl scheduler; protected final ShuffleClientMetrics metrics; protected final ExceptionReporter exceptionReporter; protected final int id; @@ -111,7 +114,7 @@ class Fetcher extends Thread { private static SSLFactory sslFactory; public Fetcher(JobConf job, TaskAttemptID reduceId, - ShuffleSchedulerImpl scheduler, MergeManager merger, + ShuffleSchedulerImpl scheduler, MergeManager merger, Reporter reporter, ShuffleClientMetrics metrics, ExceptionReporter exceptionReporter, SecretKey shuffleKey) { this(job, reduceId, scheduler, merger, reporter, metrics, @@ -120,7 +123,7 @@ class Fetcher extends Thread { @VisibleForTesting Fetcher(JobConf job, TaskAttemptID reduceId, - ShuffleSchedulerImpl scheduler, MergeManager merger, + ShuffleSchedulerImpl scheduler, MergeManager merger, Reporter reporter, ShuffleClientMetrics metrics, ExceptionReporter exceptionReporter, SecretKey shuffleKey, int id) { @@ -315,9 +318,8 @@ class Fetcher extends Thread { return; } - if(LOG.isDebugEnabled()) { - LOG.debug("Fetcher " + id + " going to fetch from " + host + " for: " - + maps); + if (LOG.isDebugEnabled()) { + LOG.debug("Fetcher " + id + " going to fetch from " + host + " for: " + maps); } // List of maps to be fetched yet @@ -411,8 +413,8 @@ class Fetcher extends Thread { shouldWait = false; } catch (IOException e) { if (!fetchRetryEnabled) { - // throw exception directly if fetch's retry is not enabled - throw e; + // throw exception directly if fetch's retry is not enabled + throw e; } if ((Time.monotonicNow() - startTime) >= this.fetchRetryTimeout) { LOG.warn("Failed to connect to host: " + url + "after " @@ -489,7 +491,7 @@ class Fetcher extends Thread { DataInputStream input, Set remaining, boolean canRetry) throws IOException { - MapOutput mapOutput = null; + MapOutput mapOutput = null; TaskAttemptID mapId = null; long decompressedLength = -1; long compressedLength = -1; @@ -611,7 +613,7 @@ class Fetcher extends Thread { // First time to retry. long currentTime = Time.monotonicNow(); if (retryStartTime == 0) { - retryStartTime = currentTime; + retryStartTime = currentTime; } // Retry is not timeout, let's do retry with throwing an exception. 
@@ -628,7 +630,7 @@ class Fetcher extends Thread { } /** - * Do some basic verification on the input received -- Being defensive + * Do some basic verification on the input received -- Being defensive. * @param compressedLength * @param decompressedLength * @param forReduce @@ -695,8 +697,7 @@ class Fetcher extends Thread { * only on the last failure. Instead of connecting with a timeout of * X, we try connecting with a timeout of x < X but multiple times. */ - private void connect(URLConnection connection, int connectionTimeout) - throws IOException { + private void connect(URLConnection connection, int connectionTimeout) throws IOException { int unit = 0; if (connectionTimeout < 0) { throw new IOException("Invalid timeout " diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java index 18f2a33555a..ec6ad158ddc 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java @@ -879,4 +879,9 @@ public class MergeManagerImpl implements MergeManager { return super.compareTo(obj); } } + + @VisibleForTesting + OnDiskMerger getOnDiskMerger() { + return onDiskMerger; + } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeThread.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeThread.java index 4a40cb17d46..c617569da33 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeThread.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeThread.java @@ -25,6 +25,7 @@ import java.util.List; import java.util.Set; import java.util.concurrent.atomic.AtomicInteger; +import org.apache.hadoop.classification.VisibleForTesting; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -109,4 +110,14 @@ abstract class MergeThread extends Thread { } public abstract void merge(List inputs) throws IOException; + + @VisibleForTesting + int getMergeFactor() { + return mergeFactor; + } + + @VisibleForTesting + LinkedList> getPendingToBeMerged() { + return pendingToBeMerged; + } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java index cf120342dd1..95074de231c 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java @@ -80,8 +80,6 @@ public class ConfigUtil { JTConfig.JT_TASKCACHE_LEVELS), new 
DeprecationDelta("mapred.job.tracker.retire.jobs", JTConfig.JT_RETIREJOBS), - new DeprecationDelta("mapred.tasktracker.indexcache.mb", - TTConfig.TT_INDEX_CACHE), new DeprecationDelta("mapred.tasktracker.map.tasks.maximum", TTConfig.TT_MAP_SLOTS), new DeprecationDelta("mapred.tasktracker.memory_calculator_plugin", @@ -290,6 +288,10 @@ public class ConfigUtil { MRJobConfig.REDUCE_LOG_LEVEL), new DeprecationDelta("mapreduce.job.counters.limit", MRJobConfig.COUNTERS_MAX_KEY), + new DeprecationDelta("mapred.tasktracker.indexcache.mb", + MRJobConfig.SHUFFLE_INDEX_CACHE), + new DeprecationDelta("mapreduce.tasktracker.indexcache.mb", + MRJobConfig.SHUFFLE_INDEX_CACHE), new DeprecationDelta("jobclient.completion.poll.interval", Job.COMPLETION_POLL_INTERVAL_KEY), new DeprecationDelta("jobclient.progress.monitor.poll.interval", diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml index 848d33d9245..7e1b49c925f 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml @@ -1660,14 +1660,18 @@ yarn.app.mapreduce.client-am.ipc.max-retries 3 The number of client retries to the AM - before reconnecting - to the RM to fetch Application Status. + to the RM to fetch Application Status. + In other words, it is the ipc.client.connect.max.retries to be used during + reconnecting to the RM and fetching Application Status. yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts 3 The number of client retries on socket timeouts to the AM - before - reconnecting to the RM to fetch Application Status. + reconnecting to the RM to fetch Application Status. + In other words, it is the ipc.client.connect.max.retries.on.timeouts to be used during + reconnecting to the RM and fetching Application Status. diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduce_Compatibility_Hadoop1_Hadoop2.md b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduce_Compatibility_Hadoop1_Hadoop2.md index fc66a1665c2..2b8454479b1 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduce_Compatibility_Hadoop1_Hadoop2.md +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduce_Compatibility_Hadoop1_Hadoop2.md @@ -41,7 +41,7 @@ We cannot ensure complete binary compatibility with the applications that use ** Not Supported ------------- -MRAdmin has been removed in MRv2 because because `mradmin` commands no longer exist. They have been replaced by the commands in `rmadmin`. We neither support binary compatibility nor source compatibility for the applications that use this class directly. +MRAdmin has been removed in MRv2 because `mradmin` commands no longer exist. They have been replaced by the commands in `rmadmin`. We neither support binary compatibility nor source compatibility for the applications that use this class directly. 
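As an illustrative sketch only (not part of the original guide; the class and option names below are the standard YARN admin CLI entry points and should be verified against your release), an application that previously invoked the old `MRAdmin` tool programmatically can usually drive the equivalent ResourceManager operation through `RMAdminCLI`:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.hadoop.yarn.client.cli.RMAdminCLI;

// Rough programmatic equivalent of the old "hadoop mradmin -refreshQueues" call,
// now served by the ResourceManager admin interface ("yarn rmadmin -refreshQueues").
public class RefreshQueuesExample {
  public static void main(String[] args) throws Exception {
    int rc = ToolRunner.run(new Configuration(), new RMAdminCLI(),
        new String[] {"-refreshQueues"});
    System.exit(rc);
  }
}
```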
Tradeoffs between MRv1 Users and Early MRv2 Adopters ---------------------------------------------------- diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapredCommands.md b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapredCommands.md index 176fcacd2d2..6c2141820d8 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapredCommands.md +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapredCommands.md @@ -30,7 +30,7 @@ Hadoop has an option parsing framework that employs parsing generic options as w |:---- |:---- | | SHELL\_OPTIONS | The common set of shell options. These are documented on the [Hadoop Commands Reference](../../hadoop-project-dist/hadoop-common/CommandsManual.html#Shell_Options) page. | | GENERIC\_OPTIONS | The common set of options supported by multiple commands. See the [Hadoop Commands Reference](../../hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options) for more information. | -| COMMAND COMMAND\_OPTIONS | Various commands with their options are described in the following sections. The commands have been grouped into [User Commands](#User_Commands) and [Administration Commands](#Administration_Commands). | +| COMMAND\_OPTIONS | Various commands with their options are described in the following sections. The commands have been grouped into [User Commands](#User_Commands) and [Administration Commands](#Administration_Commands). | User Commands ------------- diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/manifest_committer_architecture.md b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/manifest_committer_architecture.md index d2b4f1ee8b4..55806fb6f5b 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/manifest_committer_architecture.md +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/manifest_committer_architecture.md @@ -113,7 +113,7 @@ No renaming takes place —the files are left in their original location. The directory treewalk is single-threaded, then it is `O(directories)`, with each directory listing using one or more paged LIST calls. -This is simple, and for most tasks, the scan is off the critical path of of the job. +This is simple, and for most tasks, the scan is off the critical path of the job. Statistics analysis may justify moving to parallel scans in future. @@ -332,4 +332,4 @@ Any store/FS which supports auditing is able to collect this data and include in their logs. To ease backporting, all audit integration is in the single class -`org.apache.hadoop.mapreduce.lib.output.committer.manifest.impl.AuditingIntegration`. \ No newline at end of file +`org.apache.hadoop.mapreduce.lib.output.committer.manifest.impl.AuditingIntegration`. 
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/manifest_committer_protocol.md b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/manifest_committer_protocol.md index c5af28de70c..2da9b2f500d 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/manifest_committer_protocol.md +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/manifest_committer_protocol.md @@ -213,7 +213,7 @@ the same correctness guarantees as the v1 algorithm. attempt working directory to their final destination path, holding back on the final manifestation POST. 1. A JSON file containing all information needed to complete the upload of all - files in the task attempt is written to the Job Attempt directory of of the + files in the task attempt is written to the Job Attempt directory of the wrapped committer working with HDFS. 1. Job commit: load in all the manifest files in the HDFS job attempt directory, then issued the POST request to complete the uploads. These are parallelised. diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestIndexCache.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestIndexCache.java index dabce770e82..2df5cc2dc06 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestIndexCache.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestIndexCache.java @@ -30,7 +30,7 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.security.UserGroupInformation; -import org.apache.hadoop.mapreduce.server.tasktracker.TTConfig; +import org.apache.hadoop.mapreduce.MRJobConfig; import org.junit.Before; import org.junit.Test; @@ -56,7 +56,7 @@ public class TestIndexCache { r.setSeed(seed); System.out.println("seed: " + seed); fs.delete(p, true); - conf.setInt(TTConfig.TT_INDEX_CACHE, 1); + conf.setInt(MRJobConfig.SHUFFLE_INDEX_CACHE, 1); final int partsPerMap = 1000; final int bytesPerFile = partsPerMap * 24; IndexCache cache = new IndexCache(conf); @@ -127,7 +127,7 @@ public class TestIndexCache { public void testBadIndex() throws Exception { final int parts = 30; fs.delete(p, true); - conf.setInt(TTConfig.TT_INDEX_CACHE, 1); + conf.setInt(MRJobConfig.SHUFFLE_INDEX_CACHE, 1); IndexCache cache = new IndexCache(conf); Path f = new Path(p, "badindex"); @@ -159,7 +159,7 @@ public class TestIndexCache { @Test public void testInvalidReduceNumberOrLength() throws Exception { fs.delete(p, true); - conf.setInt(TTConfig.TT_INDEX_CACHE, 1); + conf.setInt(MRJobConfig.SHUFFLE_INDEX_CACHE, 1); final int partsPerMap = 1000; final int bytesPerFile = partsPerMap * 24; IndexCache cache = new IndexCache(conf); @@ -205,7 +205,7 @@ public class TestIndexCache { // fails with probability of 100% on code before MAPREDUCE-2541, // so it is repeatable in practice. 
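+    // The index cache is now sized via mapreduce.reduce.shuffle.indexcache.mb
+    // (MRJobConfig.SHUFFLE_INDEX_CACHE); the old mapreduce.tasktracker.indexcache.mb
+    // key remains only as a deprecated alias.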
fs.delete(p, true); - conf.setInt(TTConfig.TT_INDEX_CACHE, 10); + conf.setInt(MRJobConfig.SHUFFLE_INDEX_CACHE, 10); // Make a big file so removeMapThread almost surely runs faster than // getInfoThread final int partsPerMap = 100000; @@ -251,7 +251,7 @@ public class TestIndexCache { @Test public void testCreateRace() throws Exception { fs.delete(p, true); - conf.setInt(TTConfig.TT_INDEX_CACHE, 1); + conf.setInt(MRJobConfig.SHUFFLE_INDEX_CACHE, 1); final int partsPerMap = 1000; final int bytesPerFile = partsPerMap * 24; final IndexCache cache = new IndexCache(conf); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobSubmissionFiles.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobSubmissionFiles.java index ab3f7a0a937..6e9c80813fc 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobSubmissionFiles.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobSubmissionFiles.java @@ -19,6 +19,7 @@ package org.apache.hadoop.mapreduce; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.CommonConfigurationKeys; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.FileSystemTestHelper; @@ -33,6 +34,8 @@ import static org.junit.Assert.assertEquals; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; +import org.apache.hadoop.hdfs.MiniDFSCluster; +import org.apache.hadoop.hdfs.HdfsConfiguration; /** * Tests for JobSubmissionFiles Utility class. 
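+ * The staging-directory permission test below starts a MiniDFSCluster and checks that the
+ * directory is created with 0700 permissions regardless of the configured
+ * fs.permissions.umask-mode.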
*/ @@ -139,4 +142,26 @@ public class TestJobSubmissionFiles { assertEquals(stagingPath, JobSubmissionFiles.getStagingDir(cluster, conf, user)); } + + @Test + public void testDirPermission() throws Exception { + Cluster cluster = mock(Cluster.class); + HdfsConfiguration conf = new HdfsConfiguration(); + conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, "700"); + MiniDFSCluster dfsCluster = null; + try { + dfsCluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build(); + FileSystem fs = dfsCluster.getFileSystem(); + UserGroupInformation user = UserGroupInformation + .createUserForTesting(USER_1_SHORT_NAME, GROUP_NAMES); + Path stagingPath = new Path(fs.getUri().toString() + "/testDirPermission"); + when(cluster.getStagingAreaDir()).thenReturn(stagingPath); + Path res = JobSubmissionFiles.getStagingDir(cluster, conf, user); + assertEquals(new FsPermission(0700), fs.getFileStatus(res).getPermission()); + } finally { + if (dfsCluster != null) { + dfsCluster.shutdown(); + } + } + } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMergeManager.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMergeManager.java index 4e718b85a82..063daada6d7 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMergeManager.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMergeManager.java @@ -23,7 +23,6 @@ import static org.junit.Assert.assertTrue; import static org.mockito.Mockito.mock; import java.io.IOException; -import java.net.URISyntaxException; import java.util.ArrayList; import java.util.LinkedList; import java.util.List; @@ -44,7 +43,6 @@ import org.apache.hadoop.mapred.MapOutputFile; import org.apache.hadoop.mapreduce.MRJobConfig; import org.apache.hadoop.mapreduce.TaskAttemptID; import org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.CompressAwarePath; -import org.apache.hadoop.test.Whitebox; import org.junit.Assert; import org.junit.Test; @@ -217,8 +215,7 @@ public class TestMergeManager { @SuppressWarnings({ "unchecked", "deprecation" }) @Test(timeout=10000) - public void testOnDiskMerger() throws IOException, URISyntaxException, - InterruptedException { + public void testOnDiskMerger() throws IOException { JobConf jobConf = new JobConf(); final int SORT_FACTOR = 5; jobConf.setInt(MRJobConfig.IO_SORT_FACTOR, SORT_FACTOR); @@ -229,12 +226,8 @@ public class TestMergeManager { new MergeManagerImpl(null, jobConf, fs, null , null, null, null, null, null, null, null, null, null, mapOutputFile); - MergeThread, IntWritable, IntWritable> - onDiskMerger = (MergeThread, - IntWritable, IntWritable>) Whitebox.getInternalState(manager, - "onDiskMerger"); - int mergeFactor = (Integer) Whitebox.getInternalState(onDiskMerger, - "mergeFactor"); + MergeThread onDiskMerger = manager.getOnDiskMerger(); + int mergeFactor = onDiskMerger.getMergeFactor(); // make sure the io.sort.factor is set properly assertEquals(mergeFactor, SORT_FACTOR); @@ -252,9 +245,7 @@ public class TestMergeManager { } //Check that the files pending to be merged are in sorted order. 
- LinkedList> pendingToBeMerged = - (LinkedList>) Whitebox.getInternalState( - onDiskMerger, "pendingToBeMerged"); + LinkedList> pendingToBeMerged = onDiskMerger.getPendingToBeMerged(); assertTrue("No inputs were added to list pending to merge", pendingToBeMerged.size() > 0); for(int i = 0; i < pendingToBeMerged.size(); ++i) { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/pom.xml index bedfc1fc1af..37d4464cd76 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/pom.xml +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/pom.xml @@ -36,6 +36,21 @@ org.apache.hadoop hadoop-yarn-common + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.platform + junit-platform-launcher + test + org.apache.hadoop hadoop-mapreduce-client-common diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestMapReduceTrackingUriPlugin.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestMapReduceTrackingUriPlugin.java index 9291097cbf4..55346d9f459 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestMapReduceTrackingUriPlugin.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestMapReduceTrackingUriPlugin.java @@ -18,7 +18,7 @@ package org.apache.hadoop.mapreduce.v2.hs.webapp; -import static org.junit.Assert.assertEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; import java.net.URI; import java.net.URISyntaxException; @@ -27,11 +27,11 @@ import org.apache.hadoop.http.HttpConfig; import org.apache.hadoop.mapreduce.v2.jobhistory.JHAdminConfig; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.Test; +import org.junit.jupiter.api.Test; public class TestMapReduceTrackingUriPlugin { @Test - public void testProducesHistoryServerUriForAppId() + void testProducesHistoryServerUriForAppId() throws URISyntaxException { final String historyAddress = "example.net:424242"; YarnConfiguration conf = new YarnConfiguration(); @@ -49,7 +49,7 @@ public class TestMapReduceTrackingUriPlugin { } @Test - public void testProducesHistoryServerUriWithHTTPS() + void testProducesHistoryServerUriWithHTTPS() throws URISyntaxException { final String historyAddress = "example.net:404040"; YarnConfiguration conf = new YarnConfiguration(); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServices.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServices.java index b4a4566bb53..f1dc6260d74 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServices.java +++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServices.java @@ -37,6 +37,7 @@ import org.apache.hadoop.mapreduce.v2.hs.JobHistory; import org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer; import org.apache.hadoop.mapreduce.v2.hs.MockHistoryContext; import org.apache.hadoop.util.VersionInfo; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.ApplicationClientProtocol; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; @@ -284,7 +285,7 @@ public class TestHsWebServices extends JerseyTestBase { public void verifyHSInfoXML(String xml, AppContext ctx) throws JSONException, Exception { - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesAttempts.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesAttempts.java index 708a60b821f..3ca6db3ab4a 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesAttempts.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesAttempts.java @@ -47,6 +47,7 @@ import org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt; import org.apache.hadoop.mapreduce.v2.hs.HistoryContext; import org.apache.hadoop.mapreduce.v2.hs.MockHistoryContext; import org.apache.hadoop.mapreduce.v2.util.MRApps; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.ApplicationClientProtocol; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; @@ -207,7 +208,7 @@ public class TestHsWebServicesAttempts extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -331,7 +332,7 @@ public class TestHsWebServicesAttempts extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -618,7 +619,7 @@ public class TestHsWebServicesAttempts extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + 
DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobConf.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobConf.java index 62a53979e9f..21df6394736 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobConf.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobConf.java @@ -48,6 +48,7 @@ import org.apache.hadoop.mapreduce.v2.app.job.Job; import org.apache.hadoop.mapreduce.v2.hs.HistoryContext; import org.apache.hadoop.mapreduce.v2.hs.MockHistoryContext; import org.apache.hadoop.mapreduce.v2.util.MRApps; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.ApplicationClientProtocol; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; @@ -230,7 +231,7 @@ public class TestHsWebServicesJobConf extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobs.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobs.java index 906b4ad41b2..05ed2775a4b 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobs.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobs.java @@ -45,6 +45,7 @@ import org.apache.hadoop.mapreduce.v2.app.job.Job; import org.apache.hadoop.mapreduce.v2.hs.HistoryContext; import org.apache.hadoop.mapreduce.v2.hs.MockHistoryContext; import org.apache.hadoop.mapreduce.v2.util.MRApps; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.ApplicationClientProtocol; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; @@ -190,7 +191,7 @@ public class TestHsWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -422,7 +423,7 @@ 
public class TestHsWebServicesJobs extends JerseyTestBase { response.getType().toString()); String msg = response.getEntity(String.class); System.out.println(msg); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(msg)); @@ -489,7 +490,7 @@ public class TestHsWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -612,7 +613,7 @@ public class TestHsWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -769,7 +770,7 @@ public class TestHsWebServicesJobs extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesTasks.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesTasks.java index bcef55feda0..47329cc39f8 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesTasks.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesTasks.java @@ -46,6 +46,7 @@ import org.apache.hadoop.mapreduce.v2.app.job.Task; import org.apache.hadoop.mapreduce.v2.hs.HistoryContext; import org.apache.hadoop.mapreduce.v2.hs.MockHistoryContext; import org.apache.hadoop.mapreduce.v2.util.MRApps; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.ApplicationClientProtocol; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; @@ -202,7 +203,7 @@ public class TestHsWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new 
InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -549,7 +550,7 @@ public class TestHsWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -729,7 +730,7 @@ public class TestHsWebServicesTasks extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml index 75f250e1d72..17358a37da3 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml @@ -110,6 +110,7 @@ org.hsqldb hsqldb test + jdk8 diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestQueueConfigurationParser.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestQueueConfigurationParser.java index 26d697a6165..bdfe0f5dc69 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestQueueConfigurationParser.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestQueueConfigurationParser.java @@ -28,13 +28,14 @@ import javax.xml.transform.TransformerFactory; import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamResult; +import org.apache.hadoop.util.XMLUtils; + import org.w3c.dom.Document; import org.w3c.dom.Element; import static org.junit.Assert.*; import org.junit.Test; - public class TestQueueConfigurationParser { /** * test xml generation @@ -64,7 +65,7 @@ public class TestQueueConfigurationParser { DOMSource domSource = new DOMSource(e); StringWriter writer = new StringWriter(); StreamResult result = new StreamResult(writer); - TransformerFactory tf = TransformerFactory.newInstance(); + TransformerFactory tf = XMLUtils.newSecureTransformerFactory(); Transformer transformer = tf.newTransformer(); transformer.transform(domSource, result); String str= writer.toString(); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceFetchFromPartialMem.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceFetchFromPartialMem.java index 9b04f64ac60..1b99ce0c0aa 100644 --- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceFetchFromPartialMem.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceFetchFromPartialMem.java @@ -26,6 +26,7 @@ import org.apache.hadoop.io.NullWritable; import org.apache.hadoop.io.Text; import org.apache.hadoop.io.WritableComparator; import org.apache.hadoop.mapreduce.TaskCounter; +import org.apache.hadoop.mapreduce.task.reduce.Fetcher; import org.junit.After; import org.junit.Before; import org.junit.Test; @@ -37,6 +38,7 @@ import java.util.Arrays; import java.util.Formatter; import java.util.Iterator; +import static org.apache.hadoop.mapreduce.task.reduce.Fetcher.SHUFFLE_ERR_GRP_NAME; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; @@ -87,6 +89,9 @@ public class TestReduceFetchFromPartialMem { final long spill = c.findCounter(TaskCounter.SPILLED_RECORDS).getCounter(); assertTrue("Expected some records not spilled during reduce" + spill + ")", spill < 2 * out); // spilled map records, some records at the reduce + long shuffleIoErrors = + c.getGroup(SHUFFLE_ERR_GRP_NAME).getCounter(Fetcher.ShuffleErrors.IO_ERROR.toString()); + assertEquals(0, shuffleIoErrors); } /** diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/lib/TestMultipleOutputs.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/lib/TestMultipleOutputs.java index f3e58930eac..8829a093b13 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/lib/TestMultipleOutputs.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/lib/TestMultipleOutputs.java @@ -32,6 +32,7 @@ import org.apache.hadoop.mapred.JobClient; import org.apache.hadoop.mapred.JobConf; import org.apache.hadoop.mapred.Mapper; import org.apache.hadoop.mapred.OutputCollector; +import org.apache.hadoop.mapred.RecordWriter; import org.apache.hadoop.mapred.Reducer; import org.apache.hadoop.mapred.Reporter; import org.apache.hadoop.mapred.RunningJob; @@ -46,11 +47,16 @@ import java.io.BufferedReader; import java.io.DataOutputStream; import java.io.IOException; import java.io.InputStreamReader; +import java.util.Arrays; import java.util.Iterator; +import java.util.Map; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; public class TestMultipleOutputs extends HadoopTestCase { @@ -70,6 +76,19 @@ public class TestMultipleOutputs extends HadoopTestCase { _testMOWithJavaSerialization(true); } + @SuppressWarnings("unchecked") + @Test(expected = IOException.class) + public void testParallelCloseIOException() throws IOException { + RecordWriter writer = mock(RecordWriter.class); + Map recordWriters = mock(Map.class); + when(recordWriters.values()).thenReturn(Arrays.asList(writer, writer)); + doThrow(new IOException("test IO exception")).when(writer).close(null); + JobConf conf = createJobConf(); + MultipleOutputs mos = new MultipleOutputs(conf); + 
mos.setRecordWriters(recordWriters); + mos.close(); + } + private static final Path ROOT_DIR = new Path("testing/mo"); private static final Path IN_DIR = new Path(ROOT_DIR, "input"); private static final Path OUT_DIR = new Path(ROOT_DIR, "output"); @@ -307,6 +326,7 @@ public class TestMultipleOutputs extends HadoopTestCase { } + @SuppressWarnings({"unchecked"}) public static class MOMap implements Mapper { diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestMRMultipleOutputs.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestMRMultipleOutputs.java index babd20e66c4..717163ce243 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestMRMultipleOutputs.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestMRMultipleOutputs.java @@ -31,7 +31,9 @@ import org.apache.hadoop.mapreduce.CounterGroup; import org.apache.hadoop.mapreduce.Job; import org.apache.hadoop.mapreduce.MapReduceTestUtil; import org.apache.hadoop.mapreduce.Mapper; +import org.apache.hadoop.mapreduce.RecordWriter; import org.apache.hadoop.mapreduce.Reducer; + import org.junit.After; import org.junit.Before; import org.junit.Test; @@ -39,10 +41,15 @@ import org.junit.Test; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; +import java.util.Arrays; +import java.util.Map; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; public class TestMRMultipleOutputs extends HadoopTestCase { @@ -62,6 +69,20 @@ public class TestMRMultipleOutputs extends HadoopTestCase { _testMOWithJavaSerialization(true); } + @SuppressWarnings("unchecked") + @Test(expected = IOException.class) + public void testParallelCloseIOException() throws IOException, InterruptedException { + RecordWriter writer = mock(RecordWriter.class); + Map recordWriters = mock(Map.class); + when(recordWriters.values()).thenReturn(Arrays.asList(writer, writer)); + Mapper.Context taskInputOutputContext = mock(Mapper.Context.class); + when(taskInputOutputContext.getConfiguration()).thenReturn(createJobConf()); + doThrow(new IOException("test IO exception")).when(writer).close(taskInputOutputContext); + MultipleOutputs mos = new MultipleOutputs(taskInputOutputContext); + mos.setRecordWriters(recordWriters); + mos.close(); + } + private static String localPathRoot = System.getProperty("test.build.data", "/tmp"); private static final Path ROOT_DIR = new Path(localPathRoot, "testing/mo"); @@ -85,7 +106,7 @@ public class TestMRMultipleOutputs extends HadoopTestCase { fs.delete(ROOT_DIR, true); super.tearDown(); } - + protected void _testMOWithJavaSerialization(boolean withCounters) throws Exception { String input = "a\nb\nc\nd\ne\nc\nd\ne"; diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedChunkedFile.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedChunkedFile.java index 
99d4a4cb426..1f009a49195 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedChunkedFile.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedChunkedFile.java @@ -23,6 +23,9 @@ import java.io.IOException; import java.io.RandomAccessFile; import org.apache.hadoop.classification.VisibleForTesting; +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.handler.stream.ChunkedFile; import org.apache.hadoop.io.ReadaheadPool; import org.apache.hadoop.io.ReadaheadPool.ReadaheadRequest; import org.apache.hadoop.io.nativeio.NativeIO; @@ -31,8 +34,6 @@ import org.slf4j.LoggerFactory; import static org.apache.hadoop.io.nativeio.NativeIO.POSIX.POSIX_FADV_DONTNEED; -import org.jboss.netty.handler.stream.ChunkedFile; - public class FadvisedChunkedFile extends ChunkedFile { private static final Logger LOG = @@ -64,16 +65,16 @@ public class FadvisedChunkedFile extends ChunkedFile { } @Override - public Object nextChunk() throws Exception { + public ByteBuf readChunk(ByteBufAllocator allocator) throws Exception { synchronized (closeLock) { if (fd.valid()) { if (manageOsCache && readaheadPool != null) { readaheadRequest = readaheadPool .readaheadStream( - identifier, fd, getCurrentOffset(), readaheadLength, - getEndOffset(), readaheadRequest); + identifier, fd, currentOffset(), readaheadLength, + endOffset(), readaheadRequest); } - return super.nextChunk(); + return super.readChunk(allocator); } else { return null; } @@ -88,12 +89,12 @@ public class FadvisedChunkedFile extends ChunkedFile { readaheadRequest = null; } if (fd.valid() && - manageOsCache && getEndOffset() - getStartOffset() > 0) { + manageOsCache && endOffset() - startOffset() > 0) { try { NativeIO.POSIX.getCacheManipulator().posixFadviseIfPossible( identifier, fd, - getStartOffset(), getEndOffset() - getStartOffset(), + startOffset(), endOffset() - startOffset(), POSIX_FADV_DONTNEED); } catch (Throwable t) { LOG.warn("Failed to manage OS cache for " + identifier + diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java index 1d3f162c901..9290a282e39 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java @@ -25,6 +25,7 @@ import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import java.nio.channels.WritableByteChannel; +import io.netty.channel.DefaultFileRegion; import org.apache.hadoop.io.ReadaheadPool; import org.apache.hadoop.io.ReadaheadPool.ReadaheadRequest; import org.apache.hadoop.io.nativeio.NativeIO; @@ -33,8 +34,6 @@ import org.slf4j.LoggerFactory; import static org.apache.hadoop.io.nativeio.NativeIO.POSIX.POSIX_FADV_DONTNEED; -import org.jboss.netty.channel.DefaultFileRegion; - import org.apache.hadoop.classification.VisibleForTesting; public class FadvisedFileRegion extends DefaultFileRegion { @@ -77,8 +76,8 @@ public class FadvisedFileRegion extends DefaultFileRegion { throws IOException { if (readaheadPool != null && 
readaheadLength > 0) { readaheadRequest = readaheadPool.readaheadStream(identifier, fd, - getPosition() + position, readaheadLength, - getPosition() + getCount(), readaheadRequest); + position() + position, readaheadLength, + position() + count(), readaheadRequest); } if(this.shuffleTransferToAllowed) { @@ -147,11 +146,11 @@ public class FadvisedFileRegion extends DefaultFileRegion { @Override - public void releaseExternalResources() { + protected void deallocate() { if (readaheadRequest != null) { readaheadRequest.cancel(); } - super.releaseExternalResources(); + super.deallocate(); } /** @@ -159,10 +158,10 @@ public class FadvisedFileRegion extends DefaultFileRegion { * we don't need the region to be cached anymore. */ public void transferSuccessful() { - if (manageOsCache && getCount() > 0) { + if (manageOsCache && count() > 0) { try { NativeIO.POSIX.getCacheManipulator().posixFadviseIfPossible(identifier, - fd, getPosition(), getCount(), POSIX_FADV_DONTNEED); + fd, position(), count(), POSIX_FADV_DONTNEED); } catch (Throwable t) { LOG.warn("Failed to manage OS cache for " + identifier, t); } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/LoggingHttpResponseEncoder.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/LoggingHttpResponseEncoder.java new file mode 100644 index 00000000000..c7b98ce166c --- /dev/null +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/LoggingHttpResponseEncoder.java @@ -0,0 +1,106 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.mapred; + +import io.netty.buffer.ByteBuf; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelPromise; +import io.netty.handler.codec.http.HttpHeaders; +import io.netty.handler.codec.http.HttpResponse; +import io.netty.handler.codec.http.HttpResponseEncoder; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.List; + +class LoggingHttpResponseEncoder extends HttpResponseEncoder { + private static final Logger LOG = LoggerFactory.getLogger(LoggingHttpResponseEncoder.class); + private final boolean logStacktraceOfEncodingMethods; + + LoggingHttpResponseEncoder(boolean logStacktraceOfEncodingMethods) { + this.logStacktraceOfEncodingMethods = logStacktraceOfEncodingMethods; + } + + @Override + public boolean acceptOutboundMessage(Object msg) throws Exception { + printExecutingMethod(); + LOG.info("OUTBOUND MESSAGE: " + msg); + return super.acceptOutboundMessage(msg); + } + + @Override + protected void encodeInitialLine(ByteBuf buf, HttpResponse response) throws Exception { + LOG.debug("Executing method: {}, response: {}", + getExecutingMethodName(), response); + logStacktraceIfRequired(); + super.encodeInitialLine(buf, response); + } + + @Override + protected void encode(ChannelHandlerContext ctx, Object msg, + List out) throws Exception { + LOG.debug("Encoding to channel {}: {}", ctx.channel(), msg); + printExecutingMethod(); + logStacktraceIfRequired(); + super.encode(ctx, msg, out); + } + + @Override + protected void encodeHeaders(HttpHeaders headers, ByteBuf buf) { + printExecutingMethod(); + super.encodeHeaders(headers, buf); + } + + @Override + public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise + promise) throws Exception { + LOG.debug("Writing to channel {}: {}", ctx.channel(), msg); + printExecutingMethod(); + super.write(ctx, msg, promise); + } + + private void logStacktraceIfRequired() { + if (logStacktraceOfEncodingMethods) { + LOG.debug("Stacktrace: ", new Throwable()); + } + } + + private void printExecutingMethod() { + String methodName = getExecutingMethodName(1); + LOG.debug("Executing method: {}", methodName); + } + + private String getExecutingMethodName() { + return getExecutingMethodName(0); + } + + private String getExecutingMethodName(int additionalSkipFrames) { + try { + StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace(); + // Array items (indices): + // 0: java.lang.Thread.getStackTrace(...) + // 1: TestShuffleHandler$LoggingHttpResponseEncoder.getExecutingMethodName(...) 
+ int skipFrames = 2 + additionalSkipFrames; + String methodName = stackTrace[skipFrames].getMethodName(); + String className = this.getClass().getSimpleName(); + return className + "#" + methodName; + } catch (Throwable t) { + LOG.error("Error while getting execution method name", t); + return "unknown"; + } + } +} diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java index 448082f7fe8..e4a43f85b94 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java @@ -18,19 +18,20 @@ package org.apache.hadoop.mapred; +import static io.netty.buffer.Unpooled.wrappedBuffer; +import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_TYPE; +import static io.netty.handler.codec.http.HttpMethod.GET; +import static io.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST; +import static io.netty.handler.codec.http.HttpResponseStatus.FORBIDDEN; +import static io.netty.handler.codec.http.HttpResponseStatus.INTERNAL_SERVER_ERROR; +import static io.netty.handler.codec.http.HttpResponseStatus.METHOD_NOT_ALLOWED; +import static io.netty.handler.codec.http.HttpResponseStatus.NOT_FOUND; +import static io.netty.handler.codec.http.HttpResponseStatus.OK; +import static io.netty.handler.codec.http.HttpResponseStatus.UNAUTHORIZED; +import static io.netty.handler.codec.http.HttpVersion.HTTP_1_1; +import static org.apache.hadoop.mapred.ShuffleHandler.NettyChannelHelper.*; import static org.fusesource.leveldbjni.JniDBFactory.asString; import static org.fusesource.leveldbjni.JniDBFactory.bytes; -import static org.jboss.netty.buffer.ChannelBuffers.wrappedBuffer; -import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.CONTENT_TYPE; -import static org.jboss.netty.handler.codec.http.HttpMethod.GET; -import static org.jboss.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST; -import static org.jboss.netty.handler.codec.http.HttpResponseStatus.FORBIDDEN; -import static org.jboss.netty.handler.codec.http.HttpResponseStatus.INTERNAL_SERVER_ERROR; -import static org.jboss.netty.handler.codec.http.HttpResponseStatus.METHOD_NOT_ALLOWED; -import static org.jboss.netty.handler.codec.http.HttpResponseStatus.NOT_FOUND; -import static org.jboss.netty.handler.codec.http.HttpResponseStatus.OK; -import static org.jboss.netty.handler.codec.http.HttpResponseStatus.UNAUTHORIZED; -import static org.jboss.netty.handler.codec.http.HttpVersion.HTTP_1_1; import java.io.File; import java.io.FileNotFoundException; @@ -54,6 +55,44 @@ import java.util.regex.Pattern; import javax.crypto.SecretKey; +import io.netty.bootstrap.ServerBootstrap; +import io.netty.buffer.Unpooled; +import io.netty.channel.Channel; +import io.netty.channel.ChannelFuture; +import io.netty.channel.ChannelFutureListener; +import io.netty.channel.ChannelHandler; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.channel.ChannelInitializer; +import io.netty.channel.ChannelOption; +import io.netty.channel.ChannelOutboundHandlerAdapter; +import io.netty.channel.ChannelPipeline; +import io.netty.channel.ChannelPromise; 
+import io.netty.channel.EventLoopGroup; +import io.netty.channel.group.ChannelGroup; +import io.netty.channel.group.DefaultChannelGroup; +import io.netty.channel.nio.NioEventLoopGroup; +import io.netty.channel.socket.SocketChannel; +import io.netty.channel.socket.nio.NioServerSocketChannel; +import io.netty.handler.codec.TooLongFrameException; +import io.netty.handler.codec.http.DefaultFullHttpResponse; +import io.netty.handler.codec.http.DefaultHttpResponse; +import io.netty.handler.codec.http.FullHttpResponse; +import io.netty.handler.codec.http.HttpObjectAggregator; +import io.netty.handler.codec.http.HttpRequest; +import io.netty.handler.codec.http.HttpRequestDecoder; +import io.netty.handler.codec.http.HttpResponse; +import io.netty.handler.codec.http.HttpResponseEncoder; +import io.netty.handler.codec.http.HttpResponseStatus; +import io.netty.handler.codec.http.LastHttpContent; +import io.netty.handler.codec.http.QueryStringDecoder; +import io.netty.handler.ssl.SslHandler; +import io.netty.handler.stream.ChunkedWriteHandler; +import io.netty.handler.timeout.IdleState; +import io.netty.handler.timeout.IdleStateEvent; +import io.netty.handler.timeout.IdleStateHandler; +import io.netty.util.CharsetUtil; +import io.netty.util.concurrent.DefaultEventExecutorGroup; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.io.DataInputByteBuffer; @@ -79,7 +118,6 @@ import org.apache.hadoop.security.ssl.SSLFactory; import org.apache.hadoop.security.token.Token; import org.apache.hadoop.util.DiskChecker; import org.apache.hadoop.util.Shell; -import org.apache.hadoop.util.concurrent.HadoopExecutors; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.proto.YarnServerCommonProtos.VersionProto; import org.apache.hadoop.yarn.server.api.ApplicationInitializationContext; @@ -94,42 +132,6 @@ import org.fusesource.leveldbjni.internal.NativeDB; import org.iq80.leveldb.DB; import org.iq80.leveldb.DBException; import org.iq80.leveldb.Options; -import org.jboss.netty.bootstrap.ServerBootstrap; -import org.jboss.netty.buffer.ChannelBuffers; -import org.jboss.netty.channel.Channel; -import org.jboss.netty.channel.ChannelFactory; -import org.jboss.netty.channel.ChannelFuture; -import org.jboss.netty.channel.ChannelFutureListener; -import org.jboss.netty.channel.ChannelHandler; -import org.jboss.netty.channel.ChannelHandlerContext; -import org.jboss.netty.channel.ChannelPipeline; -import org.jboss.netty.channel.ChannelPipelineFactory; -import org.jboss.netty.channel.ChannelStateEvent; -import org.jboss.netty.channel.Channels; -import org.jboss.netty.channel.ExceptionEvent; -import org.jboss.netty.channel.MessageEvent; -import org.jboss.netty.channel.SimpleChannelUpstreamHandler; -import org.jboss.netty.channel.group.ChannelGroup; -import org.jboss.netty.channel.group.DefaultChannelGroup; -import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory; -import org.jboss.netty.handler.codec.frame.TooLongFrameException; -import org.jboss.netty.handler.codec.http.DefaultHttpResponse; -import org.jboss.netty.handler.codec.http.HttpChunkAggregator; -import org.jboss.netty.handler.codec.http.HttpRequest; -import org.jboss.netty.handler.codec.http.HttpRequestDecoder; -import org.jboss.netty.handler.codec.http.HttpResponse; -import org.jboss.netty.handler.codec.http.HttpResponseEncoder; -import org.jboss.netty.handler.codec.http.HttpResponseStatus; -import org.jboss.netty.handler.codec.http.QueryStringDecoder; -import 
org.jboss.netty.handler.ssl.SslHandler; -import org.jboss.netty.handler.stream.ChunkedWriteHandler; -import org.jboss.netty.handler.timeout.IdleState; -import org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler; -import org.jboss.netty.handler.timeout.IdleStateEvent; -import org.jboss.netty.handler.timeout.IdleStateHandler; -import org.jboss.netty.util.CharsetUtil; -import org.jboss.netty.util.HashedWheelTimer; -import org.jboss.netty.util.Timer; import org.eclipse.jetty.http.HttpHeader; import org.slf4j.LoggerFactory; @@ -182,19 +184,29 @@ public class ShuffleHandler extends AuxiliaryService { public static final HttpResponseStatus TOO_MANY_REQ_STATUS = new HttpResponseStatus(429, "TOO MANY REQUESTS"); - // This should kept in sync with Fetcher.FETCH_RETRY_DELAY_DEFAULT + // This should be kept in sync with Fetcher.FETCH_RETRY_DELAY_DEFAULT public static final long FETCH_RETRY_DELAY = 1000L; public static final String RETRY_AFTER_HEADER = "Retry-After"; + static final String ENCODER_HANDLER_NAME = "encoder"; private int port; - private ChannelFactory selector; - private final ChannelGroup accepted = new DefaultChannelGroup(); + private EventLoopGroup bossGroup; + private EventLoopGroup workerGroup; + private ServerBootstrap bootstrap; + private Channel ch; + private final ChannelGroup accepted = + new DefaultChannelGroup(new DefaultEventExecutorGroup(5).next()); + private final AtomicInteger activeConnections = new AtomicInteger(); protected HttpPipelineFactory pipelineFact; private int sslFileBufferSize; + + //TODO snemeth add a config option for these later, this is temporarily disabled for now. + private boolean useOutboundExceptionHandler = false; + private boolean useOutboundLogger = false; /** * Should the shuffle use posix_fadvise calls to manage the OS cache during - * sendfile + * sendfile. 
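// Editorial sketch, not part of this patch: what the posix_fadvise call behind
// "manageOsCache" looks like. After a partition has been transferred, the
// handler advises the kernel to drop the transferred byte range from the page
// cache so shuffle traffic does not evict more useful pages. The names
// identifier, fd, offset and len below are placeholders, not symbols from this
// change; the CacheManipulator call itself is the one already used by
// FadvisedFileRegion and FadvisedChunkedFile.
NativeIO.POSIX.getCacheManipulator().posixFadviseIfPossible(
    identifier,                            // log-friendly name of the spill file
    fd,                                    // FileDescriptor of the opened spill file
    offset, len,                           // byte range that was just sent
    NativeIO.POSIX.POSIX_FADV_DONTNEED);   // ask the kernel to drop these pages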
*/ private boolean manageOsCache; private int readaheadLength; @@ -204,7 +216,7 @@ public class ShuffleHandler extends AuxiliaryService { private int maxSessionOpenFiles; private ReadaheadPool readaheadPool = ReadaheadPool.getInstance(); - private Map userRsrc; + private Map userRsrc; private JobTokenSecretManager secretManager; private DB stateDb = null; @@ -235,7 +247,7 @@ public class ShuffleHandler extends AuxiliaryService { public static final String CONNECTION_CLOSE = "close"; public static final String SUFFLE_SSL_FILE_BUFFER_SIZE_KEY = - "mapreduce.shuffle.ssl.file.buffer.size"; + "mapreduce.shuffle.ssl.file.buffer.size"; public static final int DEFAULT_SUFFLE_SSL_FILE_BUFFER_SIZE = 60 * 1024; @@ -255,7 +267,7 @@ public class ShuffleHandler extends AuxiliaryService { public static final boolean DEFAULT_SHUFFLE_TRANSFERTO_ALLOWED = true; public static final boolean WINDOWS_DEFAULT_SHUFFLE_TRANSFERTO_ALLOWED = false; - private static final String TIMEOUT_HANDLER = "timeout"; + static final String TIMEOUT_HANDLER = "timeout"; /* the maximum number of files a single GET request can open simultaneously during shuffle @@ -267,7 +279,6 @@ public class ShuffleHandler extends AuxiliaryService { boolean connectionKeepAliveEnabled = false; private int connectionKeepAliveTimeOut; private int mapOutputMetaInfoCacheSize; - private Timer timer; @Metrics(about="Shuffle output metrics", context="mapred") static class ShuffleMetrics implements ChannelFutureListener { @@ -291,6 +302,49 @@ public class ShuffleHandler extends AuxiliaryService { } } + static class NettyChannelHelper { + static ChannelFuture writeToChannel(Channel ch, Object obj) { + LOG.debug("Writing {} to channel: {}", obj.getClass().getSimpleName(), ch.id()); + return ch.writeAndFlush(obj); + } + + static ChannelFuture writeToChannelAndClose(Channel ch, Object obj) { + return writeToChannel(ch, obj).addListener(ChannelFutureListener.CLOSE); + } + + static ChannelFuture writeToChannelAndAddLastHttpContent(Channel ch, HttpResponse obj) { + writeToChannel(ch, obj); + return writeLastHttpContentToChannel(ch); + } + + static ChannelFuture writeLastHttpContentToChannel(Channel ch) { + LOG.debug("Writing LastHttpContent, channel id: {}", ch.id()); + return ch.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT); + } + + static ChannelFuture closeChannel(Channel ch) { + LOG.debug("Closing channel, channel id: {}", ch.id()); + return ch.close(); + } + + static void closeChannels(ChannelGroup channelGroup) { + channelGroup.close().awaitUninterruptibly(10, TimeUnit.SECONDS); + } + + public static ChannelFuture closeAsIdle(Channel ch, int timeout) { + LOG.debug("Closing channel as writer was idle for {} seconds", timeout); + return closeChannel(ch); + } + + public static void channelActive(Channel ch) { + LOG.debug("Executing channelActive, channel id: {}", ch.id()); + } + + public static void channelInactive(Channel ch) { + LOG.debug("Executing channelInactive, channel id: {}", ch.id()); + } + } + private final MetricsSystem ms; final ShuffleMetrics metrics; @@ -298,29 +352,36 @@ public class ShuffleHandler extends AuxiliaryService { private ReduceContext reduceContext; - public ReduceMapFileCount(ReduceContext rc) { + ReduceMapFileCount(ReduceContext rc) { this.reduceContext = rc; } @Override public void operationComplete(ChannelFuture future) throws Exception { + LOG.trace("operationComplete"); if (!future.isSuccess()) { - future.getChannel().close(); + LOG.error("Future is unsuccessful. 
Cause: ", future.cause()); + closeChannel(future.channel()); return; } int waitCount = this.reduceContext.getMapsToWait().decrementAndGet(); if (waitCount == 0) { + LOG.trace("Finished with all map outputs"); + //HADOOP-15327: Need to send an instance of LastHttpContent to define HTTP + //message boundaries. See details in jira. + writeLastHttpContentToChannel(future.channel()); metrics.operationComplete(future); // Let the idle timer handler close keep-alive connections if (reduceContext.getKeepAlive()) { - ChannelPipeline pipeline = future.getChannel().getPipeline(); + ChannelPipeline pipeline = future.channel().pipeline(); TimeoutHandler timeoutHandler = (TimeoutHandler)pipeline.get(TIMEOUT_HANDLER); timeoutHandler.setEnabledTimeout(true); } else { - future.getChannel().close(); + closeChannel(future.channel()); } } else { + LOG.trace("operationComplete, waitCount > 0, invoking sendMap with reduceContext"); pipelineFact.getSHUFFLE().sendMap(reduceContext); } } @@ -331,7 +392,6 @@ public class ShuffleHandler extends AuxiliaryService { * Allows sendMapOutput calls from operationComplete() */ private static class ReduceContext { - private List mapIds; private AtomicInteger mapsToWait; private AtomicInteger mapsToSend; @@ -342,7 +402,7 @@ public class ShuffleHandler extends AuxiliaryService { private String jobId; private final boolean keepAlive; - public ReduceContext(List mapIds, int rId, + ReduceContext(List mapIds, int rId, ChannelHandlerContext context, String usr, Map mapOutputInfoMap, String jobId, boolean keepAlive) { @@ -448,7 +508,8 @@ public class ShuffleHandler extends AuxiliaryService { * shuffle data requests. * @return the serialized version of the jobToken. */ - public static ByteBuffer serializeServiceData(Token jobToken) throws IOException { + public static ByteBuffer serializeServiceData(Token jobToken) + throws IOException { //TODO these bytes should be versioned DataOutputBuffer jobToken_dob = new DataOutputBuffer(); jobToken.write(jobToken_dob); @@ -505,6 +566,11 @@ public class ShuffleHandler extends AuxiliaryService { DEFAULT_MAX_SHUFFLE_CONNECTIONS); int maxShuffleThreads = conf.getInt(MAX_SHUFFLE_THREADS, DEFAULT_MAX_SHUFFLE_THREADS); + // Since Netty 4.x, the value of 0 threads would default to: + // io.netty.channel.MultithreadEventLoopGroup.DEFAULT_EVENT_LOOP_THREADS + // by simply passing 0 value to NioEventLoopGroup constructor below. + // However, this logic to determinte thread count + // was in place so we can keep it for now. 
if (maxShuffleThreads == 0) { maxShuffleThreads = 2 * Runtime.getRuntime().availableProcessors(); } @@ -520,16 +586,14 @@ public class ShuffleHandler extends AuxiliaryService { DEFAULT_SHUFFLE_MAX_SESSION_OPEN_FILES); ThreadFactory bossFactory = new ThreadFactoryBuilder() - .setNameFormat("ShuffleHandler Netty Boss #%d") - .build(); + .setNameFormat("ShuffleHandler Netty Boss #%d") + .build(); ThreadFactory workerFactory = new ThreadFactoryBuilder() - .setNameFormat("ShuffleHandler Netty Worker #%d") - .build(); + .setNameFormat("ShuffleHandler Netty Worker #%d") + .build(); - selector = new NioServerSocketChannelFactory( - HadoopExecutors.newCachedThreadPool(bossFactory), - HadoopExecutors.newCachedThreadPool(workerFactory), - maxShuffleThreads); + bossGroup = new NioEventLoopGroup(maxShuffleThreads, bossFactory); + workerGroup = new NioEventLoopGroup(maxShuffleThreads, workerFactory); super.serviceInit(new Configuration(conf)); } @@ -540,22 +604,24 @@ public class ShuffleHandler extends AuxiliaryService { userRsrc = new ConcurrentHashMap(); secretManager = new JobTokenSecretManager(); recoverState(conf); - ServerBootstrap bootstrap = new ServerBootstrap(selector); - // Timer is shared across entire factory and must be released separately - timer = new HashedWheelTimer(); try { - pipelineFact = new HttpPipelineFactory(conf, timer); + pipelineFact = new HttpPipelineFactory(conf); } catch (Exception ex) { throw new RuntimeException(ex); } - bootstrap.setOption("backlog", conf.getInt(SHUFFLE_LISTEN_QUEUE_SIZE, - DEFAULT_SHUFFLE_LISTEN_QUEUE_SIZE)); - bootstrap.setOption("child.keepAlive", true); - bootstrap.setPipelineFactory(pipelineFact); + + bootstrap = new ServerBootstrap(); + bootstrap.group(bossGroup, workerGroup) + .channel(NioServerSocketChannel.class) + .option(ChannelOption.SO_BACKLOG, + conf.getInt(SHUFFLE_LISTEN_QUEUE_SIZE, + DEFAULT_SHUFFLE_LISTEN_QUEUE_SIZE)) + .childOption(ChannelOption.SO_KEEPALIVE, true) + .childHandler(pipelineFact); port = conf.getInt(SHUFFLE_PORT_CONFIG_KEY, DEFAULT_SHUFFLE_PORT); - Channel ch = bootstrap.bind(new InetSocketAddress(port)); + ch = bootstrap.bind(new InetSocketAddress(port)).sync().channel(); accepted.add(ch); - port = ((InetSocketAddress)ch.getLocalAddress()).getPort(); + port = ((InetSocketAddress)ch.localAddress()).getPort(); conf.set(SHUFFLE_PORT_CONFIG_KEY, Integer.toString(port)); pipelineFact.SHUFFLE.setPort(port); LOG.info(getName() + " listening on port " + port); @@ -576,18 +642,12 @@ public class ShuffleHandler extends AuxiliaryService { @Override protected void serviceStop() throws Exception { - accepted.close().awaitUninterruptibly(10, TimeUnit.SECONDS); - if (selector != null) { - ServerBootstrap bootstrap = new ServerBootstrap(selector); - bootstrap.releaseExternalResources(); - } + closeChannels(accepted); + if (pipelineFact != null) { pipelineFact.destroy(); } - if (timer != null) { - // Release this shared timer resource - timer.stop(); - } + if (stateDb != null) { stateDb.close(); } @@ -744,7 +804,7 @@ public class ShuffleHandler extends AuxiliaryService { JobShuffleInfoProto proto = JobShuffleInfoProto.parseFrom(data); String user = proto.getUser(); TokenProto tokenProto = proto.getJobToken(); - Token jobToken = new Token( + Token jobToken = new Token<>( tokenProto.getIdentifier().toByteArray(), tokenProto.getPassword().toByteArray(), new Text(tokenProto.getKind()), new Text(tokenProto.getService())); @@ -785,29 +845,47 @@ public class ShuffleHandler extends AuxiliaryService { } } - static class TimeoutHandler extends 
IdleStateAwareChannelHandler { + @VisibleForTesting + public void setUseOutboundExceptionHandler(boolean useHandler) { + this.useOutboundExceptionHandler = useHandler; + } + static class TimeoutHandler extends IdleStateHandler { + private final int connectionKeepAliveTimeOut; private boolean enabledTimeout; + TimeoutHandler(int connectionKeepAliveTimeOut) { + //disable reader timeout + //set writer timeout to configured timeout value + //disable all idle timeout + super(0, connectionKeepAliveTimeOut, 0, TimeUnit.SECONDS); + this.connectionKeepAliveTimeOut = connectionKeepAliveTimeOut; + } + + @VisibleForTesting + public int getConnectionKeepAliveTimeOut() { + return connectionKeepAliveTimeOut; + } + void setEnabledTimeout(boolean enabledTimeout) { this.enabledTimeout = enabledTimeout; } @Override public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) { - if (e.getState() == IdleState.WRITER_IDLE && enabledTimeout) { - e.getChannel().close(); + if (e.state() == IdleState.WRITER_IDLE && enabledTimeout) { + closeAsIdle(ctx.channel(), connectionKeepAliveTimeOut); } } } - class HttpPipelineFactory implements ChannelPipelineFactory { + class HttpPipelineFactory extends ChannelInitializer { + private static final int MAX_CONTENT_LENGTH = 1 << 16; final Shuffle SHUFFLE; private SSLFactory sslFactory; - private final ChannelHandler idleStateHandler; - public HttpPipelineFactory(Configuration conf, Timer timer) throws Exception { + HttpPipelineFactory(Configuration conf) throws Exception { SHUFFLE = getShuffle(conf); if (conf.getBoolean(MRConfig.SHUFFLE_SSL_ENABLED_KEY, MRConfig.SHUFFLE_SSL_ENABLED_DEFAULT)) { @@ -815,7 +893,6 @@ public class ShuffleHandler extends AuxiliaryService { sslFactory = new SSLFactory(SSLFactory.Mode.SERVER, conf); sslFactory.init(); } - this.idleStateHandler = new IdleStateHandler(timer, 0, connectionKeepAliveTimeOut, 0); } public Shuffle getSHUFFLE() { @@ -828,30 +905,39 @@ public class ShuffleHandler extends AuxiliaryService { } } - @Override - public ChannelPipeline getPipeline() throws Exception { - ChannelPipeline pipeline = Channels.pipeline(); + @Override protected void initChannel(SocketChannel ch) throws Exception { + ChannelPipeline pipeline = ch.pipeline(); if (sslFactory != null) { pipeline.addLast("ssl", new SslHandler(sslFactory.createSSLEngine())); } pipeline.addLast("decoder", new HttpRequestDecoder()); - pipeline.addLast("aggregator", new HttpChunkAggregator(1 << 16)); - pipeline.addLast("encoder", new HttpResponseEncoder()); + pipeline.addLast("aggregator", new HttpObjectAggregator(MAX_CONTENT_LENGTH)); + pipeline.addLast(ENCODER_HANDLER_NAME, useOutboundLogger ? 
+ new LoggingHttpResponseEncoder(false) : new HttpResponseEncoder()); pipeline.addLast("chunking", new ChunkedWriteHandler()); pipeline.addLast("shuffle", SHUFFLE); - pipeline.addLast("idle", idleStateHandler); - pipeline.addLast(TIMEOUT_HANDLER, new TimeoutHandler()); - return pipeline; + if (useOutboundExceptionHandler) { + //https://stackoverflow.com/questions/50612403/catch-all-exception-handling-for-outbound-channelhandler + pipeline.addLast("outboundExceptionHandler", new ChannelOutboundHandlerAdapter() { + @Override + public void write(ChannelHandlerContext ctx, Object msg, + ChannelPromise promise) throws Exception { + promise.addListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE); + super.write(ctx, msg, promise); + } + }); + } + pipeline.addLast(TIMEOUT_HANDLER, new TimeoutHandler(connectionKeepAliveTimeOut)); // TODO factor security manager into pipeline // TODO factor out encode/decode to permit binary shuffle // TODO factor out decode of index to permit alt. models } } - class Shuffle extends SimpleChannelUpstreamHandler { + @ChannelHandler.Sharable + class Shuffle extends ChannelInboundHandlerAdapter { private final IndexCache indexCache; - private final - LoadingCache pathCache; + private final LoadingCache pathCache; private int port; @@ -904,65 +990,84 @@ public class ShuffleHandler extends AuxiliaryService { } @Override - public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent evt) + public void channelActive(ChannelHandlerContext ctx) throws Exception { - super.channelOpen(ctx, evt); - - if ((maxShuffleConnections > 0) && (accepted.size() >= maxShuffleConnections)) { + NettyChannelHelper.channelActive(ctx.channel()); + int numConnections = activeConnections.incrementAndGet(); + if ((maxShuffleConnections > 0) && (numConnections > maxShuffleConnections)) { LOG.info(String.format("Current number of shuffle connections (%d) is " + - "greater than or equal to the max allowed shuffle connections (%d)", + "greater than the max allowed shuffle connections (%d)", accepted.size(), maxShuffleConnections)); - Map headers = new HashMap(1); + Map headers = new HashMap<>(1); // notify fetchers to backoff for a while before closing the connection // if the shuffle connection limit is hit. Fetchers are expected to // handle this notification gracefully, that is, not treating this as a // fetch failure. headers.put(RETRY_AFTER_HEADER, String.valueOf(FETCH_RETRY_DELAY)); sendError(ctx, "", TOO_MANY_REQ_STATUS, headers); - return; + } else { + super.channelActive(ctx); + accepted.add(ctx.channel()); + LOG.debug("Added channel: {}, channel id: {}. 
Accepted number of connections={}", + ctx.channel(), ctx.channel().id(), activeConnections.get()); } - accepted.add(evt.getChannel()); } @Override - public void messageReceived(ChannelHandlerContext ctx, MessageEvent evt) + public void channelInactive(ChannelHandlerContext ctx) throws Exception { + NettyChannelHelper.channelInactive(ctx.channel()); + super.channelInactive(ctx); + int noOfConnections = activeConnections.decrementAndGet(); + LOG.debug("New value of Accepted number of connections={}", noOfConnections); + } + + @Override + public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception { - HttpRequest request = (HttpRequest) evt.getMessage(); - if (request.getMethod() != GET) { - sendError(ctx, METHOD_NOT_ALLOWED); - return; + Channel channel = ctx.channel(); + LOG.trace("Executing channelRead, channel id: {}", channel.id()); + HttpRequest request = (HttpRequest) msg; + LOG.debug("Received HTTP request: {}, channel id: {}", request, channel.id()); + if (request.method() != GET) { + sendError(ctx, METHOD_NOT_ALLOWED); + return; } // Check whether the shuffle version is compatible - if (!ShuffleHeader.DEFAULT_HTTP_HEADER_NAME.equals( - request.headers() != null ? - request.headers().get(ShuffleHeader.HTTP_HEADER_NAME) : null) - || !ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION.equals( - request.headers() != null ? - request.headers() - .get(ShuffleHeader.HTTP_HEADER_VERSION) : null)) { + String shuffleVersion = ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION; + String httpHeaderName = ShuffleHeader.DEFAULT_HTTP_HEADER_NAME; + if (request.headers() != null) { + shuffleVersion = request.headers().get(ShuffleHeader.HTTP_HEADER_VERSION); + httpHeaderName = request.headers().get(ShuffleHeader.HTTP_HEADER_NAME); + LOG.debug("Received from request header: ShuffleVersion={} header name={}, channel id: {}", + shuffleVersion, httpHeaderName, channel.id()); + } + if (request.headers() == null || + !ShuffleHeader.DEFAULT_HTTP_HEADER_NAME.equals(httpHeaderName) || + !ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION.equals(shuffleVersion)) { sendError(ctx, "Incompatible shuffle request version", BAD_REQUEST); } - final Map> q = - new QueryStringDecoder(request.getUri()).getParameters(); + final Map> q = + new QueryStringDecoder(request.uri()).parameters(); final List keepAliveList = q.get("keepAlive"); boolean keepAliveParam = false; if (keepAliveList != null && keepAliveList.size() == 1) { keepAliveParam = Boolean.valueOf(keepAliveList.get(0)); if (LOG.isDebugEnabled()) { - LOG.debug("KeepAliveParam : " + keepAliveList - + " : " + keepAliveParam); + LOG.debug("KeepAliveParam: {} : {}, channel id: {}", + keepAliveList, keepAliveParam, channel.id()); } } final List mapIds = splitMaps(q.get("map")); final List reduceQ = q.get("reduce"); final List jobQ = q.get("job"); if (LOG.isDebugEnabled()) { - LOG.debug("RECV: " + request.getUri() + + LOG.debug("RECV: " + request.uri() + "\n mapId: " + mapIds + "\n reduceId: " + reduceQ + "\n jobId: " + jobQ + - "\n keepAlive: " + keepAliveParam); + "\n keepAlive: " + keepAliveParam + + "\n channel id: " + channel.id()); } if (mapIds == null || reduceQ == null || jobQ == null) { @@ -986,7 +1091,7 @@ public class ShuffleHandler extends AuxiliaryService { sendError(ctx, "Bad job parameter", BAD_REQUEST); return; } - final String reqUri = request.getUri(); + final String reqUri = request.uri(); if (null == reqUri) { // TODO? add upstream? 
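// Editorial sketch, not part of this patch: the Netty 4 QueryStringDecoder API
// used above, shown against a made-up shuffle URL. The URL and local variable
// names are illustrative only; the real values come from the fetcher request.
QueryStringDecoder decoder = new QueryStringDecoder(
    "/mapOutput?job=job_1_0001&reduce=3&map=attempt_1_0001_m_000001_0&keepAlive=true");
Map<String, List<String>> params = decoder.parameters();
List<String> mapIdsParam = params.get("map");        // [attempt_1_0001_m_000001_0]
List<String> reduceParam = params.get("reduce");     // [3]
boolean keepAliveFlag = Boolean.valueOf(params.get("keepAlive").get(0)); // true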
sendError(ctx, FORBIDDEN); @@ -1004,8 +1109,7 @@ public class ShuffleHandler extends AuxiliaryService { Map mapOutputInfoMap = new HashMap(); - Channel ch = evt.getChannel(); - ChannelPipeline pipeline = ch.getPipeline(); + ChannelPipeline pipeline = channel.pipeline(); TimeoutHandler timeoutHandler = (TimeoutHandler)pipeline.get(TIMEOUT_HANDLER); timeoutHandler.setEnabledTimeout(false); @@ -1013,16 +1117,29 @@ public class ShuffleHandler extends AuxiliaryService { try { populateHeaders(mapIds, jobId, user, reduceId, request, - response, keepAliveParam, mapOutputInfoMap); + response, keepAliveParam, mapOutputInfoMap); } catch(IOException e) { - ch.write(response); - LOG.error("Shuffle error in populating headers :", e); - String errorMessage = getErrorMessage(e); - sendError(ctx,errorMessage , INTERNAL_SERVER_ERROR); + //HADOOP-15327 + // Need to send an instance of LastHttpContent to define HTTP + // message boundaries. + //Sending a HTTP 200 OK + HTTP 500 later (sendError) + // is quite a non-standard way of crafting HTTP responses, + // but we need to keep backward compatibility. + // See more details in jira. + writeToChannelAndAddLastHttpContent(channel, response); + LOG.error("Shuffle error while populating headers. Channel id: " + channel.id(), e); + sendError(ctx, getErrorMessage(e), INTERNAL_SERVER_ERROR); return; } - ch.write(response); - //Initialize one ReduceContext object per messageReceived call + writeToChannel(channel, response).addListener((ChannelFutureListener) future -> { + if (future.isSuccess()) { + LOG.debug("Written HTTP response object successfully. Channel id: {}", channel.id()); + } else { + LOG.error("Error while writing HTTP response object: {}. " + + "Cause: {}, channel id: {}", response, future.cause(), channel.id()); + } + }); + //Initialize one ReduceContext object per channelRead call boolean keepAlive = keepAliveParam || connectionKeepAliveEnabled; ReduceContext reduceContext = new ReduceContext(mapIds, reduceId, ctx, user, mapOutputInfoMap, jobId, keepAlive); @@ -1044,9 +1161,8 @@ public class ShuffleHandler extends AuxiliaryService { * @param reduceContext used to call sendMapOutput with correct params. * @return the ChannelFuture of the sendMapOutput, can be null. 
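// Editorial sketch, not part of this patch: the Netty 4 HTTP message model the
// HADOOP-15327 comments above rely on. With HttpResponseEncoder in the
// pipeline, a DefaultHttpResponse carries only the status line and headers;
// body bytes travel as separate HttpContent objects, and LastHttpContent marks
// the end of the message so a keep-alive client knows the response is complete.
// The channel "ch" and the "payload" byte array below are placeholders.
HttpResponse headersOnly = new DefaultHttpResponse(HTTP_1_1, OK);
ch.write(headersOnly);                                     // status line + headers
ch.write(new DefaultHttpContent(wrappedBuffer(payload)));  // zero or more body chunks
ch.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT);      // message boundary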
*/ - public ChannelFuture sendMap(ReduceContext reduceContext) - throws Exception { - + public ChannelFuture sendMap(ReduceContext reduceContext) { + LOG.trace("Executing sendMap"); ChannelFuture nextMap = null; if (reduceContext.getMapsToSend().get() < reduceContext.getMapIds().size()) { @@ -1059,21 +1175,24 @@ public class ShuffleHandler extends AuxiliaryService { info = getMapOutputInfo(mapId, reduceContext.getReduceId(), reduceContext.getJobId(), reduceContext.getUser()); } + LOG.trace("Calling sendMapOutput"); nextMap = sendMapOutput( reduceContext.getCtx(), - reduceContext.getCtx().getChannel(), + reduceContext.getCtx().channel(), reduceContext.getUser(), mapId, reduceContext.getReduceId(), info); - if (null == nextMap) { + if (nextMap == null) { + //This can only happen if spill file was not found sendError(reduceContext.getCtx(), NOT_FOUND); + LOG.trace("Returning nextMap: null"); return null; } nextMap.addListener(new ReduceMapFileCount(reduceContext)); } catch (IOException e) { if (e instanceof DiskChecker.DiskErrorException) { - LOG.error("Shuffle error :" + e); + LOG.error("Shuffle error: " + e); } else { - LOG.error("Shuffle error :", e); + LOG.error("Shuffle error: ", e); } String errorMessage = getErrorMessage(e); sendError(reduceContext.getCtx(), errorMessage, @@ -1125,8 +1244,7 @@ public class ShuffleHandler extends AuxiliaryService { } } - IndexRecord info = - indexCache.getIndexInformation(mapId, reduce, pathInfo.indexPath, user); + IndexRecord info = indexCache.getIndexInformation(mapId, reduce, pathInfo.indexPath, user); if (LOG.isDebugEnabled()) { LOG.debug("getMapOutputInfo: jobId=" + jobId + ", mapId=" + mapId + @@ -1155,7 +1273,6 @@ public class ShuffleHandler extends AuxiliaryService { outputInfo.indexRecord.rawLength, reduce); DataOutputBuffer dob = new DataOutputBuffer(); header.write(dob); - contentLength += outputInfo.indexRecord.partLength; contentLength += dob.getLength(); } @@ -1183,14 +1300,10 @@ public class ShuffleHandler extends AuxiliaryService { protected void setResponseHeaders(HttpResponse response, boolean keepAliveParam, long contentLength) { if (!connectionKeepAliveEnabled && !keepAliveParam) { - if (LOG.isDebugEnabled()) { - LOG.debug("Setting connection close header..."); - } - response.headers().set(HttpHeader.CONNECTION.asString(), - CONNECTION_CLOSE); + response.headers().set(HttpHeader.CONNECTION.asString(), CONNECTION_CLOSE); } else { response.headers().set(HttpHeader.CONTENT_LENGTH.asString(), - String.valueOf(contentLength)); + String.valueOf(contentLength)); response.headers().set(HttpHeader.CONNECTION.asString(), HttpHeader.KEEP_ALIVE.asString()); response.headers().set(HttpHeader.KEEP_ALIVE.asString(), @@ -1214,29 +1327,29 @@ public class ShuffleHandler extends AuxiliaryService { throws IOException { SecretKey tokenSecret = secretManager.retrieveTokenSecret(appid); if (null == tokenSecret) { - LOG.info("Request for unknown token " + appid); - throw new IOException("could not find jobid"); + LOG.info("Request for unknown token {}, channel id: {}", appid, ctx.channel().id()); + throw new IOException("Could not find jobid"); } - // string to encrypt - String enc_str = SecureShuffleUtils.buildMsgFrom(requestUri); + // encrypting URL + String encryptedURL = SecureShuffleUtils.buildMsgFrom(requestUri); // hash from the fetcher String urlHashStr = request.headers().get(SecureShuffleUtils.HTTP_HEADER_URL_HASH); if (urlHashStr == null) { - LOG.info("Missing header hash for " + appid); + LOG.info("Missing header hash for {}, channel id: {}", 
appid, ctx.channel().id()); throw new IOException("fetcher cannot be authenticated"); } if (LOG.isDebugEnabled()) { int len = urlHashStr.length(); - LOG.debug("verifying request. enc_str=" + enc_str + "; hash=..." + - urlHashStr.substring(len-len/2, len-1)); + LOG.debug("Verifying request. encryptedURL:{}, hash:{}, channel id: " + + "{}", encryptedURL, + urlHashStr.substring(len - len / 2, len - 1), ctx.channel().id()); } // verify - throws exception - SecureShuffleUtils.verifyReply(urlHashStr, enc_str, tokenSecret); + SecureShuffleUtils.verifyReply(urlHashStr, encryptedURL, tokenSecret); // verification passed - encode the reply - String reply = - SecureShuffleUtils.generateHash(urlHashStr.getBytes(Charsets.UTF_8), - tokenSecret); + String reply = SecureShuffleUtils.generateHash(urlHashStr.getBytes(Charsets.UTF_8), + tokenSecret); response.headers().set( SecureShuffleUtils.HTTP_HEADER_REPLY_URL_HASH, reply); // Put shuffle version into http header @@ -1246,8 +1359,10 @@ public class ShuffleHandler extends AuxiliaryService { ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION); if (LOG.isDebugEnabled()) { int len = reply.length(); - LOG.debug("Fetcher request verfied. enc_str=" + enc_str + ";reply=" + - reply.substring(len-len/2, len-1)); + LOG.debug("Fetcher request verified. " + + "encryptedURL: {}, reply: {}, channel id: {}", + encryptedURL, reply.substring(len - len / 2, len - 1), + ctx.channel().id()); } } @@ -1255,27 +1370,27 @@ public class ShuffleHandler extends AuxiliaryService { String user, String mapId, int reduce, MapOutputInfo mapOutputInfo) throws IOException { final IndexRecord info = mapOutputInfo.indexRecord; - final ShuffleHeader header = - new ShuffleHeader(mapId, info.partLength, info.rawLength, reduce); + final ShuffleHeader header = new ShuffleHeader(mapId, info.partLength, info.rawLength, + reduce); final DataOutputBuffer dob = new DataOutputBuffer(); header.write(dob); - ch.write(wrappedBuffer(dob.getData(), 0, dob.getLength())); + writeToChannel(ch, wrappedBuffer(dob.getData(), 0, dob.getLength())); final File spillfile = new File(mapOutputInfo.mapOutputFileName.toString()); RandomAccessFile spill; try { spill = SecureIOUtils.openForRandomRead(spillfile, "r", user, null); } catch (FileNotFoundException e) { - LOG.info(spillfile + " not found"); + LOG.info("{} not found. 
Channel id: {}", spillfile, ctx.channel().id()); return null; } ChannelFuture writeFuture; - if (ch.getPipeline().get(SslHandler.class) == null) { + if (ch.pipeline().get(SslHandler.class) == null) { final FadvisedFileRegion partition = new FadvisedFileRegion(spill, info.startOffset, info.partLength, manageOsCache, readaheadLength, readaheadPool, spillfile.getAbsolutePath(), shuffleBufferSize, shuffleTransferToAllowed); - writeFuture = ch.write(partition); + writeFuture = writeToChannel(ch, partition); writeFuture.addListener(new ChannelFutureListener() { // TODO error handling; distinguish IO/connection failures, // attribute to appropriate spill output @@ -1284,7 +1399,7 @@ public class ShuffleHandler extends AuxiliaryService { if (future.isSuccess()) { partition.transferSuccessful(); } - partition.releaseExternalResources(); + partition.deallocate(); } }); } else { @@ -1293,7 +1408,7 @@ public class ShuffleHandler extends AuxiliaryService { info.startOffset, info.partLength, sslFileBufferSize, manageOsCache, readaheadLength, readaheadPool, spillfile.getAbsolutePath()); - writeFuture = ch.write(chunk); + writeFuture = writeToChannel(ch, chunk); } metrics.shuffleConnections.incr(); metrics.shuffleOutputBytes.incr(info.partLength); // optimistic @@ -1307,12 +1422,13 @@ public class ShuffleHandler extends AuxiliaryService { protected void sendError(ChannelHandlerContext ctx, String message, HttpResponseStatus status) { - sendError(ctx, message, status, Collections.emptyMap()); + sendError(ctx, message, status, Collections.emptyMap()); } protected void sendError(ChannelHandlerContext ctx, String msg, HttpResponseStatus status, Map headers) { - HttpResponse response = new DefaultHttpResponse(HTTP_1_1, status); + FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, status, + Unpooled.copiedBuffer(msg, CharsetUtil.UTF_8)); response.headers().set(CONTENT_TYPE, "text/plain; charset=UTF-8"); // Put shuffle version into http header response.headers().set(ShuffleHeader.HTTP_HEADER_NAME, @@ -1322,48 +1438,45 @@ public class ShuffleHandler extends AuxiliaryService { for (Map.Entry header : headers.entrySet()) { response.headers().set(header.getKey(), header.getValue()); } - response.setContent( - ChannelBuffers.copiedBuffer(msg, CharsetUtil.UTF_8)); // Close the connection as soon as the error message is sent. - ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE); + writeToChannelAndClose(ctx.channel(), response); } @Override - public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) + public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { - Channel ch = e.getChannel(); - Throwable cause = e.getCause(); + Channel ch = ctx.channel(); if (cause instanceof TooLongFrameException) { + LOG.trace("TooLongFrameException, channel id: {}", ch.id()); sendError(ctx, BAD_REQUEST); return; } else if (cause instanceof IOException) { if (cause instanceof ClosedChannelException) { - LOG.debug("Ignoring closed channel error", cause); + LOG.debug("Ignoring closed channel error, channel id: " + ch.id(), cause); return; } String message = String.valueOf(cause.getMessage()); if (IGNORABLE_ERROR_MESSAGE.matcher(message).matches()) { - LOG.debug("Ignoring client socket close", cause); + LOG.debug("Ignoring client socket close, channel id: " + ch.id(), cause); return; } } - LOG.error("Shuffle error: ", cause); - if (ch.isConnected()) { - LOG.error("Shuffle error " + e); + LOG.error("Shuffle error. 
Channel id: " + ch.id(), cause); + if (ch.isActive()) { sendError(ctx, INTERNAL_SERVER_ERROR); } } } - + static class AttemptPathInfo { // TODO Change this over to just store local dir indices, instead of the // entire path. Far more efficient. private final Path indexPath; private final Path dataPath; - public AttemptPathInfo(Path indexPath, Path dataPath) { + AttemptPathInfo(Path indexPath, Path dataPath) { this.indexPath = indexPath; this.dataPath = dataPath; } @@ -1374,7 +1487,7 @@ public class ShuffleHandler extends AuxiliaryService { private final String user; private final String attemptId; - public AttemptPathIdentifier(String jobId, String user, String attemptId) { + AttemptPathIdentifier(String jobId, String user, String attemptId) { this.jobId = jobId; this.user = user; this.attemptId = attemptId; diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestFadvisedFileRegion.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestFadvisedFileRegion.java index 242382e06a0..ce0c0d6aeaf 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestFadvisedFileRegion.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestFadvisedFileRegion.java @@ -104,7 +104,7 @@ public class TestFadvisedFileRegion { Assert.assertEquals(count, targetFile.length()); } finally { if (fileRegion != null) { - fileRegion.releaseExternalResources(); + fileRegion.deallocate(); } IOUtils.cleanupWithLogger(LOG, target); IOUtils.cleanupWithLogger(LOG, targetFile); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java index af3cb87760c..38500032ef3 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java @@ -17,34 +17,65 @@ */ package org.apache.hadoop.mapred; +import io.netty.channel.ChannelFutureListener; +import io.netty.channel.DefaultFileRegion; +import org.apache.hadoop.thirdparty.com.google.common.collect.Maps; +import io.netty.channel.AbstractChannel; +import io.netty.channel.Channel; +import io.netty.channel.ChannelFuture; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelPipeline; +import io.netty.channel.socket.SocketChannel; +import io.netty.handler.codec.http.HttpMethod; +import io.netty.handler.codec.http.HttpRequest; +import io.netty.handler.codec.http.HttpResponse; +import io.netty.handler.codec.http.HttpResponseEncoder; +import io.netty.handler.codec.http.HttpResponseStatus; +import io.netty.handler.timeout.IdleStateEvent; import org.apache.hadoop.test.GenericTestUtils; + +import static io.netty.buffer.Unpooled.wrappedBuffer; +import static java.util.stream.Collectors.toList; import static org.apache.hadoop.test.MetricsAsserts.assertCounter; import static org.apache.hadoop.test.MetricsAsserts.assertGauge; import static org.apache.hadoop.test.MetricsAsserts.getMetrics; 
+import static org.hamcrest.MatcherAssert.assertThat; +import static org.junit.Assert.assertNotEquals; import static org.junit.Assert.assertTrue; -import static org.jboss.netty.buffer.ChannelBuffers.wrappedBuffer; -import static org.jboss.netty.handler.codec.http.HttpResponseStatus.OK; -import static org.jboss.netty.handler.codec.http.HttpVersion.HTTP_1_1; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.fail; import static org.junit.Assume.assumeTrue; import static org.mockito.ArgumentMatchers.anyString; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; +import java.io.ByteArrayOutputStream; import java.io.DataInputStream; import java.io.EOFException; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; +import java.io.InputStream; import java.net.HttpURLConnection; +import java.net.InetSocketAddress; +import java.net.Proxy; +import java.net.Socket; import java.net.URL; import java.net.SocketAddress; +import java.net.URLConnection; import java.nio.ByteBuffer; +import java.nio.channels.ClosedChannelException; +import java.nio.charset.StandardCharsets; +import java.nio.file.Files; import java.util.ArrayList; import java.util.Arrays; +import java.util.Collections; import java.util.List; import java.util.Map; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import java.util.function.Consumer; import java.util.zip.CheckedOutputStream; import java.util.zip.Checksum; @@ -71,6 +102,7 @@ import org.apache.hadoop.security.token.Token; import org.apache.hadoop.service.ServiceStateException; import org.apache.hadoop.util.DiskChecker; import org.apache.hadoop.util.PureJavaCrc32; +import org.apache.hadoop.util.Sets; import org.apache.hadoop.util.StringUtils; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; @@ -79,22 +111,13 @@ import org.apache.hadoop.yarn.server.api.ApplicationTerminationContext; import org.apache.hadoop.yarn.server.api.AuxiliaryLocalPathHandler; import org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer; import org.apache.hadoop.yarn.server.records.Version; -import org.jboss.netty.channel.Channel; -import org.jboss.netty.channel.ChannelFuture; -import org.jboss.netty.channel.ChannelHandlerContext; -import org.jboss.netty.channel.ChannelPipeline; -import org.jboss.netty.channel.socket.SocketChannel; -import org.jboss.netty.channel.MessageEvent; -import org.jboss.netty.channel.AbstractChannel; -import org.jboss.netty.handler.codec.http.DefaultHttpResponse; -import org.jboss.netty.handler.codec.http.HttpRequest; -import org.jboss.netty.handler.codec.http.HttpResponse; -import org.jboss.netty.handler.codec.http.HttpResponseStatus; -import org.jboss.netty.handler.codec.http.HttpMethod; +import org.hamcrest.CoreMatchers; +import org.junit.After; import org.junit.Assert; +import org.junit.Before; +import org.junit.Rule; import org.junit.Test; -import org.mockito.invocation.InvocationOnMock; -import org.mockito.stubbing.Answer; +import org.junit.rules.TestName; import org.mockito.Mockito; import org.eclipse.jetty.http.HttpHeader; import org.slf4j.Logger; @@ -106,10 +129,583 @@ public class TestShuffleHandler { LoggerFactory.getLogger(TestShuffleHandler.class); private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir( TestShuffleHandler.class.getSimpleName() + "LocDir"); + private static final long ATTEMPT_ID = 12345L; + private 
static final long ATTEMPT_ID_2 = 12346L; + private static final HttpResponseStatus OK_STATUS = new HttpResponseStatus(200, "OK"); + + + //Control test execution properties with these flags + private static final boolean DEBUG_MODE = false; + //WARNING: If this is set to true and proxy server is not running, tests will fail! + private static final boolean USE_PROXY = false; + private static final int HEADER_WRITE_COUNT = 100000; + private static final int ARBITRARY_NEGATIVE_TIMEOUT_SECONDS = -100; + private static TestExecution TEST_EXECUTION; + + private static class TestExecution { + private static final int DEFAULT_KEEP_ALIVE_TIMEOUT_SECONDS = 1; + private static final int DEBUG_KEEP_ALIVE_SECONDS = 1000; + private static final int DEFAULT_PORT = 0; //random port + private static final int FIXED_PORT = 8088; + private static final String PROXY_HOST = "127.0.0.1"; + private static final int PROXY_PORT = 8888; + private static final int CONNECTION_DEBUG_TIMEOUT = 1000000; + private final boolean debugMode; + private final boolean useProxy; + + TestExecution(boolean debugMode, boolean useProxy) { + this.debugMode = debugMode; + this.useProxy = useProxy; + } + + int getKeepAliveTimeout() { + if (debugMode) { + return DEBUG_KEEP_ALIVE_SECONDS; + } + return DEFAULT_KEEP_ALIVE_TIMEOUT_SECONDS; + } + + HttpURLConnection openConnection(URL url) throws IOException { + HttpURLConnection conn; + if (useProxy) { + Proxy proxy + = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, PROXY_PORT)); + conn = (HttpURLConnection) url.openConnection(proxy); + } else { + conn = (HttpURLConnection) url.openConnection(); + } + return conn; + } + + int shuffleHandlerPort() { + if (debugMode) { + return FIXED_PORT; + } else { + return DEFAULT_PORT; + } + } + + void parameterizeConnection(URLConnection conn) { + if (DEBUG_MODE) { + conn.setReadTimeout(CONNECTION_DEBUG_TIMEOUT); + conn.setConnectTimeout(CONNECTION_DEBUG_TIMEOUT); + } + } + } + + private static class ResponseConfig { + private final int headerWriteCount; + private final int mapOutputCount; + private final int contentLengthOfOneMapOutput; + private long headerSize; + public long contentLengthOfResponse; + + ResponseConfig(int headerWriteCount, int mapOutputCount, + int contentLengthOfOneMapOutput) { + if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) { + throw new IllegalStateException("mapOutputCount should be at least 1"); + } + this.headerWriteCount = headerWriteCount; + this.mapOutputCount = mapOutputCount; + this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput; + } + + private void setHeaderSize(long headerSize) { + this.headerSize = headerSize; + long contentLengthOfAllHeaders = headerWriteCount * headerSize; + this.contentLengthOfResponse = computeContentLengthOfResponse(contentLengthOfAllHeaders); + LOG.debug("Content-length of all headers: {}", contentLengthOfAllHeaders); + LOG.debug("Content-length of one MapOutput: {}", contentLengthOfOneMapOutput); + LOG.debug("Content-length of final HTTP response: {}", contentLengthOfResponse); + } + + private long computeContentLengthOfResponse(long contentLengthOfAllHeaders) { + int mapOutputCountMultiplier = mapOutputCount; + if (mapOutputCount == 0) { + mapOutputCountMultiplier = 1; + } + return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * mapOutputCountMultiplier; + } + } + + private enum ShuffleUrlType { + SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, WITH_KEEPALIVE_NO_MAP_IDS + } + + private static class InputStreamReadResult { + final String 
asString; + int totalBytesRead; + + InputStreamReadResult(byte[] bytes, int totalBytesRead) { + this.asString = new String(bytes, StandardCharsets.UTF_8); + this.totalBytesRead = totalBytesRead; + } + } + + private static abstract class AdditionalMapOutputSenderOperations { + public abstract ChannelFuture perform(ChannelHandlerContext ctx, Channel ch) throws IOException; + } + + private class ShuffleHandlerForKeepAliveTests extends ShuffleHandler { + final LastSocketAddress lastSocketAddress = new LastSocketAddress(); + final ArrayList failures = new ArrayList<>(); + final ShuffleHeaderProvider shuffleHeaderProvider; + final HeaderPopulator headerPopulator; + MapOutputSender mapOutputSender; + private Consumer channelIdleCallback; + private CustomTimeoutHandler customTimeoutHandler; + private boolean failImmediatelyOnErrors = false; + private boolean closeChannelOnError = true; + private ResponseConfig responseConfig; + + ShuffleHandlerForKeepAliveTests(long attemptId, ResponseConfig responseConfig, + Consumer channelIdleCallback) throws IOException { + this(attemptId, responseConfig); + this.channelIdleCallback = channelIdleCallback; + } + + ShuffleHandlerForKeepAliveTests(long attemptId, ResponseConfig responseConfig) + throws IOException { + this.responseConfig = responseConfig; + this.shuffleHeaderProvider = new ShuffleHeaderProvider(attemptId); + this.responseConfig.setHeaderSize(shuffleHeaderProvider.getShuffleHeaderSize()); + this.headerPopulator = new HeaderPopulator(this, responseConfig, shuffleHeaderProvider, true); + this.mapOutputSender = new MapOutputSender(responseConfig, lastSocketAddress, + shuffleHeaderProvider); + setUseOutboundExceptionHandler(true); + } + + public void setFailImmediatelyOnErrors(boolean failImmediatelyOnErrors) { + this.failImmediatelyOnErrors = failImmediatelyOnErrors; + } + + public void setCloseChannelOnError(boolean closeChannelOnError) { + this.closeChannelOnError = closeChannelOnError; + } + + @Override + protected Shuffle getShuffle(final Configuration conf) { + // replace the shuffle handler with one stubbed for testing + return new Shuffle(conf) { + @Override + protected MapOutputInfo getMapOutputInfo(String mapId, int reduce, + String jobId, String user) { + return null; + } + @Override + protected void verifyRequest(String appid, ChannelHandlerContext ctx, + HttpRequest request, HttpResponse response, URL requestUri) { + } + + @Override + protected void populateHeaders(List mapIds, String jobId, + String user, int reduce, HttpRequest request, + HttpResponse response, boolean keepAliveParam, + Map infoMap) throws IOException { + long contentLength = headerPopulator.populateHeaders( + keepAliveParam); + super.setResponseHeaders(response, keepAliveParam, contentLength); + } + + @Override + protected ChannelFuture sendMapOutput(ChannelHandlerContext ctx, + Channel ch, String user, String mapId, int reduce, + MapOutputInfo info) throws IOException { + return mapOutputSender.send(ctx, ch); + } + + @Override + public void channelActive(ChannelHandlerContext ctx) throws Exception { + ctx.pipeline().replace(HttpResponseEncoder.class, ENCODER_HANDLER_NAME, + new LoggingHttpResponseEncoder(false)); + replaceTimeoutHandlerWithCustom(ctx); + LOG.debug("Modified pipeline: {}", ctx.pipeline()); + super.channelActive(ctx); + } + + private void replaceTimeoutHandlerWithCustom(ChannelHandlerContext ctx) { + TimeoutHandler oldTimeoutHandler = + (TimeoutHandler)ctx.pipeline().get(TIMEOUT_HANDLER); + int timeoutValue = + 
oldTimeoutHandler.getConnectionKeepAliveTimeOut(); + customTimeoutHandler = new CustomTimeoutHandler(timeoutValue, channelIdleCallback); + ctx.pipeline().replace(TIMEOUT_HANDLER, TIMEOUT_HANDLER, customTimeoutHandler); + } + + @Override + protected void sendError(ChannelHandlerContext ctx, + HttpResponseStatus status) { + String message = "Error while processing request. Status: " + status; + handleError(ctx, message); + if (failImmediatelyOnErrors) { + stop(); + } + } + + @Override + protected void sendError(ChannelHandlerContext ctx, String message, + HttpResponseStatus status) { + String errMessage = String.format("Error while processing request. " + + "Status: " + + "%s, message: %s", status, message); + handleError(ctx, errMessage); + if (failImmediatelyOnErrors) { + stop(); + } + } + }; + } + + private void handleError(ChannelHandlerContext ctx, String message) { + LOG.error(message); + failures.add(new Error(message)); + if (closeChannelOnError) { + LOG.warn("sendError: Closing channel"); + ctx.channel().close(); + } + } + + private class CustomTimeoutHandler extends TimeoutHandler { + private boolean channelIdle = false; + private final Consumer channelIdleCallback; + + CustomTimeoutHandler(int connectionKeepAliveTimeOut, + Consumer channelIdleCallback) { + super(connectionKeepAliveTimeOut); + this.channelIdleCallback = channelIdleCallback; + } + + @Override + public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) { + LOG.debug("Channel idle"); + this.channelIdle = true; + if (channelIdleCallback != null) { + LOG.debug("Calling channel idle callback.."); + channelIdleCallback.accept(e); + } + super.channelIdle(ctx, e); + } + } + } + + private static class MapOutputSender { + private final ResponseConfig responseConfig; + private final LastSocketAddress lastSocketAddress; + private final ShuffleHeaderProvider shuffleHeaderProvider; + private AdditionalMapOutputSenderOperations additionalMapOutputSenderOperations; + + MapOutputSender(ResponseConfig responseConfig, LastSocketAddress lastSocketAddress, + ShuffleHeaderProvider shuffleHeaderProvider) { + this.responseConfig = responseConfig; + this.lastSocketAddress = lastSocketAddress; + this.shuffleHeaderProvider = shuffleHeaderProvider; + } + + public ChannelFuture send(ChannelHandlerContext ctx, Channel ch) throws IOException { + LOG.debug("In MapOutputSender#send"); + lastSocketAddress.setAddress(ch.remoteAddress()); + ShuffleHeader header = shuffleHeaderProvider.createNewShuffleHeader(); + ChannelFuture future = writeHeaderNTimes(ch, header, responseConfig.headerWriteCount); + // This is the last operation + // It's safe to increment ShuffleHeader counter for better identification + shuffleHeaderProvider.incrementCounter(); + if (additionalMapOutputSenderOperations != null) { + return additionalMapOutputSenderOperations.perform(ctx, ch); + } + return future; + } + + private ChannelFuture writeHeaderNTimes(Channel ch, ShuffleHeader header, int iterations) + throws IOException { + DataOutputBuffer dob = new DataOutputBuffer(); + for (int i = 0; i < iterations; ++i) { + header.write(dob); + } + LOG.debug("MapOutputSender#writeHeaderNTimes WriteAndFlush big chunk of data, " + + "outputBufferSize: " + dob.size()); + return ch.writeAndFlush(wrappedBuffer(dob.getData(), 0, dob.getLength())); + } + } + + private static class ShuffleHeaderProvider { + private final long attemptId; + private int attemptCounter = 0; + private int cachedSize = Integer.MIN_VALUE; + + ShuffleHeaderProvider(long attemptId) { + this.attemptId = 
attemptId;
+    }
+
+    ShuffleHeader createNewShuffleHeader() {
+      return new ShuffleHeader(String.format("attempt_%s_1_m_1_0%s", attemptId, attemptCounter),
+          5678, 5678, 1);
+    }
+
+    void incrementCounter() {
+      attemptCounter++;
+    }
+
+    private int getShuffleHeaderSize() throws IOException {
+      if (cachedSize != Integer.MIN_VALUE) {
+        return cachedSize;
+      }
+      DataOutputBuffer dob = new DataOutputBuffer();
+      ShuffleHeader header = createNewShuffleHeader();
+      header.write(dob);
+      cachedSize = dob.size();
+      return cachedSize;
+    }
+  }
+
+  private static class HeaderPopulator {
+    private final ShuffleHandler shuffleHandler;
+    private final boolean disableKeepAliveConfig;
+    private final ShuffleHeaderProvider shuffleHeaderProvider;
+    private final ResponseConfig responseConfig;
+
+    HeaderPopulator(ShuffleHandler shuffleHandler,
+        ResponseConfig responseConfig,
+        ShuffleHeaderProvider shuffleHeaderProvider,
+        boolean disableKeepAliveConfig) {
+      this.shuffleHandler = shuffleHandler;
+      this.responseConfig = responseConfig;
+      this.disableKeepAliveConfig = disableKeepAliveConfig;
+      this.shuffleHeaderProvider = shuffleHeaderProvider;
+    }
+
+    public long populateHeaders(boolean keepAliveParam) throws IOException {
+      // Send some dummy data (populate content length details)
+      DataOutputBuffer dob = new DataOutputBuffer();
+      for (int i = 0; i < responseConfig.headerWriteCount; ++i) {
+        ShuffleHeader header =
+            shuffleHeaderProvider.createNewShuffleHeader();
+        header.write(dob);
+      }
+      // For testing purposes:
+      // disable connectionKeepAliveEnabled if keepAliveParam is available
+      if (keepAliveParam && disableKeepAliveConfig) {
+        shuffleHandler.connectionKeepAliveEnabled = false;
+      }
+      return responseConfig.contentLengthOfResponse;
+    }
+  }
+
+  private static final class HttpConnectionData {
+    private final Map<String, List<String>> headers;
+    private HttpURLConnection conn;
+    private final int payloadLength;
+    private final SocketAddress socket;
+    private int responseCode = -1;
+
+    private HttpConnectionData(HttpURLConnection conn, int payloadLength,
+        SocketAddress socket) {
+      this.headers = conn.getHeaderFields();
+      this.conn = conn;
+      this.payloadLength = payloadLength;
+      this.socket = socket;
+      try {
+        this.responseCode = conn.getResponseCode();
+      } catch (IOException e) {
+        fail("Failed to read response code from connection: " + conn);
+      }
+    }
+
+    static HttpConnectionData create(HttpURLConnection conn, int payloadLength,
+        SocketAddress socket) {
+      return new HttpConnectionData(conn, payloadLength, socket);
+    }
+  }
+
+  private static final class HttpConnectionAssert {
+    private final HttpConnectionData connData;
+
+    private HttpConnectionAssert(HttpConnectionData connData) {
+      this.connData = connData;
+    }
+
+    static HttpConnectionAssert create(HttpConnectionData connData) {
+      return new HttpConnectionAssert(connData);
+    }
+
+    public static void assertKeepAliveConnectionsAreSame(
+        HttpConnectionHelper httpConnectionHelper) {
+      assertTrue("At least two connection data entries " +
+          "are required to perform this assertion",
+          httpConnectionHelper.connectionData.size() >= 2);
+      SocketAddress firstAddress = httpConnectionHelper.getConnectionData(0).socket;
+      SocketAddress secondAddress = httpConnectionHelper.getConnectionData(1).socket;
+      Assert.assertNotNull("Initial shuffle address should not be null",
+          firstAddress);
+      Assert.assertNotNull("Keep-Alive shuffle address should not be null",
+          secondAddress);
+      assertEquals("Initial shuffle address and keep-alive shuffle " +
+          "address should be the same", firstAddress, secondAddress);
+    }
+
+    public HttpConnectionAssert expectKeepAliveWithTimeout(long timeout) {
+      assertEquals(HttpURLConnection.HTTP_OK, connData.responseCode);
+      assertHeaderValue(HttpHeader.CONNECTION, HttpHeader.KEEP_ALIVE.asString());
+      assertHeaderValue(HttpHeader.KEEP_ALIVE, "timeout=" + timeout);
+      return this;
+    }
+
+    public HttpConnectionAssert expectBadRequest(long timeout) {
+      assertEquals(HttpURLConnection.HTTP_BAD_REQUEST, connData.responseCode);
+      assertHeaderValue(HttpHeader.CONNECTION, HttpHeader.KEEP_ALIVE.asString());
+      assertHeaderValue(HttpHeader.KEEP_ALIVE, "timeout=" + timeout);
+      return this;
+    }
+
+    public HttpConnectionAssert expectResponseContentLength(long size) {
+      assertEquals(size, connData.payloadLength);
+      return this;
+    }
+
+    private void assertHeaderValue(HttpHeader header, String expectedValue) {
+      List<String> headerList = connData.headers.get(header.asString());
+      Assert.assertNotNull("Got null header value for header: " + header, headerList);
+      Assert.assertFalse("Got empty header value for header: " + header, headerList.isEmpty());
+      assertEquals("Unexpected size of header list for header: " + header, 1,
+          headerList.size());
+      assertEquals(expectedValue, headerList.get(0));
+    }
+  }
+
+  private static class HttpConnectionHelper {
+    private final LastSocketAddress lastSocketAddress;
+    List<HttpConnectionData> connectionData = new ArrayList<>();
+
+    HttpConnectionHelper(LastSocketAddress lastSocketAddress) {
+      this.lastSocketAddress = lastSocketAddress;
+    }
+
+    public void connectToUrls(String[] urls, ResponseConfig responseConfig) throws IOException {
+      connectToUrlsInternal(urls, responseConfig, HttpURLConnection.HTTP_OK);
+    }
+
+    public void connectToUrls(String[] urls, ResponseConfig responseConfig, int expectedHttpStatus)
+        throws IOException {
+      connectToUrlsInternal(urls, responseConfig, expectedHttpStatus);
+    }
+
+    private void connectToUrlsInternal(String[] urls, ResponseConfig responseConfig,
+        int expectedHttpStatus) throws IOException {
+      int requests = urls.length;
+      int expectedConnections = urls.length;
+      LOG.debug("Will connect to URLs: {}", Arrays.toString(urls));
+      for (int reqIdx = 0; reqIdx < requests; reqIdx++) {
+        String urlString = urls[reqIdx];
+        LOG.debug("Connecting to URL: {}", urlString);
+        URL url = new URL(urlString);
+        HttpURLConnection conn = TEST_EXECUTION.openConnection(url);
+        conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_NAME,
+            ShuffleHeader.DEFAULT_HTTP_HEADER_NAME);
+        conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_VERSION,
+            ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION);
+        TEST_EXECUTION.parameterizeConnection(conn);
+        conn.connect();
+        if (expectedHttpStatus == HttpURLConnection.HTTP_BAD_REQUEST) {
+          //Catch the exception here, as errors are handled by the overridden sendError method.
+          //Caught errors will be validated later.
+          try {
+            DataInputStream input = new DataInputStream(conn.getInputStream());
+          } catch (Exception e) {
+            expectedConnections--;
+            continue;
+          }
+        }
+        DataInputStream input = new DataInputStream(conn.getInputStream());
+        LOG.debug("Opened DataInputStream for connection: {}/{}", (reqIdx + 1), requests);
+        ShuffleHeader header = new ShuffleHeader();
+        header.readFields(input);
+        InputStreamReadResult result = readDataFromInputStream(input);
+        result.totalBytesRead += responseConfig.headerSize;
+        int expectedContentLength =
+            Integer.parseInt(conn.getHeaderField(HttpHeader.CONTENT_LENGTH.asString()));
+
+        if (result.totalBytesRead != expectedContentLength) {
+          throw new IOException(String.format("Premature EOF InputStream. 
" + + "Expected content-length: %s, " + + "Actual content-length: %s", expectedContentLength, result.totalBytesRead)); + } + connectionData.add(HttpConnectionData + .create(conn, result.totalBytesRead, lastSocketAddress.getSocketAddres())); + input.close(); + LOG.debug("Finished all interactions with URL: {}. Progress: {}/{}", url, (reqIdx + 1), + requests); + } + assertEquals(expectedConnections, connectionData.size()); + } + + void validate(Consumer connDataValidator) { + for (int i = 0; i < connectionData.size(); i++) { + LOG.debug("Validating connection data #{}", (i + 1)); + HttpConnectionData connData = connectionData.get(i); + connDataValidator.accept(connData); + } + } + + HttpConnectionData getConnectionData(int i) { + return connectionData.get(i); + } + + private static InputStreamReadResult readDataFromInputStream( + InputStream input) throws IOException { + ByteArrayOutputStream dataStream = new ByteArrayOutputStream(); + byte[] buffer = new byte[1024]; + int bytesRead; + int totalBytesRead = 0; + while ((bytesRead = input.read(buffer)) != -1) { + dataStream.write(buffer, 0, bytesRead); + totalBytesRead += bytesRead; + } + LOG.debug("Read total bytes: " + totalBytesRead); + dataStream.flush(); + return new InputStreamReadResult(dataStream.toByteArray(), totalBytesRead); + } + } + + class ShuffleHandlerForTests extends ShuffleHandler { + public final ArrayList failures = new ArrayList<>(); + + ShuffleHandlerForTests() { + setUseOutboundExceptionHandler(true); + } + + ShuffleHandlerForTests(MetricsSystem ms) { + super(ms); + setUseOutboundExceptionHandler(true); + } + + @Override + protected Shuffle getShuffle(final Configuration conf) { + return new Shuffle(conf) { + @Override + public void exceptionCaught(ChannelHandlerContext ctx, + Throwable cause) throws Exception { + LOG.debug("ExceptionCaught"); + failures.add(cause); + super.exceptionCaught(ctx, cause); + } + }; + } + } class MockShuffleHandler extends org.apache.hadoop.mapred.ShuffleHandler { - private AuxiliaryLocalPathHandler pathHandler = + final ArrayList failures = new ArrayList<>(); + + private final AuxiliaryLocalPathHandler pathHandler = new TestAuxiliaryLocalPathHandler(); + + MockShuffleHandler() { + setUseOutboundExceptionHandler(true); + } + + MockShuffleHandler(MetricsSystem ms) { + super(ms); + setUseOutboundExceptionHandler(true); + } + @Override protected Shuffle getShuffle(final Configuration conf) { return new Shuffle(conf) { @@ -120,7 +716,7 @@ public class TestShuffleHandler { } @Override protected MapOutputInfo getMapOutputInfo(String mapId, int reduce, - String jobId, String user) throws IOException { + String jobId, String user) { // Do nothing. return null; } @@ -128,7 +724,7 @@ public class TestShuffleHandler { protected void populateHeaders(List mapIds, String jobId, String user, int reduce, HttpRequest request, HttpResponse response, boolean keepAliveParam, - Map infoMap) throws IOException { + Map infoMap) { // Do nothing. 
} @Override @@ -140,12 +736,20 @@ public class TestShuffleHandler { new ShuffleHeader("attempt_12345_1_m_1_0", 5678, 5678, 1); DataOutputBuffer dob = new DataOutputBuffer(); header.write(dob); - ch.write(wrappedBuffer(dob.getData(), 0, dob.getLength())); + ch.writeAndFlush(wrappedBuffer(dob.getData(), 0, dob.getLength())); dob = new DataOutputBuffer(); for (int i = 0; i < 100; ++i) { header.write(dob); } - return ch.write(wrappedBuffer(dob.getData(), 0, dob.getLength())); + return ch.writeAndFlush(wrappedBuffer(dob.getData(), 0, dob.getLength())); + } + + @Override + public void exceptionCaught(ChannelHandlerContext ctx, + Throwable cause) throws Exception { + LOG.debug("ExceptionCaught"); + failures.add(cause); + super.exceptionCaught(ctx, cause); } }; } @@ -159,24 +763,22 @@ public class TestShuffleHandler { private class TestAuxiliaryLocalPathHandler implements AuxiliaryLocalPathHandler { @Override - public Path getLocalPathForRead(String path) throws IOException { + public Path getLocalPathForRead(String path) { return new Path(ABS_LOG_DIR.getAbsolutePath(), path); } @Override - public Path getLocalPathForWrite(String path) throws IOException { + public Path getLocalPathForWrite(String path) { return new Path(ABS_LOG_DIR.getAbsolutePath()); } @Override - public Path getLocalPathForWrite(String path, long size) - throws IOException { + public Path getLocalPathForWrite(String path, long size) { return new Path(ABS_LOG_DIR.getAbsolutePath()); } @Override - public Iterable getAllLocalPathsForRead(String path) - throws IOException { + public Iterable getAllLocalPathsForRead(String path) { ArrayList paths = new ArrayList<>(); paths.add(new Path(ABS_LOG_DIR.getAbsolutePath())); return paths; @@ -185,16 +787,34 @@ public class TestShuffleHandler { private static class MockShuffleHandler2 extends org.apache.hadoop.mapred.ShuffleHandler { + final ArrayList failures = new ArrayList<>(1); boolean socketKeepAlive = false; + + MockShuffleHandler2() { + setUseOutboundExceptionHandler(true); + } + + MockShuffleHandler2(MetricsSystem ms) { + super(ms); + setUseOutboundExceptionHandler(true); + } + @Override protected Shuffle getShuffle(final Configuration conf) { return new Shuffle(conf) { @Override protected void verifyRequest(String appid, ChannelHandlerContext ctx, - HttpRequest request, HttpResponse response, URL requestUri) - throws IOException { - SocketChannel channel = (SocketChannel)(ctx.getChannel()); - socketKeepAlive = channel.getConfig().isKeepAlive(); + HttpRequest request, HttpResponse response, URL requestUri) { + SocketChannel channel = (SocketChannel)(ctx.channel()); + socketKeepAlive = channel.config().isKeepAlive(); + } + + @Override + public void exceptionCaught(ChannelHandlerContext ctx, + Throwable cause) throws Exception { + LOG.debug("ExceptionCaught"); + failures.add(cause); + super.exceptionCaught(ctx, cause); } }; } @@ -204,6 +824,38 @@ public class TestShuffleHandler { } } + @Rule + public TestName name = new TestName(); + + @Before + public void setup() { + TEST_EXECUTION = new TestExecution(DEBUG_MODE, USE_PROXY); + } + + @After + public void tearDown() { + int port = TEST_EXECUTION.shuffleHandlerPort(); + if (isPortUsed(port)) { + String msg = String.format("Port is being used: %d. 
" + + "Current testcase name: %s", + port, name.getMethodName()); + throw new IllegalStateException(msg); + } + } + + private static boolean isPortUsed(int port) { + if (port == 0) { + //Don't check if port is 0 + return false; + } + try (Socket ignored = new Socket("localhost", port)) { + return true; + } catch (IOException e) { + LOG.error("Port: {}, port check result: {}", port, e.getMessage()); + return false; + } + } + /** * Test the validation of ShuffleHandler's meta-data's serialization and * de-serialization. @@ -228,21 +880,23 @@ public class TestShuffleHandler { @Test (timeout = 10000) public void testShuffleMetrics() throws Exception { MetricsSystem ms = new MetricsSystemImpl(); - ShuffleHandler sh = new ShuffleHandler(ms); + ShuffleHandler sh = new ShuffleHandlerForTests(ms); ChannelFuture cf = mock(ChannelFuture.class); when(cf.isSuccess()).thenReturn(true).thenReturn(false); sh.metrics.shuffleConnections.incr(); - sh.metrics.shuffleOutputBytes.incr(1*MiB); + sh.metrics.shuffleOutputBytes.incr(MiB); sh.metrics.shuffleConnections.incr(); sh.metrics.shuffleOutputBytes.incr(2*MiB); - checkShuffleMetrics(ms, 3*MiB, 0 , 0, 2); + checkShuffleMetrics(ms, 3*MiB, 0, 0, 2); sh.metrics.operationComplete(cf); sh.metrics.operationComplete(cf); checkShuffleMetrics(ms, 3*MiB, 1, 1, 0); + + sh.stop(); } static void checkShuffleMetrics(MetricsSystem ms, long bytes, int failed, @@ -262,57 +916,54 @@ public class TestShuffleHandler { */ @Test (timeout = 10000) public void testClientClosesConnection() throws Exception { - final ArrayList failures = new ArrayList(1); Configuration conf = new Configuration(); - conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, 0); - ShuffleHandler shuffleHandler = new ShuffleHandler() { + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); + ShuffleHandlerForTests shuffleHandler = new ShuffleHandlerForTests() { + @Override protected Shuffle getShuffle(Configuration conf) { // replace the shuffle handler with one stubbed for testing return new Shuffle(conf) { @Override protected MapOutputInfo getMapOutputInfo(String mapId, int reduce, - String jobId, String user) throws IOException { + String jobId, String user) { return null; } @Override protected void populateHeaders(List mapIds, String jobId, String user, int reduce, HttpRequest request, HttpResponse response, boolean keepAliveParam, - Map infoMap) throws IOException { + Map infoMap) { // Only set response headers and skip everything else // send some dummy value for content-length super.setResponseHeaders(response, keepAliveParam, 100); } @Override protected void verifyRequest(String appid, ChannelHandlerContext ctx, - HttpRequest request, HttpResponse response, URL requestUri) - throws IOException { + HttpRequest request, HttpResponse response, URL requestUri) { } @Override protected ChannelFuture sendMapOutput(ChannelHandlerContext ctx, Channel ch, String user, String mapId, int reduce, MapOutputInfo info) throws IOException { - // send a shuffle header and a lot of data down the channel - // to trigger a broken pipe ShuffleHeader header = new ShuffleHeader("attempt_12345_1_m_1_0", 5678, 5678, 1); DataOutputBuffer dob = new DataOutputBuffer(); header.write(dob); - ch.write(wrappedBuffer(dob.getData(), 0, dob.getLength())); + ch.writeAndFlush(wrappedBuffer(dob.getData(), 0, dob.getLength())); dob = new DataOutputBuffer(); for (int i = 0; i < 100000; ++i) { header.write(dob); } - return ch.write(wrappedBuffer(dob.getData(), 0, dob.getLength())); + return 
ch.writeAndFlush(wrappedBuffer(dob.getData(), 0, dob.getLength())); } @Override protected void sendError(ChannelHandlerContext ctx, HttpResponseStatus status) { if (failures.size() == 0) { failures.add(new Error()); - ctx.getChannel().close(); + ctx.channel().close(); } } @Override @@ -320,7 +971,7 @@ public class TestShuffleHandler { HttpResponseStatus status) { if (failures.size() == 0) { failures.add(new Error()); - ctx.getChannel().close(); + ctx.channel().close(); } } }; @@ -332,26 +983,30 @@ public class TestShuffleHandler { // simulate a reducer that closes early by reading a single shuffle header // then closing the connection URL url = new URL("http://127.0.0.1:" - + shuffleHandler.getConfig().get(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY) - + "/mapOutput?job=job_12345_1&reduce=1&map=attempt_12345_1_m_1_0"); - HttpURLConnection conn = (HttpURLConnection)url.openConnection(); + + shuffleHandler.getConfig().get(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY) + + "/mapOutput?job=job_12345_1&reduce=1&map=attempt_12345_1_m_1_0"); + HttpURLConnection conn = TEST_EXECUTION.openConnection(url); conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_NAME, ShuffleHeader.DEFAULT_HTTP_HEADER_NAME); conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_VERSION, ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION); conn.connect(); DataInputStream input = new DataInputStream(conn.getInputStream()); - Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode()); - Assert.assertEquals("close", + assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode()); + assertEquals("close", conn.getHeaderField(HttpHeader.CONNECTION.asString())); ShuffleHeader header = new ShuffleHeader(); header.readFields(input); input.close(); + assertEquals("sendError called when client closed connection", 0, + shuffleHandler.failures.size()); + assertEquals("Should have no caught exceptions", Collections.emptyList(), + shuffleHandler.failures); + shuffleHandler.stop(); - Assert.assertTrue("sendError called when client closed connection", - failures.size() == 0); } + static class LastSocketAddress { SocketAddress lastAddress; void setAddress(SocketAddress lastAddress) { @@ -363,161 +1018,180 @@ public class TestShuffleHandler { } @Test(timeout = 10000) - public void testKeepAlive() throws Exception { - final ArrayList failures = new ArrayList(1); + public void testKeepAliveInitiallyEnabled() throws Exception { Configuration conf = new Configuration(); - conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, 0); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); conf.setBoolean(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_ENABLED, true); - // try setting to -ve keep alive timeout. 
- conf.setInt(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_TIME_OUT, -100); - final LastSocketAddress lastSocketAddress = new LastSocketAddress(); + conf.setInt(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_TIME_OUT, + TEST_EXECUTION.getKeepAliveTimeout()); + ResponseConfig responseConfig = new ResponseConfig(HEADER_WRITE_COUNT, 0, 0); + ShuffleHandlerForKeepAliveTests shuffleHandler = new ShuffleHandlerForKeepAliveTests( + ATTEMPT_ID, responseConfig); + testKeepAliveWithHttpOk(conf, shuffleHandler, ShuffleUrlType.SIMPLE, + ShuffleUrlType.WITH_KEEPALIVE); + } - ShuffleHandler shuffleHandler = new ShuffleHandler() { - @Override - protected Shuffle getShuffle(final Configuration conf) { - // replace the shuffle handler with one stubbed for testing - return new Shuffle(conf) { + @Test(timeout = 1000000) + public void testKeepAliveInitiallyEnabledTwoKeepAliveUrls() throws Exception { + Configuration conf = new Configuration(); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); + conf.setBoolean(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_ENABLED, true); + conf.setInt(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_TIME_OUT, + TEST_EXECUTION.getKeepAliveTimeout()); + ResponseConfig responseConfig = new ResponseConfig(HEADER_WRITE_COUNT, 0, 0); + ShuffleHandlerForKeepAliveTests shuffleHandler = new ShuffleHandlerForKeepAliveTests( + ATTEMPT_ID, responseConfig); + testKeepAliveWithHttpOk(conf, shuffleHandler, ShuffleUrlType.WITH_KEEPALIVE, + ShuffleUrlType.WITH_KEEPALIVE); + } + + //TODO snemeth implement keepalive test that used properly mocked ShuffleHandler + @Test(timeout = 10000) + public void testKeepAliveInitiallyDisabled() throws Exception { + Configuration conf = new Configuration(); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); + conf.setBoolean(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_ENABLED, false); + conf.setInt(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_TIME_OUT, + TEST_EXECUTION.getKeepAliveTimeout()); + ResponseConfig responseConfig = new ResponseConfig(HEADER_WRITE_COUNT, 0, 0); + ShuffleHandlerForKeepAliveTests shuffleHandler = new ShuffleHandlerForKeepAliveTests( + ATTEMPT_ID, responseConfig); + testKeepAliveWithHttpOk(conf, shuffleHandler, ShuffleUrlType.WITH_KEEPALIVE, + ShuffleUrlType.WITH_KEEPALIVE); + } + + @Test(timeout = 10000) + public void testKeepAliveMultipleMapAttemptIds() throws Exception { + final int mapOutputContentLength = 11; + final int mapOutputCount = 2; + + Configuration conf = new Configuration(); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); + conf.setBoolean(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_ENABLED, true); + conf.setInt(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_TIME_OUT, + TEST_EXECUTION.getKeepAliveTimeout()); + ResponseConfig responseConfig = new ResponseConfig(HEADER_WRITE_COUNT, + mapOutputCount, mapOutputContentLength); + ShuffleHandlerForKeepAliveTests shuffleHandler = new ShuffleHandlerForKeepAliveTests( + ATTEMPT_ID, responseConfig); + shuffleHandler.mapOutputSender.additionalMapOutputSenderOperations = + new AdditionalMapOutputSenderOperations() { @Override - protected MapOutputInfo getMapOutputInfo(String mapId, int reduce, - String jobId, String user) throws IOException { - return null; - } - @Override - protected void verifyRequest(String appid, ChannelHandlerContext ctx, - HttpRequest request, HttpResponse response, URL requestUri) - throws IOException { - } - - @Override - protected void 
populateHeaders(List mapIds, String jobId, - String user, int reduce, HttpRequest request, - HttpResponse response, boolean keepAliveParam, - Map infoMap) throws IOException { - // Send some dummy data (populate content length details) - ShuffleHeader header = - new ShuffleHeader("attempt_12345_1_m_1_0", 5678, 5678, 1); - DataOutputBuffer dob = new DataOutputBuffer(); - header.write(dob); - dob = new DataOutputBuffer(); - for (int i = 0; i < 100000; ++i) { - header.write(dob); - } - - long contentLength = dob.getLength(); - // for testing purpose; - // disable connectinKeepAliveEnabled if keepAliveParam is available - if (keepAliveParam) { - connectionKeepAliveEnabled = false; - } - - super.setResponseHeaders(response, keepAliveParam, contentLength); - } - - @Override - protected ChannelFuture sendMapOutput(ChannelHandlerContext ctx, - Channel ch, String user, String mapId, int reduce, - MapOutputInfo info) throws IOException { - lastSocketAddress.setAddress(ch.getRemoteAddress()); - HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK); - - // send a shuffle header and a lot of data down the channel - // to trigger a broken pipe - ShuffleHeader header = - new ShuffleHeader("attempt_12345_1_m_1_0", 5678, 5678, 1); - DataOutputBuffer dob = new DataOutputBuffer(); - header.write(dob); - ch.write(wrappedBuffer(dob.getData(), 0, dob.getLength())); - dob = new DataOutputBuffer(); - for (int i = 0; i < 100000; ++i) { - header.write(dob); - } - return ch.write(wrappedBuffer(dob.getData(), 0, dob.getLength())); - } - - @Override - protected void sendError(ChannelHandlerContext ctx, - HttpResponseStatus status) { - if (failures.size() == 0) { - failures.add(new Error()); - ctx.getChannel().close(); - } - } - - @Override - protected void sendError(ChannelHandlerContext ctx, String message, - HttpResponseStatus status) { - if (failures.size() == 0) { - failures.add(new Error()); - ctx.getChannel().close(); - } + public ChannelFuture perform(ChannelHandlerContext ctx, Channel ch) throws IOException { + File tmpFile = File.createTempFile("test", ".tmp"); + Files.write(tmpFile.toPath(), + "dummytestcontent123456".getBytes(StandardCharsets.UTF_8)); + final DefaultFileRegion partition = new DefaultFileRegion(tmpFile, 0, + mapOutputContentLength); + LOG.debug("Writing response partition: {}, channel: {}", + partition, ch.id()); + return ch.writeAndFlush(partition) + .addListener((ChannelFutureListener) future -> + LOG.debug("Finished Writing response partition: {}, channel: " + + "{}", partition, ch.id())); } }; - } - }; + testKeepAliveWithHttpOk(conf, shuffleHandler, + ShuffleUrlType.WITH_KEEPALIVE_MULTIPLE_MAP_IDS, + ShuffleUrlType.WITH_KEEPALIVE_MULTIPLE_MAP_IDS); + } + + @Test(timeout = 10000) + public void testKeepAliveWithoutMapAttemptIds() throws Exception { + Configuration conf = new Configuration(); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); + conf.setBoolean(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_ENABLED, true); + conf.setInt(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_TIME_OUT, + TEST_EXECUTION.getKeepAliveTimeout()); + ResponseConfig responseConfig = new ResponseConfig(HEADER_WRITE_COUNT, 0, 0); + ShuffleHandlerForKeepAliveTests shuffleHandler = new ShuffleHandlerForKeepAliveTests( + ATTEMPT_ID, responseConfig); + shuffleHandler.setFailImmediatelyOnErrors(true); + //Closing channels caused Netty to open another channel + // so 1 request was handled with 2 separate channels, + // ultimately generating 2 * HTTP 400 errors. 
+ // We'd like to avoid this so disabling closing the channel here. + shuffleHandler.setCloseChannelOnError(false); + testKeepAliveWithHttpBadRequest(conf, shuffleHandler, ShuffleUrlType.WITH_KEEPALIVE_NO_MAP_IDS); + } + + private void testKeepAliveWithHttpOk( + Configuration conf, + ShuffleHandlerForKeepAliveTests shuffleHandler, + ShuffleUrlType... shuffleUrlTypes) throws IOException { + testKeepAliveWithHttpStatus(conf, shuffleHandler, shuffleUrlTypes, HttpURLConnection.HTTP_OK); + } + + private void testKeepAliveWithHttpBadRequest( + Configuration conf, + ShuffleHandlerForKeepAliveTests shuffleHandler, + ShuffleUrlType... shuffleUrlTypes) throws IOException { + testKeepAliveWithHttpStatus(conf, shuffleHandler, shuffleUrlTypes, + HttpURLConnection.HTTP_BAD_REQUEST); + } + + private void testKeepAliveWithHttpStatus(Configuration conf, + ShuffleHandlerForKeepAliveTests shuffleHandler, + ShuffleUrlType[] shuffleUrlTypes, + int expectedHttpStatus) throws IOException { + if (expectedHttpStatus != HttpURLConnection.HTTP_BAD_REQUEST) { + assertTrue("Expected at least two shuffle URL types ", + shuffleUrlTypes.length >= 2); + } shuffleHandler.init(conf); shuffleHandler.start(); - String shuffleBaseURL = "http://127.0.0.1:" - + shuffleHandler.getConfig().get( - ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY); - URL url = - new URL(shuffleBaseURL + "/mapOutput?job=job_12345_1&reduce=1&" - + "map=attempt_12345_1_m_1_0"); - HttpURLConnection conn = (HttpURLConnection) url.openConnection(); - conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_NAME, - ShuffleHeader.DEFAULT_HTTP_HEADER_NAME); - conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_VERSION, - ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION); - conn.connect(); - DataInputStream input = new DataInputStream(conn.getInputStream()); - Assert.assertEquals(HttpHeader.KEEP_ALIVE.asString(), - conn.getHeaderField(HttpHeader.CONNECTION.asString())); - Assert.assertEquals("timeout=1", - conn.getHeaderField(HttpHeader.KEEP_ALIVE.asString())); - Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode()); - ShuffleHeader header = new ShuffleHeader(); - header.readFields(input); - byte[] buffer = new byte[1024]; - while (input.read(buffer) != -1) {} - SocketAddress firstAddress = lastSocketAddress.getSocketAddres(); - input.close(); + String[] urls = new String[shuffleUrlTypes.length]; + for (int i = 0; i < shuffleUrlTypes.length; i++) { + ShuffleUrlType url = shuffleUrlTypes[i]; + if (url == ShuffleUrlType.SIMPLE) { + urls[i] = getShuffleUrl(shuffleHandler, ATTEMPT_ID, ATTEMPT_ID); + } else if (url == ShuffleUrlType.WITH_KEEPALIVE) { + urls[i] = getShuffleUrlWithKeepAlive(shuffleHandler, ATTEMPT_ID, ATTEMPT_ID); + } else if (url == ShuffleUrlType.WITH_KEEPALIVE_MULTIPLE_MAP_IDS) { + urls[i] = getShuffleUrlWithKeepAlive(shuffleHandler, ATTEMPT_ID, ATTEMPT_ID, ATTEMPT_ID_2); + } else if (url == ShuffleUrlType.WITH_KEEPALIVE_NO_MAP_IDS) { + urls[i] = getShuffleUrlWithKeepAlive(shuffleHandler, ATTEMPT_ID); + } + } + HttpConnectionHelper connHelper; + try { + connHelper = new HttpConnectionHelper(shuffleHandler.lastSocketAddress); + connHelper.connectToUrls(urls, shuffleHandler.responseConfig, expectedHttpStatus); + if (expectedHttpStatus == HttpURLConnection.HTTP_BAD_REQUEST) { + assertEquals(1, shuffleHandler.failures.size()); + assertThat(shuffleHandler.failures.get(0).getMessage(), + CoreMatchers.containsString("Status: 400 Bad Request, " + + "message: Required param job, map and reduce")); + } + } finally { + shuffleHandler.stop(); + } - // For keepAlive 
via URL - url = - new URL(shuffleBaseURL + "/mapOutput?job=job_12345_1&reduce=1&" - + "map=attempt_12345_1_m_1_0&keepAlive=true"); - conn = (HttpURLConnection) url.openConnection(); - conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_NAME, - ShuffleHeader.DEFAULT_HTTP_HEADER_NAME); - conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_VERSION, - ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION); - conn.connect(); - input = new DataInputStream(conn.getInputStream()); - Assert.assertEquals(HttpHeader.KEEP_ALIVE.asString(), - conn.getHeaderField(HttpHeader.CONNECTION.asString())); - Assert.assertEquals("timeout=1", - conn.getHeaderField(HttpHeader.KEEP_ALIVE.asString())); - Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode()); - header = new ShuffleHeader(); - header.readFields(input); - input.close(); - SocketAddress secondAddress = lastSocketAddress.getSocketAddres(); - Assert.assertNotNull("Initial shuffle address should not be null", - firstAddress); - Assert.assertNotNull("Keep-Alive shuffle address should not be null", - secondAddress); - Assert.assertEquals("Initial shuffle address and keep-alive shuffle " - + "address should be the same", firstAddress, secondAddress); + //Verify expectations + int configuredTimeout = TEST_EXECUTION.getKeepAliveTimeout(); + int expectedTimeout = configuredTimeout < 0 ? 1 : configuredTimeout; + connHelper.validate(connData -> { + HttpConnectionAssert.create(connData) + .expectKeepAliveWithTimeout(expectedTimeout) + .expectResponseContentLength(shuffleHandler.responseConfig.contentLengthOfResponse); + }); + if (expectedHttpStatus == HttpURLConnection.HTTP_OK) { + HttpConnectionAssert.assertKeepAliveConnectionsAreSame(connHelper); + assertEquals("Unexpected ShuffleHandler failure", Collections.emptyList(), + shuffleHandler.failures); + } } @Test(timeout = 10000) public void testSocketKeepAlive() throws Exception { Configuration conf = new Configuration(); - conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, 0); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); conf.setBoolean(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_ENABLED, true); - // try setting to -ve keep alive timeout. - conf.setInt(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_TIME_OUT, -100); + // try setting to negative keep alive timeout. 
+ conf.setInt(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_TIME_OUT, + ARBITRARY_NEGATIVE_TIMEOUT_SECONDS); HttpURLConnection conn = null; MockShuffleHandler2 shuffleHandler = new MockShuffleHandler2(); AuxiliaryLocalPathHandler pathHandler = @@ -535,14 +1209,16 @@ public class TestShuffleHandler { URL url = new URL(shuffleBaseURL + "/mapOutput?job=job_12345_1&reduce=1&" + "map=attempt_12345_1_m_1_0"); - conn = (HttpURLConnection) url.openConnection(); + conn = TEST_EXECUTION.openConnection(url); conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_NAME, ShuffleHeader.DEFAULT_HTTP_HEADER_NAME); conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_VERSION, ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION); conn.connect(); + int rc = conn.getResponseCode(); conn.getInputStream(); - Assert.assertTrue("socket should be set KEEP_ALIVE", + assertEquals(HttpURLConnection.HTTP_OK, rc); + assertTrue("socket should be set KEEP_ALIVE", shuffleHandler.isSocketKeepAlive()); } finally { if (conn != null) { @@ -550,11 +1226,13 @@ public class TestShuffleHandler { } shuffleHandler.stop(); } + assertEquals("Should have no caught exceptions", + Collections.emptyList(), shuffleHandler.failures); } /** - * simulate a reducer that sends an invalid shuffle-header - sometimes a wrong - * header_name and sometimes a wrong version + * Simulate a reducer that sends an invalid shuffle-header - sometimes a wrong + * header_name and sometimes a wrong version. * * @throws Exception exception */ @@ -562,24 +1240,24 @@ public class TestShuffleHandler { public void testIncompatibleShuffleVersion() throws Exception { final int failureNum = 3; Configuration conf = new Configuration(); - conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, 0); - ShuffleHandler shuffleHandler = new ShuffleHandler(); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); + ShuffleHandler shuffleHandler = new ShuffleHandlerForTests(); shuffleHandler.init(conf); shuffleHandler.start(); // simulate a reducer that closes early by reading a single shuffle header // then closing the connection URL url = new URL("http://127.0.0.1:" - + shuffleHandler.getConfig().get(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY) - + "/mapOutput?job=job_12345_1&reduce=1&map=attempt_12345_1_m_1_0"); + + shuffleHandler.getConfig().get(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY) + + "/mapOutput?job=job_12345_1&reduce=1&map=attempt_12345_1_m_1_0"); for (int i = 0; i < failureNum; ++i) { - HttpURLConnection conn = (HttpURLConnection)url.openConnection(); + HttpURLConnection conn = TEST_EXECUTION.openConnection(url); conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_NAME, i == 0 ? "mapreduce" : "other"); conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_VERSION, i == 1 ? 
"1.0.0" : "1.0.1"); conn.connect(); - Assert.assertEquals( + assertEquals( HttpURLConnection.HTTP_BAD_REQUEST, conn.getResponseCode()); } @@ -594,10 +1272,14 @@ public class TestShuffleHandler { */ @Test (timeout = 10000) public void testMaxConnections() throws Exception { + final ArrayList failures = new ArrayList<>(); + final int maxAllowedConnections = 3; + final int notAcceptedConnections = 1; + final int connAttempts = maxAllowedConnections + notAcceptedConnections; Configuration conf = new Configuration(); - conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, 0); - conf.setInt(ShuffleHandler.MAX_SHUFFLE_CONNECTIONS, 3); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); + conf.setInt(ShuffleHandler.MAX_SHUFFLE_CONNECTIONS, maxAllowedConnections); ShuffleHandler shuffleHandler = new ShuffleHandler() { @Override protected Shuffle getShuffle(Configuration conf) { @@ -605,7 +1287,7 @@ public class TestShuffleHandler { return new Shuffle(conf) { @Override protected MapOutputInfo getMapOutputInfo(String mapId, int reduce, - String jobId, String user) throws IOException { + String jobId, String user) { // Do nothing. return null; } @@ -613,13 +1295,12 @@ public class TestShuffleHandler { protected void populateHeaders(List mapIds, String jobId, String user, int reduce, HttpRequest request, HttpResponse response, boolean keepAliveParam, - Map infoMap) throws IOException { + Map infoMap) { // Do nothing. } @Override protected void verifyRequest(String appid, ChannelHandlerContext ctx, - HttpRequest request, HttpResponse response, URL requestUri) - throws IOException { + HttpRequest request, HttpResponse response, URL requestUri) { // Do nothing. } @Override @@ -633,30 +1314,38 @@ public class TestShuffleHandler { new ShuffleHeader("dummy_header", 5678, 5678, 1); DataOutputBuffer dob = new DataOutputBuffer(); header.write(dob); - ch.write(wrappedBuffer(dob.getData(), 0, dob.getLength())); + ch.writeAndFlush(wrappedBuffer(dob.getData(), 0, dob.getLength())); dob = new DataOutputBuffer(); for (int i=0; i<100000; ++i) { header.write(dob); } - return ch.write(wrappedBuffer(dob.getData(), 0, dob.getLength())); + return ch.writeAndFlush(wrappedBuffer(dob.getData(), 0, dob.getLength())); + } + + @Override + public void exceptionCaught(ChannelHandlerContext ctx, + Throwable cause) throws Exception { + LOG.debug("ExceptionCaught"); + failures.add(cause); + super.exceptionCaught(ctx, cause); } }; } }; + shuffleHandler.setUseOutboundExceptionHandler(true); shuffleHandler.init(conf); shuffleHandler.start(); // setup connections - int connAttempts = 3; - HttpURLConnection conns[] = new HttpURLConnection[connAttempts]; + HttpURLConnection[] conns = new HttpURLConnection[connAttempts]; for (int i = 0; i < connAttempts; i++) { - String URLstring = "http://127.0.0.1:" + String urlString = "http://127.0.0.1:" + shuffleHandler.getConfig().get(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY) + "/mapOutput?job=job_12345_1&reduce=1&map=attempt_12345_1_m_" + i + "_0"; - URL url = new URL(URLstring); - conns[i] = (HttpURLConnection)url.openConnection(); + URL url = new URL(urlString); + conns[i] = TEST_EXECUTION.openConnection(url); conns[i].setRequestProperty(ShuffleHeader.HTTP_HEADER_NAME, ShuffleHeader.DEFAULT_HTTP_HEADER_NAME); conns[i].setRequestProperty(ShuffleHeader.HTTP_HEADER_VERSION, @@ -668,34 +1357,61 @@ public class TestShuffleHandler { conns[i].connect(); } - //Ensure first connections are okay - conns[0].getInputStream(); - int rc = conns[0].getResponseCode(); - 
Assert.assertEquals(HttpURLConnection.HTTP_OK, rc); - - conns[1].getInputStream(); - rc = conns[1].getResponseCode(); - Assert.assertEquals(HttpURLConnection.HTTP_OK, rc); - - // This connection should be closed because it to above the limit - try { - rc = conns[2].getResponseCode(); - Assert.assertEquals("Expected a too-many-requests response code", - ShuffleHandler.TOO_MANY_REQ_STATUS.getCode(), rc); - long backoff = Long.valueOf( - conns[2].getHeaderField(ShuffleHandler.RETRY_AFTER_HEADER)); - Assert.assertTrue("The backoff value cannot be negative.", backoff > 0); - conns[2].getInputStream(); - Assert.fail("Expected an IOException"); - } catch (IOException ioe) { - LOG.info("Expected - connection should not be open"); - } catch (NumberFormatException ne) { - Assert.fail("Expected a numerical value for RETRY_AFTER header field"); - } catch (Exception e) { - Assert.fail("Expected a IOException"); + Map> mapOfConnections = Maps.newHashMap(); + for (HttpURLConnection conn : conns) { + try { + conn.getInputStream(); + } catch (IOException ioe) { + LOG.info("Expected - connection should not be open"); + } catch (NumberFormatException ne) { + fail("Expected a numerical value for RETRY_AFTER header field"); + } catch (Exception e) { + fail("Expected a IOException"); + } + int statusCode = conn.getResponseCode(); + LOG.debug("Connection status code: {}", statusCode); + mapOfConnections.putIfAbsent(statusCode, new ArrayList<>()); + List connectionList = mapOfConnections.get(statusCode); + connectionList.add(conn); } + + assertEquals(String.format("Expected only %s and %s response", + OK_STATUS, ShuffleHandler.TOO_MANY_REQ_STATUS), + Sets.newHashSet( + HttpURLConnection.HTTP_OK, + ShuffleHandler.TOO_MANY_REQ_STATUS.code()), + mapOfConnections.keySet()); - shuffleHandler.stop(); + List successfulConnections = + mapOfConnections.get(HttpURLConnection.HTTP_OK); + assertEquals(String.format("Expected exactly %d requests " + + "with %s response", maxAllowedConnections, OK_STATUS), + maxAllowedConnections, successfulConnections.size()); + + //Ensure exactly one connection is HTTP 429 (TOO MANY REQUESTS) + List closedConnections = + mapOfConnections.get(ShuffleHandler.TOO_MANY_REQ_STATUS.code()); + assertEquals(String.format("Expected exactly %d %s response", + notAcceptedConnections, ShuffleHandler.TOO_MANY_REQ_STATUS), + notAcceptedConnections, closedConnections.size()); + + // This connection should be closed because it is above the maximum limit + HttpURLConnection conn = closedConnections.get(0); + assertEquals(String.format("Expected a %s response", + ShuffleHandler.TOO_MANY_REQ_STATUS), + ShuffleHandler.TOO_MANY_REQ_STATUS.code(), conn.getResponseCode()); + long backoff = Long.parseLong( + conn.getHeaderField(ShuffleHandler.RETRY_AFTER_HEADER)); + assertTrue("The backoff value cannot be negative.", backoff > 0); + + shuffleHandler.stop(); + + //It's okay to get a ClosedChannelException. 
+ //All other kinds of exceptions means something went wrong + assertEquals("Should have no caught exceptions", + Collections.emptyList(), failures.stream() + .filter(f -> !(f instanceof ClosedChannelException)) + .collect(toList())); } /** @@ -706,10 +1422,11 @@ public class TestShuffleHandler { */ @Test(timeout = 100000) public void testMapFileAccess() throws IOException { + final ArrayList failures = new ArrayList<>(); // This will run only in NativeIO is enabled as SecureIOUtils need it assumeTrue(NativeIO.isAvailable()); Configuration conf = new Configuration(); - conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, 0); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); conf.setInt(ShuffleHandler.MAX_SHUFFLE_CONNECTIONS, 3); conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos"); @@ -720,7 +1437,7 @@ public class TestShuffleHandler { String appAttemptId = "attempt_12345_1_m_1_0"; String user = "randomUser"; String reducerId = "0"; - List fileMap = new ArrayList(); + List fileMap = new ArrayList<>(); createShuffleHandlerFiles(ABS_LOG_DIR, user, appId.toString(), appAttemptId, conf, fileMap); ShuffleHandler shuffleHandler = new ShuffleHandler() { @@ -731,15 +1448,31 @@ public class TestShuffleHandler { @Override protected void verifyRequest(String appid, ChannelHandlerContext ctx, - HttpRequest request, HttpResponse response, URL requestUri) - throws IOException { + HttpRequest request, HttpResponse response, URL requestUri) { // Do nothing. } + @Override + public void exceptionCaught(ChannelHandlerContext ctx, + Throwable cause) throws Exception { + LOG.debug("ExceptionCaught"); + failures.add(cause); + super.exceptionCaught(ctx, cause); + } + + @Override + public void channelActive(ChannelHandlerContext ctx) throws Exception { + ctx.pipeline().replace(HttpResponseEncoder.class, + "loggingResponseEncoder", + new LoggingHttpResponseEncoder(false)); + LOG.debug("Modified pipeline: {}", ctx.pipeline()); + super.channelActive(ctx); + } }; } }; AuxiliaryLocalPathHandler pathHandler = new TestAuxiliaryLocalPathHandler(); + shuffleHandler.setUseOutboundExceptionHandler(true); shuffleHandler.setAuxiliaryLocalPathHandler(pathHandler); shuffleHandler.init(conf); try { @@ -747,13 +1480,13 @@ public class TestShuffleHandler { DataOutputBuffer outputBuffer = new DataOutputBuffer(); outputBuffer.reset(); Token jt = - new Token("identifier".getBytes(), + new Token<>("identifier".getBytes(), "password".getBytes(), new Text(user), new Text("shuffleService")); jt.write(outputBuffer); shuffleHandler - .initializeApplication(new ApplicationInitializationContext(user, - appId, ByteBuffer.wrap(outputBuffer.getData(), 0, - outputBuffer.getLength()))); + .initializeApplication(new ApplicationInitializationContext(user, + appId, ByteBuffer.wrap(outputBuffer.getData(), 0, + outputBuffer.getLength()))); URL url = new URL( "http://127.0.0.1:" @@ -761,32 +1494,37 @@ public class TestShuffleHandler { ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY) + "/mapOutput?job=job_12345_0001&reduce=" + reducerId + "&map=attempt_12345_1_m_1_0"); - HttpURLConnection conn = (HttpURLConnection) url.openConnection(); + HttpURLConnection conn = TEST_EXECUTION.openConnection(url); conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_NAME, ShuffleHeader.DEFAULT_HTTP_HEADER_NAME); conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_VERSION, ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION); conn.connect(); - byte[] byteArr = new byte[10000]; - try { - DataInputStream is = new 
DataInputStream(conn.getInputStream()); - is.readFully(byteArr); - } catch (EOFException e) { - // ignore - } - // Retrieve file owner name - FileInputStream is = new FileInputStream(fileMap.get(0)); - String owner = NativeIO.POSIX.getFstat(is.getFD()).getOwner(); - is.close(); + DataInputStream is = new DataInputStream(conn.getInputStream()); + InputStreamReadResult result = HttpConnectionHelper.readDataFromInputStream(is); + String receivedString = result.asString; + + //Retrieve file owner name + FileInputStream fis = new FileInputStream(fileMap.get(0)); + String owner = NativeIO.POSIX.getFstat(fis.getFD()).getOwner(); + fis.close(); String message = "Owner '" + owner + "' for path " + fileMap.get(0).getAbsolutePath() + " did not match expected owner '" + user + "'"; - Assert.assertTrue((new String(byteArr)).contains(message)); + assertTrue(String.format("Received string '%s' should contain " + + "message '%s'", receivedString, message), + receivedString.contains(message)); + assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode()); + LOG.info("received: " + receivedString); + assertNotEquals("", receivedString); } finally { shuffleHandler.stop(); FileUtil.fullyDelete(ABS_LOG_DIR); } + + assertEquals("Should have no caught exceptions", + Collections.emptyList(), failures); } private static void createShuffleHandlerFiles(File logDir, String user, @@ -794,7 +1532,7 @@ public class TestShuffleHandler { List fileMap) throws IOException { String attemptDir = StringUtils.join(Path.SEPARATOR, - new String[] { logDir.getAbsolutePath(), + new String[] {logDir.getAbsolutePath(), ContainerLocalizer.USERCACHE, user, ContainerLocalizer.APPCACHE, appId, "output", appAttemptId }); File appAttemptDir = new File(attemptDir); @@ -808,8 +1546,7 @@ public class TestShuffleHandler { createMapOutputFile(mapOutputFile, conf); } - private static void - createMapOutputFile(File mapOutputFile, Configuration conf) + private static void createMapOutputFile(File mapOutputFile, Configuration conf) throws IOException { FileOutputStream out = new FileOutputStream(mapOutputFile); out.write("Creating new dummy map output file. 
Used only for testing" @@ -846,11 +1583,11 @@ public class TestShuffleHandler { final File tmpDir = new File(System.getProperty("test.build.data", System.getProperty("java.io.tmpdir")), TestShuffleHandler.class.getName()); - ShuffleHandler shuffle = new ShuffleHandler(); + ShuffleHandler shuffle = new ShuffleHandlerForTests(); AuxiliaryLocalPathHandler pathHandler = new TestAuxiliaryLocalPathHandler(); shuffle.setAuxiliaryLocalPathHandler(pathHandler); Configuration conf = new Configuration(); - conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, 0); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); conf.setInt(ShuffleHandler.MAX_SHUFFLE_CONNECTIONS, 3); conf.set(YarnConfiguration.NM_LOCAL_DIRS, ABS_LOG_DIR.getAbsolutePath()); @@ -861,10 +1598,10 @@ public class TestShuffleHandler { shuffle.init(conf); shuffle.start(); - // setup a shuffle token for an application + // set up a shuffle token for an application DataOutputBuffer outputBuffer = new DataOutputBuffer(); outputBuffer.reset(); - Token jt = new Token( + Token jt = new Token<>( "identifier".getBytes(), "password".getBytes(), new Text(user), new Text("shuffleService")); jt.write(outputBuffer); @@ -874,11 +1611,11 @@ public class TestShuffleHandler { // verify we are authorized to shuffle int rc = getShuffleResponseCode(shuffle, jt); - Assert.assertEquals(HttpURLConnection.HTTP_OK, rc); + assertEquals(HttpURLConnection.HTTP_OK, rc); // emulate shuffle handler restart shuffle.close(); - shuffle = new ShuffleHandler(); + shuffle = new ShuffleHandlerForTests(); shuffle.setAuxiliaryLocalPathHandler(pathHandler); shuffle.setRecoveryPath(new Path(tmpDir.toString())); shuffle.init(conf); @@ -886,23 +1623,23 @@ public class TestShuffleHandler { // verify we are still authorized to shuffle to the old application rc = getShuffleResponseCode(shuffle, jt); - Assert.assertEquals(HttpURLConnection.HTTP_OK, rc); + assertEquals(HttpURLConnection.HTTP_OK, rc); // shutdown app and verify access is lost shuffle.stopApplication(new ApplicationTerminationContext(appId)); rc = getShuffleResponseCode(shuffle, jt); - Assert.assertEquals(HttpURLConnection.HTTP_UNAUTHORIZED, rc); + assertEquals(HttpURLConnection.HTTP_UNAUTHORIZED, rc); // emulate shuffle handler restart shuffle.close(); - shuffle = new ShuffleHandler(); + shuffle = new ShuffleHandlerForTests(); shuffle.setRecoveryPath(new Path(tmpDir.toString())); shuffle.init(conf); shuffle.start(); // verify we still don't have access rc = getShuffleResponseCode(shuffle, jt); - Assert.assertEquals(HttpURLConnection.HTTP_UNAUTHORIZED, rc); + assertEquals(HttpURLConnection.HTTP_UNAUTHORIZED, rc); } finally { if (shuffle != null) { shuffle.close(); @@ -919,9 +1656,9 @@ public class TestShuffleHandler { System.getProperty("java.io.tmpdir")), TestShuffleHandler.class.getName()); Configuration conf = new Configuration(); - conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, 0); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); conf.setInt(ShuffleHandler.MAX_SHUFFLE_CONNECTIONS, 3); - ShuffleHandler shuffle = new ShuffleHandler(); + ShuffleHandler shuffle = new ShuffleHandlerForTests(); AuxiliaryLocalPathHandler pathHandler = new TestAuxiliaryLocalPathHandler(); shuffle.setAuxiliaryLocalPathHandler(pathHandler); conf.set(YarnConfiguration.NM_LOCAL_DIRS, ABS_LOG_DIR.getAbsolutePath()); @@ -932,10 +1669,10 @@ public class TestShuffleHandler { shuffle.init(conf); shuffle.start(); - // setup a shuffle token for an application + // set 
up a shuffle token for an application DataOutputBuffer outputBuffer = new DataOutputBuffer(); outputBuffer.reset(); - Token jt = new Token( + Token jt = new Token<>( "identifier".getBytes(), "password".getBytes(), new Text(user), new Text("shuffleService")); jt.write(outputBuffer); @@ -945,11 +1682,11 @@ public class TestShuffleHandler { // verify we are authorized to shuffle int rc = getShuffleResponseCode(shuffle, jt); - Assert.assertEquals(HttpURLConnection.HTTP_OK, rc); + assertEquals(HttpURLConnection.HTTP_OK, rc); // emulate shuffle handler restart shuffle.close(); - shuffle = new ShuffleHandler(); + shuffle = new ShuffleHandlerForTests(); shuffle.setAuxiliaryLocalPathHandler(pathHandler); shuffle.setRecoveryPath(new Path(tmpDir.toString())); shuffle.init(conf); @@ -957,44 +1694,44 @@ public class TestShuffleHandler { // verify we are still authorized to shuffle to the old application rc = getShuffleResponseCode(shuffle, jt); - Assert.assertEquals(HttpURLConnection.HTTP_OK, rc); + assertEquals(HttpURLConnection.HTTP_OK, rc); Version version = Version.newInstance(1, 0); - Assert.assertEquals(version, shuffle.getCurrentVersion()); + assertEquals(version, shuffle.getCurrentVersion()); // emulate shuffle handler restart with compatible version Version version11 = Version.newInstance(1, 1); // update version info before close shuffle shuffle.storeVersion(version11); - Assert.assertEquals(version11, shuffle.loadVersion()); + assertEquals(version11, shuffle.loadVersion()); shuffle.close(); - shuffle = new ShuffleHandler(); + shuffle = new ShuffleHandlerForTests(); shuffle.setAuxiliaryLocalPathHandler(pathHandler); shuffle.setRecoveryPath(new Path(tmpDir.toString())); shuffle.init(conf); shuffle.start(); // shuffle version will be override by CURRENT_VERSION_INFO after restart // successfully. 
- Assert.assertEquals(version, shuffle.loadVersion()); + assertEquals(version, shuffle.loadVersion()); // verify we are still authorized to shuffle to the old application rc = getShuffleResponseCode(shuffle, jt); - Assert.assertEquals(HttpURLConnection.HTTP_OK, rc); + assertEquals(HttpURLConnection.HTTP_OK, rc); // emulate shuffle handler restart with incompatible version Version version21 = Version.newInstance(2, 1); shuffle.storeVersion(version21); - Assert.assertEquals(version21, shuffle.loadVersion()); + assertEquals(version21, shuffle.loadVersion()); shuffle.close(); - shuffle = new ShuffleHandler(); + shuffle = new ShuffleHandlerForTests(); shuffle.setAuxiliaryLocalPathHandler(pathHandler); shuffle.setRecoveryPath(new Path(tmpDir.toString())); shuffle.init(conf); try { shuffle.start(); - Assert.fail("Incompatible version, should expect fail here."); + fail("Incompatible version, should expect fail here."); } catch (ServiceStateException e) { - Assert.assertTrue("Exception message mismatch", - e.getMessage().contains("Incompatible version for state DB schema:")); + assertTrue("Exception message mismatch", + e.getMessage().contains("Incompatible version for state DB schema:")); } } finally { @@ -1010,7 +1747,7 @@ public class TestShuffleHandler { URL url = new URL("http://127.0.0.1:" + shuffle.getConfig().get(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY) + "/mapOutput?job=job_12345_0001&reduce=0&map=attempt_12345_1_m_1_0"); - HttpURLConnection conn = (HttpURLConnection) url.openConnection(); + HttpURLConnection conn = TEST_EXECUTION.openConnection(url); String encHash = SecureShuffleUtils.hashFromString( SecureShuffleUtils.buildMsgFrom(url), JobTokenSecretManager.createSecretKey(jt.getPassword())); @@ -1028,9 +1765,9 @@ public class TestShuffleHandler { @Test(timeout = 100000) public void testGetMapOutputInfo() throws Exception { - final ArrayList failures = new ArrayList(1); + final ArrayList failures = new ArrayList<>(1); Configuration conf = new Configuration(); - conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, 0); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); conf.setInt(ShuffleHandler.MAX_SHUFFLE_CONNECTIONS, 3); conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "simple"); @@ -1040,7 +1777,7 @@ public class TestShuffleHandler { String appAttemptId = "attempt_12345_1_m_1_0"; String user = "randomUser"; String reducerId = "0"; - List fileMap = new ArrayList(); + List fileMap = new ArrayList<>(); createShuffleHandlerFiles(ABS_LOG_DIR, user, appId.toString(), appAttemptId, conf, fileMap); AuxiliaryLocalPathHandler pathHandler = new TestAuxiliaryLocalPathHandler(); @@ -1062,7 +1799,7 @@ public class TestShuffleHandler { @Override protected void verifyRequest(String appid, ChannelHandlerContext ctx, HttpRequest request, - HttpResponse response, URL requestUri) throws IOException { + HttpResponse response, URL requestUri) { // Do nothing. 
} @Override @@ -1070,7 +1807,7 @@ public class TestShuffleHandler { HttpResponseStatus status) { if (failures.size() == 0) { failures.add(new Error(message)); - ctx.getChannel().close(); + ctx.channel().close(); } } @Override @@ -1082,11 +1819,12 @@ public class TestShuffleHandler { new ShuffleHeader("attempt_12345_1_m_1_0", 5678, 5678, 1); DataOutputBuffer dob = new DataOutputBuffer(); header.write(dob); - return ch.write(wrappedBuffer(dob.getData(), 0, dob.getLength())); + return ch.writeAndFlush(wrappedBuffer(dob.getData(), 0, dob.getLength())); } }; } }; + shuffleHandler.setUseOutboundExceptionHandler(true); shuffleHandler.setAuxiliaryLocalPathHandler(pathHandler); shuffleHandler.init(conf); try { @@ -1094,8 +1832,8 @@ public class TestShuffleHandler { DataOutputBuffer outputBuffer = new DataOutputBuffer(); outputBuffer.reset(); Token jt = - new Token("identifier".getBytes(), - "password".getBytes(), new Text(user), new Text("shuffleService")); + new Token<>("identifier".getBytes(), + "password".getBytes(), new Text(user), new Text("shuffleService")); jt.write(outputBuffer); shuffleHandler .initializeApplication(new ApplicationInitializationContext(user, @@ -1108,7 +1846,7 @@ public class TestShuffleHandler { ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY) + "/mapOutput?job=job_12345_0001&reduce=" + reducerId + "&map=attempt_12345_1_m_1_0"); - HttpURLConnection conn = (HttpURLConnection) url.openConnection(); + HttpURLConnection conn = TEST_EXECUTION.openConnection(url); conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_NAME, ShuffleHeader.DEFAULT_HTTP_HEADER_NAME); conn.setRequestProperty(ShuffleHeader.HTTP_HEADER_VERSION, @@ -1122,7 +1860,7 @@ public class TestShuffleHandler { } catch (EOFException e) { // ignore } - Assert.assertEquals("sendError called due to shuffle error", + assertEquals("sendError called due to shuffle error", 0, failures.size()); } finally { shuffleHandler.stop(); @@ -1133,11 +1871,10 @@ public class TestShuffleHandler { @Test(timeout = 4000) public void testSendMapCount() throws Exception { final List listenerList = - new ArrayList(); - + new ArrayList<>(); + int connectionKeepAliveTimeOut = 5; //arbitrary value final ChannelHandlerContext mockCtx = mock(ChannelHandlerContext.class); - final MessageEvent mockEvt = mock(MessageEvent.class); final Channel mockCh = mock(AbstractChannel.class); final ChannelPipeline mockPipeline = mock(ChannelPipeline.class); @@ -1146,29 +1883,23 @@ public class TestShuffleHandler { final ChannelFuture mockFuture = createMockChannelFuture(mockCh, listenerList); final ShuffleHandler.TimeoutHandler timerHandler = - new ShuffleHandler.TimeoutHandler(); + new ShuffleHandler.TimeoutHandler(connectionKeepAliveTimeOut); // Mock Netty Channel Context and Channel behavior - Mockito.doReturn(mockCh).when(mockCtx).getChannel(); - when(mockCh.getPipeline()).thenReturn(mockPipeline); + Mockito.doReturn(mockCh).when(mockCtx).channel(); + when(mockCh.pipeline()).thenReturn(mockPipeline); when(mockPipeline.get( Mockito.any(String.class))).thenReturn(timerHandler); - when(mockCtx.getChannel()).thenReturn(mockCh); - Mockito.doReturn(mockFuture).when(mockCh).write(Mockito.any(Object.class)); - when(mockCh.write(Object.class)).thenReturn(mockFuture); + when(mockCtx.channel()).thenReturn(mockCh); + Mockito.doReturn(mockFuture).when(mockCh).writeAndFlush(Mockito.any(Object.class)); - //Mock MessageEvent behavior - Mockito.doReturn(mockCh).when(mockEvt).getChannel(); - when(mockEvt.getChannel()).thenReturn(mockCh); - 
Mockito.doReturn(mockHttpRequest).when(mockEvt).getMessage(); - - final ShuffleHandler sh = new MockShuffleHandler(); + final MockShuffleHandler sh = new MockShuffleHandler(); Configuration conf = new Configuration(); sh.init(conf); sh.start(); int maxOpenFiles =conf.getInt(ShuffleHandler.SHUFFLE_MAX_SESSION_OPEN_FILES, ShuffleHandler.DEFAULT_SHUFFLE_MAX_SESSION_OPEN_FILES); - sh.getShuffle(conf).messageReceived(mockCtx, mockEvt); + sh.getShuffle(conf).channelRead(mockCtx, mockHttpRequest); assertTrue("Number of Open files should not exceed the configured " + "value!-Not Expected", listenerList.size() <= maxOpenFiles); @@ -1179,23 +1910,97 @@ public class TestShuffleHandler { listenerList.size() <= maxOpenFiles); } sh.close(); + sh.stop(); + + assertEquals("Should have no caught exceptions", + Collections.emptyList(), sh.failures); + } + + @Test(timeout = 10000) + public void testIdleStateHandlingSpecifiedTimeout() throws Exception { + int timeoutSeconds = 4; + int expectedTimeoutSeconds = timeoutSeconds; + testHandlingIdleState(timeoutSeconds, expectedTimeoutSeconds); + } + + @Test(timeout = 10000) + public void testIdleStateHandlingNegativeTimeoutDefaultsTo1Second() throws Exception { + int expectedTimeoutSeconds = 1; //expected by production code + testHandlingIdleState(ARBITRARY_NEGATIVE_TIMEOUT_SECONDS, expectedTimeoutSeconds); + } + + private String getShuffleUrlWithKeepAlive(ShuffleHandler shuffleHandler, long jobId, + long... attemptIds) { + String url = getShuffleUrl(shuffleHandler, jobId, attemptIds); + return url + "&keepAlive=true"; + } + + private String getShuffleUrl(ShuffleHandler shuffleHandler, long jobId, long... attemptIds) { + String port = shuffleHandler.getConfig().get(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY); + String shuffleBaseURL = "http://127.0.0.1:" + port; + + StringBuilder mapAttemptIds = new StringBuilder(); + for (int i = 0; i < attemptIds.length; i++) { + if (i == 0) { + mapAttemptIds.append("&map="); + } else { + mapAttemptIds.append(","); + } + mapAttemptIds.append(String.format("attempt_%s_1_m_1_0", attemptIds[i])); + } + + String location = String.format("/mapOutput" + + "?job=job_%s_1" + + "&reduce=1" + + "%s", jobId, mapAttemptIds); + return shuffleBaseURL + location; + } + + private void testHandlingIdleState(int configuredTimeoutSeconds, int expectedTimeoutSeconds) + throws IOException, + InterruptedException { + Configuration conf = new Configuration(); + conf.setInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, TEST_EXECUTION.shuffleHandlerPort()); + conf.setBoolean(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_ENABLED, true); + conf.setInt(ShuffleHandler.SHUFFLE_CONNECTION_KEEP_ALIVE_TIME_OUT, configuredTimeoutSeconds); + + final CountDownLatch countdownLatch = new CountDownLatch(1); + ResponseConfig responseConfig = new ResponseConfig(HEADER_WRITE_COUNT, 0, 0); + ShuffleHandlerForKeepAliveTests shuffleHandler = new ShuffleHandlerForKeepAliveTests( + ATTEMPT_ID, responseConfig, + event -> countdownLatch.countDown()); + shuffleHandler.init(conf); + shuffleHandler.start(); + + String shuffleUrl = getShuffleUrl(shuffleHandler, ATTEMPT_ID, ATTEMPT_ID); + String[] urls = new String[] {shuffleUrl}; + HttpConnectionHelper httpConnectionHelper = new HttpConnectionHelper( + shuffleHandler.lastSocketAddress); + long beforeConnectionTimestamp = System.currentTimeMillis(); + httpConnectionHelper.connectToUrls(urls, shuffleHandler.responseConfig); + countdownLatch.await(); + long channelClosedTimestamp = System.currentTimeMillis(); + long secondsPassed = + 
TimeUnit.SECONDS.convert(channelClosedTimestamp - beforeConnectionTimestamp, + TimeUnit.MILLISECONDS); + assertTrue(String.format("Expected at least %s seconds of timeout. " + + "Actual timeout seconds: %s", expectedTimeoutSeconds, secondsPassed), + secondsPassed >= expectedTimeoutSeconds); + shuffleHandler.stop(); } public ChannelFuture createMockChannelFuture(Channel mockCh, final List listenerList) { final ChannelFuture mockFuture = mock(ChannelFuture.class); - when(mockFuture.getChannel()).thenReturn(mockCh); + when(mockFuture.channel()).thenReturn(mockCh); Mockito.doReturn(true).when(mockFuture).isSuccess(); - Mockito.doAnswer(new Answer() { - @Override - public Object answer(InvocationOnMock invocation) throws Throwable { - //Add ReduceMapFileCount listener to a list - if (invocation.getArguments()[0].getClass() == - ShuffleHandler.ReduceMapFileCount.class) - listenerList.add((ShuffleHandler.ReduceMapFileCount) - invocation.getArguments()[0]); - return null; + Mockito.doAnswer(invocation -> { + //Add ReduceMapFileCount listener to a list + if (invocation.getArguments()[0].getClass() == ShuffleHandler.ReduceMapFileCount.class) { + listenerList.add((ShuffleHandler.ReduceMapFileCount) + invocation.getArguments()[0]); } + return null; }).when(mockFuture).addListener(Mockito.any( ShuffleHandler.ReduceMapFileCount.class)); return mockFuture; @@ -1203,16 +2008,14 @@ public class TestShuffleHandler { public HttpRequest createMockHttpRequest() { HttpRequest mockHttpRequest = mock(HttpRequest.class); - Mockito.doReturn(HttpMethod.GET).when(mockHttpRequest).getMethod(); - Mockito.doAnswer(new Answer() { - @Override - public Object answer(InvocationOnMock invocation) throws Throwable { - String uri = "/mapOutput?job=job_12345_1&reduce=1"; - for (int i = 0; i < 100; i++) - uri = uri.concat("&map=attempt_12345_1_m_" + i + "_0"); - return uri; + Mockito.doReturn(HttpMethod.GET).when(mockHttpRequest).method(); + Mockito.doAnswer(invocation -> { + String uri = "/mapOutput?job=job_12345_1&reduce=1"; + for (int i = 0; i < 100; i++) { + uri = uri.concat("&map=attempt_12345_1_m_" + i + "_0"); } - }).when(mockHttpRequest).getUri(); + return uri; + }).when(mockHttpRequest).uri(); return mockHttpRequest; } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/resources/log4j.properties b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/resources/log4j.properties index 81a3f6ad5d2..b7d8ad36efc 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/resources/log4j.properties +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/resources/log4j.properties @@ -17,3 +17,5 @@ log4j.threshold=ALL log4j.appender.stdout=org.apache.log4j.ConsoleAppender log4j.appender.stdout.layout=org.apache.log4j.PatternLayout log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2} (%F:%M(%L)) - %m%n +log4j.logger.io.netty=INFO +log4j.logger.org.apache.hadoop.mapred=INFO \ No newline at end of file diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml index 46ba670faef..24e6e1ec68f 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml @@ -53,6 +53,21 @@ 
assertj-core test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.platform + junit-platform-launcher + test + diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java index 862d68ebc0a..52b6dde3794 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java @@ -331,6 +331,8 @@ public class FrameworkUploader implements Runnable { LOG.info("Compressing tarball"); try (TarArchiveOutputStream out = new TarArchiveOutputStream( targetStream)) { + // Workaround for the compress issue present from 1.21: COMPRESS-587 + out.setBigNumberMode(TarArchiveOutputStream.BIGNUMBER_STAR); for (String fullPath : filteredInputFiles) { LOG.info("Adding " + fullPath); File file = new File(fullPath); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/test/java/org/apache/hadoop/mapred/uploader/TestFrameworkUploader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/test/java/org/apache/hadoop/mapred/uploader/TestFrameworkUploader.java index d5d59e66ee3..dcabbac4d05 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/test/java/org/apache/hadoop/mapred/uploader/TestFrameworkUploader.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/test/java/org/apache/hadoop/mapred/uploader/TestFrameworkUploader.java @@ -32,10 +32,9 @@ import org.apache.hadoop.hdfs.DFSConfigKeys; import org.apache.hadoop.hdfs.DistributedFileSystem; import org.apache.hadoop.hdfs.HdfsConfiguration; import org.apache.hadoop.util.Lists; -import org.junit.Assert; -import org.junit.Assume; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.Assumptions; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import java.io.File; import java.io.FileInputStream; @@ -55,6 +54,9 @@ import java.util.Set; import java.util.zip.GZIPInputStream; import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; import static org.apache.hadoop.fs.FileSystem.FS_DEFAULT_NAME_KEY; /** @@ -63,7 +65,7 @@ import static org.apache.hadoop.fs.FileSystem.FS_DEFAULT_NAME_KEY; public class TestFrameworkUploader { private static String testDir; - @Before + @BeforeEach public void setUp() { String testRootDir = new File(System.getProperty("test.build.data", "/tmp")) @@ -79,11 +81,11 @@ public class TestFrameworkUploader { * @throws IOException test failure */ @Test - public void testHelp() throws IOException { + void testHelp() throws IOException { String[] args = new String[]{"-help"}; FrameworkUploader uploader = new FrameworkUploader(); boolean success = uploader.parseArguments(args); - Assert.assertFalse("Expected to print help", success); + assertFalse(success, "Expected to print 
help"); assertThat(uploader.input) .withFailMessage("Expected ignore run") .isNull(); @@ -100,11 +102,11 @@ public class TestFrameworkUploader { * @throws IOException test failure */ @Test - public void testWrongArgument() throws IOException { + void testWrongArgument() throws IOException { String[] args = new String[]{"-unexpected"}; FrameworkUploader uploader = new FrameworkUploader(); boolean success = uploader.parseArguments(args); - Assert.assertFalse("Expected to print help", success); + assertFalse(success, "Expected to print help"); } /** @@ -112,7 +114,7 @@ public class TestFrameworkUploader { * @throws IOException test failure */ @Test - public void testArguments() throws IOException { + void testArguments() throws IOException { String[] args = new String[]{ "-input", "A", @@ -126,60 +128,67 @@ public class TestFrameworkUploader { "-timeout", "10"}; FrameworkUploader uploader = new FrameworkUploader(); boolean success = uploader.parseArguments(args); - Assert.assertTrue("Expected to print help", success); - Assert.assertEquals("Input mismatch", "A", - uploader.input); - Assert.assertEquals("Whitelist mismatch", "B", - uploader.whitelist); - Assert.assertEquals("Blacklist mismatch", "C", - uploader.blacklist); - Assert.assertEquals("Target mismatch", "hdfs://C:8020/D", - uploader.target); - Assert.assertEquals("Initial replication mismatch", 100, - uploader.initialReplication); - Assert.assertEquals("Acceptable replication mismatch", 120, - uploader.acceptableReplication); - Assert.assertEquals("Final replication mismatch", 140, - uploader.finalReplication); - Assert.assertEquals("Timeout mismatch", 10, - uploader.timeout); + assertTrue(success, "Expected to print help"); + assertEquals("A", + uploader.input, + "Input mismatch"); + assertEquals("B", + uploader.whitelist, + "Whitelist mismatch"); + assertEquals("C", + uploader.blacklist, + "Blacklist mismatch"); + assertEquals("hdfs://C:8020/D", + uploader.target, + "Target mismatch"); + assertEquals(100, + uploader.initialReplication, + "Initial replication mismatch"); + assertEquals(120, + uploader.acceptableReplication, + "Acceptable replication mismatch"); + assertEquals(140, + uploader.finalReplication, + "Final replication mismatch"); + assertEquals(10, + uploader.timeout, + "Timeout mismatch"); } /** * Test the default ways how to specify filesystems. */ @Test - public void testNoFilesystem() throws IOException { + void testNoFilesystem() throws IOException { FrameworkUploader uploader = new FrameworkUploader(); boolean success = uploader.parseArguments(new String[]{}); - Assert.assertTrue("Expected to parse arguments", success); - Assert.assertEquals( - "Expected", - "file:////usr/lib/mr-framework.tar.gz#mr-framework", uploader.target); + assertTrue(success, "Expected to parse arguments"); + assertEquals( + "file:////usr/lib/mr-framework.tar.gz#mr-framework", uploader.target, "Expected"); } /** * Test the default ways how to specify filesystems. 
*/ @Test - public void testDefaultFilesystem() throws IOException { + void testDefaultFilesystem() throws IOException { FrameworkUploader uploader = new FrameworkUploader(); Configuration conf = new Configuration(); conf.set(FS_DEFAULT_NAME_KEY, "hdfs://namenode:555"); uploader.setConf(conf); boolean success = uploader.parseArguments(new String[]{}); - Assert.assertTrue("Expected to parse arguments", success); - Assert.assertEquals( - "Expected", + assertTrue(success, "Expected to parse arguments"); + assertEquals( "hdfs://namenode:555/usr/lib/mr-framework.tar.gz#mr-framework", - uploader.target); + uploader.target, + "Expected"); } /** * Test the explicit filesystem specification. */ @Test - public void testExplicitFilesystem() throws IOException { + void testExplicitFilesystem() throws IOException { FrameworkUploader uploader = new FrameworkUploader(); Configuration conf = new Configuration(); uploader.setConf(conf); @@ -187,18 +196,18 @@ public class TestFrameworkUploader { "-target", "hdfs://namenode:555/usr/lib/mr-framework.tar.gz#mr-framework" }); - Assert.assertTrue("Expected to parse arguments", success); - Assert.assertEquals( - "Expected", + assertTrue(success, "Expected to parse arguments"); + assertEquals( "hdfs://namenode:555/usr/lib/mr-framework.tar.gz#mr-framework", - uploader.target); + uploader.target, + "Expected"); } /** * Test the conflicting filesystem specification. */ @Test - public void testConflictingFilesystem() throws IOException { + void testConflictingFilesystem() throws IOException { FrameworkUploader uploader = new FrameworkUploader(); Configuration conf = new Configuration(); conf.set(FS_DEFAULT_NAME_KEY, "hdfs://namenode:555"); @@ -207,11 +216,11 @@ public class TestFrameworkUploader { "-target", "file:///usr/lib/mr-framework.tar.gz#mr-framework" }); - Assert.assertTrue("Expected to parse arguments", success); - Assert.assertEquals( - "Expected", + assertTrue(success, "Expected to parse arguments"); + assertEquals( "file:///usr/lib/mr-framework.tar.gz#mr-framework", - uploader.target); + uploader.target, + "Expected"); } /** @@ -219,27 +228,27 @@ public class TestFrameworkUploader { * @throws IOException test failure */ @Test - public void testCollectPackages() throws IOException, UploaderException { + void testCollectPackages() throws IOException, UploaderException { File parent = new File(testDir); try { parent.deleteOnExit(); - Assert.assertTrue("Directory creation failed", parent.mkdirs()); + assertTrue(parent.mkdirs(), "Directory creation failed"); File dirA = new File(parent, "A"); - Assert.assertTrue(dirA.mkdirs()); + assertTrue(dirA.mkdirs()); File dirB = new File(parent, "B"); - Assert.assertTrue(dirB.mkdirs()); + assertTrue(dirB.mkdirs()); File jarA = new File(dirA, "a.jar"); - Assert.assertTrue(jarA.createNewFile()); + assertTrue(jarA.createNewFile()); File jarB = new File(dirA, "b.jar"); - Assert.assertTrue(jarB.createNewFile()); + assertTrue(jarB.createNewFile()); File jarC = new File(dirA, "c.jar"); - Assert.assertTrue(jarC.createNewFile()); + assertTrue(jarC.createNewFile()); File txtD = new File(dirA, "d.txt"); - Assert.assertTrue(txtD.createNewFile()); + assertTrue(txtD.createNewFile()); File jarD = new File(dirB, "d.jar"); - Assert.assertTrue(jarD.createNewFile()); + assertTrue(jarD.createNewFile()); File txtE = new File(dirB, "e.txt"); - Assert.assertTrue(txtE.createNewFile()); + assertTrue(txtE.createNewFile()); FrameworkUploader uploader = new FrameworkUploader(); uploader.whitelist = ".*a\\.jar,.*b\\.jar,.*d\\.jar"; @@ -248,19 +257,22 
@@ public class TestFrameworkUploader { File.pathSeparatorChar + dirB.getAbsolutePath() + File.separatorChar + "*"; uploader.collectPackages(); - Assert.assertEquals("Whitelist count error", 3, - uploader.whitelistedFiles.size()); - Assert.assertEquals("Blacklist count error", 1, - uploader.blacklistedFiles.size()); + assertEquals(3, + uploader.whitelistedFiles.size(), + "Whitelist count error"); + assertEquals(1, + uploader.blacklistedFiles.size(), + "Blacklist count error"); - Assert.assertTrue("File not collected", - uploader.filteredInputFiles.contains(jarA.getAbsolutePath())); - Assert.assertFalse("File collected", - uploader.filteredInputFiles.contains(jarB.getAbsolutePath())); - Assert.assertTrue("File not collected", - uploader.filteredInputFiles.contains(jarD.getAbsolutePath())); - Assert.assertEquals("Too many whitelists", 2, - uploader.filteredInputFiles.size()); + assertTrue(uploader.filteredInputFiles.contains(jarA.getAbsolutePath()), + "File not collected"); + assertFalse(uploader.filteredInputFiles.contains(jarB.getAbsolutePath()), + "File collected"); + assertTrue(uploader.filteredInputFiles.contains(jarD.getAbsolutePath()), + "File not collected"); + assertEquals(2, + uploader.filteredInputFiles.size(), + "Too many whitelists"); } finally { FileUtils.deleteDirectory(parent); } @@ -270,10 +282,10 @@ public class TestFrameworkUploader { * Test building a tarball from source jars. */ @Test - public void testBuildTarBall() + void testBuildTarBall() throws IOException, UploaderException, InterruptedException { String[] testFiles = {"upload.tar", "upload.tar.gz"}; - for (String testFile: testFiles) { + for (String testFile : testFiles) { File parent = new File(testDir); try { parent.deleteOnExit(); @@ -304,14 +316,14 @@ public class TestFrameworkUploader { TarArchiveEntry entry2 = result.getNextTarEntry(); fileNames.add(entry2.getName()); sizes.add(entry2.getSize()); - Assert.assertTrue( - "File name error", fileNames.contains("a.jar")); - Assert.assertTrue( - "File size error", sizes.contains((long) 13)); - Assert.assertTrue( - "File name error", fileNames.contains("b.jar")); - Assert.assertTrue( - "File size error", sizes.contains((long) 14)); + assertTrue( + fileNames.contains("a.jar"), "File name error"); + assertTrue( + sizes.contains((long) 13), "File size error"); + assertTrue( + fileNames.contains("b.jar"), "File name error"); + assertTrue( + sizes.contains((long) 14), "File size error"); } finally { if (result != null) { result.close(); @@ -327,7 +339,7 @@ public class TestFrameworkUploader { * Test upload to HDFS. 
*/ @Test - public void testUpload() + void testUpload() throws IOException, UploaderException, InterruptedException { final String fileName = "/upload.tar.gz"; File parent = new File(testDir); @@ -351,14 +363,14 @@ public class TestFrameworkUploader { TarArchiveEntry entry2 = archiveInputStream.getNextTarEntry(); fileNames.add(entry2.getName()); sizes.add(entry2.getSize()); - Assert.assertTrue( - "File name error", fileNames.contains("a.jar")); - Assert.assertTrue( - "File size error", sizes.contains((long) 13)); - Assert.assertTrue( - "File name error", fileNames.contains("b.jar")); - Assert.assertTrue( - "File size error", sizes.contains((long) 14)); + assertTrue( + fileNames.contains("a.jar"), "File name error"); + assertTrue( + sizes.contains((long) 13), "File size error"); + assertTrue( + fileNames.contains("b.jar"), "File name error"); + assertTrue( + sizes.contains((long) 14), "File size error"); } } finally { FileUtils.deleteDirectory(parent); @@ -370,9 +382,9 @@ public class TestFrameworkUploader { */ private FrameworkUploader prepareTree(File parent) throws FileNotFoundException { - Assert.assertTrue(parent.mkdirs()); + assertTrue(parent.mkdirs()); File dirA = new File(parent, "A"); - Assert.assertTrue(dirA.mkdirs()); + assertTrue(dirA.mkdirs()); File jarA = new File(parent, "a.jar"); PrintStream printStream = new PrintStream(new FileOutputStream(jarA)); printStream.println("Hello World!"); @@ -393,7 +405,7 @@ public class TestFrameworkUploader { * Test regex pattern matching and environment variable replacement. */ @Test - public void testEnvironmentReplacement() throws UploaderException { + void testEnvironmentReplacement() throws UploaderException { String input = "C/$A/B,$B,D"; Map map = new HashMap<>(); map.put("A", "X"); @@ -401,7 +413,7 @@ public class TestFrameworkUploader { map.put("C", "Z"); FrameworkUploader uploader = new FrameworkUploader(); String output = uploader.expandEnvironmentVariables(input, map); - Assert.assertEquals("Environment not expanded", "C/X/B,Y,D", output); + assertEquals("C/X/B,Y,D", output, "Environment not expanded"); } @@ -409,7 +421,7 @@ public class TestFrameworkUploader { * Test regex pattern matching and environment variable replacement. */ @Test - public void testRecursiveEnvironmentReplacement() + void testRecursiveEnvironmentReplacement() throws UploaderException { String input = "C/$A/B,$B,D"; Map map = new HashMap<>(); @@ -418,7 +430,7 @@ public class TestFrameworkUploader { map.put("C", "Y"); FrameworkUploader uploader = new FrameworkUploader(); String output = uploader.expandEnvironmentVariables(input, map); - Assert.assertEquals("Environment not expanded", "C/X/B,Y,D", output); + assertEquals("C/X/B,Y,D", output, "Environment not expanded"); } @@ -426,20 +438,20 @@ public class TestFrameworkUploader { * Test native IO. 
*/ @Test - public void testNativeIO() throws IOException { + void testNativeIO() throws IOException { FrameworkUploader uploader = new FrameworkUploader(); File parent = new File(testDir); try { // Create a parent directory parent.deleteOnExit(); - Assert.assertTrue(parent.mkdirs()); + assertTrue(parent.mkdirs()); // Create a target file File targetFile = new File(parent, "a.txt"); - try(FileOutputStream os = new FileOutputStream(targetFile)) { + try (FileOutputStream os = new FileOutputStream(targetFile)) { IOUtils.writeLines(Lists.newArrayList("a", "b"), null, os, StandardCharsets.UTF_8); } - Assert.assertFalse(uploader.checkSymlink(targetFile)); + assertFalse(uploader.checkSymlink(targetFile)); // Create a symlink to the target File symlinkToTarget = new File(parent, "symlinkToTarget.txt"); @@ -449,22 +461,22 @@ public class TestFrameworkUploader { Paths.get(targetFile.getAbsolutePath())); } catch (UnsupportedOperationException e) { // Symlinks are not supported, so ignore the test - Assume.assumeTrue(false); + Assumptions.assumeTrue(false); } - Assert.assertTrue(uploader.checkSymlink(symlinkToTarget)); + assertTrue(uploader.checkSymlink(symlinkToTarget)); // Create a symlink to the target with /./ in the path symlinkToTarget = new File(parent.getAbsolutePath() + - "/./symlinkToTarget2.txt"); + "/./symlinkToTarget2.txt"); try { Files.createSymbolicLink( Paths.get(symlinkToTarget.getAbsolutePath()), Paths.get(targetFile.getAbsolutePath())); } catch (UnsupportedOperationException e) { // Symlinks are not supported, so ignore the test - Assume.assumeTrue(false); + Assumptions.assumeTrue(false); } - Assert.assertTrue(uploader.checkSymlink(symlinkToTarget)); + assertTrue(uploader.checkSymlink(symlinkToTarget)); // Create a symlink outside the current directory File symlinkOutside = new File(parent, "symlinkToParent.txt"); @@ -474,9 +486,9 @@ public class TestFrameworkUploader { Paths.get(parent.getAbsolutePath())); } catch (UnsupportedOperationException e) { // Symlinks are not supported, so ignore the test - Assume.assumeTrue(false); + Assumptions.assumeTrue(false); } - Assert.assertFalse(uploader.checkSymlink(symlinkOutside)); + assertFalse(uploader.checkSymlink(symlinkOutside)); } finally { FileUtils.forceDelete(parent); } @@ -484,14 +496,14 @@ public class TestFrameworkUploader { } @Test - public void testPermissionSettingsOnRestrictiveUmask() + void testPermissionSettingsOnRestrictiveUmask() throws Exception { File parent = new File(testDir); parent.deleteOnExit(); MiniDFSCluster cluster = null; try { - Assert.assertTrue("Directory creation failed", parent.mkdirs()); + assertTrue(parent.mkdirs(), "Directory creation failed"); Configuration hdfsConf = new HdfsConfiguration(); String namenodeDir = new File(MiniDFSCluster.getBaseDirectory(), "name").getAbsolutePath(); @@ -525,7 +537,7 @@ public class TestFrameworkUploader { FileStatus fileStatus = dfs.getFileStatus(new Path(targetPath)); FsPermission perm = fileStatus.getPermission(); - Assert.assertEquals("Permissions", new FsPermission(0644), perm); + assertEquals(new FsPermission(0644), perm, "Permissions"); } finally { if (cluster != null) { cluster.close(); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml index b394fe5be18..fdcab2f2ffb 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml @@ -130,7 +130,7 @@ io.netty - netty + netty-all commons-logging diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml index 11932e04e37..e5426a08b3c 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml +++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml @@ -70,40 +70,40 @@ runtime - org.apache.hadoop - hadoop-hdfs - test - test-jar - - - org.ow2.asm - asm-commons - - - - - org.apache.hadoop - hadoop-yarn-server-tests - test - test-jar - - - org.apache.hadoop - hadoop-mapreduce-client-app - provided - - - org.apache.hadoop - hadoop-mapreduce-client-app - test-jar - test - + org.apache.hadoop + hadoop-hdfs + test + test-jar + + + org.ow2.asm + asm-commons + + + + + org.apache.hadoop + hadoop-yarn-server-tests + test + test-jar + + + org.apache.hadoop + hadoop-mapreduce-client-app + provided + + + org.apache.hadoop + hadoop-mapreduce-client-app + test-jar + test + com.sun.jersey.jersey-test-framework jersey-test-framework-grizzly2 test - + org.apache.hadoop hadoop-mapreduce-client-hs test @@ -112,12 +112,13 @@ org.hsqldb hsqldb provided + jdk8 org.apache.hadoop.thirdparty hadoop-shaded-guava provided - + org.slf4j slf4j-api @@ -127,6 +128,21 @@ assertj-core test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.platform + junit-platform-launcher + test + diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestAggregateWordCount.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestAggregateWordCount.java index 2bc909c0c36..9828b24d98f 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestAggregateWordCount.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestAggregateWordCount.java @@ -21,8 +21,9 @@ import java.io.File; import java.io.IOException; import java.nio.charset.Charset; -import org.junit.After; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.apache.commons.io.IOUtils; import org.apache.hadoop.fs.FSDataInputStream; @@ -33,14 +34,19 @@ import org.apache.hadoop.mapred.HadoopTestCase; import org.apache.hadoop.util.ExitUtil; import org.apache.hadoop.util.ExitUtil.ExitException; -import static org.junit.Assert.assertEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; public class TestAggregateWordCount extends HadoopTestCase { public TestAggregateWordCount() throws IOException { super(LOCAL_MR, LOCAL_FS, 1, 1); } - @After + @BeforeEach + public void setUp() throws Exception { + super.setUp(); + } + + @AfterEach public void tearDown() throws Exception { FileSystem fs = getFileSystem(); if (fs != null) { @@ -58,7 +64,7 @@ public class TestAggregateWordCount extends HadoopTestCase { private static final Path OUTPUT_PATH = new Path(TEST_DIR, "outPath"); @Test - public void testAggregateTestCount() + void testAggregateTestCount() throws IOException, ClassNotFoundException, InterruptedException { ExitUtil.disableSystemExit(); @@ -70,7 +76,7 @@ public class TestAggregateWordCount extends HadoopTestCase { FileUtil.write(fs, file2, "Hello Hadoop"); String[] args = - new String[] {INPUT_PATH.toString(), OUTPUT_PATH.toString(), "1", + new String[]{INPUT_PATH.toString(), OUTPUT_PATH.toString(), "1", "textinputformat"}; // Run AggregateWordCount Job. 
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestBaileyBorweinPlouffe.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestBaileyBorweinPlouffe.java index 2df2df09117..ffec884795e 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestBaileyBorweinPlouffe.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestBaileyBorweinPlouffe.java @@ -18,35 +18,36 @@ package org.apache.hadoop.examples; import java.math.BigInteger; -import org.junit.Test; -import org.junit.Assert; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; /** Tests for BaileyBorweinPlouffe */ public class TestBaileyBorweinPlouffe { @Test - public void testMod() { + void testMod() { final BigInteger TWO = BigInteger.ONE.add(BigInteger.ONE); - for(long n = 3; n < 100; n++) { + for (long n = 3; n < 100; n++) { for (long e = 1; e < 100; e++) { final long r = TWO.modPow( BigInteger.valueOf(e), BigInteger.valueOf(n)).longValue(); - Assert.assertEquals("e=" + e + ", n=" + n, r, BaileyBorweinPlouffe - .mod(e, n)); + assertEquals(r, BaileyBorweinPlouffe + .mod(e, n), "e=" + e + ", n=" + n); } } } @Test - public void testHexDigit() { + void testHexDigit() { final long[] answers = {0x43F6, 0xA308, 0x29B7, 0x49F1, 0x8AC8, 0x35EA}; long d = 1; - for(int i = 0; i < answers.length; i++) { - Assert.assertEquals("d=" + d, answers[i], BaileyBorweinPlouffe - .hexDigits(d)); + for (int i = 0; i < answers.length; i++) { + assertEquals(answers[i], BaileyBorweinPlouffe + .hexDigits(d), "d=" + d); d *= 10; } - Assert.assertEquals(0x243FL, BaileyBorweinPlouffe.hexDigits(0)); - } + assertEquals(0x243FL, BaileyBorweinPlouffe.hexDigits(0)); + } } \ No newline at end of file diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestWordStats.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestWordStats.java index 3f0ffd63f6d..45aa4102bd1 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestWordStats.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/TestWordStats.java @@ -17,8 +17,6 @@ */ package org.apache.hadoop.examples; -import static org.junit.Assert.assertEquals; - import java.io.BufferedReader; import java.io.File; import java.io.IOException; @@ -31,9 +29,11 @@ import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.util.ToolRunner; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; public class TestWordStats { @@ -241,13 +241,14 @@ public class TestWordStats { return dir.delete(); } - @Before public void setup() throws Exception { + @BeforeEach public void setup() throws Exception { deleteDir(new File(MEAN_OUTPUT)); deleteDir(new File(MEDIAN_OUTPUT)); deleteDir(new File(STDDEV_OUTPUT)); } - @Test public void testGetTheMean() throws Exception { + @Test + void testGetTheMean() throws Exception { String args[] = new String[2]; args[0] = INPUT; args[1] = MEAN_OUTPUT; @@ -261,7 +262,8 @@ public 
class TestWordStats { assertEquals(mean, wr.read(INPUT), 0.0); } - @Test public void testGetTheMedian() throws Exception { + @Test + void testGetTheMedian() throws Exception { String args[] = new String[2]; args[0] = INPUT; args[1] = MEDIAN_OUTPUT; @@ -275,7 +277,8 @@ public class TestWordStats { assertEquals(median, wr.read(INPUT), 0.0); } - @Test public void testGetTheStandardDeviation() throws Exception { + @Test + void testGetTheStandardDeviation() throws Exception { String args[] = new String[2]; args[0] = INPUT; args[1] = STDDEV_OUTPUT; @@ -289,7 +292,7 @@ public class TestWordStats { assertEquals(stddev, wr.read(INPUT), 0.0); } - @AfterClass public static void cleanup() throws Exception { + @AfterAll public static void cleanup() throws Exception { deleteDir(new File(MEAN_OUTPUT)); deleteDir(new File(MEDIAN_OUTPUT)); deleteDir(new File(STDDEV_OUTPUT)); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestLongLong.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestLongLong.java index 232c53f4d47..318306cf9c3 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestLongLong.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestLongLong.java @@ -19,8 +19,9 @@ package org.apache.hadoop.examples.pi.math; import java.math.BigInteger; import java.util.Random; -import org.junit.Test; -import org.junit.Assert; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; public class TestLongLong { @@ -39,12 +40,12 @@ public class TestLongLong { "\na = %x\nb = %x\nll= " + ll + "\nbi= " + bi.toString(16) + "\n", a, b); //System.out.println(s); - Assert.assertEquals(s, bi, ll.toBigInteger()); + assertEquals(bi, ll.toBigInteger(), s); } @Test - public void testMultiplication() { - for(int i = 0; i < 100; i++) { + void testMultiplication() { + for (int i = 0; i < 100; i++) { final long a = nextPositiveLong(); final long b = nextPositiveLong(); verifyMultiplication(a, b); @@ -54,8 +55,8 @@ public class TestLongLong { } @Test - public void testRightShift() { - for(int i = 0; i < 1000; i++) { + void testRightShift() { + for (int i = 0; i < 1000; i++) { final long a = nextPositiveLong(); final long b = nextPositiveLong(); verifyRightShift(a, b); @@ -69,12 +70,12 @@ public class TestLongLong { final String s = String.format( "\na = %x\nb = %x\nll= " + ll + "\nbi= " + bi.toString(16) + "\n", a, b); - Assert.assertEquals(s, bi, ll.toBigInteger()); + assertEquals(bi, ll.toBigInteger(), s); for (int i = 0; i < LongLong.SIZE >> 1; i++) { final long result = ll.shiftRight(i) & MASK; final long expected = bi.shiftRight(i).longValue() & MASK; - Assert.assertEquals(s, expected, result); + assertEquals(expected, result, s); } } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestModular.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestModular.java index a75ec29d1c9..96e70efc76b 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestModular.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestModular.java @@ -21,8 +21,9 @@ import java.math.BigInteger; import java.util.Random; import 
org.apache.hadoop.examples.pi.Util.Timer; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; public class TestModular{ private static final Random RANDOM = new Random(); @@ -52,13 +53,13 @@ public class TestModular{ } @Test - public void testDiv() { - for(long n = 2; n < 100; n++) - for(long r = 1; r < n; r++) { + void testDiv() { + for (long n = 2; n < 100; n++) + for (long r = 1; r < n; r++) { final long a = div(0, r, n); - final long b = (long)((r*1.0/n) * (1L << DIV_VALID_BIT)); + final long b = (long) ((r * 1.0 / n) * (1L << DIV_VALID_BIT)); final String s = String.format("r=%d, n=%d, a=%X, b=%X", r, n, a, b); - Assert.assertEquals(s, b, a); + assertEquals(b, a, s); } } @@ -151,9 +152,8 @@ public class TestModular{ final long answer = rn[i][j][1]; final long s = square_slow(r, n); if (s != answer) { - Assert.assertEquals( - "r=" + r + ", n=" + n + ", answer=" + answer + " but s=" + s, - answer, s); + assertEquals( + answer, s, "r=" + r + ", n=" + n + ", answer=" + answer + " but s=" + s); } } } @@ -168,9 +168,8 @@ public class TestModular{ final long answer = rn[i][j][1]; final long s = square(r, n, r2p64); if (s != answer) { - Assert.assertEquals( - "r=" + r + ", n=" + n + ", answer=" + answer + " but s=" + s, - answer, s); + assertEquals( + answer, s, "r=" + r + ", n=" + n + ", answer=" + answer + " but s=" + s); } } } @@ -185,9 +184,8 @@ public class TestModular{ final BigInteger R = BigInteger.valueOf(r); final long s = R.multiply(R).mod(N).longValue(); if (s != answer) { - Assert.assertEquals( - "r=" + r + ", n=" + n + ", answer=" + answer + " but s=" + s, - answer, s); + assertEquals( + answer, s, "r=" + r + ", n=" + n + ", answer=" + answer + " but s=" + s); } } } @@ -202,9 +200,8 @@ public class TestModular{ final BigInteger R = BigInteger.valueOf(r); final long s = R.modPow(TWO, N).longValue(); if (s != answer) { - Assert.assertEquals( - "r=" + r + ", n=" + n + ", answer=" + answer + " but s=" + s, - answer, s); + assertEquals( + answer, s, "r=" + r + ", n=" + n + ", answer=" + answer + " but s=" + s); } } } @@ -299,9 +296,8 @@ public class TestModular{ final long answer = en[i][j][1]; final long s = Modular.mod(e, n); if (s != answer) { - Assert.assertEquals( - "e=" + e + ", n=" + n + ", answer=" + answer + " but s=" + s, - answer, s); + assertEquals( + answer, s, "e=" + e + ", n=" + n + ", answer=" + answer + " but s=" + s); } } } @@ -316,9 +312,8 @@ public class TestModular{ final long answer = en[i][j][1]; final long s = m2.mod(e); if (s != answer) { - Assert.assertEquals( - "e=" + e + ", n=" + n + ", answer=" + answer + " but s=" + s, - answer, s); + assertEquals( + answer, s, "e=" + e + ", n=" + n + ", answer=" + answer + " but s=" + s); } } } @@ -332,9 +327,8 @@ public class TestModular{ final long answer = en[i][j][1]; final long s = m2.mod2(e); if (s != answer) { - Assert.assertEquals( - "e=" + e + ", n=" + n + ", answer=" + answer + " but s=" + s, - answer, s); + assertEquals( + answer, s, "e=" + e + ", n=" + n + ", answer=" + answer + " but s=" + s); } } } @@ -348,9 +342,8 @@ public class TestModular{ final long answer = en[i][j][1]; final long s = TWO.modPow(BigInteger.valueOf(e), N).longValue(); if (s != answer) { - Assert.assertEquals( - "e=" + e + ", n=" + n + ", answer=" + answer + " but s=" + s, - answer, s); + assertEquals( + answer, s, "e=" + e + ", n=" + n + ", answer=" + answer + " but s=" + s); } } } diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestSummation.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestSummation.java index 2741962b329..92012ac4d95 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestSummation.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/pi/math/TestSummation.java @@ -28,8 +28,9 @@ import org.apache.hadoop.examples.pi.Container; import org.apache.hadoop.examples.pi.Util; import org.apache.hadoop.examples.pi.Util.Timer; import org.apache.hadoop.examples.pi.math.TestModular.Montgomery2; -import org.junit.Test; -import org.junit.Assert; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; public class TestSummation { static final Random RANDOM = new Random(); @@ -58,12 +59,12 @@ public class TestSummation { final List combined = Util.combine(a); // Util.out.println("combined=" + combined); - Assert.assertEquals(1, combined.size()); - Assert.assertEquals(sigma, combined.get(0)); + assertEquals(1, combined.size()); + assertEquals(sigma, combined.get(0)); } @Test - public void testSubtract() { + void testSubtract() { final Summation sigma = newSummation(3, 10000, 20); final int size = 10; final List parts = Arrays.asList(sigma.partition(size)); @@ -72,10 +73,10 @@ public class TestSummation { runTestSubtract(sigma, new ArrayList()); runTestSubtract(sigma, parts); - for(int n = 1; n < size; n++) { - for(int j = 0; j < 10; j++) { + for (int n = 1; n < size; n++) { + for (int j = 0; j < 10; j++) { final List diff = new ArrayList(parts); - for(int i = 0; i < n; i++) + for (int i = 0; i < n; i++) diff.remove(RANDOM.nextInt(diff.size())); /// Collections.sort(diff); runTestSubtract(sigma, diff); @@ -132,16 +133,16 @@ public class TestSummation { t.tick("sigma=" + sigma); final double value = sigma.compute(); t.tick("compute=" + value); - Assert.assertEquals(value, sigma.compute_modular(), DOUBLE_DELTA); + assertEquals(value, sigma.compute_modular(), DOUBLE_DELTA); t.tick("compute_modular"); - Assert.assertEquals(value, sigma.compute_montgomery(), DOUBLE_DELTA); + assertEquals(value, sigma.compute_montgomery(), DOUBLE_DELTA); t.tick("compute_montgomery"); - Assert.assertEquals(value, sigma.compute_montgomery2(), DOUBLE_DELTA); + assertEquals(value, sigma.compute_montgomery2(), DOUBLE_DELTA); t.tick("compute_montgomery2"); - Assert.assertEquals(value, sigma.compute_modBigInteger(), DOUBLE_DELTA); + assertEquals(value, sigma.compute_modBigInteger(), DOUBLE_DELTA); t.tick("compute_modBigInteger"); - Assert.assertEquals(value, sigma.compute_modPow(), DOUBLE_DELTA); + assertEquals(value, sigma.compute_modPow(), DOUBLE_DELTA); t.tick("compute_modPow"); } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/terasort/TestTeraSort.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/terasort/TestTeraSort.java index 1bc9f2f56a2..ef1688c0929 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/terasort/TestTeraSort.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/test/java/org/apache/hadoop/examples/terasort/TestTeraSort.java @@ -25,14 +25,15 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.mapred.FileAlreadyExistsException; 
import org.apache.hadoop.mapred.HadoopTestCase; import org.apache.hadoop.util.ToolRunner; -import org.junit.After; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.fail; public class TestTeraSort extends HadoopTestCase { private static final Logger LOG = LoggerFactory.getLogger(TestTeraSort.class); @@ -42,7 +43,12 @@ public class TestTeraSort extends HadoopTestCase { super(LOCAL_MR, LOCAL_FS, 1, 1); } - @After + @BeforeEach + public void setUp() throws Exception { + super.setUp(); + } + + @AfterEach public void tearDown() throws Exception { getFileSystem().delete(TEST_DIR, true); super.tearDown(); @@ -85,7 +91,7 @@ public class TestTeraSort extends HadoopTestCase { } @Test - public void testTeraSort() throws Exception { + void testTeraSort() throws Exception { // Run TeraGen to generate input for 'terasort' runTeraGen(createJobConf(), SORT_INPUT_PATH); @@ -110,11 +116,11 @@ public class TestTeraSort extends HadoopTestCase { // Run tera-validator to check if sort worked correctly runTeraValidator(createJobConf(), SORT_OUTPUT_PATH, - TERA_OUTPUT_PATH); + TERA_OUTPUT_PATH); } @Test - public void testTeraSortWithLessThanTwoArgs() throws Exception { + void testTeraSortWithLessThanTwoArgs() throws Exception { String[] args = new String[1]; assertThat(new TeraSort().run(args)).isEqualTo(2); } diff --git a/hadoop-mapreduce-project/pom.xml b/hadoop-mapreduce-project/pom.xml index 3ce66a10a84..21554090d78 100644 --- a/hadoop-mapreduce-project/pom.xml +++ b/hadoop-mapreduce-project/pom.xml @@ -41,117 +41,51 @@ hadoop-mapreduce-examples + - com.google.protobuf - protobuf-java - - - org.apache.avro - avro - - - org.eclipse.jetty - jetty-server - - - org.apache.ant - ant - - - io.netty - netty - - - org.apache.velocity - velocity - - - org.slf4j - slf4j-api - - - paranamer-ant - com.thoughtworks.paranamer - - - org.xerial.snappy - snappy-java - - + org.apache.hadoop + hadoop-mapreduce-client-app + ${project.version} org.apache.hadoop - hadoop-common - provided - - - - org.slf4j - slf4j-api - - - org.slf4j - slf4j-log4j12 + hadoop-mapreduce-client-common + ${project.version} org.apache.hadoop - hadoop-annotations - - - org.mockito - mockito-core - test + hadoop-mapreduce-client-core + ${project.version} org.apache.hadoop - hadoop-common - test-jar - test + hadoop-mapreduce-client-hs + ${project.version} org.apache.hadoop - hadoop-hdfs - test + hadoop-mapreduce-client-jobclient + ${project.version} - com.google.inject - guice + org.apache.hadoop + hadoop-mapreduce-client-nativetask + ${project.version} - com.sun.jersey - jersey-server + org.apache.hadoop + hadoop-mapreduce-client-shuffle + ${project.version} - com.sun.jersey.contribs - jersey-guice + org.apache.hadoop + hadoop-mapreduce-examples + ${project.version} - - com.google.inject.extensions - guice-servlet - - - junit - junit - - - io.netty - netty - - - commons-io - commons-io - - - org.hsqldb - hsqldb - compile - - - ${leveldbjni.group} - leveldbjni-all - - diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml index 3aba0f44d55..aada4e9b6de 100644 --- a/hadoop-project/pom.xml +++ b/hadoop-project/pom.xml @@ -31,7 +31,7 @@ - 2022 + 2023 false 
@@ -50,7 +50,7 @@ 2.12.2 - 2.8.1 + 2.8.2 1.0.13 @@ -70,7 +70,7 @@ 2.12.7 - 2.12.7 + 2.12.7.1 4.5.13 @@ -117,30 +117,29 @@ 1.15 3.2.2 1.21 - 1.0 + 1.9.0 2.11.0 3.12.0 1.1.3 1.1 3.6.1 - 3.8.0 - 1.9 + 3.9.0 + 1.10.0 2.0.2 1.0-alpha-1 3.3.1 4.0.3 6.2.1.jre7 - 2.7.5 - 4.9.3 - 1.4.10 - 1.4.10 + 4.10.0 + 3.2.0 + 1.6.20 + 1.6.20 2.0.6.1 5.2.0 2.2.21 2.9.0 3.2.4 - 3.10.6.Final 4.1.77.Final 1.1.8.2 1.7.1 @@ -174,7 +173,7 @@ 3.1.0 2.5.1 2.6 - 3.2.1 + 3.3.0 2.5 3.1.0 2.3 @@ -184,8 +183,8 @@ 1.3.1 1.0-beta-1 900 - 1.12.262 - 2.5.2 + 1.12.316 + 2.7.1 1.11.2 2.1 0.7 @@ -197,7 +196,7 @@ ${hadoop.version} 1.5.4 - 1.32 + 1.33 1.7.1 2.2.4 4.13.2 @@ -208,9 +207,9 @@ 3.9.0 1.5.6 8.8.2 - 1.0.7.Final + 1.1.3.Final 1.0.2 - 5.3.0 + 5.4.0 2.4.7 9.8.1 v12.22.1 @@ -234,8 +233,17 @@ org.jetbrains.kotlin kotlin-stdlib-common + + com.squareup.okio + okio-jvm + + + com.squareup.okio + okio-jvm + ${okio.version} + org.jetbrains.kotlin kotlin-stdlib @@ -255,8 +263,18 @@ com.squareup.okhttp3 mockwebserver - 4.9.3 + ${okhttp3.version} test + + + com.squareup.okio + okio-jvm + + + org.jetbrains.kotlin + kotlin-stdlib-jdk8 + + jdiff @@ -358,6 +376,11 @@ hadoop-hdfs-client ${hadoop.version} + + org.apache.hadoop + hadoop-hdfs-native-client + ${hadoop.version} + org.apache.hadoop hadoop-hdfs-rbf @@ -391,6 +414,11 @@ hadoop-mapreduce-client-common ${hadoop.version} + + org.apache.hadoop + hadoop-mapreduce-client-nativetask + ${hadoop.version} + org.apache.hadoop hadoop-yarn-api @@ -678,6 +706,7 @@ ${hadoop.version} + org.apache.hadoop hadoop-openstack @@ -970,13 +999,6 @@ - - - io.netty - netty - ${netty3.version} - - io.netty netty-all @@ -1469,6 +1491,7 @@ org.hsqldb hsqldb ${hsqldb.version} + jdk8 io.dropwizard.metrics @@ -1498,7 +1521,7 @@ org.codehaus.jettison jettison - 1.1 + 1.5.3 stax diff --git a/hadoop-project/src/site/markdown/index.md.vm b/hadoop-project/src/site/markdown/index.md.vm index edc38a52861..5e0a46449fa 100644 --- a/hadoop-project/src/site/markdown/index.md.vm +++ b/hadoop-project/src/site/markdown/index.md.vm @@ -15,226 +15,121 @@ Apache Hadoop ${project.version} ================================ -Apache Hadoop ${project.version} incorporates a number of significant -enhancements over the previous major release line (hadoop-2.x). +Apache Hadoop ${project.version} is an update to the Hadoop 3.3.x release branch. -This release is generally available (GA), meaning that it represents a point of -API stability and quality that we consider production-ready. - -Overview -======== +Overview of Changes +=================== Users are encouraged to read the full set of release notes. This page provides an overview of the major changes. -Minimum required Java version increased from Java 7 to Java 8 ------------------- +Azure ABFS: Critical Stream Prefetch Fix +--------------------------------------------- -All Hadoop JARs are now compiled targeting a runtime version of Java 8. -Users still using Java 7 or below must upgrade to Java 8. +The abfs has a critical bug fix +[HADOOP-18546](https://issues.apache.org/jira/browse/HADOOP-18546). +*ABFS. Disable purging list of in-progress reads in abfs stream close().* -Support for erasure coding in HDFS ------------------- +All users of the abfs connector in hadoop releases 3.3.2+ MUST either upgrade +or disable prefetching by setting `fs.azure.readaheadqueue.depth` to `0` -Erasure coding is a method for durably storing data with significant space -savings compared to replication. 
Standard encodings like Reed-Solomon (10,4) -have a 1.4x space overhead, compared to the 3x overhead of standard HDFS -replication. +Consult the parent JIRA [HADOOP-18521](https://issues.apache.org/jira/browse/HADOOP-18521) +*ABFS ReadBufferManager buffer sharing across concurrent HTTP requests* +for root cause analysis, details on what is affected, and mitigations. -Since erasure coding imposes additional overhead during reconstruction -and performs mostly remote reads, it has traditionally been used for -storing colder, less frequently accessed data. Users should consider -the network and CPU overheads of erasure coding when deploying this -feature. -More details are available in the -[HDFS Erasure Coding](./hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html) -documentation. - -YARN Timeline Service v.2 -------------------- - -We are introducing an early preview (alpha 2) of a major revision of YARN -Timeline Service: v.2. YARN Timeline Service v.2 addresses two major -challenges: improving scalability and reliability of Timeline Service, and -enhancing usability by introducing flows and aggregation. - -YARN Timeline Service v.2 alpha 2 is provided so that users and developers -can test it and provide feedback and suggestions for making it a ready -replacement for Timeline Service v.1.x. It should be used only in a test -capacity. - -More details are available in the -[YARN Timeline Service v.2](./hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html) -documentation. - -Shell script rewrite -------------------- - -The Hadoop shell scripts have been rewritten to fix many long-standing -bugs and include some new features. While an eye has been kept towards -compatibility, some changes may break existing installations. - -Incompatible changes are documented in the release notes, with related -discussion on [HADOOP-9902](https://issues.apache.org/jira/browse/HADOOP-9902). - -More details are available in the -[Unix Shell Guide](./hadoop-project-dist/hadoop-common/UnixShellGuide.html) -documentation. Power users will also be pleased by the -[Unix Shell API](./hadoop-project-dist/hadoop-common/UnixShellAPI.html) -documentation, which describes much of the new functionality, particularly -related to extensibility. - -Shaded client jars ------------------- - -The `hadoop-client` Maven artifact available in 2.x releases pulls -Hadoop's transitive dependencies onto a Hadoop application's classpath. -This can be problematic if the versions of these transitive dependencies -conflict with the versions used by the application. - -[HADOOP-11804](https://issues.apache.org/jira/browse/HADOOP-11804) adds -new `hadoop-client-api` and `hadoop-client-runtime` artifacts that -shade Hadoop's dependencies into a single jar. This avoids leaking -Hadoop's dependencies onto the application's classpath. - -Support for Opportunistic Containers and Distributed Scheduling. --------------------- - -A notion of `ExecutionType` has been introduced, whereby Applications can -now request for containers with an execution type of `Opportunistic`. -Containers of this type can be dispatched for execution at an NM even if -there are no resources available at the moment of scheduling. In such a -case, these containers will be queued at the NM, waiting for resources to -be available for it to start. Opportunistic containers are of lower priority -than the default `Guaranteed` containers and are therefore preempted, -if needed, to make room for Guaranteed containers. This should -improve cluster utilization. 
- -Opportunistic containers are by default allocated by the central RM, but -support has also been added to allow opportunistic containers to be -allocated by a distributed scheduler which is implemented as an -AMRMProtocol interceptor. - -Please see [documentation](./hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html) -for more details. - -MapReduce task-level native optimization --------------------- - -MapReduce has added support for a native implementation of the map output -collector. For shuffle-intensive jobs, this can lead to a performance -improvement of 30% or more. - -See the release notes for -[MAPREDUCE-2841](https://issues.apache.org/jira/browse/MAPREDUCE-2841) -for more detail. - -Support for more than 2 NameNodes. --------------------- - -The initial implementation of HDFS NameNode high-availability provided -for a single active NameNode and a single Standby NameNode. By replicating -edits to a quorum of three JournalNodes, this architecture is able to -tolerate the failure of any one node in the system. - -However, some deployments require higher degrees of fault-tolerance. -This is enabled by this new feature, which allows users to run multiple -standby NameNodes. For instance, by configuring three NameNodes and -five JournalNodes, the cluster is able to tolerate the failure of two -nodes rather than just one. - -The [HDFS high-availability documentation](./hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html) -has been updated with instructions on how to configure more than two -NameNodes. - -Default ports of multiple services have been changed. ------------------------- - -Previously, the default ports of multiple Hadoop services were in the -Linux ephemeral port range (32768-61000). This meant that at startup, -services would sometimes fail to bind to the port due to a conflict -with another application. - -These conflicting ports have been moved out of the ephemeral range, -affecting the NameNode, Secondary NameNode, DataNode, and KMS. Our -documentation has been updated appropriately, but see the release -notes for [HDFS-9427](https://issues.apache.org/jira/browse/HDFS-9427) and -[HADOOP-12811](https://issues.apache.org/jira/browse/HADOOP-12811) -for a list of port changes. - -Support for Microsoft Azure Data Lake and Aliyun Object Storage System filesystem connectors ---------------------- - -Hadoop now supports integration with Microsoft Azure Data Lake and -Aliyun Object Storage System as alternative Hadoop-compatible filesystems. - -Intra-datanode balancer -------------------- - -A single DataNode manages multiple disks. During normal write operation, -disks will be filled up evenly. However, adding or replacing disks can -lead to significant skew within a DataNode. This situation is not handled -by the existing HDFS balancer, which concerns itself with inter-, not intra-, -DN skew. - -This situation is handled by the new intra-DataNode balancing -functionality, which is invoked via the `hdfs diskbalancer` CLI. -See the disk balancer section in the -[HDFS Commands Guide](./hadoop-project-dist/hadoop-hdfs/HDFSCommands.html) -for more information. - -Reworked daemon and task heap management ---------------------- - -A series of changes have been made to heap management for Hadoop daemons -as well as MapReduce tasks. - -[HADOOP-10950](https://issues.apache.org/jira/browse/HADOOP-10950) introduces -new methods for configuring daemon heap sizes. 
-Notably, auto-tuning is now possible based on the memory size of the host, -and the `HADOOP_HEAPSIZE` variable has been deprecated. -See the full release notes of HADOOP-10950 for more detail. - -[MAPREDUCE-5785](https://issues.apache.org/jira/browse/MAPREDUCE-5785) -simplifies the configuration of map and reduce task -heap sizes, so the desired heap size no longer needs to be specified -in both the task configuration and as a Java option. -Existing configs that already specify both are not affected by this change. -See the full release notes of MAPREDUCE-5785 for more details. - -HDFS Router-Based Federation ---------------------- -HDFS Router-Based Federation adds a RPC routing layer that provides a federated -view of multiple HDFS namespaces. This is similar to the existing -[ViewFs](./hadoop-project-dist/hadoop-hdfs/ViewFs.html)) and -[HDFS Federation](./hadoop-project-dist/hadoop-hdfs/Federation.html) -functionality, except the mount table is managed on the server-side by the -routing layer rather than on the client. This simplifies access to a federated -cluster for existing HDFS clients. - -See [HDFS-10467](https://issues.apache.org/jira/browse/HDFS-10467) and the -HDFS Router-based Federation -[documentation](./hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html) for -more details. - -API-based configuration of Capacity Scheduler queue configuration ----------------------- - -The OrgQueue extension to the capacity scheduler provides a programmatic way to -change configurations by providing a REST API that users can call to modify -queue configurations. This enables automation of queue configuration management -by administrators in the queue's `administer_queue` ACL. - -See [YARN-5734](https://issues.apache.org/jira/browse/YARN-5734) and the -[Capacity Scheduler documentation](./hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html) for more information. - -YARN Resource Types +Vectored IO API --------------- -The YARN resource model has been generalized to support user-defined countable resource types beyond CPU and memory. For instance, the cluster administrator could define resources like GPUs, software licenses, or locally-attached storage. YARN tasks can then be scheduled based on the availability of these resources. +[HADOOP-18103](https://issues.apache.org/jira/browse/HADOOP-18103). +*High performance vectored read API in Hadoop* -See [YARN-3926](https://issues.apache.org/jira/browse/YARN-3926) and the [YARN resource model documentation](./hadoop-yarn/hadoop-yarn-site/ResourceModel.html) for more information. +The `PositionedReadable` interface now includes an operation for +Vectored IO (also known as Scatter/Gather IO): + +```java +void readVectored(List<? extends FileRange> ranges, IntFunction<ByteBuffer> allocate) +``` + +All the requested ranges will be retrieved into the supplied byte buffers, possibly asynchronously, +possibly in parallel, with results potentially arriving out of order. + +1. The default implementation uses a series of `readFully()` calls, so delivers + performance equivalent to issuing the same positioned reads directly. +2. The local filesystem uses Java native IO calls for higher performance reads than `readFully()`. +3. The S3A filesystem issues parallel HTTP GET requests in different threads. + +Benchmarking of enhanced Apache ORC and Apache Parquet clients through `file://` and `s3a://` +shows significant improvements in query performance. + +Further Reading: [FsDataInputStream](./hadoop-project-dist/hadoop-common/filesystem/fsdatainputstream.html). 
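To illustrate the API above, here is a minimal usage sketch of a vectored read through `FSDataInputStream`. It is not taken from this patch; the path, offsets and lengths are illustrative assumptions.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Minimal sketch: fetch two ranges of one file through the vectored IO API. */
public class VectoredReadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Any Hadoop path works; file:// and s3a:// have optimized implementations.
    Path path = new Path(args[0]);
    FileSystem fs = path.getFileSystem(conf);
    try (FSDataInputStream in = fs.open(path)) {
      // Offsets and lengths here are arbitrary example values.
      List<FileRange> ranges = Arrays.asList(
          FileRange.createFileRange(0, 4096),
          FileRange.createFileRange(1 << 20, 8192));
      // Buffers may be filled asynchronously, in parallel, and out of order.
      in.readVectored(ranges, ByteBuffer::allocate);
      for (FileRange range : ranges) {
        // Each range exposes a CompletableFuture<ByteBuffer>; get() waits for it.
        ByteBuffer data = range.getData().get();
        System.out.printf("range @%d: %d bytes%n", range.getOffset(), data.remaining());
      }
    }
  }
}
```

`FutureIO.awaitFuture()` may be preferred over the raw `get()` call, as it rethrows failures as `IOException`s.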
+ +MapReduce: Manifest Committer for Azure ABFS and Google GCS +---------------------------------------------------------- + +The new _Intermediate Manifest Committer_ uses a manifest file +to commit the work of successful task attempts, rather than +renaming directories. +Job commit is a matter of reading all the manifests, creating the +destination directories (parallelized) and renaming the files, +again in parallel. + +This is both fast and correct on Azure Storage and Google GCS, +and should be used there instead of the classic v1/v2 file +output committers. + +It is also safe to use on HDFS, where it should be faster +than the v1 committer. It is, however, optimized for +cloud storage where list and rename operations are significantly +slower; the benefits may be less there. + +More details are available in the +[manifest committer](./hadoop-mapreduce-client/hadoop-mapreduce-client-core/manifest_committer.html) +documentation. + + +HDFS: Router Based Federation +----------------------------- + +A lot of effort has been invested into stabilizing/improving the HDFS Router Based Federation feature. + +1. HDFS-13522, HDFS-16767 & related JIRAs: Allow Observer Reads in HDFS Router Based Federation. +2. HDFS-13248: RBF supports Client Locality + +HDFS: Dynamic DataNode Reconfiguration +-------------------------------------- + +HDFS-16400, HDFS-16399, HDFS-16396, HDFS-16397, HDFS-16413, HDFS-16457. + +A number of DataNode configuration options can be changed without having to restart +the DataNode. This makes it possible to tune deployment configurations without +cluster-wide DataNode restarts. + +See [DataNode.java](https://github.com/apache/hadoop/blob/branch-3.3.5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L346-L361) +for the list of dynamically reconfigurable attributes. + + +Transitive CVE fixes +-------------------- + +A lot of dependencies have been upgraded to address recent CVEs. +Many of the CVEs were not actually exploitable through Hadoop, +so much of this work is just due diligence. +However, applications which have all the libraries on their classpath may +be vulnerable, and the upgrades should also reduce the number of false +positives reported by security scanners. + +We have not been able to upgrade every single dependency to the latest +version available. Some of those upgrades would simply be incompatible. +If you have concerns about the state of a specific library, consult the Apache JIRA +issue tracker to see whether a JIRA has been filed, whether discussions have taken place about +the library in question, and whether or not there is already a fix in the pipeline. +*Please don't file new JIRAs about dependency-X.Y.Z having a CVE without +searching for any existing issue first.* + +As an open source project, contributions in this area are always welcome, +especially in testing the active branches, testing applications downstream of +those branches, and checking whether updated dependencies trigger regressions. Getting Started =============== @@ -246,3 +141,4 @@ which shows you how to set up a single-node Hadoop installation. Then move on to the [Cluster Setup](./hadoop-project-dist/hadoop-common/ClusterSetup.html) to learn how to set up a multi-node Hadoop installation. 
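Relating to the manifest committer section above: the committer is selected through the per-scheme committer factory mechanism, so a job can opt in purely through configuration. The sketch below shows one possible binding for `abfs://` and `gs://` destinations; the property keys and factory class name are assumptions recalled from the manifest committer documentation and should be verified against the linked page for the release in use.

```java
import org.apache.hadoop.conf.Configuration;

/**
 * Hedged sketch: route job output on abfs:// and gs:// destinations through
 * the manifest committer by binding its factory per filesystem scheme.
 * The property keys and factory class name below are assumptions; check them
 * against the manifest committer documentation before relying on them.
 */
public final class ManifestCommitterBinding {

  private ManifestCommitterBinding() {
  }

  public static void bind(Configuration conf) {
    final String factory =
        "org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterFactory";
    // Per-scheme committer factory keys consulted when a job writes to that scheme.
    conf.set("mapreduce.outputcommitter.factory.scheme.abfs", factory);
    conf.set("mapreduce.outputcommitter.factory.scheme.gs", factory);
  }
}
```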
+ diff --git a/hadoop-project/src/site/site.xml b/hadoop-project/src/site/site.xml index 6c0233877b0..8e85f379ef7 100644 --- a/hadoop-project/src/site/site.xml +++ b/hadoop-project/src/site/site.xml @@ -178,7 +178,6 @@ - diff --git a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java index e9ac1ddea9e..156af04babf 100644 --- a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java +++ b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java @@ -72,6 +72,7 @@ import java.util.Comparator; import java.util.List; import java.util.ListIterator; import java.util.NoSuchElementException; +import java.util.stream.Collectors; import static org.apache.hadoop.fs.aliyun.oss.Constants.*; @@ -203,31 +204,29 @@ public class AliyunOSSFileSystemStore { int retry = 10; int tries = 0; - List deleteFailed = keysToDelete; - while(CollectionUtils.isNotEmpty(deleteFailed)) { + while (CollectionUtils.isNotEmpty(keysToDelete)) { DeleteObjectsRequest deleteRequest = new DeleteObjectsRequest(bucketName); - deleteRequest.setKeys(deleteFailed); + deleteRequest.setKeys(keysToDelete); // There are two modes to do batch delete: - // 1. detail mode: DeleteObjectsResult.getDeletedObjects returns objects - // which were deleted successfully. - // 2. simple mode: DeleteObjectsResult.getDeletedObjects returns objects - // which were deleted unsuccessfully. - // Here, we choose the simple mode to do batch delete. - deleteRequest.setQuiet(true); + // 1. verbose mode: A list of all deleted objects is returned. + // 2. quiet mode: No message body is returned. + // Here, we choose the verbose mode to do batch delete. + deleteRequest.setQuiet(false); DeleteObjectsResult result = ossClient.deleteObjects(deleteRequest); statistics.incrementWriteOps(1); - deleteFailed = result.getDeletedObjects(); + final List deletedObjects = result.getDeletedObjects(); + keysToDelete = keysToDelete.stream().filter(item -> !deletedObjects.contains(item)) + .collect(Collectors.toList()); tries++; if (tries == retry) { break; } } - if (tries == retry && CollectionUtils.isNotEmpty(deleteFailed)) { + if (tries == retry && CollectionUtils.isNotEmpty(keysToDelete)) { // Most of time, it is impossible to try 10 times, expect the // Aliyun OSS service problems. 
- throw new IOException("Failed to delete Aliyun OSS objects for " + - tries + " times."); + throw new IOException("Failed to delete Aliyun OSS objects for " + tries + " times."); } } diff --git a/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java b/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java index f85871dd86e..e57aa63aff4 100644 --- a/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java +++ b/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java @@ -18,9 +18,12 @@ package org.apache.hadoop.fs.aliyun.oss; +import com.aliyun.oss.model.OSSObjectSummary; import com.aliyun.oss.model.ObjectMetadata; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.contract.ContractTestUtils; + import org.junit.After; import org.junit.Before; import org.junit.BeforeClass; @@ -36,7 +39,10 @@ import java.security.DigestInputStream; import java.security.DigestOutputStream; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; +import java.util.ArrayList; +import java.util.List; +import static org.apache.hadoop.fs.aliyun.oss.Constants.MAX_PAGING_KEYS_DEFAULT; import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; @@ -128,4 +134,29 @@ public class TestAliyunOSSFileSystemStore { writeRenameReadCompare(new Path("/test/xlarge"), Constants.MULTIPART_UPLOAD_PART_SIZE_DEFAULT + 1); } + + @Test + public void testDeleteObjects() throws IOException, NoSuchAlgorithmException { + // generate test files + final int files = 10; + final long size = 5 * 1024 * 1024; + final String prefix = "dir"; + for (int i = 0; i < files; i++) { + Path path = new Path(String.format("/%s/testFile-%d.txt", prefix, i)); + ContractTestUtils.generateTestFile(this.fs, path, size, 256, 255); + } + OSSListRequest listRequest = + store.createListObjectsRequest(prefix, MAX_PAGING_KEYS_DEFAULT, null, null, true); + List keysToDelete = new ArrayList<>(); + OSSListResult objects = store.listObjects(listRequest); + assertEquals(files, objects.getObjectSummaries().size()); + + // test delete files + for (OSSObjectSummary objectSummary : objects.getObjectSummaries()) { + keysToDelete.add(objectSummary.getKey()); + } + store.deleteObjects(keysToDelete); + objects = store.listObjects(listRequest); + assertEquals(0, objects.getObjectSummaries().size()); + } } diff --git a/hadoop-tools/hadoop-archive-logs/src/site/markdown/HadoopArchiveLogs.md b/hadoop-tools/hadoop-archive-logs/src/site/markdown/HadoopArchiveLogs.md index e95112cfc71..406c2d5adec 100644 --- a/hadoop-tools/hadoop-archive-logs/src/site/markdown/HadoopArchiveLogs.md +++ b/hadoop-tools/hadoop-archive-logs/src/site/markdown/HadoopArchiveLogs.md @@ -76,7 +76,7 @@ The tool works by performing the following procedure: - its aggregation status has successfully completed - has at least ``-minNumberLogFiles`` log files - the sum of its log files size is less than ``-maxTotalLogsSize`` megabytes - 2. If there are are more than ``-maxEligibleApps`` applications found, the + 2. If there are more than ``-maxEligibleApps`` applications found, the newest applications are dropped. They can be processed next time. 3. A shell script is generated based on the eligible applications 4. 
The Distributed Shell program is run with the aformentioned script. It diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java index 856d1dfb97b..16472a75fd2 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java @@ -212,6 +212,8 @@ public final class Constants { public static final String PROXY_PASSWORD = "fs.s3a.proxy.password"; public static final String PROXY_DOMAIN = "fs.s3a.proxy.domain"; public static final String PROXY_WORKSTATION = "fs.s3a.proxy.workstation"; + /** Is the proxy secured(proxyProtocol = HTTPS)? */ + public static final String PROXY_SECURED = "fs.s3a.proxy.ssl.enabled"; /** * Number of times the AWS client library should retry errors before @@ -249,12 +251,12 @@ public final class Constants { public static final boolean EXPERIMENTAL_AWS_INTERNAL_THROTTLING_DEFAULT = true; - // seconds until we give up trying to establish a connection to s3 + // milliseconds until we give up trying to establish a connection to s3 public static final String ESTABLISH_TIMEOUT = "fs.s3a.connection.establish.timeout"; public static final int DEFAULT_ESTABLISH_TIMEOUT = 50000; - // seconds until we give up on a connection to s3 + // milliseconds until we give up on a connection to s3 public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout"; public static final int DEFAULT_SOCKET_TIMEOUT = 200000; diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3A.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3A.java index 78643cc5e04..ec433fa95c2 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3A.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3A.java @@ -33,10 +33,10 @@ import java.net.URISyntaxException; */ @InterfaceAudience.Public @InterfaceStability.Evolving -public class S3A extends DelegateToFileSystem{ +public class S3A extends DelegateToFileSystem { public S3A(URI theUri, Configuration conf) - throws IOException, URISyntaxException { + throws IOException, URISyntaxException { super(theUri, new S3AFileSystem(), conf, "s3a", false); } @@ -54,4 +54,13 @@ public class S3A extends DelegateToFileSystem{ sb.append('}'); return sb.toString(); } + + /** + * Close the file system; the FileContext API doesn't have an explicit close. 
+ */ + @Override + protected void finalize() throws Throwable { + fsImpl.close(); + super.finalize(); + } } diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java index 3e6f2322d3b..cb17b80fb6a 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java @@ -138,7 +138,6 @@ import org.apache.hadoop.fs.s3a.tools.MarkerToolOperationsImpl; import org.apache.hadoop.fs.statistics.DurationTracker; import org.apache.hadoop.fs.statistics.DurationTrackerFactory; import org.apache.hadoop.fs.statistics.IOStatistics; -import org.apache.hadoop.fs.statistics.IOStatisticsLogging; import org.apache.hadoop.fs.statistics.IOStatisticsSource; import org.apache.hadoop.fs.statistics.IOStatisticsContext; import org.apache.hadoop.fs.statistics.impl.IOStatisticsStore; @@ -459,6 +458,13 @@ public class S3AFileSystem extends FileSystem implements StreamCapabilities, AuditSpan span = null; try { LOG.debug("Initializing S3AFileSystem for {}", bucket); + if (LOG.isTraceEnabled()) { + // log a full trace for deep diagnostics of where an object is created, + // for tracking down memory leak issues. + LOG.trace("Filesystem for {} created; fs.s3a.impl.disable.cache = {}", + name, originalConf.getBoolean("fs.s3a.impl.disable.cache", false), + new RuntimeException(super.toString())); + } // clone the configuration into one with propagated bucket options Configuration conf = propagateBucketOptions(originalConf, bucket); // HADOOP-17894. remove references to s3a stores in JCEKS credentials. @@ -525,8 +531,7 @@ public class S3AFileSystem extends FileSystem implements StreamCapabilities, this.prefetchEnabled = conf.getBoolean(PREFETCH_ENABLED_KEY, PREFETCH_ENABLED_DEFAULT); long prefetchBlockSizeLong = - longBytesOption(conf, PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE, - PREFETCH_BLOCK_DEFAULT_SIZE); + longBytesOption(conf, PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE, 1); if (prefetchBlockSizeLong > (long) Integer.MAX_VALUE) { throw new IOException("S3A prefatch block size exceeds int limit"); } @@ -3999,22 +4004,18 @@ public class S3AFileSystem extends FileSystem implements StreamCapabilities, } isClosed = true; LOG.debug("Filesystem {} is closed", uri); - if (getConf() != null) { - String iostatisticsLoggingLevel = - getConf().getTrimmed(IOSTATISTICS_LOGGING_LEVEL, - IOSTATISTICS_LOGGING_LEVEL_DEFAULT); - logIOStatisticsAtLevel(LOG, iostatisticsLoggingLevel, getIOStatistics()); - } try { super.close(); } finally { stopAllServices(); - } - // Log IOStatistics at debug. 
- if (LOG.isDebugEnabled()) { - // robust extract and convert to string - LOG.debug("Statistics for {}: {}", uri, - IOStatisticsLogging.ioStatisticsToPrettyString(getIOStatistics())); + // log IO statistics, including of any file deletion during + // superclass close + if (getConf() != null) { + String iostatisticsLoggingLevel = + getConf().getTrimmed(IOSTATISTICS_LOGGING_LEVEL, + IOSTATISTICS_LOGGING_LEVEL_DEFAULT); + logIOStatisticsAtLevel(LOG, iostatisticsLoggingLevel, getIOStatistics()); + } } } diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java index 39d41f5ffd2..4b50ab2c04b 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java @@ -910,21 +910,15 @@ public class S3AInputStream extends FSInputStream implements CanSetReadahead, private void readCombinedRangeAndUpdateChildren(CombinedFileRange combinedFileRange, IntFunction allocate) { LOG.debug("Start reading combined range {} from path {} ", combinedFileRange, pathStr); - // This reference is must be kept till all buffers are populated as this is a + // This reference must be kept till all buffers are populated as this is a // finalizable object which closes the internal stream when gc triggers. S3Object objectRange = null; S3ObjectInputStream objectContent = null; try { - checkIfVectoredIOStopped(); - final String operationName = "readCombinedFileRange"; - objectRange = getS3Object(operationName, + objectRange = getS3ObjectAndValidateNotNull("readCombinedFileRange", combinedFileRange.getOffset(), combinedFileRange.getLength()); objectContent = objectRange.getObjectContent(); - if (objectContent == null) { - throw new PathIOException(uri, - "Null IO stream received during " + operationName); - } populateChildBuffers(combinedFileRange, objectContent, allocate); } catch (Exception ex) { LOG.debug("Exception while reading a range {} from path {} ", combinedFileRange, pathStr, ex); @@ -1019,19 +1013,15 @@ public class S3AInputStream extends FSInputStream implements CanSetReadahead, */ private void readSingleRange(FileRange range, ByteBuffer buffer) { LOG.debug("Start reading range {} from path {} ", range, pathStr); + // This reference must be kept till all buffers are populated as this is a + // finalizable object which closes the internal stream when gc triggers. S3Object objectRange = null; S3ObjectInputStream objectContent = null; try { - checkIfVectoredIOStopped(); long position = range.getOffset(); int length = range.getLength(); - final String operationName = "readRange"; - objectRange = getS3Object(operationName, position, length); + objectRange = getS3ObjectAndValidateNotNull("readSingleRange", position, length); objectContent = objectRange.getObjectContent(); - if (objectContent == null) { - throw new PathIOException(uri, - "Null IO stream received during " + operationName); - } populateBuffer(length, buffer, objectContent); range.getData().complete(buffer); } catch (Exception ex) { @@ -1043,6 +1033,29 @@ public class S3AInputStream extends FSInputStream implements CanSetReadahead, LOG.debug("Finished reading range {} from path {} ", range, pathStr); } + /** + * Get the s3 object for S3 server for a specified range. + * Also checks if the vectored io operation has been stopped before and after + * the http get request such that we don't waste time populating the buffers. 
+ * @param operationName name of the operation for which get object on S3 is called. + * @param position position of the object to be read from S3. + * @param length length from position of the object to be read from S3. + * @return result s3 object. + * @throws IOException exception if any. + */ + private S3Object getS3ObjectAndValidateNotNull(final String operationName, + final long position, + final int length) throws IOException { + checkIfVectoredIOStopped(); + S3Object objectRange = getS3Object(operationName, position, length); + if (objectRange.getObjectContent() == null) { + throw new PathIOException(uri, + "Null IO stream received during " + operationName); + } + checkIfVectoredIOStopped(); + return objectRange; + } + /** * Populates the buffer with data from objectContent * till length. Handles both direct and heap byte buffers. @@ -1151,6 +1164,7 @@ public class S3AInputStream extends FSInputStream implements CanSetReadahead, */ @InterfaceAudience.Private @InterfaceStability.Unstable + @VisibleForTesting public S3AInputStreamStatistics getS3AStreamStatistics() { return streamStatistics; } diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java index 46568ec2a8d..9d33efa9d01 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java @@ -27,6 +27,7 @@ import org.slf4j.LoggerFactory; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.impl.WeakRefMetricsSource; import org.apache.hadoop.fs.s3a.statistics.BlockOutputStreamStatistics; import org.apache.hadoop.fs.s3a.statistics.ChangeTrackerStatistics; import org.apache.hadoop.fs.s3a.statistics.CommitterStatistics; @@ -160,7 +161,10 @@ public class S3AInstrumentation implements Closeable, MetricsSource, private final DurationTrackerFactory durationTrackerFactory; - private String metricsSourceName; + /** + * Weak reference so there's no back reference to the instrumentation. + */ + private WeakRefMetricsSource metricsSourceReference; private final MetricsRegistry registry = new MetricsRegistry("s3aFileSystem").setContext(CONTEXT); @@ -233,19 +237,33 @@ public class S3AInstrumentation implements Closeable, MetricsSource, new MetricDurationTrackerFactory()); } + /** + * Get the current metrics system; demand creating. + * @return a metric system, creating if need be. + */ @VisibleForTesting - public MetricsSystem getMetricsSystem() { + static MetricsSystem getMetricsSystem() { synchronized (METRICS_SYSTEM_LOCK) { if (metricsSystem == null) { metricsSystem = new MetricsSystemImpl(); metricsSystem.init(METRICS_SYSTEM_NAME); + LOG.debug("Metrics system inited {}", metricsSystem); } } return metricsSystem; } /** - * Register this instance as a metrics source. + * Does the instrumentation have a metrics system? + * @return true if the metrics system is present. + */ + @VisibleForTesting + static boolean hasMetricSystem() { + return metricsSystem != null; + } + + /** + * Register this instance as a metrics source via a weak reference. 
* @param name s3a:// URI for the associated FileSystem instance */ private void registerAsMetricsSource(URI name) { @@ -257,8 +275,9 @@ public class S3AInstrumentation implements Closeable, MetricsSource, number = ++metricsSourceNameCounter; } String msName = METRICS_SOURCE_BASENAME + number; - metricsSourceName = msName + "-" + name.getHost(); - metricsSystem.register(metricsSourceName, "", this); + String metricsSourceName = msName + "-" + name.getHost(); + metricsSourceReference = new WeakRefMetricsSource(metricsSourceName, this); + metricsSystem.register(metricsSourceName, "", metricsSourceReference); } /** @@ -680,19 +699,42 @@ public class S3AInstrumentation implements Closeable, MetricsSource, registry.snapshot(collector.addRecord(registry.info().name()), true); } + /** + * if registered with the metrics, return the + * name of the source. + * @return the name of the metrics, or null if this instance is not bonded. + */ + public String getMetricSourceName() { + return metricsSourceReference != null + ? metricsSourceReference.getName() + : null; + } + public void close() { - synchronized (METRICS_SYSTEM_LOCK) { - // it is critical to close each quantile, as they start a scheduled - // task in a shared thread pool. - throttleRateQuantile.stop(); - metricsSystem.unregisterSource(metricsSourceName); - metricsSourceActiveCounter--; - int activeSources = metricsSourceActiveCounter; - if (activeSources == 0) { - LOG.debug("Shutting down metrics publisher"); - metricsSystem.publishMetricsNow(); - metricsSystem.shutdown(); - metricsSystem = null; + if (metricsSourceReference != null) { + // get the name + String name = metricsSourceReference.getName(); + LOG.debug("Unregistering metrics for {}", name); + // then set to null so a second close() is a noop here. + metricsSourceReference = null; + synchronized (METRICS_SYSTEM_LOCK) { + // it is critical to close each quantile, as they start a scheduled + // task in a shared thread pool. 
+ if (metricsSystem == null) { + LOG.debug("there is no metric system to unregister {} from", name); + return; + } + throttleRateQuantile.stop(); + + metricsSystem.unregisterSource(name); + metricsSourceActiveCounter--; + int activeSources = metricsSourceActiveCounter; + if (activeSources == 0) { + LOG.debug("Shutting down metrics publisher"); + metricsSystem.publishMetricsNow(); + metricsSystem.shutdown(); + metricsSystem = null; + } } } } diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java index 7cc7d635c51..8a1947f3e42 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java @@ -640,7 +640,10 @@ public final class S3AUtils { AWSCredentialProviderList providers = new AWSCredentialProviderList(); for (Class aClass : awsClasses) { - if (aClass.getName().contains(AWS_AUTH_CLASS_PREFIX)) { + // List of V1 credential providers that will be migrated with V2 upgrade + if (!Arrays.asList("EnvironmentVariableCredentialsProvider", + "EC2ContainerCredentialsProviderWrapper", "InstanceProfileCredentialsProvider") + .contains(aClass.getSimpleName()) && aClass.getName().contains(AWS_AUTH_CLASS_PREFIX)) { V2Migration.v1ProviderReferenced(aClass.getName()); } @@ -1348,13 +1351,17 @@ public final class S3AUtils { LOG.error(msg); throw new IllegalArgumentException(msg); } + boolean isProxySecured = conf.getBoolean(PROXY_SECURED, false); awsConf.setProxyUsername(proxyUsername); awsConf.setProxyPassword(proxyPassword); awsConf.setProxyDomain(conf.getTrimmed(PROXY_DOMAIN)); awsConf.setProxyWorkstation(conf.getTrimmed(PROXY_WORKSTATION)); + awsConf.setProxyProtocol(isProxySecured ? 
Protocol.HTTPS : Protocol.HTTP); if (LOG.isDebugEnabled()) { - LOG.debug("Using proxy server {}:{} as user {} with password {} on " + - "domain {} as workstation {}", awsConf.getProxyHost(), + LOG.debug("Using proxy server {}://{}:{} as user {} with password {} " + + "on domain {} as workstation {}", + awsConf.getProxyProtocol(), + awsConf.getProxyHost(), awsConf.getProxyPort(), String.valueOf(awsConf.getProxyUsername()), awsConf.getProxyPassword(), awsConf.getProxyDomain(), diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/impl/LoggingAuditor.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/impl/LoggingAuditor.java index da1f5b59bdc..feb926a0bfc 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/impl/LoggingAuditor.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/impl/LoggingAuditor.java @@ -25,6 +25,7 @@ import java.util.HashMap; import java.util.Map; import com.amazonaws.AmazonWebServiceRequest; +import com.amazonaws.services.s3.model.GetObjectRequest; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -35,6 +36,7 @@ import org.apache.hadoop.fs.audit.CommonAuditContext; import org.apache.hadoop.fs.s3a.audit.AWSRequestAnalyzer; import org.apache.hadoop.fs.s3a.audit.AuditFailureException; import org.apache.hadoop.fs.s3a.audit.AuditSpanS3A; +import org.apache.hadoop.fs.store.LogExactlyOnce; import org.apache.hadoop.fs.store.audit.HttpReferrerAuditHeader; import org.apache.hadoop.security.UserGroupInformation; @@ -110,6 +112,14 @@ public class LoggingAuditor */ private Collection filters; + /** + * Log for warning of problems getting the range of GetObjectRequest + * will only log of a problem once per process instance. + * This is to avoid logs being flooded with errors. + */ + private static final LogExactlyOnce WARN_INCORRECT_RANGE = + new LogExactlyOnce(LOG); + /** * Create the auditor. * The UGI current user is used to provide the principal; @@ -230,6 +240,26 @@ public class LoggingAuditor private final HttpReferrerAuditHeader referrer; + /** + * Attach Range of data for GetObject Request. + * @param request given get object request + */ + private void attachRangeFromRequest(AmazonWebServiceRequest request) { + if (request instanceof GetObjectRequest) { + long[] rangeValue = ((GetObjectRequest) request).getRange(); + if (rangeValue == null || rangeValue.length == 0) { + return; + } + if (rangeValue.length != 2) { + WARN_INCORRECT_RANGE.warn("Expected range to contain 0 or 2 elements." + + " Got {} elements. Ignoring.", rangeValue.length); + return; + } + String combinedRangeValue = String.format("%d-%d", rangeValue[0], rangeValue[1]); + referrer.set(AuditConstants.PARAM_RANGE, combinedRangeValue); + } + } + private final String description; private LoggingAuditSpan( @@ -314,6 +344,8 @@ public class LoggingAuditor @Override public T beforeExecution( final T request) { + // attach range for GetObject requests + attachRangeFromRequest(request); // build the referrer header final String header = referrer.buildHttpReferrer(); // update the outer class's field. 
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AbstractSessionCredentialsProvider.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AbstractSessionCredentialsProvider.java index c316b91116f..5b1829e0961 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AbstractSessionCredentialsProvider.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AbstractSessionCredentialsProvider.java @@ -45,7 +45,7 @@ public abstract class AbstractSessionCredentialsProvider extends AbstractAWSCredentialProvider { /** Credentials, created in {@link #init()}. */ - private AWSCredentials awsCredentials; + private volatile AWSCredentials awsCredentials; /** Atomic flag for on-demand initialization. */ private final AtomicBoolean initialized = new AtomicBoolean(false); @@ -54,7 +54,7 @@ public abstract class AbstractSessionCredentialsProvider * The (possibly translated) initialization exception. * Used for testing. */ - private IOException initializationException; + private volatile IOException initializationException; /** * Constructor. @@ -73,9 +73,9 @@ public abstract class AbstractSessionCredentialsProvider * @throws IOException on any failure. */ @Retries.OnceTranslated - protected void init() throws IOException { + protected synchronized void init() throws IOException { // stop re-entrant attempts - if (initialized.getAndSet(true)) { + if (isInitialized()) { return; } try { @@ -84,6 +84,8 @@ public abstract class AbstractSessionCredentialsProvider } catch (IOException e) { initializationException = e; throw e; + } finally { + initialized.set(true); } } @@ -132,13 +134,15 @@ public abstract class AbstractSessionCredentialsProvider } if (awsCredentials == null) { throw new CredentialInitializationException( - "Provider " + this + " has no credentials"); + "Provider " + this + " has no credentials: " + + (initializationException != null ? initializationException.toString() : ""), + initializationException); } return awsCredentials; } public final boolean hasCredentials() { - return awsCredentials == null; + return awsCredentials != null; } @Override diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirMarkerTracker.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirMarkerTracker.java index ca04fed65a5..9390c699335 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirMarkerTracker.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirMarkerTracker.java @@ -34,20 +34,20 @@ import org.apache.hadoop.fs.s3a.S3ALocatedFileStatus; * Tracks directory markers which have been reported in object listings. * This is needed for auditing and cleanup, including during rename * operations. - *

+ *

* Designed to be used while scanning through the results of listObject * calls, where are we assume the results come in alphanumeric sort order * and parent entries before children. - *

+ *

* This lets as assume that we can identify all leaf markers as those * markers which were added to set of leaf markers and not subsequently * removed as a child entries were discovered. - *

+ *

* To avoid scanning datastructures excessively, the path of the parent * directory of the last file added is cached. This allows for a * quick bailout when many children of the same directory are * returned in a listing. - *

+ *

* Consult the directory_markers document for details on this feature, * including terminology. */ @@ -106,7 +106,7 @@ public class DirMarkerTracker { /** * Construct. - *

+ *

* The base path is currently only used for information rather than * validating paths supplied in other methods. * @param basePath base path of track @@ -128,7 +128,7 @@ public class DirMarkerTracker { /** * A marker has been found; this may or may not be a leaf. - *

+ *

* Trigger a move of all markers above it into the surplus map. * @param path marker path * @param key object key @@ -160,7 +160,7 @@ public class DirMarkerTracker { /** * A path has been found. - *

+ *

* Declare all markers above it as surplus * @param path marker path * @param key object key @@ -187,7 +187,7 @@ public class DirMarkerTracker { /** * Remove all markers from the path and its parents from the * {@link #leafMarkers} map. - *

+ *

* if {@link #recordSurplusMarkers} is true, the marker is * moved to the surplus map. Not doing this is simply an * optimisation designed to reduce risk of excess memory consumption @@ -223,7 +223,7 @@ public class DirMarkerTracker { /** * Get the map of surplus markers. - *

+ *

* Empty if they were not being recorded. * @return all surplus markers. */ diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirectoryPolicy.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirectoryPolicy.java index 36dd2e4fd24..6ba74c7e971 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirectoryPolicy.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DirectoryPolicy.java @@ -69,21 +69,21 @@ public interface DirectoryPolicy { /** * Delete markers. - *

+ *

* This is the classic S3A policy, */ Delete(DIRECTORY_MARKER_POLICY_DELETE), /** * Keep markers. - *

+ *

* This is Not backwards compatible. */ Keep(DIRECTORY_MARKER_POLICY_KEEP), /** * Keep markers in authoritative paths only. - *

+ *

* This is Not backwards compatible within the * auth paths, but is outside these. */ diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/OperationCallbacks.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/OperationCallbacks.java index ecfe2c0ba0a..5d17ae91b81 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/OperationCallbacks.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/OperationCallbacks.java @@ -119,6 +119,7 @@ public interface OperationCallbacks { * There's no update of metadata, directory markers, etc. * Callers must implement. * @param srcKey source object path + * @param destKey destination object path * @param srcAttributes S3 attributes of the source object * @param readContext the read context * @return the result of the copy diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java index bc9ad669b56..ae4d2fe7a34 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java @@ -53,16 +53,16 @@ import static org.apache.hadoop.util.Preconditions.checkArgument; /** * A parallelized rename operation. - *

+ *

* The parallel execution is in groups of size * {@link InternalConstants#RENAME_PARALLEL_LIMIT}; it is only * after one group completes that the next group is initiated. - *

+ *

* Once enough files have been copied that they meet the * {@link InternalConstants#MAX_ENTRIES_TO_DELETE} threshold, a delete * is initiated. * If it succeeds, the rename continues with the next group of files. - *

+ *

* Directory Markers which have child entries are never copied; only those * which represent empty directories are copied in the rename. * The {@link DirMarkerTracker} tracks which markers must be copied, and @@ -71,10 +71,10 @@ import static org.apache.hadoop.util.Preconditions.checkArgument; * the copied tree. This is to ensure that even if a directory tree * is copied from an authoritative path to a non-authoritative one * there is never any contamination of the non-auth path with markers. - *

+ *

* The rename operation implements the classic HDFS rename policy of * rename(file, dir) renames the file under the directory. - *

+ *

* * There is no validation of input and output paths. * Callers are required to themselves verify that destination is not under @@ -178,7 +178,7 @@ public class RenameOperation extends ExecutingStoreOperation { /** * Queue an object for deletion. - *

+ *

* This object will be deleted when the next page of objects to delete * is posted to S3. Therefore, the COPY must have finished * before that deletion operation takes place. @@ -204,9 +204,9 @@ public class RenameOperation extends ExecutingStoreOperation { /** * Queue a list of markers for deletion. - *

+ *

* no-op if the list is empty. - *

+ *

* See {@link #queueToDelete(Path, String)} for * details on safe use of this method. * @@ -221,7 +221,7 @@ public class RenameOperation extends ExecutingStoreOperation { /** * Queue a single marker for deletion. - *

+ *

* See {@link #queueToDelete(Path, String)} for * details on safe use of this method. * @@ -427,7 +427,7 @@ public class RenameOperation extends ExecutingStoreOperation { /** * Operations to perform at the end of every loop iteration. - *

+ *

* This may block the thread waiting for copies to complete * and/or delete a page of data. */ @@ -448,11 +448,11 @@ public class RenameOperation extends ExecutingStoreOperation { /** * Process all directory markers at the end of the rename. * All leaf markers are queued to be copied in the store; - *

+ *

* Why not simply create new markers? All the metadata * gets copied too, so if there was anything relevant then * it would be preserved. - *

+ *

* At the same time: markers aren't valued much and may * be deleted without any safety checks -so if there was relevant * data it is at risk of destruction at any point. @@ -461,7 +461,7 @@ public class RenameOperation extends ExecutingStoreOperation { * Be advised though: the costs of the copy not withstanding, * it is a lot easier to have one single type of scheduled copy operation * than have copy and touch calls being scheduled. - *

+ *

* The duration returned is the time to initiate all copy/delete operations, * including any blocking waits for active copies and paged deletes * to execute. There may still be outstanding operations diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3ACachingInputStream.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3ACachingInputStream.java index 0afd0712464..f9ee4e412fc 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3ACachingInputStream.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3ACachingInputStream.java @@ -28,6 +28,7 @@ import org.apache.hadoop.fs.impl.prefetch.BlockData; import org.apache.hadoop.fs.impl.prefetch.BlockManager; import org.apache.hadoop.fs.impl.prefetch.BufferData; import org.apache.hadoop.fs.impl.prefetch.ExecutorServiceFuturePool; +import org.apache.hadoop.fs.impl.prefetch.FilePosition; import org.apache.hadoop.fs.s3a.S3AInputStream; import org.apache.hadoop.fs.s3a.S3AReadOpContext; import org.apache.hadoop.fs.s3a.S3ObjectAttributes; @@ -84,46 +85,6 @@ public class S3ACachingInputStream extends S3ARemoteInputStream { fileSize); } - /** - * Moves the current read position so that the next read will occur at {@code pos}. - * - * @param pos the next read will take place at this position. - * - * @throws IllegalArgumentException if pos is outside of the range [0, file size]. - */ - @Override - public void seek(long pos) throws IOException { - throwIfClosed(); - throwIfInvalidSeek(pos); - - // The call to setAbsolute() returns true if the target position is valid and - // within the current block. Therefore, no additional work is needed when we get back true. - if (!getFilePosition().setAbsolute(pos)) { - LOG.info("seek({})", getOffsetStr(pos)); - // We could be here in two cases: - // -- the target position is invalid: - // We ignore this case here as the next read will return an error. - // -- it is valid but outside of the current block. - if (getFilePosition().isValid()) { - // There are two cases to consider: - // -- the seek was issued after this buffer was fully read. - // In this case, it is very unlikely that this buffer will be needed again; - // therefore we release the buffer without caching. - // -- if we are jumping out of the buffer before reading it completely then - // we will likely need this buffer again (as observed empirically for Parquet) - // therefore we issue an async request to cache this buffer. - if (!getFilePosition().bufferFullyRead()) { - blockManager.requestCaching(getFilePosition().data()); - } else { - blockManager.release(getFilePosition().data()); - } - getFilePosition().invalidate(); - blockManager.cancelPrefetches(); - } - setSeekTargetPos(pos); - } - } - @Override public void close() throws IOException { // Close the BlockManager first, cancelling active prefetches, @@ -139,36 +100,45 @@ public class S3ACachingInputStream extends S3ARemoteInputStream { return false; } - if (getFilePosition().isValid() && getFilePosition() - .buffer() - .hasRemaining()) { - return true; - } - - long readPos; - int prefetchCount; - - if (getFilePosition().isValid()) { - // A sequential read results in a prefetch. - readPos = getFilePosition().absolute(); - prefetchCount = numBlocksToPrefetch; - } else { - // A seek invalidates the current position. - // We prefetch only 1 block immediately after a seek operation. 
- readPos = getSeekTargetPos(); - prefetchCount = 1; - } - + long readPos = getNextReadPos(); if (!getBlockData().isValidOffset(readPos)) { return false; } - if (getFilePosition().isValid()) { - if (getFilePosition().bufferFullyRead()) { - blockManager.release(getFilePosition().data()); + // Determine whether this is an out of order read. + FilePosition filePosition = getFilePosition(); + boolean outOfOrderRead = !filePosition.setAbsolute(readPos); + + if (!outOfOrderRead && filePosition.buffer().hasRemaining()) { + // Use the current buffer. + return true; + } + + if (filePosition.isValid()) { + // We are jumping out of the current buffer. There are two cases to consider: + if (filePosition.bufferFullyRead()) { + // This buffer was fully read: + // it is very unlikely that this buffer will be needed again; + // therefore we release the buffer without caching. + blockManager.release(filePosition.data()); } else { - blockManager.requestCaching(getFilePosition().data()); + // We will likely need this buffer again (as observed empirically for Parquet) + // therefore we issue an async request to cache this buffer. + blockManager.requestCaching(filePosition.data()); } + filePosition.invalidate(); + } + + int prefetchCount; + if (outOfOrderRead) { + LOG.debug("lazy-seek({})", getOffsetStr(readPos)); + blockManager.cancelPrefetches(); + + // We prefetch only 1 block immediately after a seek operation. + prefetchCount = 1; + } else { + // A sequential read results in a prefetch. + prefetchCount = numBlocksToPrefetch; } int toBlockNumber = getBlockData().getBlockNumber(readPos); @@ -186,7 +156,7 @@ public class S3ACachingInputStream extends S3ARemoteInputStream { .trackDuration(STREAM_READ_BLOCK_ACQUIRE_AND_READ), () -> blockManager.get(toBlockNumber)); - getFilePosition().setData(data, startOffset, readPos); + filePosition.setData(data, startOffset, readPos); return true; } @@ -197,7 +167,7 @@ public class S3ACachingInputStream extends S3ARemoteInputStream { } StringBuilder sb = new StringBuilder(); - sb.append(String.format("fpos = (%s)%n", getFilePosition())); + sb.append(String.format("%s%n", super.toString())); sb.append(blockManager.toString()); return sb.toString(); } diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3AInMemoryInputStream.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3AInMemoryInputStream.java index 322459a9589..e8bfe946f4a 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3AInMemoryInputStream.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3AInMemoryInputStream.java @@ -26,6 +26,7 @@ import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.hadoop.fs.impl.prefetch.BufferData; +import org.apache.hadoop.fs.impl.prefetch.FilePosition; import org.apache.hadoop.fs.s3a.S3AInputStream; import org.apache.hadoop.fs.s3a.S3AReadOpContext; import org.apache.hadoop.fs.s3a.S3ObjectAttributes; @@ -86,7 +87,12 @@ public class S3AInMemoryInputStream extends S3ARemoteInputStream { return false; } - if (!getFilePosition().isValid()) { + FilePosition filePosition = getFilePosition(); + if (filePosition.isValid()) { + // Update current position (lazy seek). + filePosition.setAbsolute(getNextReadPos()); + } else { + // Read entire file into buffer. 
buffer.clear(); int numBytesRead = getReader().read(buffer, 0, buffer.capacity()); @@ -94,9 +100,9 @@ public class S3AInMemoryInputStream extends S3ARemoteInputStream { return false; } BufferData data = new BufferData(0, buffer); - getFilePosition().setData(data, 0, getSeekTargetPos()); + filePosition.setData(data, 0, getNextReadPos()); } - return getFilePosition().buffer().hasRemaining(); + return filePosition.buffer().hasRemaining(); } } diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3APrefetchingInputStream.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3APrefetchingInputStream.java index 76ef942ed65..f778f40b74c 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3APrefetchingInputStream.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3APrefetchingInputStream.java @@ -26,6 +26,7 @@ import org.slf4j.LoggerFactory; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.fs.CanSetReadahead; import org.apache.hadoop.fs.FSExceptionMessages; import org.apache.hadoop.fs.FSInputStream; @@ -56,6 +57,21 @@ public class S3APrefetchingInputStream */ private S3ARemoteInputStream inputStream; + /** + * To be only used by synchronized getPos(). + */ + private long lastReadCurrentPos = 0; + + /** + * To be only used by getIOStatistics(). + */ + private IOStatistics ioStatistics = null; + + /** + * To be only used by getS3AStreamStatistics(). + */ + private S3AInputStreamStatistics inputStreamStatistics = null; + /** * Initializes a new instance of the {@code S3APrefetchingInputStream} class. * @@ -115,14 +131,20 @@ public class S3APrefetchingInputStream } /** - * Gets the current position. + * Gets the current position. If the underlying S3 input stream is closed, + * it returns last read current position from the underlying steam. If the + * current position was never read and the underlying input stream is closed, + * this would return 0. * * @return the current position. * @throws IOException if there is an IO error during this operation. */ @Override public synchronized long getPos() throws IOException { - return isClosed() ? 
0 : inputStream.getPos(); + if (!isClosed()) { + lastReadCurrentPos = inputStream.getPos(); + } + return lastReadCurrentPos; } /** @@ -215,11 +237,12 @@ public class S3APrefetchingInputStream */ @InterfaceAudience.Private @InterfaceStability.Unstable + @VisibleForTesting public S3AInputStreamStatistics getS3AStreamStatistics() { - if (isClosed()) { - return null; + if (!isClosed()) { + inputStreamStatistics = inputStream.getS3AStreamStatistics(); } - return inputStream.getS3AStreamStatistics(); + return inputStreamStatistics; } /** @@ -229,10 +252,10 @@ public class S3APrefetchingInputStream */ @Override public IOStatistics getIOStatistics() { - if (isClosed()) { - return null; + if (!isClosed()) { + ioStatistics = inputStream.getIOStatistics(); } - return inputStream.getIOStatistics(); + return ioStatistics; } protected boolean isClosed() { @@ -249,7 +272,6 @@ public class S3APrefetchingInputStream @Override public boolean seekToNewSource(long targetPos) throws IOException { - throwIfClosed(); return false; } diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3ARemoteInputStream.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3ARemoteInputStream.java index 0f46a8ed5e5..38d740bd74f 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3ARemoteInputStream.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/prefetch/S3ARemoteInputStream.java @@ -77,12 +77,16 @@ public abstract class S3ARemoteInputStream private volatile boolean closed; /** - * Current position within the file. + * Internal position within the file. Updated lazily + * after a seek before a read. */ private FilePosition fpos; - /** The target of the most recent seek operation. */ - private long seekTargetPos; + /** + * This is the actual position within the file, used by + * lazy seek to decide whether to seek on the next read or not. + */ + private long nextReadPos; /** Information about each block of the mapped S3 file. */ private BlockData blockData; @@ -146,7 +150,7 @@ public abstract class S3ARemoteInputStream this.remoteObject = getS3File(); this.reader = new S3ARemoteObjectReader(remoteObject); - this.seekTargetPos = 0; + this.nextReadPos = 0; } /** @@ -212,7 +216,8 @@ public abstract class S3ARemoteInputStream public int available() throws IOException { throwIfClosed(); - if (!ensureCurrentBuffer()) { + // Update the current position in the current buffer, if possible. 
+ if (!fpos.setAbsolute(nextReadPos)) { return 0; } @@ -228,11 +233,7 @@ public abstract class S3ARemoteInputStream public long getPos() throws IOException { throwIfClosed(); - if (fpos.isValid()) { - return fpos.absolute(); - } else { - return seekTargetPos; - } + return nextReadPos; } /** @@ -247,10 +248,7 @@ public abstract class S3ARemoteInputStream throwIfClosed(); throwIfInvalidSeek(pos); - if (!fpos.setAbsolute(pos)) { - fpos.invalidate(); - seekTargetPos = pos; - } + nextReadPos = pos; } /** @@ -268,7 +266,7 @@ public abstract class S3ARemoteInputStream throwIfClosed(); if (remoteObject.size() == 0 - || seekTargetPos >= remoteObject.size()) { + || nextReadPos >= remoteObject.size()) { return -1; } @@ -276,6 +274,7 @@ public abstract class S3ARemoteInputStream return -1; } + nextReadPos++; incrementBytesRead(1); return fpos.buffer().get() & 0xff; @@ -315,7 +314,7 @@ public abstract class S3ARemoteInputStream } if (remoteObject.size() == 0 - || seekTargetPos >= remoteObject.size()) { + || nextReadPos >= remoteObject.size()) { return -1; } @@ -334,6 +333,7 @@ public abstract class S3ARemoteInputStream ByteBuffer buf = fpos.buffer(); int bytesToRead = Math.min(numBytesRemaining, buf.remaining()); buf.get(buffer, offset, bytesToRead); + nextReadPos += bytesToRead; incrementBytesRead(bytesToRead); offset += bytesToRead; numBytesRemaining -= bytesToRead; @@ -367,12 +367,8 @@ public abstract class S3ARemoteInputStream return closed; } - protected long getSeekTargetPos() { - return seekTargetPos; - } - - protected void setSeekTargetPos(long pos) { - seekTargetPos = pos; + protected long getNextReadPos() { + return nextReadPos; } protected BlockData getBlockData() { @@ -443,6 +439,18 @@ public abstract class S3ARemoteInputStream return false; } + @Override + public String toString() { + if (isClosed()) { + return "closed"; + } + + StringBuilder sb = new StringBuilder(); + sb.append(String.format("nextReadPos = (%d)%n", nextReadPos)); + sb.append(String.format("fpos = (%s)", fpos)); + return sb.toString(); + } + protected void throwIfClosed() throws IOException { if (closed) { throw new IOException( @@ -453,6 +461,8 @@ public abstract class S3ARemoteInputStream protected void throwIfInvalidSeek(long pos) throws EOFException { if (pos < 0) { throw new EOFException(FSExceptionMessages.NEGATIVE_SEEK + " " + pos); + } else if (pos > this.getBlockData().getFileSize()) { + throw new EOFException(FSExceptionMessages.CANNOT_SEEK_PAST_EOF + " " + pos); } } diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java index 1d52b0a34ea..3c16d87fe13 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java @@ -82,7 +82,7 @@ import static org.apache.hadoop.service.launcher.LauncherExitCodes.*; /** * CLI to manage S3Guard Metadata Store. - *

+ *

* Some management tools invoke this class directly. */ @InterfaceAudience.LimitedPrivate("management tools") @@ -526,7 +526,6 @@ public abstract class S3GuardTool extends Configured implements Tool, * Validate the marker options. * @param out output stream * @param fs filesystem - * @param path test path * @param marker desired marker option -may be null. */ private void processMarkerOption(final PrintStream out, diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java index 230f07793d9..4ddc5f9478b 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java @@ -147,7 +147,7 @@ public final class MarkerTool extends S3GuardTool { /** * Constant to use when there is no limit on the number of * objects listed: {@value}. - *

+ *

* The value is 0 and not -1 because it allows for the limit to be * set on the command line {@code -limit 0}. * The command line parser rejects {@code -limit -1} as the -1 @@ -475,17 +475,23 @@ public final class MarkerTool extends S3GuardTool { '}'; } - /** Exit code to report. */ + /** + * @return Exit code to report. + */ public int getExitCode() { return exitCode; } - /** Tracker which did the scan. */ + /** + * @return Tracker which did the scan. + */ public DirMarkerTracker getTracker() { return tracker; } - /** Summary of purge. Null if none took place. */ + /** + * @return Summary of purge. Null if none took place. + */ public MarkerPurgeSummary getPurgeSummary() { return purgeSummary; } @@ -661,7 +667,7 @@ public final class MarkerTool extends S3GuardTool { * @param path path to scan * @param tracker tracker to update * @param limit limit of files to scan; -1 for 'unlimited' - * @return true if the scan completedly scanned the entire tree + * @return true if the scan completely scanned the entire tree * @throws IOException IO failure */ @Retries.RetryTranslated @@ -840,6 +846,7 @@ public final class MarkerTool extends S3GuardTool { * Execute the marker tool, with no checks on return codes. * * @param scanArgs set of args for the scanner. + * @throws IOException IO failure * @return the result */ @SuppressWarnings("IOResourceOpenedButNotSafelyClosed") @@ -853,9 +860,9 @@ public final class MarkerTool extends S3GuardTool { /** * Arguments for the scan. - *

+ *

* Uses a builder/argument object because too many arguments were - * being created and it was making maintenance harder. + * being created, and it was making maintenance harder. */ public static final class ScanArgs { @@ -960,43 +967,71 @@ public final class MarkerTool extends S3GuardTool { /** Consider only markers in nonauth paths as errors. */ private boolean nonAuth = false; - /** Source FS; must be or wrap an S3A FS. */ + /** + * Source FS; must be or wrap an S3A FS. + * @param source Source FileSystem + * @return the builder class after scanning source FS + */ public ScanArgsBuilder withSourceFS(final FileSystem source) { this.sourceFS = source; return this; } - /** Path to scan. */ + /** + * Path to scan. + * @param p path to scan + * @return builder class for method chaining + */ public ScanArgsBuilder withPath(final Path p) { this.path = p; return this; } - /** Purge? */ + /** + * Should the markers be purged? This is also enabled when using the clean flag on the CLI. + * @param d set to purge if true + * @return builder class for method chaining + */ public ScanArgsBuilder withDoPurge(final boolean d) { this.doPurge = d; return this; } - /** Min marker count (ignored on purge). */ + /** + * Min marker count an audit must find (ignored on purge). + * @param min Minimum Marker Count (default 0) + * @return builder class for method chaining + */ public ScanArgsBuilder withMinMarkerCount(final int min) { this.minMarkerCount = min; return this; } - /** Max marker count (ignored on purge). */ + /** + * Max marker count an audit must find (ignored on purge). + * @param max Maximum Marker Count (default 0) + * @return builder class for method chaining + */ public ScanArgsBuilder withMaxMarkerCount(final int max) { this.maxMarkerCount = max; return this; } - /** Limit of files to scan; 0 for 'unlimited'. */ + /** + * Limit of files to scan; 0 for 'unlimited'. + * @param l Limit of files to scan + * @return builder class for method chaining + */ public ScanArgsBuilder withLimit(final int l) { this.limit = l; return this; } - /** Consider only markers in nonauth paths as errors. */ + /** + * Consider only markers in non-authoritative paths as errors. + * @param b True if tool should only consider markers in non-authoritative paths + * @return builder class for method chaining + */ public ScanArgsBuilder withNonAuth(final boolean b) { this.nonAuth = b; return this; diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/auditing.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/auditing.md index 8ccc36cf83b..2248a959993 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/auditing.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/auditing.md @@ -232,6 +232,7 @@ If any of the field values were `null`, the field is omitted. | `p2` | Path 2 of operation | `s3a://alice-london/path2` | | `pr` | Principal | `alice` | | `ps` | Unique process UUID | `235865a0-d399-4696-9978-64568db1b51c` | +| `rg` | GET request range | `100-200` | | `ta` | Task Attempt ID (S3A committer) | | | `t0` | Thread 0: thread span was created in | `100` | | `t1` | Thread 1: thread this operation was executed in | `200` | @@ -410,7 +411,7 @@ log4j.logger.org.apache.hadoop.fs.s3a.audit.impl.LoggingAuditor=DEBUG This adds one log line per request -and does provide some insight into communications between the S3A client and AWS S3. 
-For low-level debugging of the Auditing system, such as when when spans are +For low-level debugging of the Auditing system, such as when spans are entered and exited, set the log to `TRACE`: ``` diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committer_architecture.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committer_architecture.md index b69be8ae336..30fcf157862 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committer_architecture.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committer_architecture.md @@ -63,7 +63,7 @@ entries, the duration of the lock is low. In contrast to a "real" filesystem, Amazon's S3A object store, similar to most others, does not support `rename()` at all. A hash operation on the filename -determines the location of of the data —there is no separate metadata to change. +determines the location of the data —there is no separate metadata to change. To mimic renaming, the Hadoop S3A client has to copy the data to a new object with the destination filename, then delete the original entry. This copy can be executed server-side, but as it does not complete until the in-cluster @@ -79,7 +79,7 @@ The solution to this problem is closely coupled to the S3 protocol itself: delayed completion of multi-part PUT operations That is: tasks write all data as multipart uploads, *but delay the final -commit action until until the final, single job commit action.* Only that +commit action until the final, single job commit action.* Only that data committed in the job commit action will be made visible; work from speculative and failed tasks will not be instantiated. As there is no rename, there is no delay while data is copied from a temporary directory to the final directory. @@ -307,7 +307,7 @@ def isCommitJobRepeatable() : Accordingly, it is a failure point in the protocol. With a low number of files and fast rename/list algorithms, the window of vulnerability is low. At scale, the vulnerability increases. It could actually be reduced through -parallel execution of the renaming of of committed tasks. +parallel execution of the renaming of committed tasks. ### Job Abort diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md index cfeff28d54e..38aea18cad1 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md @@ -51,7 +51,7 @@ obsolete. ## Introduction: The Commit Problem Apache Hadoop MapReduce (and behind the scenes, Apache Spark) often write -the output of their work to filesystems +the output of their work to filesystems. Normally, Hadoop uses the `FileOutputFormatCommitter` to manage the promotion of files created in a single task attempt to the final output of @@ -68,37 +68,37 @@ process across the cluster may rename a file or directory to the same path. If the rename fails for any reason, either the data is at the original location, or it is at the destination, -in which case the rename actually succeeded. 
-**The S3 object store and the `s3a://` filesystem client cannot meet these requirements.* +_The S3 object store and the `s3a://` filesystem client cannot meet these requirements._ -Although S3A is (now) consistent, the S3A client still mimics `rename()` +Although S3 is (now) consistent, the S3A client still mimics `rename()` by copying files and then deleting the originals. This can fail partway through, and there is nothing to prevent any other process in the cluster attempting a rename at the same time. As a result, -* If a rename fails, the data is left in an unknown state. +* If a 'rename' fails, the data is left in an unknown state. * If more than one process attempts to commit work simultaneously, the output directory may contain the results of both processes: it is no longer an exclusive operation. -*. Commit time is still -proportional to the amount of data created. It still can't handle task failure. +* Commit time is still proportional to the amount of data created. +It still can't handle task failure. **Using the "classic" `FileOutputCommmitter` to commit work to Amazon S3 risks -loss or corruption of generated data** +loss or corruption of generated data**. -To address these problems there is now explicit support in the `hadop-aws` -module for committing work to Amazon S3 via the S3A filesystem client, -*the S3A Committers* +To address these problems there is now explicit support in the `hadoop-aws` +module for committing work to Amazon S3 via the S3A filesystem client: +*the S3A Committers*. For safe, as well as high-performance output of work to S3, -we need use "a committer" explicitly written to work with S3, treating it as -an object store with special features. +we need to use "a committer" explicitly written to work with S3, +treating it as an object store with special features. -### Background : Hadoop's "Commit Protocol" +### Background: Hadoop's "Commit Protocol" How exactly is work written to its final destination? That is accomplished by a "commit protocol" between the workers and the job manager. @@ -106,10 +106,10 @@ a "commit protocol" between the workers and the job manager. This protocol is implemented in Hadoop MapReduce, with a similar but extended version in Apache Spark: -1. A "Job" is the entire query, with inputs to outputs +1. The "Job" is the entire query. It takes a given set of input and produces some output. 1. The "Job Manager" is the process in charge of choreographing the execution of the job. It may perform some of the actual computation too. -1. The job has "workers", which are processes which work the actual data +1. The job has "workers", which are processes which work with the actual data and write the results. 1. Workers execute "Tasks", which are fractions of the job, a job whose input has been *partitioned* into units of work which can be executed independently. @@ -126,7 +126,7 @@ this "speculation" delivers speedup as it can address the "straggler problem". When multiple workers are working on the same data, only one worker is allowed to write the final output. 1. The entire job may fail (often from the failure of the Job Manager (MR Master, Spark Driver, ...)). -1, The network may partition, with workers isolated from each other or +1. The network may partition, with workers isolated from each other or the process managing the entire commit. 1. 
Restarted jobs may recover from a failure by reusing the output of all completed tasks (MapReduce with the "v1" algorithm), or just by rerunning everything @@ -137,34 +137,34 @@ What is "the commit protocol" then? It is the requirements on workers as to when their data is made visible, where, for a filesystem, "visible" means "can be seen in the destination directory of the query." -* There is a destination directory of work, "the output directory." -* The final output of tasks must be in this directory *or paths underneath it*. +* There is a destination directory of work: "the output directory". +The final output of tasks must be in this directory *or paths underneath it*. * The intermediate output of a task must not be visible in the destination directory. That is: they must not write directly to the destination. * The final output of a task *may* be visible under the destination. -* The Job Manager makes the decision as to whether a task's data is to be "committed", -be it directly to the final directory or to some intermediate store.. -* Individual workers communicate with the Job manager to manage the commit process: -whether the output is to be *committed* or *aborted* +* Individual workers communicate with the Job manager to manage the commit process. +* The Job Manager makes the decision on if a task's output data is to be "committed", +be it directly to the final directory or to some intermediate store. * When a worker commits the output of a task, it somehow promotes its intermediate work to becoming final. * When a worker aborts a task's output, that output must not become visible (i.e. it is not committed). * Jobs themselves may be committed/aborted (the nature of "when" is not covered here). * After a Job is committed, all its work must be visible. -* And a file `_SUCCESS` may be written to the output directory. +A file named `_SUCCESS` may be written to the output directory. * After a Job is aborted, all its intermediate data is lost. * Jobs may also fail. When restarted, the successor job must be able to clean up all the intermediate and committed work of its predecessor(s). * Task and Job processes measure the intervals between communications with their Application Master and YARN respectively. -When the interval has grown too large they must conclude +When the interval has grown too large, they must conclude that the network has partitioned and that they must abort their work. That's "essentially" it. When working with HDFS and similar filesystems, directory `rename()` is the mechanism used to commit the work of tasks and jobs. + * Tasks write data to task attempt directories under the directory `_temporary` underneath the final destination directory. * When a task is committed, these files are renamed to the destination directory @@ -180,20 +180,19 @@ and restarting the job. whose output is in the job attempt directory, *and only rerunning all uncommitted tasks*. -This algorithm does not works safely or swiftly with AWS S3 storage because -tenames go from being fast, atomic operations to slow operations which can fail partway through. +This algorithm does not work safely or swiftly with AWS S3 storage because +renames go from being fast, atomic operations to slow operations which can fail partway through. 
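To make that rename-based mechanism concrete, here is a minimal, illustrative sketch of a "v1"-style task and job commit expressed as plain `FileSystem` renames. It is not the real `FileOutputCommitter` code; the class name, directory layout, attempt names and destination path are invented and simplified for illustration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Simplified sketch of rename-based commit; not the actual FileOutputCommitter. */
public class RenameCommitSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path dest = new Path("hdfs:///results/query-output");   // hypothetical destination
    FileSystem fs = dest.getFileSystem(conf);

    // Task attempts write below _temporary under the final destination.
    Path jobAttempt = new Path(dest, "_temporary/0");
    Path taskAttempt = new Path(jobAttempt, "_temporary/attempt_000000_0");
    fs.mkdirs(taskAttempt);
    // ... the task writes its part files under taskAttempt ...

    // Task commit: promote the whole attempt directory with one rename.
    fs.rename(taskAttempt, new Path(jobAttempt, "task_000000"));

    // Job commit: move each committed task's output into the destination.
    // On HDFS each rename is a fast metadata operation; on S3 it becomes a
    // slow, non-atomic copy-then-delete which can fail partway through.
    for (FileStatus task : fs.listStatus(jobAttempt)) {
      if (!task.getPath().getName().startsWith("_")) {
        fs.rename(task.getPath(), dest);   // merge semantics simplified here
      }
    }
    fs.delete(new Path(dest, "_temporary"), true);
  }
}
```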
This then is the problem which the S3A committers address: - -*How to safely and reliably commit work to Amazon S3 or compatible object store* +*How to safely and reliably commit work to Amazon S3 or compatible object store.* ## Meet the S3A Committers Since Hadoop 3.1, the S3A FileSystem has been accompanied by classes -designed to integrate with the Hadoop and Spark job commit protocols, classes -which interact with the S3A filesystem to reliably commit work work to S3: -*The S3A Committers* +designed to integrate with the Hadoop and Spark job commit protocols, +classes which interact with the S3A filesystem to reliably commit work to S3: +*The S3A Committers*. The underlying architecture of this process is very complex, and covered in [the committer architecture documentation](./committer_architecture.html). @@ -219,8 +218,8 @@ conflict with existing files is resolved. | feature | staging | magic | |--------|---------|---| -| task output destination | local disk | S3A *without completing the write* | -| task commit process | upload data from disk to S3 | list all pending uploads on s3 and write details to job attempt directory | +| task output destination | write to local disk | upload to S3 *without completing the write* | +| task commit process | upload data from disk to S3 *without completing the write* | list all pending uploads on S3 and write details to job attempt directory | | task abort process | delete local disk data | list all pending uploads and abort them | | job commit | list & complete pending uploads | list & complete pending uploads | @@ -228,33 +227,30 @@ The other metric is "maturity". There, the fact that the Staging committers are based on Netflix's production code counts in its favor. -### The Staging Committer +### The Staging Committers -This is based on work from Netflix. It "stages" data into the local filesystem. -It also requires the cluster to have HDFS, so that +This is based on work from Netflix. +It "stages" data into the local filesystem, using URLs with `file://` schemas. -Tasks write to URLs with `file://` schemas. When a task is committed, -its files are listed, uploaded to S3 as incompleted Multipart Uploads. +When a task is committed, its files are listed and uploaded to S3 as incomplete Multipart Uploads. The information needed to complete the uploads is saved to HDFS where it is committed through the standard "v1" commit algorithm. When the Job is committed, the Job Manager reads the lists of pending writes from its HDFS Job destination directory and completes those uploads. -Canceling a task is straightforward: the local directory is deleted with -its staged data. Canceling a job is achieved by reading in the lists of +Canceling a _task_ is straightforward: the local directory is deleted with its staged data. +Canceling a _job_ is achieved by reading in the lists of pending writes from the HDFS job attempt directory, and aborting those uploads. For extra safety, all outstanding multipart writes to the destination directory are aborted. -The staging committer comes in two slightly different forms, with slightly -different conflict resolution policies: +There are two staging committers with slightly different conflict resolution behaviors: - -* **Directory**: the entire directory tree of data is written or overwritten, +* **Directory Committer**: the entire directory tree of data is written or overwritten, as normal. 
-* **Partitioned**: special handling of partitioned directory trees of the form +* **Partitioned Committer**: special handling of partitioned directory trees of the form `YEAR=2017/MONTH=09/DAY=19`: conflict resolution is limited to the partitions being updated. @@ -265,13 +261,16 @@ directories containing new data. It is intended for use with Apache Spark only. -## Conflict Resolution in the Staging Committers +#### Conflict Resolution in the Staging Committers The Staging committers offer the ability to replace the conflict policy of the execution engine with policy designed to work with the tree of data. This is based on the experience and needs of Netflix, where efficiently adding new data to an existing partitioned directory tree is a common operation. +An XML configuration is shown below. +The default conflict mode if unset would be `append`. + ```xml fs.s3a.committer.staging.conflict-mode @@ -283,40 +282,37 @@ new data to an existing partitioned directory tree is a common operation. ``` -**replace** : when the job is committed (and not before), delete files in +The _Directory Committer_ uses the entire directory tree for conflict resolution. +For this committer, the behavior of each conflict mode is shown below: + +- **replace**: When the job is committed (and not before), delete files in directories into which new data will be written. -**fail**: when there are existing files in the destination, fail the job. +- **fail**: When there are existing files in the destination, fail the job. -**append**: Add new data to the directories at the destination; overwriting +- **append**: Add new data to the directories at the destination; overwriting any with the same name. Reliable use requires unique names for generated files, which the committers generate by default. -The difference between the two staging committers are as follows: +The _Partitioned Committer_ calculates the partitions into which files are added, +the final directories in the tree, and uses that in its conflict resolution process. +For the _Partitioned Committer_, the behavior of each mode is as follows: -The Directory Committer uses the entire directory tree for conflict resolution. -If any file exists at the destination it will fail in job setup; if the resolution -mechanism is "replace" then all existing files will be deleted. - -The partitioned committer calculates the partitions into which files are added, -the final directories in the tree, and uses that in its conflict resolution -process: - - -**replace** : delete all data in the destination partition before committing +- **replace**: Delete all data in the destination _partition_ before committing the new files. -**fail**: fail if there is data in the destination partition, ignoring the state +- **fail**: Fail if there is data in the destination _partition_, ignoring the state of any parallel partitions. -**append**: add the new data. +- **append**: Add the new data to the destination _partition_, + overwriting any files with the same name. -It's intended for use in Apache Spark Dataset operations, rather +The _Partitioned Committer_ is intended for use in Apache Spark Dataset operations, rather than Hadoop's original MapReduce engine, and only in jobs where adding new data to an existing dataset is the desired goal. -Prerequisites for successful work +Prerequisites for success with the _Partitioned Committer_: 1. The output is written into partitions via `PARTITIONED BY` or `partitionedBy()` instructions. @@ -356,19 +352,20 @@ task commit. 
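As a rough client-side illustration of that deferral (the bucket, job and task names below are invented, and applications normally never construct these paths themselves; the committer does):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustration: data written under a __magic path is not visible until job commit. */
public class MagicPathSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.s3a.committer.magic.enabled", true);  // on by default in recent releases

    Path magicFile = new Path(
        "s3a://example-bucket/output/__magic/job-0001/tasks/task-0001/__base/part-0000");
    FileSystem fs = magicFile.getFileSystem(conf);

    try (FSDataOutputStream out = fs.create(magicFile)) {
      out.writeBytes("example data");
    }
    // close() leaves the data as an *incomplete* multipart upload plus a small
    // summary of the pending upload; nothing appears at the final destination
    // until the job commit completes the upload.
  }
}
```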
However, it has extra requirements of the filesystem -1. [Obsolete] It requires a consistent object store. +1. The object store must be consistent. 1. The S3A client must be configured to recognize interactions -with the magic directories and treat them specially. +with the magic directories and treat them as a special case. -Now that Amazon S3 is consistent, the magic committer is enabled by default. +Now that [Amazon S3 is consistent](https://aws.amazon.com/s3/consistency/), +the magic directory path rewriting is enabled by default. -It's also not been field tested to the extent of Netflix's committer; consider -it the least mature of the committers. +The Magic Committer has not been field tested to the extent of Netflix's committer; +consider it the least mature of the committers. -#### Which Committer to Use? +### Which Committer to Use? -1. If you want to create or update existing partitioned data trees in Spark, use thee +1. If you want to create or update existing partitioned data trees in Spark, use the Partitioned Committer. Make sure you have enough hard disk capacity for all staged data. Do not use it in other situations. @@ -398,8 +395,8 @@ This is done in `mapred-default.xml` ``` -What is missing is an explicit choice of committer to use in the property -`fs.s3a.committer.name`; so the classic (and unsafe) file committer is used. +You must also choose which of the S3A committers to use with the `fs.s3a.committer.name` property. +Otherwise, the classic (and unsafe) file committer is used. | `fs.s3a.committer.name` | Committer | |--------|---------| @@ -408,9 +405,7 @@ What is missing is an explicit choice of committer to use in the property | `magic` | the "magic" committer | | `file` | the original and unsafe File committer; (default) | - - -## Using the Directory and Partitioned Staging Committers +## Using the Staging Committers Generated files are initially written to a local directory underneath one of the temporary directories listed in `fs.s3a.buffer.dir`. @@ -422,16 +417,14 @@ The staging committer needs a path in the cluster filesystem Temporary files are saved in HDFS (or other cluster filesystem) under the path `${fs.s3a.committer.staging.tmp.path}/${user}` where `user` is the name of the user running the job. The default value of `fs.s3a.committer.staging.tmp.path` is `tmp/staging`, -Which will be converted at run time to a path under the current user's home directory, -essentially `~/tmp/staging` - so the temporary directory +resulting in the HDFS directory `~/tmp/staging/${user}`. The application attempt ID is used to create a unique path under this directory, resulting in a path `~/tmp/staging/${user}/${application-attempt-id}/` under which summary data of each task's pending commits are managed using the standard `FileOutputFormat` committer. -When a task is committed the data is uploaded under the destination directory. +When a task is committed, the data is uploaded under the destination directory. The policy of how to react if the destination exists is defined by the `fs.s3a.committer.staging.conflict-mode` setting. @@ -442,9 +435,9 @@ the `fs.s3a.committer.staging.conflict-mode` setting. 
| `append` | Add the new files to the existing directory tree | -## The "Partitioned" Staging Committer +### The "Partitioned" Staging Committer -This committer an extension of the "Directory" committer which has a special conflict resolution +This committer is an extension of the "Directory" committer which has a special conflict resolution policy designed to support operations which insert new data into a directory tree structured using Hive's partitioning strategy: different levels of the tree represent different columns. @@ -471,10 +464,10 @@ logs/YEAR=2017/MONTH=04/ A partitioned structure like this allows for queries using Hive or Spark to filter out files which do not contain relevant data. -What the partitioned committer does is, where the tooling permits, allows callers -to add data to an existing partitioned layout*. +The partitioned committer allows callers to add new data to an existing partitioned layout, +where the application supports it. -More specifically, it does this by having a conflict resolution options which +More specifically, it does this by reducing the scope of conflict resolution to only act on individual partitions, rather than across the entire output tree. | `fs.s3a.committer.staging.conflict-mode` | Meaning | @@ -492,18 +485,18 @@ was written. With the policy of `append`, the new file would be added to the existing set of files. -### Notes +### Notes on using Staging Committers 1. A deep partition tree can itself be a performance problem in S3 and the s3a client, -or, more specifically. a problem with applications which use recursive directory tree +or more specifically a problem with applications which use recursive directory tree walks to work with data. 1. The outcome if you have more than one job trying simultaneously to write data to the same destination with any policy other than "append" is undefined. 1. In the `append` operation, there is no check for conflict with file names. -If, in the example above, the file `log-20170228.avro` already existed, -it would be overridden. Set `fs.s3a.committer.staging.unique-filenames` to `true` +If the file `log-20170228.avro` in the example above already existed, it would be overwritten. +Set `fs.s3a.committer.staging.unique-filenames` to `true` to ensure that a UUID is included in every filename to avoid this. @@ -514,7 +507,11 @@ performance. ### FileSystem client setup -1. Turn the magic on by `fs.s3a.committer.magic.enabled"` +The S3A connector can recognize files created under paths with `__magic/` as a parent directory. +This allows it to handle those files in a special way, such as uploading to a different location +and storing the information needed to complete pending multipart uploads. + +Turn the magic on by setting `fs.s3a.committer.magic.enabled` to `true`: ```xml @@ -526,22 +523,24 @@ performance. ``` - - ### Enabling the committer +Set the committer used by S3A's committer factory to `magic`: + ```xml fs.s3a.committer.name magic - ``` Conflict management is left to the execution engine itself. -## Common Committer Options +## Committer Options Reference +### Common S3A Committer Options + +The table below provides a summary of each option. | Option | Meaning | Default | |--------|---------|---------| @@ -553,19 +552,7 @@ Conflict management is left to the execution engine itself. 
| `fs.s3a.committer.generate.uuid` | Generate a Job UUID if none is passed down from Spark | `false` | | `fs.s3a.committer.require.uuid` |Require the Job UUID to be passed down from Spark | `false` | - -## Staging committer (Directory and Partitioned) options - - -| Option | Meaning | Default | -|--------|---------|---------| -| `fs.s3a.committer.staging.conflict-mode` | Conflict resolution: `fail`, `append` or `replace`| `append` | -| `fs.s3a.committer.staging.tmp.path` | Path in the cluster filesystem for temporary data. | `tmp/staging` | -| `fs.s3a.committer.staging.unique-filenames` | Generate unique filenames. | `true` | -| `fs.s3a.committer.staging.abort.pending.uploads` | Deprecated; replaced by `fs.s3a.committer.abort.pending.uploads`. | `(false)` | - - -### Common Committer Options +The examples below shows how these options can be configured in XML. ```xml @@ -628,8 +615,8 @@ Conflict management is left to the execution engine itself. fs.s3a.committer.require.uuid false - Should the committer fail to initialize if a unique ID isn't set in - "spark.sql.sources.writeJobUUID" or fs.s3a.committer.staging.uuid + Require the committer fail to initialize if a unique ID is not set in + "spark.sql.sources.writeJobUUID" or "fs.s3a.committer.uuid". This helps guarantee that unique IDs for jobs are being passed down in spark applications. @@ -650,7 +637,14 @@ Conflict management is left to the execution engine itself. ``` -### Staging Committer Options +### Staging committer (Directory and Partitioned) options + +| Option | Meaning | Default | +|--------|---------|---------| +| `fs.s3a.committer.staging.conflict-mode` | Conflict resolution: `fail`, `append`, or `replace`.| `append` | +| `fs.s3a.committer.staging.tmp.path` | Path in the cluster filesystem for temporary data. | `tmp/staging` | +| `fs.s3a.committer.staging.unique-filenames` | Generate unique filenames. | `true` | +| `fs.s3a.committer.staging.abort.pending.uploads` | Deprecated; replaced by `fs.s3a.committer.abort.pending.uploads`. | `(false)` | ```xml @@ -672,7 +666,7 @@ Conflict management is left to the execution engine itself. true Option for final files to have a unique name through job attempt info, - or the value of fs.s3a.committer.staging.uuid + or the value of fs.s3a.committer.uuid. When writing data with the "append" conflict option, this guarantees that new data will not overwrite any existing data. @@ -696,10 +690,9 @@ The magic committer recognizes when files are created under paths with `__magic/ and redirects the upload to a different location, adding the information needed to complete the upload in the job commit operation. -If, for some reason, you *do not* want these paths to be redirected and not manifest until later, +If, for some reason, you *do not* want these paths to be redirected and completed later, the feature can be disabled by setting `fs.s3a.committer.magic.enabled` to false. - -By default it is true. +By default, it is enabled. ```xml @@ -711,6 +704,8 @@ By default it is true. ``` +You will not be able to use the Magic Committer if this option is disabled. + ## Concurrent Jobs writing to the same destination It is sometimes possible for multiple jobs to simultaneously write to the same destination path. @@ -730,7 +725,7 @@ be creating files with paths/filenames unique to the specific job. It is not enough for them to be unique by task `part-00000.snappy.parquet`, because each job will have tasks with the same name, so generate files with conflicting operations. 
-For the staging committers, setting `fs.s3a.committer.staging.unique-filenames` to ensure unique names are +For the staging committers, enable `fs.s3a.committer.staging.unique-filenames` to ensure unique names are generated during the upload. Otherwise, use what configuration options are available in the specific `FileOutputFormat`. Note: by default, the option `mapreduce.output.basename` sets the base name for files; @@ -757,13 +752,12 @@ org.apache.hadoop.fs.s3a.commit.PathCommitException: `s3a://landsat-pds': Filesy in configuration option fs.s3a.committer.magic.enabled ``` -The Job is configured to use the magic committer, but the S3A bucket has not been explicitly -declared as supporting it. +The Job is configured to use the magic committer, +but the S3A bucket has not been explicitly declared as supporting it. -The Job is configured to use the magic committer, but the S3A bucket has not been explicitly declared as supporting it. - -As this is now true by default, this error will only surface with a configuration which has explicitly disabled it. -Remove all global/per-bucket declarations of `fs.s3a.bucket.magic.enabled` or set them to `true` +Magic Committer support within the S3A filesystem has been enabled by default since Hadoop 3.3.1. +This error will only surface with a configuration which has explicitly disabled it. +Remove all global/per-bucket declarations of `fs.s3a.bucket.magic.enabled` or set them to `true`. ```xml @@ -846,7 +840,7 @@ the failure happen at the start of a job. (Setting this option will not interfere with the Staging Committers' use of HDFS, as it explicitly sets the algorithm to "2" for that part of its work). -The other way to check which committer to use is to examine the `_SUCCESS` file. +The other way to check which committer was used is to examine the `_SUCCESS` file. If it is 0-bytes long, the classic `FileOutputCommitter` committed the job. The S3A committers all write a non-empty JSON file; the `committer` field lists the committer used. @@ -862,7 +856,7 @@ all committers registered for the s3a:// schema. 1. The output format has overridden `FileOutputFormat.getOutputCommitter()` and is returning its own committer -one which is a subclass of `FileOutputCommitter`. -That final cause. *the output format is returning its own committer*, is not +The final cause "the output format is returning its own committer" is not easily fixed; it may be that the custom committer performs critical work during its lifecycle, and contains assumptions about the state of the written data during task and job commit (i.e. it is in the destination filesystem). diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_token_architecture.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_token_architecture.md index 0ba516313f4..b5d03783988 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_token_architecture.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_token_architecture.md @@ -89,7 +89,7 @@ of: * A set of AWS session credentials (`fs.s3a.access.key`, `fs.s3a.secret.key`, `fs.s3a.session.token`). -These credentials are obtained from the AWS Secure Token Service (STS) when the the token is issued. +These credentials are obtained from the AWS Secure Token Service (STS) when the token is issued. * A set of AWS session credentials binding the user to a specific AWS IAM Role, further restricted to only access the S3 bucket. 
Again, these credentials are requested when the token is issued. diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_tokens.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_tokens.md index f8f9d88d1e7..ce204f118a2 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_tokens.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_tokens.md @@ -20,7 +20,7 @@ The S3A filesystem client supports `Hadoop Delegation Tokens`. This allows YARN application like MapReduce, Distcp, Apache Flink and Apache Spark to -obtain credentials to access S3 buckets and pass them pass these credentials to +obtain credentials to access S3 buckets and pass them to jobs/queries, so granting them access to the service with the same access permissions as the user. @@ -37,9 +37,9 @@ the S3A client from the AWS STS service. They have a limited duration so restrict how long an application can access AWS on behalf of a user. Clients with this token have the full permissions of the user. -*Role Delegation Tokens:* These contain an "STS Session Token" requested by by the -STS "Assume Role" API, so grant the caller to interact with S3 as specific AWS -role, *with permissions restricted to purely accessing that specific S3 bucket*. +*Role Delegation Tokens:* These contain an "STS Session Token" requested by the +STS "Assume Role" API, granting the caller permission to interact with S3 using a specific IAM +role, *with permissions restricted to accessing a specific S3 bucket*. Role Delegation Tokens are the most powerful. By restricting the access rights of the granted STS token, no process receiving the token may perform @@ -55,13 +55,13 @@ see [S3A Delegation Token Architecture](delegation_token_architecture.html). ## Background: Hadoop Delegation Tokens. -A Hadoop Delegation Token are is a byte array of data which is submitted to -a Hadoop services as proof that the caller has the permissions to perform +A Hadoop Delegation Token is a byte array of data which is submitted to +Hadoop services as proof that the caller has the permissions to perform the operation which it is requesting — -and which can be passed between applications to *delegate* those permission. +and which can be passed between applications to *delegate* those permissions. -Tokens are opaque to clients, clients who simply get a byte array -of data which they must to provide to a service when required. +Tokens are opaque to clients. Clients simply get a byte array +of data which they must provide to a service when required. This normally contains encrypted data for use by the service. The service, which holds the password to encrypt/decrypt this data, @@ -79,7 +79,7 @@ After use, tokens may be revoked: this relies on services holding tables of valid tokens, either in memory or, for any HA service, in Apache Zookeeper or similar. Revoking tokens is used to clean up after jobs complete. -Delegation support is tightly integrated with YARN: requests to launch +Delegation Token support is tightly integrated with YARN: requests to launch containers and applications can include a list of delegation tokens to pass along. 
These tokens are serialized with the request, saved to a file on the node launching the container, and then loaded in to the credentials @@ -103,12 +103,12 @@ S3A now supports delegation tokens, so allowing a caller to acquire tokens from a local S3A Filesystem connector instance and pass them on to applications to grant them equivalent or restricted access. -These S3A Delegation Tokens are special in that they do not contain +These S3A Delegation Tokens are special in a way that they do not contain password-protected data opaque to clients; they contain the secrets needed to access the relevant S3 buckets and associated services. They are obtained by requesting a delegation token from the S3A filesystem client. -Issued token mey be included in job submissions, passed to running applications, +Issued tokens may be included in job submissions, passed to running applications, etc. This token is specific to an individual bucket; all buckets which a client wishes to work with must have a separate delegation token issued. @@ -117,7 +117,7 @@ class, which then supports multiple "bindings" behind it, so supporting different variants of S3A Delegation Tokens. Because applications only collect Delegation Tokens in secure clusters, -It does mean that to be able to submit delegation tokens in transient +it does mean that to be able to submit delegation tokens in transient cloud-hosted Hadoop clusters, _these clusters must also have Kerberos enabled_. *Tip*: you should only be deploying Hadoop in public clouds with Kerberos enabled. @@ -141,10 +141,10 @@ for specifics details on the (current) token lifespan. ### S3A Role Delegation Tokens -A Role Delegation Tokens is created by asking the AWS +A Role Delegation Token is created by asking the AWS [Security Token Service](http://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html) -for set of "Assumed Role" credentials, with a AWS account specific role for a limited duration.. -This role is restricted to only grant access the S3 bucket and all KMS keys, +for a set of "Assumed Role" session credentials with a limited lifetime, belonging to a given IAM Role. +The resulting session credentials are restricted to grant access to all KMS keys, and to the specific S3 bucket. They are marshalled into the S3A Delegation Token. Other S3A connectors can extract these credentials and use them to @@ -156,13 +156,13 @@ Issued tokens cannot be renewed or revoked. ### S3A Full-Credential Delegation Tokens -Full Credential Delegation Tokens tokens contain the full AWS login details +Full Credential Delegation Tokens contain the full AWS login details (access key and secret key) needed to access a bucket. They never expire, so are the equivalent of storing the AWS account credentials in a Hadoop, Hive, Spark configuration or similar. -They differences are: +The differences are: 1. They are automatically passed from the client/user to the application. A remote application can use them to access data on behalf of the user. @@ -181,21 +181,20 @@ Hadoop security enabled —which inevitably means with Kerberos. Even though S3A delegation tokens do not use Kerberos, the code in applications which fetch DTs is normally only executed when the cluster is running in secure mode; somewhere where the `core-site.xml` configuration -sets `hadoop.security.authentication` to to `kerberos` or another valid +sets `hadoop.security.authentication` to `kerberos` or another valid authentication mechanism. 
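For reference, a minimal sketch of how delegation tokens are collected from an S3A filesystem instance at job submission time. The bucket name is invented, a token binding must already be configured for any tokens to be issued, and since S3A tokens cannot be renewed the renewer value passed here is not significant.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

/** Sketch: collecting S3A delegation tokens into a Credentials set. */
public class CollectS3ATokens {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);

    Credentials credentials = new Credentials();
    Token<?>[] issued = fs.addDelegationTokens("renewer", credentials);
    for (Token<?> token : issued) {
      System.out.println("issued token of kind " + token.getKind());
    }
    // The populated Credentials object is what gets attached to the
    // job/application submission and propagated to the workers.
  }
}
```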
-* Without enabling security at this level, delegation tokens will not +*Without enabling security at this level, delegation tokens will not be collected.* -Once Kerberos enabled, the process for acquiring tokens is as follows: +Once Kerberos is enabled, the process for acquiring tokens is as follows: 1. Enable Delegation token support by setting `fs.s3a.delegation.token.binding` to the classname of the token binding to use. -to use. 1. Add any other binding-specific settings (STS endpoint, IAM role, etc.) 1. Make sure the settings are the same in the service as well as the client. 1. In the client, switch to using a [Hadoop Credential Provider](hadoop-project-dist/hadoop-common/CredentialProviderAPI.html) -for storing your local credentials, *with a local filesystem store +for storing your local credentials, with a local filesystem store (`localjceks:` or `jcecks://file`), so as to keep the full secrets out of any job configurations. 1. Execute the client from a Kerberos-authenticated account @@ -215,7 +214,7 @@ application configured with the login credentials for an AWS account able to iss Hadoop MapReduce jobs copy their client-side configurations with the job. If your AWS login secrets are set in an XML file then they are picked up and passed in with the job, _even if delegation tokens are used to propagate -session or role secrets. +session or role secrets_. Spark-submit will take any credentials in the `spark-defaults.conf`file and again, spread them across the cluster. @@ -261,7 +260,7 @@ the same STS endpoint. * In experiments, a few hundred requests per second are needed to trigger throttling, so this is very unlikely to surface in production systems. * The S3A filesystem connector retries all throttled requests to AWS services, including STS. -* Other S3 clients with use the AWS SDK will, if configured, also retry throttled requests. +* Other S3 clients which use the AWS SDK will, if configured, also retry throttled requests. Overall, the risk of triggering STS throttling appears low, and most applications will recover from what is generally an intermittently used AWS service. @@ -303,7 +302,7 @@ relevant bucket, then a new session token will be issued. a session delegation token, then the existing token will be forwarded. The life of the token will not be extended. 1. If the application requesting a token does not have either of these, -the the tokens cannot be issued: the operation will fail with an error. +the token cannot be issued: the operation will fail with an error. The endpoint for STS requests are set by the same configuration @@ -353,10 +352,10 @@ it is authenticated with; the role token binding will fail. When the AWS credentials supplied to the Session Delegation Token binding through `fs.s3a.aws.credentials.provider` are themselves a set of -session credentials, generated delegation tokens with simply contain these -existing session credentials, a new set of credentials obtained from STS. +session credentials, generated delegation tokens will simply contain these +existing session credentials, not a new set of credentials obtained from STS. This is because the STS service does not let -callers authenticated with session/role credentials from requesting new sessions. +callers authenticated with session/role credentials request new sessions. This feature is useful when generating tokens from an EC2 VM instance in one IAM role and forwarding them over to VMs which are running in a different IAM role. 
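Pulling the configuration steps above together, a minimal client-side sketch of enabling the session token binding. The binding class name used here is an assumption about the class shipped in `hadoop-aws`; verify it against the documentation for your Hadoop release.

```java
import org.apache.hadoop.conf.Configuration;

/** Sketch: enabling S3A session delegation tokens on the client. */
public class EnableSessionTokenBinding {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumed class name of the session token binding in hadoop-aws.
    conf.set("fs.s3a.delegation.token.binding",
        "org.apache.hadoop.fs.s3a.auth.delegation.SessionTokenBinding");
    // Optional: how long issued session credentials should last.
    conf.set("fs.s3a.assumed.role.session.duration", "1h");
    // Any S3A filesystem created with this configuration can now issue
    // session delegation tokens when asked during job submission.
  }
}
```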
@@ -384,7 +383,7 @@ There are some further configuration options: | **Key** | **Meaning** | **Default** | | --- | --- | --- | -| `fs.s3a.assumed.role.session.duration"` | Duration of delegation tokens | `1h` | +| `fs.s3a.assumed.role.session.duration` | Duration of delegation tokens | `1h` | | `fs.s3a.assumed.role.arn` | ARN for role to request | (undefined) | | `fs.s3a.assumed.role.sts.endpoint.region` | region for issued tokens | (undefined) | @@ -413,7 +412,8 @@ The XML settings needed to enable session tokens are: ``` A JSON role policy for the role/session will automatically be generated which will -consist of +consist of: + 1. Full access to the S3 bucket for all operations used by the S3A client (read, write, list, multipart operations, get bucket location, etc). 1. Full user access to KMS keys. This is to be able to decrypt any data @@ -449,7 +449,7 @@ relevant bucket, then a full credential token will be issued. a session delegation token, then the existing token will be forwarded. The life of the token will not be extended. 1. If the application requesting a token does not have either of these, -the the tokens cannot be issued: the operation will fail with an error. +the tokens cannot be issued: the operation will fail with an error. ## Managing the Delegation Tokens Duration @@ -465,7 +465,7 @@ that of the role itself: 1h by default, though this can be changed to 12h [In the IAM Console](https://console.aws.amazon.com/iam/home#/roles), or from the AWS CLI. -*Without increasing the duration of role, one hour is the maximum value; +Without increasing the duration of the role, one hour is the maximum value; the error message `The requested DurationSeconds exceeds the MaxSessionDuration set for this role` is returned if the requested duration of a Role Delegation Token is greater than that available for the role. @@ -545,7 +545,7 @@ Consult [troubleshooting Assumed Roles](assumed_roles.html#troubleshooting) for details on AWS error messages related to AWS IAM roles. The [cloudstore](https://github.com/steveloughran/cloudstore) module's StoreDiag -utility can also be used to explore delegation token support +utility can also be used to explore delegation token support. ### Submitted job cannot authenticate @@ -557,7 +557,7 @@ There are many causes for this; delegation tokens add some more. * This user is not `kinit`-ed in to Kerberos. Use `klist` and `hadoop kdiag` to see the Kerberos authentication state of the logged in user. -* The filesystem instance on the client has not had a token binding set in +* The filesystem instance on the client does not have a token binding set in `fs.s3a.delegation.token.binding`, so does not attempt to issue any. * The job submission is not aware that access to the specific S3 buckets are required. Review the application's submission mechanism to determine @@ -717,7 +717,7 @@ In the initial results of these tests: * A few hundred requests a second can be made before STS block the caller. * The throttling does not last very long (seconds) -* Tt does not appear to affect any other STS endpoints. +* It does not appear to affect any other STS endpoints. If developers wish to experiment with these tests and provide more detailed analysis, we would welcome this. Do bear in mind that all users of the @@ -749,7 +749,7 @@ Look at the other examples to see what to do; `SessionTokenIdentifier` does most of the work. 
Having a `toString()` method which is informative is ideal for the `hdfs creds` -command as well as debugging: *but do not print secrets* +command as well as debugging: *but do not print secrets*. *Important*: Add no references to any AWS SDK class, to ensure it can be safely deserialized whenever the relevant token @@ -835,13 +835,13 @@ Tests the lifecycle of session tokens. #### Integration Test `ITestSessionDelegationInFileystem`. This collects DTs from one filesystem, and uses that to create a new FS instance and -then perform filesystem operations. A miniKDC is instantiated +then perform filesystem operations. A miniKDC is instantiated. * Take care to remove all login secrets from the environment, so as to make sure that the second instance is picking up the DT information. * `UserGroupInformation.reset()` can be used to reset user secrets after every test case (e.g. teardown), so that issued DTs from one test case do not contaminate the next. -* its subclass, `ITestRoleDelegationInFileystem` adds a check that the current credentials +* It's subclass, `ITestRoleDelegationInFileystem` adds a check that the current credentials in the DT cannot be used to access data on other buckets —that is, the active session really is restricted to the target bucket. @@ -851,7 +851,7 @@ session really is restricted to the target bucket. It's not easy to bring up a YARN cluster with a secure HDFS and miniKDC controller in test cases —this test, the closest there is to an end-to-end test, uses mocking to mock the RPC calls to the YARN AM, and then verifies that the tokens -have been collected in the job context, +have been collected in the job context. #### Load Test `ILoadTestSessionCredentials` diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/directory_markers.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/directory_markers.md index 27c337354c4..41099fe6653 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/directory_markers.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/directory_markers.md @@ -218,7 +218,7 @@ This can have adverse effects on those large directories, again. In the Presto [S3 connector](https://prestodb.io/docs/current/connector/hive.html#amazon-s3-configuration), `mkdirs()` is a no-op. -Whenever it lists any path which isn't an object or a prefix of one more more objects, it returns an +Whenever it lists any path which isn't an object or a prefix of one more objects, it returns an empty listing. That is:; by default, every path is an empty directory. Provided no code probes for a directory existing and fails if it is there, this @@ -524,7 +524,7 @@ Ignoring 3 markers in authoritative paths ``` All of this S3A bucket _other_ than the authoritative path `/tables` will be safe for -incompatible Hadoop releases to to use. +incompatible Hadoop releases to use. ### `markers clean` diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md index cd7793bfa92..ae042b16199 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md @@ -505,7 +505,7 @@ providers listed after it will be ignored. 
### Simple name/secret credentials with `SimpleAWSCredentialsProvider`* -This is is the standard credential provider, which supports the secret +This is the standard credential provider, which supports the secret key in `fs.s3a.access.key` and token in `fs.s3a.secret.key` values. @@ -1108,6 +1108,7 @@ options are covered in [Testing](./testing.md). 8MB The size of a single prefetched block of data. + Decreasing this will increase the number of prefetches required, and may negatively impact performance. @@ -1392,7 +1393,7 @@ an S3 implementation that doesn't return eTags. When `true` (default) and 'Get Object' doesn't return eTag or version ID (depending on configured 'source'), a `NoVersionAttributeException` -will be thrown. When `false` and and eTag or version ID is not returned, +will be thrown. When `false` and eTag or version ID is not returned, the stream can be read, but without any version checking. @@ -1868,7 +1869,7 @@ in byte arrays in the JVM's heap prior to upload. This *may* be faster than buffering to disk. The amount of data which can be buffered is limited by the available -size of the JVM heap heap. The slower the write bandwidth to S3, the greater +size of the JVM heap. The slower the write bandwidth to S3, the greater the risk of heap overflows. This risk can be mitigated by [tuning the upload settings](#upload_thread_tuning). diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md index 88e6e8a0b21..e3ab79d92e5 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md @@ -122,7 +122,7 @@ Optimised for random IO, specifically the Hadoop `PositionedReadable` operations —though `seek(offset); read(byte_buffer)` also benefits. Rather than ask for the whole file, the range of the HTTP request is -set to that that of the length of data desired in the `read` operation +set to the length of data desired in the `read` operation (Rounded up to the readahead value set in `setReadahead()` if necessary). By reducing the cost of closing existing HTTP requests, this is @@ -172,7 +172,7 @@ sequential to `random`. This policy essentially recognizes the initial read pattern of columnar storage formats (e.g. Apache ORC and Apache Parquet), which seek to the end of a file, read in index data and then seek backwards to selectively read -columns. The first seeks may be be expensive compared to the random policy, +columns. The first seeks may be expensive compared to the random policy, however the overall process is much less expensive than either sequentially reading through a file with the `random` policy, or reading columnar data with the `sequential` policy. @@ -384,7 +384,7 @@ data loss. Amazon S3 uses a set of front-end servers to provide access to the underlying data. The choice of which front-end server to use is handled via load-balancing DNS service: when the IP address of an S3 bucket is looked up, the choice of which -IP address to return to the client is made based on the the current load +IP address to return to the client is made based on the current load of the front-end servers. Over time, the load across the front-end changes, so those servers considered @@ -694,4 +694,4 @@ connectors for other buckets, would end up blocking too. 
Consider experimenting with this when running applications where many threads may try to simultaneously interact -with the same slow-to-initialize object stores. \ No newline at end of file +with the same slow-to-initialize object stores. diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/prefetching.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/prefetching.md index 9b4e888d604..8bb85008e36 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/prefetching.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/prefetching.md @@ -43,6 +43,10 @@ Multiple blocks may be read in parallel. |`fs.s3a.prefetch.block.size` |Size of a block |`8M` | |`fs.s3a.prefetch.block.count` |Number of blocks to prefetch |`8` | +The default size of a block is 8MB, and the minimum allowed block size is 1 byte. +Decreasing block size will increase the number of blocks to be read for a file. +A smaller block size may negatively impact performance as the number of prefetches required will increase. + ### Key Components `S3PrefetchingInputStream` - When prefetching is enabled, S3AFileSystem will return an instance of @@ -158,7 +162,7 @@ The buffer for the block furthest from the current block is released. Once a buffer has been acquired by `CachingBlockManager`, if the buffer is in a *READY* state, it is returned. This means that data was already read into this buffer asynchronously by a prefetch. -If it's state is *BLANK* then data is read into it using +If its state is *BLANK* then data is read into it using `S3Reader.read(ByteBuffer buffer, long offset, int size).` For the second read call, `in.read(buffer, 0, 8MB)`, since the block sizes are of 8MB and only 5MB @@ -170,7 +174,10 @@ The number of blocks to be prefetched is determined by `fs.s3a.prefetch.block.co ##### Random Reads -If the caller makes the following calls: +The `CachingInputStream` also caches prefetched blocks. This happens when `read()` is issued +after a `seek()` outside the current block, but the current block still has not been fully read. + +For example, consider the following calls: ``` in.read(buffer, 0, 5MB) @@ -180,13 +187,14 @@ in.seek(2MB) in.read(buffer, 0, 4MB) ``` -The `CachingInputStream` also caches prefetched blocks. -This happens when a `seek()` is issued for outside the current block and the current block still has -not been fully read. +For the above read sequence, after the `seek(10MB)` call is issued, block 0 has not been read +completely so the subsequent call to `read()` will cache it, on the assumption that the caller +will probably want to read from it again. -For the above read sequence, when the `seek(10MB)` call is issued, block 0 has not been read -completely so cache it as the caller will probably want to read from it again. +After `seek(2MB)` is called, the position is back inside block 0. The next read can now be +satisfied from the locally cached block file, which is typically orders of magnitude faster +than a network based read. -When `seek(2MB)` is called, the position is back inside block 0. -The next read can now be satisfied from the locally cached block file, which is typically orders of -magnitude faster than a network based read. \ No newline at end of file +NB: `seek()` is implemented lazily, so it only keeps track of the new position but does not +otherwise affect the internal state of the stream. Only when a `read()` is issued, it will call +the `ensureCurrentBuffer()` method and fetch a new block if required. 
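A short sketch tying the options and read pattern above together. The enable switch is assumed to be `fs.s3a.prefetch.enabled`, the bucket and file are invented, and `read()` may return fewer bytes than requested (ignored here for brevity).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Sketch: reading through the S3A prefetching stream. */
public class PrefetchReadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.s3a.prefetch.enabled", true);   // assumed enable switch
    conf.set("fs.s3a.prefetch.block.size", "8M");        // 8MB blocks (the default)
    conf.setInt("fs.s3a.prefetch.block.count", 8);        // number of blocks to prefetch

    Path file = new Path("s3a://example-bucket/data/large-file.bin");  // hypothetical
    FileSystem fs = file.getFileSystem(conf);
    byte[] buffer = new byte[5 * 1024 * 1024];

    try (FSDataInputStream in = fs.open(file)) {
      in.read(buffer, 0, 5 * 1024 * 1024);   // block 0 fetched; later blocks prefetched
      in.seek(10 * 1024 * 1024);              // lazy seek: only the position is updated
      in.read(buffer, 0, 4 * 1024 * 1024);    // block 1 fetched; block 0 cached locally
      in.seek(2 * 1024 * 1024);               // back inside block 0
      in.read(buffer, 0, 4 * 1024 * 1024);    // served from the locally cached block file
    }
  }
}
```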
\ No newline at end of file diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3_select.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3_select.md index a5aaae91454..886a2d97d24 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3_select.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3_select.md @@ -738,7 +738,7 @@ at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecor ``` The underlying problem is that the gzip decompressor is automatically enabled -when the the source file ends with the ".gz" extension. Because S3 Select +when the source file ends with the ".gz" extension. Because S3 Select returns decompressed data, the codec fails. The workaround here is to declare that the job should add the "Passthrough Codec" @@ -934,6 +934,21 @@ Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: GZIP is not applic ... ``` + +### AWSBadRequestException `UnsupportedStorageClass` + +S3 Select doesn't work with some storage classes like Glacier or Reduced Redundancy. +Make sure you've set `fs.s3a.create.storage.class` to a supported storage class for S3 Select. + +``` +org.apache.hadoop.fs.s3a.AWSBadRequestException: + Select on s3a://example/dataset.csv.gz: + com.amazonaws.services.s3.model.AmazonS3Exception: + We do not support REDUCED_REDUNDANCY storage class. + Please check the service documentation and try again. + (Service: Amazon S3; Status Code: 400; Error Code: UnsupportedStorageClass +``` + ### `PathIOException`: "seek() not supported" The input stream returned by the select call does not support seeking diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3n.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3n.md index 9b59ad1d382..16beed920b1 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3n.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3n.md @@ -29,7 +29,7 @@ the S3A connector** - - - -## How to migrate to to the S3A client +## How to migrate to the S3A client 1. Keep the `hadoop-aws` JAR on your classpath. diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md index ef9f7f123e5..64878465133 100644 --- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md +++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md @@ -324,7 +324,7 @@ There's two main causes If you see this and you are trying to use the S3A connector with Spark, then the cause can be that the isolated classloader used to load Hive classes is interfering with the S3A -connector's dynamic loading of `com.amazonaws` classes. To fix this, declare that that +connector's dynamic loading of `com.amazonaws` classes. 
To fix this, declare that the classes in the aws SDK are loaded from the same classloader which instantiated the S3A FileSystem instance: diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractTestS3AEncryption.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractTestS3AEncryption.java index 23776f3164f..6df4f7593c9 100644 --- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractTestS3AEncryption.java +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractTestS3AEncryption.java @@ -78,8 +78,16 @@ public abstract class AbstractTestS3AEncryption extends AbstractS3ATestBase { 0, 1, 2, 3, 4, 5, 254, 255, 256, 257, 2 ^ 12 - 1 }; + /** + * Skips the tests if encryption is not enabled in configuration. + * + * @implNote We can use {@link #createConfiguration()} here since + * it does not depend on any per-bucket based configuration. + * Otherwise, we would need to grab the configuration from an + * instance of {@link S3AFileSystem}. + */ protected void requireEncryptedFileSystem() { - skipIfEncryptionTestsDisabled(getFileSystem().getConf()); + skipIfEncryptionTestsDisabled(createConfiguration()); } /** @@ -91,8 +99,8 @@ public abstract class AbstractTestS3AEncryption extends AbstractS3ATestBase { @Override public void setup() throws Exception { try { - super.setup(); requireEncryptedFileSystem(); + super.setup(); } catch (AccessDeniedException e) { skip("Bucket does not allow " + getSSEAlgorithm() + " encryption method"); } diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AClosedFS.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AClosedFS.java index 79772ec9dad..327b0fab288 100644 --- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AClosedFS.java +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AClosedFS.java @@ -103,4 +103,16 @@ public class ITestS3AClosedFS extends AbstractS3ATestBase { () -> getFileSystem().open(path("to-open"))); } + @Test + public void testClosedInstrumentation() throws Exception { + // no metrics + Assertions.assertThat(S3AInstrumentation.hasMetricSystem()) + .describedAs("S3AInstrumentation.hasMetricSystem()") + .isFalse(); + + Assertions.assertThat(getFileSystem().getIOStatistics()) + .describedAs("iostatistics of %s", getFileSystem()) + .isNotNull(); + } + } diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java index 24f74b3a021..b1ab4afce47 100644 --- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java @@ -31,9 +31,9 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.contract.ContractTestUtils; import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest; +import org.apache.hadoop.fs.s3a.prefetch.S3APrefetchingInputStream; +import org.apache.hadoop.fs.s3a.statistics.S3AInputStreamStatistics; import org.apache.hadoop.fs.statistics.IOStatistics; -import org.apache.hadoop.fs.statistics.StoreStatisticNames; -import org.apache.hadoop.fs.statistics.StreamStatisticNames; import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE; import static 
org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY; @@ -41,7 +41,13 @@ import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY; import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticMaximum; import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue; import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticGaugeValue; +import static org.apache.hadoop.fs.statistics.StoreStatisticNames.ACTION_EXECUTOR_ACQUIRED; +import static org.apache.hadoop.fs.statistics.StoreStatisticNames.ACTION_HTTP_GET_REQUEST; import static org.apache.hadoop.fs.statistics.StoreStatisticNames.SUFFIX_MAX; +import static org.apache.hadoop.fs.statistics.StreamStatisticNames.STREAM_READ_ACTIVE_MEMORY_IN_USE; +import static org.apache.hadoop.fs.statistics.StreamStatisticNames.STREAM_READ_BLOCKS_IN_FILE_CACHE; +import static org.apache.hadoop.fs.statistics.StreamStatisticNames.STREAM_READ_OPENED; +import static org.apache.hadoop.fs.statistics.StreamStatisticNames.STREAM_READ_PREFETCH_OPERATIONS; import static org.apache.hadoop.io.IOUtils.cleanupWithLogger; /** @@ -86,11 +92,12 @@ public class ITestS3APrefetchingInputStream extends AbstractS3ACostTest { private void openFS() throws Exception { Configuration conf = getConfiguration(); + String largeFileUri = S3ATestUtils.getCSVTestFile(conf); - largeFile = new Path(DEFAULT_CSVTEST_FILE); + largeFile = new Path(largeFileUri); blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE); largeFileFS = new S3AFileSystem(); - largeFileFS.initialize(new URI(DEFAULT_CSVTEST_FILE), getConfiguration()); + largeFileFS.initialize(new URI(largeFileUri), getConfiguration()); FileStatus fileStatus = largeFileFS.getFileStatus(largeFile); largeFileSize = fileStatus.getLen(); numBlocks = calculateNumBlocks(largeFileSize, blockSize); @@ -120,20 +127,52 @@ public class ITestS3APrefetchingInputStream extends AbstractS3ACostTest { in.readFully(buffer, 0, (int) Math.min(buffer.length, largeFileSize - bytesRead)); bytesRead += buffer.length; // Blocks are fully read, no blocks should be cached - verifyStatisticGaugeValue(ioStats, StreamStatisticNames.STREAM_READ_BLOCKS_IN_FILE_CACHE, + verifyStatisticGaugeValue(ioStats, STREAM_READ_BLOCKS_IN_FILE_CACHE, 0); } // Assert that first block is read synchronously, following blocks are prefetched - verifyStatisticCounterValue(ioStats, StreamStatisticNames.STREAM_READ_PREFETCH_OPERATIONS, + verifyStatisticCounterValue(ioStats, STREAM_READ_PREFETCH_OPERATIONS, numBlocks - 1); - verifyStatisticCounterValue(ioStats, StoreStatisticNames.ACTION_HTTP_GET_REQUEST, numBlocks); - verifyStatisticCounterValue(ioStats, StreamStatisticNames.STREAM_READ_OPENED, numBlocks); + verifyStatisticCounterValue(ioStats, ACTION_HTTP_GET_REQUEST, numBlocks); + verifyStatisticCounterValue(ioStats, STREAM_READ_OPENED, numBlocks); } // Verify that once stream is closed, all memory is freed - verifyStatisticGaugeValue(ioStats, StreamStatisticNames.STREAM_READ_ACTIVE_MEMORY_IN_USE, 0); + verifyStatisticGaugeValue(ioStats, STREAM_READ_ACTIVE_MEMORY_IN_USE, 0); assertThatStatisticMaximum(ioStats, - StoreStatisticNames.ACTION_EXECUTOR_ACQUIRED + SUFFIX_MAX).isGreaterThan(0); + ACTION_EXECUTOR_ACQUIRED + SUFFIX_MAX).isGreaterThan(0); + } + + @Test + public void testReadLargeFileFullyLazySeek() throws Throwable { + describe("read a large file using readFully(position,buffer,offset,length)," + + " uses S3ACachingInputStream"); + IOStatistics ioStats; + 
openFS(); + + try (FSDataInputStream in = largeFileFS.open(largeFile)) { + ioStats = in.getIOStatistics(); + + byte[] buffer = new byte[S_1M * 10]; + long bytesRead = 0; + + while (bytesRead < largeFileSize) { + in.readFully(bytesRead, buffer, 0, (int) Math.min(buffer.length, + largeFileSize - bytesRead)); + bytesRead += buffer.length; + // Blocks are fully read, no blocks should be cached + verifyStatisticGaugeValue(ioStats, STREAM_READ_BLOCKS_IN_FILE_CACHE, + 0); + } + + // Assert that first block is read synchronously, following blocks are prefetched + verifyStatisticCounterValue(ioStats, STREAM_READ_PREFETCH_OPERATIONS, + numBlocks - 1); + verifyStatisticCounterValue(ioStats, ACTION_HTTP_GET_REQUEST, numBlocks); + verifyStatisticCounterValue(ioStats, STREAM_READ_OPENED, numBlocks); + } + // Verify that once stream is closed, all memory is freed + verifyStatisticGaugeValue(ioStats, STREAM_READ_ACTIVE_MEMORY_IN_USE, 0); } @Test @@ -147,24 +186,29 @@ public class ITestS3APrefetchingInputStream extends AbstractS3ACostTest { byte[] buffer = new byte[blockSize]; - // Don't read the block completely so it gets cached on seek + // Don't read block 0 completely so it gets cached on read after seek in.read(buffer, 0, blockSize - S_1K * 10); - in.seek(blockSize + S_1K * 10); - // Backwards seek, will use cached block + + // Seek to block 2 and read all of it + in.seek(blockSize * 2); + in.read(buffer, 0, blockSize); + + // Seek to block 4 but don't read: noop. + in.seek(blockSize * 4); + + // Backwards seek, will use cached block 0 in.seek(S_1K * 5); in.read(); - verifyStatisticCounterValue(ioStats, StoreStatisticNames.ACTION_HTTP_GET_REQUEST, 2); - verifyStatisticCounterValue(ioStats, StreamStatisticNames.STREAM_READ_OPENED, 2); - verifyStatisticCounterValue(ioStats, StreamStatisticNames.STREAM_READ_PREFETCH_OPERATIONS, 1); - // block 0 is cached when we seek to block 1, block 1 is cached as it is being prefetched - // when we seek out of block 0, see cancelPrefetches() - verifyStatisticGaugeValue(ioStats, StreamStatisticNames.STREAM_READ_BLOCKS_IN_FILE_CACHE, 2); + // Expected to get block 0 (partially read), 1 (prefetch), 2 (fully read), 3 (prefetch) + // Blocks 0, 1, 3 were not fully read, so remain in the file cache + verifyStatisticCounterValue(ioStats, ACTION_HTTP_GET_REQUEST, 4); + verifyStatisticCounterValue(ioStats, STREAM_READ_OPENED, 4); + verifyStatisticCounterValue(ioStats, STREAM_READ_PREFETCH_OPERATIONS, 2); + verifyStatisticGaugeValue(ioStats, STREAM_READ_BLOCKS_IN_FILE_CACHE, 3); } - verifyStatisticGaugeValue(ioStats, StreamStatisticNames.STREAM_READ_BLOCKS_IN_FILE_CACHE, 0); - verifyStatisticGaugeValue(ioStats, StreamStatisticNames.STREAM_READ_ACTIVE_MEMORY_IN_USE, 0); - assertThatStatisticMaximum(ioStats, - StoreStatisticNames.ACTION_EXECUTOR_ACQUIRED + SUFFIX_MAX).isGreaterThan(0); + verifyStatisticGaugeValue(ioStats, STREAM_READ_BLOCKS_IN_FILE_CACHE, 0); + verifyStatisticGaugeValue(ioStats, STREAM_READ_ACTIVE_MEMORY_IN_USE, 0); } @Test @@ -184,15 +228,67 @@ public class ITestS3APrefetchingInputStream extends AbstractS3ACostTest { in.seek(S_1K * 12); in.read(buffer, 0, S_1K * 4); - verifyStatisticCounterValue(ioStats, StoreStatisticNames.ACTION_HTTP_GET_REQUEST, 1); - verifyStatisticCounterValue(ioStats, StreamStatisticNames.STREAM_READ_OPENED, 1); - verifyStatisticCounterValue(ioStats, StreamStatisticNames.STREAM_READ_PREFETCH_OPERATIONS, 0); + verifyStatisticCounterValue(ioStats, ACTION_HTTP_GET_REQUEST, 1); + verifyStatisticCounterValue(ioStats, STREAM_READ_OPENED, 1); + 
verifyStatisticCounterValue(ioStats, STREAM_READ_PREFETCH_OPERATIONS, 0); // The buffer pool is not used - verifyStatisticGaugeValue(ioStats, StreamStatisticNames.STREAM_READ_ACTIVE_MEMORY_IN_USE, 0); + verifyStatisticGaugeValue(ioStats, STREAM_READ_ACTIVE_MEMORY_IN_USE, 0); // no prefetch ops, so no action_executor_acquired assertThatStatisticMaximum(ioStats, - StoreStatisticNames.ACTION_EXECUTOR_ACQUIRED + SUFFIX_MAX).isEqualTo(-1); + ACTION_EXECUTOR_ACQUIRED + SUFFIX_MAX).isEqualTo(-1); } } + @Test + public void testStatusProbesAfterClosingStream() throws Throwable { + describe("When the underlying input stream is closed, the prefetch input stream" + + " should still support some status probes"); + + byte[] data = ContractTestUtils.dataset(SMALL_FILE_SIZE, 'a', 26); + Path smallFile = methodPath(); + ContractTestUtils.writeDataset(getFileSystem(), smallFile, data, data.length, 16, true); + + FSDataInputStream in = getFileSystem().open(smallFile); + + byte[] buffer = new byte[SMALL_FILE_SIZE]; + in.read(buffer, 0, S_1K * 4); + in.seek(S_1K * 12); + in.read(buffer, 0, S_1K * 4); + + long pos = in.getPos(); + IOStatistics ioStats = in.getIOStatistics(); + S3AInputStreamStatistics inputStreamStatistics = + ((S3APrefetchingInputStream) (in.getWrappedStream())).getS3AStreamStatistics(); + + assertNotNull("Prefetching input IO stats should not be null", ioStats); + assertNotNull("Prefetching input stream stats should not be null", inputStreamStatistics); + assertNotEquals("Position retrieved from prefetching input stream should be greater than 0", 0, + pos); + + in.close(); + + // status probes after closing the input stream + long newPos = in.getPos(); + IOStatistics newIoStats = in.getIOStatistics(); + S3AInputStreamStatistics newInputStreamStatistics = + ((S3APrefetchingInputStream) (in.getWrappedStream())).getS3AStreamStatistics(); + + assertNotNull("Prefetching input IO stats should not be null", newIoStats); + assertNotNull("Prefetching input stream stats should not be null", newInputStreamStatistics); + assertNotEquals("Position retrieved from prefetching input stream should be greater than 0", 0, + newPos); + + // compare status probes after closing of the stream with status probes done before + // closing the stream + assertEquals("Position retrieved through stream before and after closing should match", pos, + newPos); + assertEquals("IO stats retrieved through stream before and after closing should match", ioStats, + newIoStats); + assertEquals("Stream stats retrieved through stream before and after closing should match", + inputStreamStatistics, newInputStreamStatistics); + + assertFalse("seekToNewSource() not supported with prefetch", in.seekToNewSource(10)); + + } + } diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestInstrumentationLifecycle.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestInstrumentationLifecycle.java new file mode 100644 index 00000000000..d8b9247008c --- /dev/null +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestInstrumentationLifecycle.java @@ -0,0 +1,104 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a; + +import java.net.URI; + +import org.assertj.core.api.Assertions; +import org.junit.Test; + +import org.apache.hadoop.fs.impl.WeakRefMetricsSource; +import org.apache.hadoop.metrics2.MetricsSource; +import org.apache.hadoop.metrics2.MetricsSystem; +import org.apache.hadoop.test.AbstractHadoopTestBase; + +import static org.apache.hadoop.fs.s3a.S3AInstrumentation.getMetricsSystem; +import static org.apache.hadoop.fs.s3a.Statistic.DIRECTORIES_CREATED; +import static org.assertj.core.api.Assertions.assertThat; + +/** + * Test the {@link S3AInstrumentation} lifecycle, in particular how + * it binds to hadoop metrics through a {@link WeakRefMetricsSource} + * and that it will deregister itself in {@link S3AInstrumentation#close()}. + */ +public class TestInstrumentationLifecycle extends AbstractHadoopTestBase { + + @Test + public void testDoubleClose() throws Throwable { + S3AInstrumentation instrumentation = new S3AInstrumentation(new URI("s3a://example/")); + + // the metric system is created in the constructor + assertThat(S3AInstrumentation.hasMetricSystem()) + .describedAs("S3AInstrumentation.hasMetricSystem()") + .isTrue(); + // ask for a metric + String metricName = DIRECTORIES_CREATED.getSymbol(); + assertThat(instrumentation.lookupMetric(metricName)) + .describedAs("lookupMetric(%s) while open", metricName) + .isNotNull(); + + MetricsSystem activeMetrics = getMetricsSystem(); + final String metricSourceName = instrumentation.getMetricSourceName(); + final MetricsSource source = activeMetrics.getSource(metricSourceName); + // verify the source is registered through a weak ref, and that the + // reference maps to the instance. 
+ Assertions.assertThat(source) + .describedAs("metric source %s", metricSourceName) + .isNotNull() + .isInstanceOf(WeakRefMetricsSource.class) + .extracting(m -> ((WeakRefMetricsSource) m).getSource()) + .isSameAs(instrumentation); + + // this will close the metrics system + instrumentation.close(); + + // iostats is still valid + assertThat(instrumentation.getIOStatistics()) + .describedAs("iostats of %s", instrumentation) + .isNotNull(); + + // no metrics + assertThat(S3AInstrumentation.hasMetricSystem()) + .describedAs("S3AInstrumentation.hasMetricSystem()") + .isFalse(); + + // metric lookup still works, so any invocation of an s3a + // method which still updates a metric also works + assertThat(instrumentation.lookupMetric(metricName)) + .describedAs("lookupMetric(%s) when closed", metricName) + .isNotNull(); + + // which we can implicitly verify by asking for it and + // verifying that we get given a different one back + // from the demand-created instance + MetricsSystem metrics2 = getMetricsSystem(); + assertThat(metrics2) + .describedAs("metric system 2") + .isNotSameAs(activeMetrics); + + // this is going to be a no-op + instrumentation.close(); + + // which we can verify because the metrics system doesn't + // get closed this time + assertThat(getMetricsSystem()) + .describedAs("metric system 3") + .isSameAs(metrics2); + } +} diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java index 6030005d10f..6456cb5e12a 100644 --- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java @@ -22,9 +22,15 @@ import java.io.IOException; import java.io.InterruptedIOException; import java.net.URI; import java.nio.file.AccessDeniedException; +import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; import java.util.List; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; +import javax.annotation.Nullable; import com.amazonaws.auth.AWSCredentials; import com.amazonaws.auth.AWSCredentialsProvider; @@ -37,6 +43,7 @@ import org.junit.rules.ExpectedException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.s3a.auth.AbstractSessionCredentialsProvider; import org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider; import org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException; import org.apache.hadoop.io.retry.RetryPolicy; @@ -46,6 +53,7 @@ import static org.apache.hadoop.fs.s3a.S3ATestConstants.*; import static org.apache.hadoop.fs.s3a.S3ATestUtils.*; import static org.apache.hadoop.fs.s3a.S3AUtils.*; import static org.apache.hadoop.test.LambdaTestUtils.intercept; +import static org.apache.hadoop.test.LambdaTestUtils.interceptFuture; import static org.junit.Assert.*; /** @@ -198,7 +206,7 @@ public class TestS3AAWSCredentialsProvider { /** * A credential provider whose constructor signature doesn't match. */ - static class ConstructorSignatureErrorProvider + protected static class ConstructorSignatureErrorProvider implements AWSCredentialsProvider { @SuppressWarnings("unused") @@ -218,7 +226,7 @@ public class TestS3AAWSCredentialsProvider { /** * A credential provider whose constructor raises an NPE. 
*/ - static class ConstructorFailureProvider + protected static class ConstructorFailureProvider implements AWSCredentialsProvider { @SuppressWarnings("unused") @@ -246,7 +254,7 @@ public class TestS3AAWSCredentialsProvider { } } - static class AWSExceptionRaisingFactory implements AWSCredentialsProvider { + protected static class AWSExceptionRaisingFactory implements AWSCredentialsProvider { public static final String NO_AUTH = "No auth"; @@ -462,7 +470,7 @@ public class TestS3AAWSCredentialsProvider { /** * Credential provider which raises an IOE when constructed. */ - private static class IOERaisingProvider implements AWSCredentialsProvider { + protected static class IOERaisingProvider implements AWSCredentialsProvider { public IOERaisingProvider(URI uri, Configuration conf) throws IOException { @@ -480,4 +488,153 @@ public class TestS3AAWSCredentialsProvider { } } + private static final AWSCredentials EXPECTED_CREDENTIALS = new AWSCredentials() { + @Override + public String getAWSAccessKeyId() { + return "expectedAccessKey"; + } + + @Override + public String getAWSSecretKey() { + return "expectedSecret"; + } + }; + + /** + * Credential provider that takes a long time. + */ + protected static class SlowProvider extends AbstractSessionCredentialsProvider { + + public SlowProvider(@Nullable URI uri, Configuration conf) { + super(uri, conf); + } + + @Override + protected AWSCredentials createCredentials(Configuration config) throws IOException { + // yield to other callers to induce race condition + Thread.yield(); + return EXPECTED_CREDENTIALS; + } + } + + private static final int CONCURRENT_THREADS = 10; + + @Test + public void testConcurrentAuthentication() throws Throwable { + Configuration conf = createProviderConfiguration(SlowProvider.class.getName()); + Path testFile = getCSVTestPath(conf); + + AWSCredentialProviderList list = createAWSCredentialProviderSet(testFile.toUri(), conf); + + SlowProvider provider = (SlowProvider) list.getProviders().get(0); + + ExecutorService pool = Executors.newFixedThreadPool(CONCURRENT_THREADS); + + List<Future<AWSCredentials>> results = new ArrayList<>(); + + try { + assertFalse( + "Provider not initialized. isInitialized should be false", + provider.isInitialized()); + assertFalse( + "Provider not initialized. hasCredentials should be false", + provider.hasCredentials()); + if (provider.getInitializationException() != null) { + throw new AssertionError( + "Provider not initialized. getInitializationException should return null", + provider.getInitializationException()); + } + + for (int i = 0; i < CONCURRENT_THREADS; i++) { + results.add(pool.submit(() -> list.getCredentials())); + } + + for (Future<AWSCredentials> result : results) { + AWSCredentials credentials = result.get(); + assertEquals("Access key from credential provider", + "expectedAccessKey", credentials.getAWSAccessKeyId()); + assertEquals("Secret key from credential provider", + "expectedSecret", credentials.getAWSSecretKey()); + } + } finally { + pool.awaitTermination(10, TimeUnit.SECONDS); + pool.shutdown(); + } + + assertTrue( + "Provider initialized without errors. isInitialized should be true", + provider.isInitialized()); + assertTrue( + "Provider initialized without errors. hasCredentials should be true", + provider.hasCredentials()); + if (provider.getInitializationException() != null) { + throw new AssertionError( + "Provider initialized without errors. getInitializationException should return null", + provider.getInitializationException()); + } + } + + /** + * Credential provider with error.
+ */ + protected static class ErrorProvider extends AbstractSessionCredentialsProvider { + + public ErrorProvider(@Nullable URI uri, Configuration conf) { + super(uri, conf); + } + + @Override + protected AWSCredentials createCredentials(Configuration config) throws IOException { + throw new IOException("expected error"); + } + } + + @Test + public void testConcurrentAuthenticationError() throws Throwable { + Configuration conf = createProviderConfiguration(ErrorProvider.class.getName()); + Path testFile = getCSVTestPath(conf); + + AWSCredentialProviderList list = createAWSCredentialProviderSet(testFile.toUri(), conf); + ErrorProvider provider = (ErrorProvider) list.getProviders().get(0); + + ExecutorService pool = Executors.newFixedThreadPool(CONCURRENT_THREADS); + + List<Future<AWSCredentials>> results = new ArrayList<>(); + + try { + assertFalse("Provider not initialized. isInitialized should be false", + provider.isInitialized()); + assertFalse("Provider not initialized. hasCredentials should be false", + provider.hasCredentials()); + if (provider.getInitializationException() != null) { + throw new AssertionError( + "Provider not initialized. getInitializationException should return null", + provider.getInitializationException()); + } + + for (int i = 0; i < CONCURRENT_THREADS; i++) { + results.add(pool.submit(() -> list.getCredentials())); + } + + for (Future<AWSCredentials> result : results) { + interceptFuture(CredentialInitializationException.class, + "expected error", + result + ); + } + } finally { + pool.awaitTermination(10, TimeUnit.SECONDS); + pool.shutdown(); + } + + assertTrue( + "Provider initialization failed. isInitialized should be true", + provider.isInitialized()); + assertFalse( + "Provider initialization failed. hasCredentials should be false", + provider.hasCredentials()); + assertTrue( + "Provider initialization failed. getInitializationException should contain the error", + provider.getInitializationException().getMessage().contains("expected error")); + } } diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AProxy.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AProxy.java new file mode 100644 index 00000000000..e05ee25adfa --- /dev/null +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AProxy.java @@ -0,0 +1,101 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package org.apache.hadoop.fs.s3a; + +import java.io.IOException; + +import com.amazonaws.ClientConfiguration; +import com.amazonaws.Protocol; +import org.assertj.core.api.Assertions; +import org.junit.Test; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.test.AbstractHadoopTestBase; + +import static org.apache.hadoop.fs.s3a.Constants.PROXY_HOST; +import static org.apache.hadoop.fs.s3a.Constants.PROXY_PORT; +import static org.apache.hadoop.fs.s3a.Constants.PROXY_SECURED; +import static org.apache.hadoop.fs.s3a.S3AUtils.initProxySupport; + +/** + * Tests to verify {@link S3AUtils} translates the proxy configurations + * are set correctly to Client configurations which are later used to construct + * the proxy in AWS SDK. + */ +public class TestS3AProxy extends AbstractHadoopTestBase { + + /** + * Verify Http proxy protocol. + */ + @Test + public void testProxyHttp() throws IOException { + Configuration proxyConfigForHttp = createProxyConfig(false); + verifyProxy(proxyConfigForHttp, false); + } + + /** + * Verify Https proxy protocol. + */ + @Test + public void testProxyHttps() throws IOException { + Configuration proxyConfigForHttps = createProxyConfig(true); + verifyProxy(proxyConfigForHttps, true); + } + + /** + * Verify default proxy protocol. + */ + @Test + public void testProxyDefault() throws IOException { + Configuration proxyConfigDefault = new Configuration(); + proxyConfigDefault.set(PROXY_HOST, "testProxyDefault"); + verifyProxy(proxyConfigDefault, false); + } + + /** + * Assert that the configuration set for a proxy gets translated to Client + * configuration with the correct protocol to be used by AWS SDK. + * @param proxyConfig Configuration used to set the proxy configs. + * @param isExpectedSecured What is the expected protocol for the proxy to + * be? true for https, and false for http. + * @throws IOException + */ + private void verifyProxy(Configuration proxyConfig, + boolean isExpectedSecured) + throws IOException { + ClientConfiguration awsConf = new ClientConfiguration(); + initProxySupport(proxyConfig, "test-bucket", awsConf); + Assertions.assertThat(awsConf.getProxyProtocol()) + .describedAs("Proxy protocol not as expected") + .isEqualTo(isExpectedSecured ? Protocol.HTTPS : Protocol.HTTP); + } + + /** + * Create a configuration file with proxy configs. + * @param isSecured Should the configured proxy be secured or not? + * @return configuration. 
+ */ + private Configuration createProxyConfig(boolean isSecured) { + Configuration conf = new Configuration(); + conf.set(PROXY_HOST, "testProxy"); + conf.set(PROXY_PORT, "1234"); + conf.setBoolean(PROXY_SECURED, isSecured); + return conf; + } +} diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/AbstractAuditingTest.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/AbstractAuditingTest.java index c76e3fa968f..f5e5cd5e954 100644 --- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/AbstractAuditingTest.java +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/AbstractAuditingTest.java @@ -20,8 +20,10 @@ package org.apache.hadoop.fs.s3a.audit; import java.io.IOException; import java.util.Map; +import java.util.function.Consumer; import com.amazonaws.services.s3.model.GetObjectMetadataRequest; +import com.amazonaws.services.s3.model.GetObjectRequest; import org.junit.After; import org.junit.Before; import org.slf4j.Logger; @@ -138,6 +140,17 @@ public abstract class AbstractAuditingTest extends AbstractHadoopTestBase { requestFactory.newGetObjectMetadataRequest("/")); } + /** + * Create a GetObject request and modify it before passing it through auditor. + * @param modifyRequest consumer which changes the request before passing it to the auditor + * @return the request + */ + protected GetObjectRequest get(Consumer<GetObjectRequest> modifyRequest) { + GetObjectRequest req = requestFactory.newGetObjectRequest("/"); + modifyRequest.accept(req); + return manager.beforeExecution(req); + } + /** * Assert a head request fails as there is no * active span. @@ -210,4 +223,15 @@ public abstract class AbstractAuditingTest extends AbstractHadoopTestBase { .isEqualTo(expected); } + /** + * Assert the map does not contain the key, i.e., it is null.
+ * @param params map of params + * @param key key + */ + protected void assertMapNotContains(final Map<String, String> params, final String key) { + assertThat(params.get(key)) + .describedAs(key) + .isNull(); + } + } diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestHttpReferrerAuditHeader.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestHttpReferrerAuditHeader.java index b653d24d416..af94e1455fc 100644 --- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestHttpReferrerAuditHeader.java +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestHttpReferrerAuditHeader.java @@ -23,6 +23,7 @@ import java.util.Map; import java.util.regex.Matcher; import com.amazonaws.services.s3.model.GetObjectMetadataRequest; +import com.amazonaws.services.s3.model.GetObjectRequest; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; @@ -46,6 +47,7 @@ import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_OP; import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PATH; import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PATH2; import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_PRINCIPAL; +import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_RANGE; import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_THREAD0; import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_THREAD1; import static org.apache.hadoop.fs.audit.AuditConstants.PARAM_TIMESTAMP; @@ -115,6 +117,7 @@ public class TestHttpReferrerAuditHeader extends AbstractAuditingTest { assertThat(span.getTimestamp()) .describedAs("Timestamp of " + span) .isEqualTo(ts); + assertMapNotContains(params, PARAM_RANGE); assertMapContains(params, PARAM_TIMESTAMP, Long.toString(ts)); @@ -309,6 +312,44 @@ public class TestHttpReferrerAuditHeader extends AbstractAuditingTest { expectStrippedField("\"\"\"b\"", "b"); } + /** + * Verify that correct range is getting published in header. + */ + @Test + public void testGetObjectRange() throws Throwable { + AuditSpan span = span(); + GetObjectRequest request = get(getObjectRequest -> getObjectRequest.setRange(100, 200)); + Map<String, String> headers + = request.getCustomRequestHeaders(); + assertThat(headers) + .describedAs("Custom headers") + .containsKey(HEADER_REFERRER); + String header = headers.get(HEADER_REFERRER); + LOG.info("Header is {}", header); + Map<String, String> params + = HttpReferrerAuditHeader.extractQueryParameters(header); + assertMapContains(params, PARAM_RANGE, "100-200"); + } + + /** + * Verify that no range is getting added to the header in request without range. + */ + @Test + public void testGetObjectWithoutRange() throws Throwable { + AuditSpan span = span(); + GetObjectRequest request = get(getObjectRequest -> {}); + Map<String, String> headers + = request.getCustomRequestHeaders(); + assertThat(headers) + .describedAs("Custom headers") + .containsKey(HEADER_REFERRER); + String header = headers.get(HEADER_REFERRER); + LOG.info("Header is {}", header); + Map<String, String> params + = HttpReferrerAuditHeader.extractQueryParameters(header); + assertMapNotContains(params, PARAM_RANGE); + } + /** * Expect a field with quote stripping to match the expected value.
* @param str string to strip diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/prefetch/TestS3ARemoteInputStream.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/prefetch/TestS3ARemoteInputStream.java index 4ab33ef6cd0..d449a79a5a8 100644 --- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/prefetch/TestS3ARemoteInputStream.java +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/prefetch/TestS3ARemoteInputStream.java @@ -21,6 +21,7 @@ package org.apache.hadoop.fs.s3a.prefetch; import java.io.EOFException; import java.io.IOException; +import java.io.InputStream; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; @@ -35,6 +36,7 @@ import org.apache.hadoop.fs.s3a.S3ObjectAttributes; import org.apache.hadoop.fs.s3a.statistics.S3AInputStreamStatistics; import org.apache.hadoop.test.AbstractHadoopTestBase; +import static org.assertj.core.api.Assertions.assertThat; import static org.junit.Assert.assertEquals; /** @@ -97,7 +99,7 @@ public class TestS3ARemoteInputStream extends AbstractHadoopTestBase { private void testRead0SizedFileHelper(S3ARemoteInputStream inputStream, int bufferSize) throws Exception { - assertEquals(0, inputStream.available()); + assertAvailable(0, inputStream); assertEquals(-1, inputStream.read()); assertEquals(-1, inputStream.read()); @@ -121,8 +123,8 @@ public class TestS3ARemoteInputStream extends AbstractHadoopTestBase { private void testReadHelper(S3ARemoteInputStream inputStream, int bufferSize) throws Exception { - assertEquals(bufferSize, inputStream.available()); assertEquals(0, inputStream.read()); + assertAvailable(bufferSize - 1, inputStream); assertEquals(1, inputStream.read()); byte[] buffer = new byte[2]; @@ -170,12 +172,14 @@ public class TestS3ARemoteInputStream extends AbstractHadoopTestBase { int bufferSize, int fileSize) throws Exception { + assertAvailable(0, inputStream); assertEquals(0, inputStream.getPos()); - inputStream.seek(7); - assertEquals(7, inputStream.getPos()); + inputStream.seek(bufferSize); + assertAvailable(0, inputStream); + assertEquals(bufferSize, inputStream.getPos()); inputStream.seek(0); + assertAvailable(0, inputStream); - assertEquals(bufferSize, inputStream.available()); for (int i = 0; i < fileSize; i++) { assertEquals(i, inputStream.read()); } @@ -187,11 +191,20 @@ public class TestS3ARemoteInputStream extends AbstractHadoopTestBase { } } + // Can seek to the EOF: read() will then return -1. + inputStream.seek(fileSize); + assertEquals(-1, inputStream.read()); + // Test invalid seeks. 
ExceptionAsserts.assertThrows( EOFException.class, FSExceptionMessages.NEGATIVE_SEEK, () -> inputStream.seek(-1)); + + ExceptionAsserts.assertThrows( + EOFException.class, + FSExceptionMessages.CANNOT_SEEK_PAST_EOF, + () -> inputStream.seek(fileSize + 1)); } @Test @@ -217,7 +230,7 @@ public class TestS3ARemoteInputStream extends AbstractHadoopTestBase { assertEquals(7, inputStream.getPos()); inputStream.seek(0); - assertEquals(bufferSize, inputStream.available()); + assertAvailable(0, inputStream); for (int i = 0; i < fileSize; i++) { assertEquals(i, inputStream.read()); } @@ -251,9 +264,10 @@ public class TestS3ARemoteInputStream extends AbstractHadoopTestBase { private void testCloseHelper(S3ARemoteInputStream inputStream, int bufferSize) throws Exception { - assertEquals(bufferSize, inputStream.available()); + assertAvailable(0, inputStream); assertEquals(0, inputStream.read()); assertEquals(1, inputStream.read()); + assertAvailable(bufferSize - 2, inputStream); inputStream.close(); @@ -276,4 +290,11 @@ public class TestS3ARemoteInputStream extends AbstractHadoopTestBase { // Verify a second close() does not throw. inputStream.close(); } + + private static void assertAvailable(int expected, InputStream inputStream) + throws IOException { + assertThat(inputStream.available()) + .describedAs("Check available bytes on stream %s", inputStream) + .isEqualTo(expected); + } } diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/AbstractS3SelectTest.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/AbstractS3SelectTest.java index bf5d96e73b3..2c1a10a21d0 100644 --- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/AbstractS3SelectTest.java +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/AbstractS3SelectTest.java @@ -60,7 +60,9 @@ import org.apache.hadoop.mapreduce.lib.input.LineRecordReader; import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl; import org.apache.hadoop.util.DurationInfo; +import static org.apache.hadoop.fs.s3a.Constants.STORAGE_CLASS; import static org.apache.hadoop.fs.s3a.S3ATestUtils.getLandsatCSVPath; +import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides; import static org.apache.hadoop.fs.s3a.select.CsvFile.ALL_QUOTES; import static org.apache.hadoop.fs.s3a.select.SelectConstants.*; import static org.apache.hadoop.test.LambdaTestUtils.intercept; @@ -280,6 +282,14 @@ public abstract class AbstractS3SelectTest extends AbstractS3ATestBase { .hasCapability(S3_SELECT_CAPABILITY); } + @Override + protected Configuration createConfiguration() { + Configuration conf = super.createConfiguration(); + removeBaseAndBucketOverrides(conf, STORAGE_CLASS); + + return conf; + } + /** * Setup: requires select to be available. 
*/ diff --git a/hadoop-tools/hadoop-aws/src/test/resources/log4j.properties b/hadoop-tools/hadoop-aws/src/test/resources/log4j.properties index 0ec8d520428..306a79a20a2 100644 --- a/hadoop-tools/hadoop-aws/src/test/resources/log4j.properties +++ b/hadoop-tools/hadoop-aws/src/test/resources/log4j.properties @@ -53,6 +53,8 @@ log4j.logger.org.apache.hadoop.ipc.Server=WARN # for debugging low level S3a operations, uncomment these lines # Log all S3A classes log4j.logger.org.apache.hadoop.fs.s3a=DEBUG +# when logging at trace, the stack of the initialize() call is logged +#log4j.logger.org.apache.hadoop.fs.s3a.S3AFileSystem=TRACE #log4j.logger.org.apache.hadoop.fs.s3a.S3AUtils=INFO #log4j.logger.org.apache.hadoop.fs.s3a.Listing=INFO log4j.logger.org.apache.hadoop.fs.s3a.SDKV2Upgrade=WARN diff --git a/hadoop-tools/hadoop-azure/.gitignore b/hadoop-tools/hadoop-azure/.gitignore index 0e17efaa1eb..522210137ec 100644 --- a/hadoop-tools/hadoop-azure/.gitignore +++ b/hadoop-tools/hadoop-azure/.gitignore @@ -1,5 +1,6 @@ .checkstyle bin/ -src/test/resources/combinationConfigFiles src/test/resources/abfs-combination-test-configs.xml dev-support/testlogs +src/test/resources/accountSettings/* +!src/test/resources/accountSettings/accountName_settings.xml.template diff --git a/hadoop-tools/hadoop-azure/dev-support/testrun-scripts/runtests.sh b/hadoop-tools/hadoop-azure/dev-support/testrun-scripts/runtests.sh index 25d9593d573..400d0a23834 100755 --- a/hadoop-tools/hadoop-azure/dev-support/testrun-scripts/runtests.sh +++ b/hadoop-tools/hadoop-azure/dev-support/testrun-scripts/runtests.sh @@ -2,7 +2,7 @@ # shellcheck disable=SC2034 # unused variables are global in nature and used in testsupport.sh - +test set -eo pipefail # Licensed to the Apache Software Foundation (ASF) under one or more @@ -22,36 +22,154 @@ set -eo pipefail # shellcheck disable=SC1091 . dev-support/testrun-scripts/testsupport.sh +init -begin +resourceDir=src/test/resources/ +logdir=dev-support/testlogs/ +azureTestXml=azure-auth-keys.xml +azureTestXmlPath=$resourceDir$azureTestXml +processCount=8 -### ADD THE TEST COMBINATIONS BELOW. DO NOT EDIT THE ABOVE LINES. -### THE SCRIPT REQUIRES THE FOLLOWING UTILITIES xmlstarlet AND pcregrep. 
+## SECTION: TEST COMBINATION METHODS +runHNSOAuthTest() +{ + accountName=$(xmlstarlet sel -t -v '//property[name = "fs.azure.hnsTestAccountName"]/value' -n $azureTestXmlPath) + PROPERTIES=("fs.azure.account.auth.type") + VALUES=("OAuth") + triggerRun "HNS-OAuth" "$accountName" "$runTest" $processCount "$cleanUpTestContainers" +} -combination=HNS-OAuth -properties=("fs.azure.abfs.account.name" "fs.azure.test.namespace.enabled" -"fs.azure.account.auth.type") -values=("{account name}.dfs.core.windows.net" "true" "OAuth") -generateconfigs +runHNSSharedKeyTest() +{ + accountName=$(xmlstarlet sel -t -v '//property[name = "fs.azure.hnsTestAccountName"]/value' -n $azureTestXmlPath) + PROPERTIES=("fs.azure.account.auth.type") + VALUES=("SharedKey") + triggerRun "HNS-SharedKey" "$accountName" "$runTest" $processCount "$cleanUpTestContainers" +} -combination=AppendBlob-HNS-OAuth -properties=("fs.azure.abfs.account.name" "fs.azure.test.namespace.enabled" -"fs.azure.account.auth.type" "fs.azure.test.appendblob.enabled") -values=("{account name}.dfs.core.windows.net" "true" "OAuth" "true") -generateconfigs +runNonHNSSharedKeyTest() +{ + accountName=$(xmlstarlet sel -t -v '//property[name = "fs.azure.nonHnsTestAccountName"]/value' -n $azureTestXmlPath) + PROPERTIES=("fs.azure.account.auth.type") + VALUES=("SharedKey") + triggerRun "NonHNS-SharedKey" "$accountName" "$runTest" $processCount "$cleanUpTestContainers" +} -combination=HNS-SharedKey -properties=("fs.azure.abfs.account.name" "fs.azure.test.namespace.enabled" "fs.azure.account.auth.type") -values=("{account name}.dfs.core.windows.net" "true" "SharedKey") -generateconfigs +runAppendBlobHNSOAuthTest() +{ + accountName=$(xmlstarlet sel -t -v '//property[name = "fs.azure.hnsTestAccountName"]/value' -n $azureTestXmlPath) + PROPERTIES=("fs.azure.account.auth.type" "fs.azure.test.appendblob.enabled") + VALUES=("OAuth" "true") + triggerRun "AppendBlob-HNS-OAuth" "$accountName" "$runTest" $processCount "$cleanUpTestContainers" +} -combination=NonHNS-SharedKey -properties=("fs.azure.abfs.account.name" "fs.azure.test.namespace.enabled" "fs.azure.account.auth.type") -values=("{account name}.dfs.core.windows.net" "false" "SharedKey") -generateconfigs +runTest=false +cleanUpTestContainers=false +echo 'Ensure below are complete before running script:' +echo '1. Account specific settings file is present.' +echo ' Copy accountName_settings.xml.template to accountName_settings.xml' +echo ' where accountName in copied file name should be the test account name without domain' +echo ' (accountName_settings.xml.template is present in src/test/resources/accountName_settings' +echo ' folder. New account settings file to be added to same folder.)' +echo ' Follow instructions in the template to populate settings correctly for the account' +echo '2. 
In azure-auth-keys.xml, update properties fs.azure.hnsTestAccountName and fs.azure.nonHnsTestAccountName' +echo ' where accountNames should be the test account names without domain' +echo ' ' +echo ' ' +echo 'Choose action:' +echo '[Note - SET_ACTIVE_TEST_CONFIG will help activate the config for IDE/single test class runs]' +select scriptMode in SET_ACTIVE_TEST_CONFIG RUN_TEST CLEAN_UP_OLD_TEST_CONTAINERS SET_OR_CHANGE_TEST_ACCOUNT PRINT_LOG4J_LOG_PATHS_FROM_LAST_RUN +do + case $scriptMode in + SET_ACTIVE_TEST_CONFIG) + runTest=false + break + ;; + RUN_TEST) + runTest=true + read -r -p "Enter parallel test run process count [default - 8]: " processCount + processCount=${processCount:-8} + break + ;; + CLEAN_UP_OLD_TEST_CONTAINERS) + runTest=false + cleanUpTestContainers=true + break + ;; + SET_OR_CHANGE_TEST_ACCOUNT) + runTest=false + cleanUpTestContainers=false + accountSettingsFile="src/test/resources/azure-auth-keys.xml" + if [[ ! -f "$accountSettingsFile" ]]; + then + logOutput "No settings present. Creating new settings file ($accountSettingsFile) from template" + cp src/test/resources/azure-auth-keys.xml.template $accountSettingsFile + fi + vi $accountSettingsFile + exit 0 + break + ;; + PRINT_LOG4J_LOG_PATHS_FROM_LAST_RUN) + runTest=false + cleanUpTestContainers=false + logFilePaths=/tmp/logPaths + find target/ -name "*output.txt" > $logFilePaths + logOutput "$(cat $logFilePaths)" + rm $logFilePaths + exit 0 + break + ;; + *) logOutput "ERROR: Invalid selection" + ;; + esac +done -### DO NOT EDIT THE LINES BELOW. +## SECTION: COMBINATION DEFINITIONS AND TRIGGER -runtests "$@" +echo ' ' +echo 'Set the active test combination to run the action:' +select combo in HNS-OAuth HNS-SharedKey nonHNS-SharedKey AppendBlob-HNS-OAuth AllCombinationsTestRun Quit +do + case $combo in + HNS-OAuth) + runHNSOAuthTest + break + ;; + HNS-SharedKey) + runHNSSharedKeyTest + break + ;; + nonHNS-SharedKey) + runNonHNSSharedKeyTest + break + ;; + AppendBlob-HNS-OAuth) + runAppendBlobHNSOAuthTest + break + ;; + AllCombinationsTestRun) + if [ $runTest == false ] + then + logOutput "ERROR: Invalid selection for SET_ACTIVE_TEST_CONFIG. This is applicable only for RUN_TEST." + break + fi + runHNSOAuthTest + runHNSSharedKeyTest + runNonHNSSharedKeyTest + runAppendBlobHNSOAuthTest ## Keep this as the last run scenario always + break + ;; + Quit) + exit 0 + ;; + *) logOutput "ERROR: Invalid selection" + ;; + esac +done + +if [ $runTest == true ] +then + printAggregate +fi diff --git a/hadoop-tools/hadoop-azure/dev-support/testrun-scripts/testsupport.sh b/hadoop-tools/hadoop-azure/dev-support/testrun-scripts/testsupport.sh index 1b118ae1e82..28f96edd273 100644 --- a/hadoop-tools/hadoop-azure/dev-support/testrun-scripts/testsupport.sh +++ b/hadoop-tools/hadoop-azure/dev-support/testrun-scripts/testsupport.sh @@ -15,117 +15,88 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-testresourcesdir=src/test/resources -combconfsdir=$testresourcesdir/combinationConfigFiles -combtestfile=$testresourcesdir/abfs-combination-test-configs.xml +resourceDir=src/test/resources/ +accountSettingsFolderName=accountSettings +combtestfile=$resourceDir +combtestfile+=abfs-combination-test-configs.xml +logdir=dev-support/testlogs/ -logdir=dev-support/testlogs testresultsregex="Results:(\n|.)*?Tests run:" -testresultsfilename= -starttime= -threadcount= -defaultthreadcount=8 +accountConfigFileSuffix="_settings.xml" +testOutputLogFolder=$logdir +testlogfilename=combinationTestLogFile -properties= -values= +fullRunStartTime=$(date +%s) +STARTTIME=$(date +%s) +ENDTIME=$(date +%s) -validate() { - if [ -z "$threadcount" ] ; then - threadcount=$defaultthreadcount - fi - numberegex='^[0-9]+$' - if ! [[ $threadcount =~ $numberegex ]] ; then - echo "Exiting. The script param (threadcount) should be a number" - exit -1 - fi - if [ -z "$combination" ]; then - echo "Exiting. combination cannot be empty" - exit -1 - fi - propertiessize=${#properties[@]} - valuessize=${#values[@]} - if [ "$propertiessize" -lt 1 ] || [ "$valuessize" -lt 1 ] || [ "$propertiessize" -ne "$valuessize" ]; then - echo "Exiting. Both properties and values arrays has to be populated and of same size. Please check for combination $combination" - exit -1 - fi - - for filename in "${combinations[@]}"; do - if [[ ! -f "$combconfsdir/$filename.xml" ]]; then - echo "Exiting. Combination config file ($combconfsdir/$combination.xml) does not exist." - exit -1 - fi - done -} - -checkdependencies() { - if ! [ "$(command -v pcregrep)" ]; then - echo "Exiting. pcregrep is required to run the script." - exit -1 - fi - if ! [ "$(command -v xmlstarlet)" ]; then - echo "Exiting. xmlstarlet is required to run the script." - exit -1 - fi -} - -cleancombinationconfigs() { - rm -rf $combconfsdir - mkdir -p $combconfsdir -} - -generateconfigs() { - combconffile="$combconfsdir/$combination.xml" - rm -rf "$combconffile" - cat > "$combconffile" << ENDOFFILE - - - -ENDOFFILE - - propertiessize=${#properties[@]} - valuessize=${#values[@]} - if [ "$propertiessize" -ne "$valuessize" ]; then - echo "Exiting. Number of properties and values differ for $combination" - exit -1 - fi - for ((i = 0; i < propertiessize; i++)); do - key=${properties[$i]} - val=${values[$i]} - changeconf "$key" "$val" - done - formatxml "$combconffile" -} - -formatxml() { - xmlstarlet fo -s 2 "$1" > "$1.tmp" - mv "$1.tmp" "$1" -} - -setactiveconf() { - if [[ ! -f "$combconfsdir/$combination.xml" ]]; then - echo "Exiting. Combination config file ($combconfsdir/$combination.xml) does not exist." - exit -1 +outputFormatOn="\033[0;95m" +outputFormatOff="\033[0m" + +triggerRun() +{ + echo ' ' + combination=$1 + accountName=$2 + runTest=$3 + processcount=$4 + cleanUpTestContainers=$5 + + if [ -z "$accountName" ]; then + logOutput "ERROR: Test account not configured. Re-run the script and choose SET_OR_CHANGE_TEST_ACCOUNT to configure the test account." + exit 1; fi + accountConfigFile=$accountSettingsFolderName/$accountName$accountConfigFileSuffix rm -rf $combtestfile cat > $combtestfile << ENDOFFILE ENDOFFILE + propertiessize=${#PROPERTIES[@]} + valuessize=${#VALUES[@]} + if [ "$propertiessize" -ne "$valuessize" ]; then + logOutput "Exiting. 
Number of properties and values differ for $combination" + exit 1 + fi + for ((i = 0; i < propertiessize; i++)); do + key=${PROPERTIES[$i]} + val=${VALUES[$i]} + echo "Combination specific property setting: [ key=$key , value=$val ]" + changeconf "$key" "$val" + done + formatxml "$combtestfile" xmlstarlet ed -P -L -s /configuration -t elem -n include -v "" $combtestfile - xmlstarlet ed -P -L -i /configuration/include -t attr -n href -v "combinationConfigFiles/$combination.xml" $combtestfile + xmlstarlet ed -P -L -i /configuration/include -t attr -n href -v "$accountConfigFile" $combtestfile xmlstarlet ed -P -L -i /configuration/include -t attr -n xmlns -v "http://www.w3.org/2001/XInclude" $combtestfile formatxml $combtestfile -} + echo ' ' + echo "Activated [$combtestfile] - for account: $accountName for combination $combination" + testlogfilename="$testOutputLogFolder/Test-Logs-$combination.txt" + touch "$testlogfilename" -changeconf() { - xmlstarlet ed -P -L -d "/configuration/property[name='$1']" "$combconffile" - xmlstarlet ed -P -L -s /configuration -t elem -n propertyTMP -v "" -s /configuration/propertyTMP -t elem -n name -v "$1" -r /configuration/propertyTMP -v property "$combconffile" - if ! xmlstarlet ed -P -L -s "/configuration/property[name='$1']" -t elem -n value -v "$2" "$combconffile" + if [ "$runTest" == true ] then - echo "Exiting. Changing config property failed." - exit -1 + STARTTIME=$(date +%s) + echo "Running test for combination $combination on account $accountName [ProcessCount=$processcount]" + logOutput "Test run report can be seen in $testlogfilename" + mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount="$processcount" verify >> "$testlogfilename" || true + ENDTIME=$(date +%s) + summary fi + + if [ "$cleanUpTestContainers" == true ] + then + mvn test -Dtest=org.apache.hadoop.fs.azurebfs.utils.CleanupTestContainers >> "$testlogfilename" || true + if grep -q "There are test failures" "$testlogfilename"; + then logOutput "ERROR: All test containers could not be deleted. Detailed error cause in $testlogfilename" + pcregrep -M "$testresultsregex" "$testlogfilename" + exit 0 + fi + + logOutput "Delete test containers - complete. Test run logs in - $testlogfilename" + fi + } summary() { @@ -134,17 +105,42 @@ summary() { echo "$combination" echo "========================" pcregrep -M "$testresultsregex" "$testlogfilename" - } >> "$testresultsfilename" + } >> "$aggregatedTestResult" printf "\n----- Test results -----\n" pcregrep -M "$testresultsregex" "$testlogfilename" - secondstaken=$((ENDTIME - STARTTIME)) mins=$((secondstaken / 60)) secs=$((secondstaken % 60)) printf "\nTime taken: %s mins %s secs.\n" "$mins" "$secs" - echo "Find test logs for the combination ($combination) in: $testlogfilename" - echo "Find consolidated test results in: $testresultsfilename" - echo "----------" + echo "Find test result for the combination ($combination) in: $testlogfilename" + logOutput "Consolidated test result is saved in: $aggregatedTestResult" + echo "------------------------" +} + +checkdependencies() { + if ! [ "$(command -v pcregrep)" ]; then + logOutput "Exiting. pcregrep is required to run the script." + exit 1 + fi + if ! [ "$(command -v xmlstarlet)" ]; then + logOutput "Exiting. xmlstarlet is required to run the script." 
+ exit 1 + fi +} + +formatxml() { + xmlstarlet fo -s 2 "$1" > "$1.tmp" + mv "$1.tmp" "$1" +} + +changeconf() { + xmlstarlet ed -P -L -d "/configuration/property[name='$1']" "$combtestfile" + xmlstarlet ed -P -L -s /configuration -t elem -n propertyTMP -v "" -s /configuration/propertyTMP -t elem -n name -v "$1" -r /configuration/propertyTMP -v property "$combtestfile" + if ! xmlstarlet ed -P -L -s "/configuration/property[name='$1']" -t elem -n value -v "$2" "$combtestfile" + then + logOutput "Exiting. Changing config property failed." + exit 1 + fi } init() { @@ -153,89 +149,24 @@ init() { then echo "" echo "Exiting. Build failed." - exit -1 - fi - starttime=$(date +"%Y-%m-%d_%H-%M-%S") - mkdir -p "$logdir" - testresultsfilename="$logdir/$starttime/Test-Results.txt" - if [[ -z "$combinations" ]]; then - combinations=( $( ls $combconfsdir/*.xml )) - fi -} - -runtests() { - parseoptions "$@" - validate - if [ -z "$starttime" ]; then - init - fi - shopt -s nullglob - for combconffile in "${combinations[@]}"; do - STARTTIME=$(date +%s) - combination=$(basename "$combconffile" .xml) - mkdir -p "$logdir/$starttime" - testlogfilename="$logdir/$starttime/Test-Logs-$combination.txt" - printf "\nRunning the combination: %s..." "$combination" - setactiveconf - mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=$threadcount verify >> "$testlogfilename" || true - ENDTIME=$(date +%s) - summary - done -} - -begin() { - cleancombinationconfigs -} - -parseoptions() { -runactivate=0 -runtests=0 - while getopts ":c:a:t:" option; do - case "${option}" in - a) - if [[ "$runactivate" -eq "1" ]]; then - echo "-a Option is not multivalued" - exit 1 - fi - runactivate=1 - combination=$(basename "$OPTARG" .xml) - ;; - c) - runtests=1 - combination=$(basename "$OPTARG" .xml) - combinations+=("$combination") - ;; - t) - threadcount=$OPTARG - ;; - *|?|h) - if [[ -z "$combinations" ]]; then - combinations=( $( ls $combconfsdir/*.xml )) - fi - combstr="" - for combconffile in "${combinations[@]}"; do - combname=$(basename "$combconffile" .xml) - combstr="${combname}, ${combstr}" - done - combstr=${combstr:0:-2} - - echo "Usage: $0 [-n] [-a COMBINATION_NAME] [-c COMBINATION_NAME] [-t THREAD_COUNT]" - echo "" - echo "Where:" - echo " -a COMBINATION_NAME Specify the combination name which needs to be activated." 
- echo " Configured combinations: ${combstr}" - echo " -c COMBINATION_NAME Specify the combination name for test runs" - echo " -t THREAD_COUNT Specify the thread count" - exit 1 - ;; - esac - done - if [[ "$runactivate" -eq "1" && "$runtests" -eq "1" ]]; then - echo "Both activate (-a option) and test run combinations (-c option) cannot be specified together" exit 1 fi - if [[ "$runactivate" -eq "1" ]]; then - setactiveconf - exit 0 - fi + starttime=$(date +"%Y-%m-%d_%H-%M-%S") + testOutputLogFolder+=$starttime + mkdir -p "$testOutputLogFolder" + aggregatedTestResult="$testOutputLogFolder/Test-Results.txt" + } + + printAggregate() { + echo :::: AGGREGATED TEST RESULT :::: + cat "$aggregatedTestResult" + fullRunEndTime=$(date +%s) + fullRunTimeInSecs=$((fullRunEndTime - fullRunStartTime)) + mins=$((fullRunTimeInSecs / 60)) + secs=$((fullRunTimeInSecs % 60)) + printf "\nTime taken: %s mins %s secs.\n" "$mins" "$secs" + } + +logOutput() { + echo -e "$outputFormatOn" "$1" "$outputFormatOff" } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/Abfs.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/Abfs.java index 32df9422386..e595b2f4efa 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/Abfs.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/Abfs.java @@ -43,4 +43,13 @@ public class Abfs extends DelegateToFileSystem { public int getUriDefaultPort() { return -1; } + + /** + * Close the file system; the FileContext API doesn't have an explicit close. + */ + @Override + protected void finalize() throws Throwable { + fsImpl.close(); + super.finalize(); + } } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java index fafc30372b4..80f803d80da 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java @@ -117,6 +117,10 @@ public class AbfsConfiguration{ DefaultValue = DEFAULT_OPTIMIZE_FOOTER_READ) private boolean optimizeFooterRead; + @BooleanConfigurationValidatorAnnotation(ConfigurationKey = FS_AZURE_ACCOUNT_LEVEL_THROTTLING_ENABLED, + DefaultValue = DEFAULT_FS_AZURE_ACCOUNT_LEVEL_THROTTLING_ENABLED) + private boolean accountThrottlingEnabled; + @IntegerConfigurationValidatorAnnotation(ConfigurationKey = AZURE_READ_BUFFER_SIZE, MinValue = MIN_BUFFER_SIZE, MaxValue = MAX_BUFFER_SIZE, @@ -260,6 +264,14 @@ public class AbfsConfiguration{ DefaultValue = DEFAULT_ENABLE_AUTOTHROTTLING) private boolean enableAutoThrottling; + @IntegerConfigurationValidatorAnnotation(ConfigurationKey = FS_AZURE_ACCOUNT_OPERATION_IDLE_TIMEOUT, + DefaultValue = DEFAULT_ACCOUNT_OPERATION_IDLE_TIMEOUT_MS) + private int accountOperationIdleTimeout; + + @IntegerConfigurationValidatorAnnotation(ConfigurationKey = FS_AZURE_ANALYSIS_PERIOD, + DefaultValue = DEFAULT_ANALYSIS_PERIOD_MS) + private int analysisPeriod; + @IntegerConfigurationValidatorAnnotation(ConfigurationKey = FS_AZURE_ABFS_IO_RATE_LIMIT, MinValue = 0, DefaultValue = RATE_LIMIT_DEFAULT) @@ -302,6 +314,11 @@ public class AbfsConfiguration{ DefaultValue = DEFAULT_ABFS_LATENCY_TRACK) private boolean trackLatency; + @BooleanConfigurationValidatorAnnotation( + ConfigurationKey = FS_AZURE_ENABLE_READAHEAD, + DefaultValue = DEFAULT_ENABLE_READAHEAD) + private 
boolean enabledReadAhead; + @LongConfigurationValidatorAnnotation(ConfigurationKey = FS_AZURE_SAS_TOKEN_RENEW_PERIOD_FOR_STREAMS, MinValue = 0, DefaultValue = DEFAULT_SAS_TOKEN_RENEW_PERIOD_FOR_STREAMS_IN_SECONDS) @@ -689,6 +706,10 @@ public class AbfsConfiguration{ return this.azureAppendBlobDirs; } + public boolean accountThrottlingEnabled() { + return accountThrottlingEnabled; + } + public String getAzureInfiniteLeaseDirs() { return this.azureInfiniteLeaseDirs; } @@ -731,6 +752,14 @@ public class AbfsConfiguration{ return this.enableAutoThrottling; } + public int getAccountOperationIdleTimeout() { + return accountOperationIdleTimeout; + } + + public int getAnalysisPeriod() { + return analysisPeriod; + } + public int getRateLimit() { return rateLimit; } @@ -915,6 +944,15 @@ public class AbfsConfiguration{ } } + public boolean isReadAheadEnabled() { + return this.enabledReadAhead; + } + + @VisibleForTesting + void setReadAheadEnabled(final boolean enabledReadAhead) { + this.enabledReadAhead = enabledReadAhead; + } + public int getReadAheadRange() { return this.readAheadRange; } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/Abfss.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/Abfss.java index c33265ce324..ba20bbb5d76 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/Abfss.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/Abfss.java @@ -43,4 +43,13 @@ public class Abfss extends DelegateToFileSystem { public int getUriDefaultPort() { return -1; } + + /** + * Close the file system; the FileContext API doesn't have an explicit close. + */ + @Override + protected void finalize() throws Throwable { + fsImpl.close(); + super.finalize(); + } } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java index d0bdd9818db..5534b5fb44a 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java @@ -55,7 +55,6 @@ import org.apache.commons.lang3.tuple.Pair; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.fs.azurebfs.commit.ResilientCommitByRename; import org.apache.hadoop.fs.azurebfs.services.AbfsClient; -import org.apache.hadoop.fs.azurebfs.services.AbfsClientThrottlingIntercept; import org.apache.hadoop.fs.azurebfs.services.AbfsListStatusRemoteIterator; import org.apache.hadoop.fs.RemoteIterator; import org.apache.hadoop.classification.InterfaceStability; @@ -118,6 +117,7 @@ import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_BLOCK_UPLOAD_BUFFER_DIR; import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.BLOCK_UPLOAD_ACTIVE_BLOCKS_DEFAULT; import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.DATA_BLOCKS_BUFFER_DEFAULT; +import static org.apache.hadoop.fs.azurebfs.constants.InternalConstants.CAPABILITY_SAFE_READAHEAD; import static org.apache.hadoop.fs.impl.PathCapabilitiesSupport.validatePathCapabilityArgs; import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.logIOStatisticsAtLevel; import static org.apache.hadoop.util.functional.RemoteIterators.filteringRemoteIterator; @@ 
-225,7 +225,6 @@ public class AzureBlobFileSystem extends FileSystem } } - AbfsClientThrottlingIntercept.initializeSingleton(abfsConfiguration.isAutoThrottlingEnabled()); rateLimiting = RateLimitingFactory.create(abfsConfiguration.getRateLimit()); LOG.debug("Initializing AzureBlobFileSystem for {} complete", uri); } @@ -237,6 +236,7 @@ public class AzureBlobFileSystem extends FileSystem sb.append("uri=").append(uri); sb.append(", user='").append(abfsStore.getUser()).append('\''); sb.append(", primaryUserGroup='").append(abfsStore.getPrimaryGroup()).append('\''); + sb.append("[" + CAPABILITY_SAFE_READAHEAD + "]"); sb.append('}'); return sb.toString(); } @@ -1638,6 +1638,11 @@ public class AzureBlobFileSystem extends FileSystem new TracingContext(clientCorrelationId, fileSystemId, FSOperationType.HAS_PATH_CAPABILITY, tracingHeaderFormat, listener)); + + // probe for presence of the HADOOP-18546 readahead fix. + case CAPABILITY_SAFE_READAHEAD: + return true; + default: return super.hasPathCapability(p, capability); } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java index 11397e03e5c..e5e70561265 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java @@ -808,6 +808,7 @@ public class AzureBlobFileSystemStore implements Closeable, ListingSupport { .withReadBufferSize(abfsConfiguration.getReadBufferSize()) .withReadAheadQueueDepth(abfsConfiguration.getReadAheadQueueDepth()) .withTolerateOobAppends(abfsConfiguration.getTolerateOobAppends()) + .isReadAheadEnabled(abfsConfiguration.isReadAheadEnabled()) .withReadSmallFilesCompletely(abfsConfiguration.readSmallFilesCompletely()) .withOptimizeFooterRead(abfsConfiguration.optimizeFooterRead()) .withReadAheadRange(abfsConfiguration.getReadAheadRange()) diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java index 9d3b2d5e82c..a59f76b6d0f 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java @@ -38,6 +38,7 @@ public final class ConfigurationKeys { public static final String FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME = "fs.azure.account.key"; public static final String FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME_REGX = "fs\\.azure\\.account\\.key\\.(.*)"; public static final String FS_AZURE_SECURE_MODE = "fs.azure.secure.mode"; + public static final String FS_AZURE_ACCOUNT_LEVEL_THROTTLING_ENABLED = "fs.azure.account.throttling.enabled"; // Retry strategy defined by the user public static final String AZURE_MIN_BACKOFF_INTERVAL = "fs.azure.io.retry.min.backoff.interval"; @@ -116,6 +117,8 @@ public final class ConfigurationKeys { public static final String AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION = "fs.azure.createRemoteFileSystemDuringInitialization"; public static final String AZURE_SKIP_USER_GROUP_METADATA_DURING_INITIALIZATION = "fs.azure.skipUserGroupMetadataDuringInitialization"; public static final String FS_AZURE_ENABLE_AUTOTHROTTLING = "fs.azure.enable.autothrottling"; + public static final String 
FS_AZURE_ACCOUNT_OPERATION_IDLE_TIMEOUT = "fs.azure.account.operation.idle.timeout"; + public static final String FS_AZURE_ANALYSIS_PERIOD = "fs.azure.analysis.period"; public static final String FS_AZURE_ALWAYS_USE_HTTPS = "fs.azure.always.use.https"; public static final String FS_AZURE_ATOMIC_RENAME_KEY = "fs.azure.atomic.rename.key"; /** This config ensures that during create overwrite an existing file will be @@ -186,6 +189,13 @@ public final class ConfigurationKeys { public static final String FS_AZURE_SKIP_SUPER_USER_REPLACEMENT = "fs.azure.identity.transformer.skip.superuser.replacement"; public static final String AZURE_KEY_ACCOUNT_KEYPROVIDER = "fs.azure.account.keyprovider"; public static final String AZURE_KEY_ACCOUNT_SHELLKEYPROVIDER_SCRIPT = "fs.azure.shellkeyprovider.script"; + + /** + * Enable or disable readahead buffer in AbfsInputStream. + * Value: {@value}. + */ + public static final String FS_AZURE_ENABLE_READAHEAD = "fs.azure.enable.readahead"; + /** Setting this true will make the driver use it's own RemoteIterator implementation */ public static final String FS_AZURE_ENABLE_ABFS_LIST_ITERATOR = "fs.azure.enable.abfslistiterator"; /** Server side encryption key */ diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java index 63d62a33b18..9994d9f5207 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java @@ -94,6 +94,9 @@ public final class FileSystemConfigurations { public static final boolean DEFAULT_ENABLE_FLUSH = true; public static final boolean DEFAULT_DISABLE_OUTPUTSTREAM_FLUSH = true; public static final boolean DEFAULT_ENABLE_AUTOTHROTTLING = true; + public static final boolean DEFAULT_FS_AZURE_ACCOUNT_LEVEL_THROTTLING_ENABLED = true; + public static final int DEFAULT_ACCOUNT_OPERATION_IDLE_TIMEOUT_MS = 60_000; + public static final int DEFAULT_ANALYSIS_PERIOD_MS = 10_000; public static final DelegatingSSLSocketFactory.SSLChannelMode DEFAULT_FS_AZURE_SSL_CHANNEL_MODE = DelegatingSSLSocketFactory.SSLChannelMode.Default; @@ -106,6 +109,7 @@ public final class FileSystemConfigurations { public static final boolean DEFAULT_ABFS_LATENCY_TRACK = false; public static final long DEFAULT_SAS_TOKEN_RENEW_PERIOD_FOR_STREAMS_IN_SECONDS = 120; + public static final boolean DEFAULT_ENABLE_READAHEAD = true; public static final String DEFAULT_FS_AZURE_USER_AGENT_PREFIX = EMPTY_STRING; public static final String DEFAULT_VALUE_UNKNOWN = "UNKNOWN"; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/metrics/DisableEventTypeMetrics.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/InternalConstants.java similarity index 56% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/metrics/DisableEventTypeMetrics.java rename to hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/InternalConstants.java index 7b4af0c3e09..85603b0bfd8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/metrics/DisableEventTypeMetrics.java +++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/InternalConstants.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,28 +15,32 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -package org.apache.hadoop.yarn.metrics; + +package org.apache.hadoop.fs.azurebfs.constants; import org.apache.hadoop.classification.InterfaceAudience; -import org.apache.hadoop.metrics2.MetricsCollector; -import org.apache.hadoop.metrics2.annotation.Metrics; +/** + * Constants which are used internally and which don't fit into the other + * classes. + * For use within the {@code hadoop-azure} module only. + */ @InterfaceAudience.Private -@Metrics(context="yarn") -public class DisableEventTypeMetrics implements EventTypeMetrics { - @Override - public void increment(Enum type, long processingTimeUs) { - //nop - return; - } - @Override - public void getMetrics(MetricsCollector collector, boolean all) { - //nop - return; +public final class InternalConstants { + + private InternalConstants() { } - @Override - public long get(Enum type) { - return 0; - } + /** + * Does this version of the store have safe readahead? + * Possible combinations of this and the probe + * {@code "fs.capability.etags.available"}. + *

+ * <ol>
+ *   <li>{@value}: store is safe</li>
+ *   <li>no etags: store is safe</li>
+ *   <li>etags and not {@value}: store is UNSAFE</li>
+ * </ol>
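+ * <p>
+ * A minimal usage sketch (assuming {@code fs} is an initialized
+ * AzureBlobFileSystem instance and {@code path} a path within it):
+ * <pre>
+ *   // true only on releases carrying the HADOOP-18546 readahead fix
+ *   boolean safeReadahead =
+ *       fs.hasPathCapability(path, CAPABILITY_SAFE_READAHEAD);
+ * </pre>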
+ */ + public static final String CAPABILITY_SAFE_READAHEAD = + "fs.azure.capability.readahead.safe"; } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java index aa72ed64e6e..25562660ae2 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java @@ -101,6 +101,7 @@ public class AbfsClient implements Closeable { private AccessTokenProvider tokenProvider; private SASTokenProvider sasTokenProvider; private final AbfsCounters abfsCounters; + private final AbfsThrottlingIntercept intercept; private final ListeningScheduledExecutorService executorService; @@ -120,6 +121,7 @@ public class AbfsClient implements Closeable { this.retryPolicy = abfsClientContext.getExponentialRetryPolicy(); this.accountName = abfsConfiguration.getAccountName().substring(0, abfsConfiguration.getAccountName().indexOf(AbfsHttpConstants.DOT)); this.authType = abfsConfiguration.getAuthType(accountName); + this.intercept = AbfsThrottlingInterceptFactory.getInstance(accountName, abfsConfiguration); String encryptionKey = this.abfsConfiguration .getClientProvidedEncryptionKey(); @@ -216,6 +218,10 @@ public class AbfsClient implements Closeable { return sharedKeyCredentials; } + AbfsThrottlingIntercept getIntercept() { + return intercept; + } + List createDefaultHeaders() { final List requestHeaders = new ArrayList(); requestHeaders.add(new AbfsHttpHeader(X_MS_VERSION, xMsVersion)); @@ -1130,6 +1136,10 @@ public class AbfsClient implements Closeable { sasToken = cachedSasToken; LOG.trace("Using cached SAS token."); } + // if SAS Token contains a prefix of ?, it should be removed + if (sasToken.charAt(0) == '?') { + sasToken = sasToken.substring(1); + } queryBuilder.setSASToken(sasToken); LOG.trace("SAS token fetch complete for {} on {}", operation, path); } catch (Exception ex) { @@ -1273,6 +1283,14 @@ public class AbfsClient implements Closeable { return abfsCounters; } + /** + * Getter for abfsConfiguration from AbfsClient. 
+ * @return AbfsConfiguration instance + */ + protected AbfsConfiguration getAbfsConfiguration() { + return abfsConfiguration; + } + public int getNumLeaseThreads() { return abfsConfiguration.getNumLeaseThreads(); } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingAnalyzer.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingAnalyzer.java index 6dfd352954d..f1eb3a2a774 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingAnalyzer.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingAnalyzer.java @@ -20,20 +20,23 @@ package org.apache.hadoop.fs.azurebfs.services; import java.util.Timer; import java.util.TimerTask; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; import org.apache.hadoop.classification.VisibleForTesting; +import org.apache.hadoop.fs.azurebfs.AbfsConfiguration; import org.apache.hadoop.util.Preconditions; import org.apache.commons.lang3.StringUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import static org.apache.hadoop.util.Time.now; + class AbfsClientThrottlingAnalyzer { private static final Logger LOG = LoggerFactory.getLogger( AbfsClientThrottlingAnalyzer.class); - private static final int DEFAULT_ANALYSIS_PERIOD_MS = 10 * 1000; private static final int MIN_ANALYSIS_PERIOD_MS = 1000; private static final int MAX_ANALYSIS_PERIOD_MS = 30000; private static final double MIN_ACCEPTABLE_ERROR_PERCENTAGE = .1; @@ -50,42 +53,38 @@ class AbfsClientThrottlingAnalyzer { private String name = null; private Timer timer = null; private AtomicReference blobMetrics = null; + private AtomicLong lastExecutionTime = null; + private final AtomicBoolean isOperationOnAccountIdle = new AtomicBoolean(false); + private AbfsConfiguration abfsConfiguration = null; + private boolean accountLevelThrottlingEnabled = true; private AbfsClientThrottlingAnalyzer() { // hide default constructor } - /** - * Creates an instance of the AbfsClientThrottlingAnalyzer class with - * the specified name. - * - * @param name a name used to identify this instance. - * @throws IllegalArgumentException if name is null or empty. - */ - AbfsClientThrottlingAnalyzer(String name) throws IllegalArgumentException { - this(name, DEFAULT_ANALYSIS_PERIOD_MS); - } - /** * Creates an instance of the AbfsClientThrottlingAnalyzer class with * the specified name and period. * * @param name A name used to identify this instance. - * @param period The frequency, in milliseconds, at which metrics are - * analyzed. + * @param abfsConfiguration The configuration set. * @throws IllegalArgumentException If name is null or empty. * If period is less than 1000 or greater than 30000 milliseconds. 
*/ - AbfsClientThrottlingAnalyzer(String name, int period) + AbfsClientThrottlingAnalyzer(String name, AbfsConfiguration abfsConfiguration) throws IllegalArgumentException { Preconditions.checkArgument( StringUtils.isNotEmpty(name), "The argument 'name' cannot be null or empty."); + int period = abfsConfiguration.getAnalysisPeriod(); Preconditions.checkArgument( period >= MIN_ANALYSIS_PERIOD_MS && period <= MAX_ANALYSIS_PERIOD_MS, "The argument 'period' must be between 1000 and 30000."); this.name = name; - this.analysisPeriodMs = period; + this.abfsConfiguration = abfsConfiguration; + this.accountLevelThrottlingEnabled = abfsConfiguration.accountThrottlingEnabled(); + this.analysisPeriodMs = abfsConfiguration.getAnalysisPeriod(); + this.lastExecutionTime = new AtomicLong(now()); this.blobMetrics = new AtomicReference( new AbfsOperationMetrics(System.currentTimeMillis())); this.timer = new Timer( @@ -95,6 +94,47 @@ class AbfsClientThrottlingAnalyzer { analysisPeriodMs); } + /** + * Resumes the timer if it was stopped. + */ + private void resumeTimer() { + blobMetrics = new AtomicReference( + new AbfsOperationMetrics(System.currentTimeMillis())); + timer.schedule(new TimerTaskImpl(), + analysisPeriodMs, + analysisPeriodMs); + isOperationOnAccountIdle.set(false); + } + + /** + * Synchronized method to suspend or resume timer. + * @param timerFunctionality resume or suspend. + * @param timerTask The timertask object. + * @return true or false. + */ + private synchronized boolean timerOrchestrator(TimerFunctionality timerFunctionality, + TimerTask timerTask) { + switch (timerFunctionality) { + case RESUME: + if (isOperationOnAccountIdle.get()) { + resumeTimer(); + } + break; + case SUSPEND: + if (accountLevelThrottlingEnabled && (System.currentTimeMillis() + - lastExecutionTime.get() >= getOperationIdleTimeout())) { + isOperationOnAccountIdle.set(true); + timerTask.cancel(); + timer.purge(); + return true; + } + break; + default: + break; + } + return false; + } + /** * Updates metrics with results from the current storage operation. * @@ -104,12 +144,13 @@ class AbfsClientThrottlingAnalyzer { public void addBytesTransferred(long count, boolean isFailedOperation) { AbfsOperationMetrics metrics = blobMetrics.get(); if (isFailedOperation) { - metrics.bytesFailed.addAndGet(count); - metrics.operationsFailed.incrementAndGet(); + metrics.addBytesFailed(count); + metrics.incrementOperationsFailed(); } else { - metrics.bytesSuccessful.addAndGet(count); - metrics.operationsSuccessful.incrementAndGet(); + metrics.addBytesSuccessful(count); + metrics.incrementOperationsSuccessful(); } + blobMetrics.set(metrics); } /** @@ -117,6 +158,8 @@ class AbfsClientThrottlingAnalyzer { * @return true if Thread sleeps(Throttling occurs) else false. 
*/ public boolean suspendIfNecessary() { + lastExecutionTime.set(now()); + timerOrchestrator(TimerFunctionality.RESUME, null); int duration = sleepDuration; if (duration > 0) { try { @@ -134,19 +177,27 @@ class AbfsClientThrottlingAnalyzer { return sleepDuration; } + int getOperationIdleTimeout() { + return abfsConfiguration.getAccountOperationIdleTimeout(); + } + + AtomicBoolean getIsOperationOnAccountIdle() { + return isOperationOnAccountIdle; + } + private int analyzeMetricsAndUpdateSleepDuration(AbfsOperationMetrics metrics, int sleepDuration) { final double percentageConversionFactor = 100; - double bytesFailed = metrics.bytesFailed.get(); - double bytesSuccessful = metrics.bytesSuccessful.get(); - double operationsFailed = metrics.operationsFailed.get(); - double operationsSuccessful = metrics.operationsSuccessful.get(); + double bytesFailed = metrics.getBytesFailed().get(); + double bytesSuccessful = metrics.getBytesSuccessful().get(); + double operationsFailed = metrics.getOperationsFailed().get(); + double operationsSuccessful = metrics.getOperationsSuccessful().get(); double errorPercentage = (bytesFailed <= 0) ? 0 : (percentageConversionFactor * bytesFailed / (bytesFailed + bytesSuccessful)); - long periodMs = metrics.endTime - metrics.startTime; + long periodMs = metrics.getEndTime() - metrics.getStartTime(); double newSleepDuration; @@ -238,10 +289,13 @@ class AbfsClientThrottlingAnalyzer { } long now = System.currentTimeMillis(); - if (now - blobMetrics.get().startTime >= analysisPeriodMs) { + if (timerOrchestrator(TimerFunctionality.SUSPEND, this)) { + return; + } + if (now - blobMetrics.get().getStartTime() >= analysisPeriodMs) { AbfsOperationMetrics oldMetrics = blobMetrics.getAndSet( new AbfsOperationMetrics(now)); - oldMetrics.endTime = now; + oldMetrics.setEndTime(now); sleepDuration = analyzeMetricsAndUpdateSleepDuration(oldMetrics, sleepDuration); } @@ -252,24 +306,4 @@ class AbfsClientThrottlingAnalyzer { } } } - - /** - * Stores Abfs operation metrics during each analysis period. 
- */ - static class AbfsOperationMetrics { - private AtomicLong bytesFailed; - private AtomicLong bytesSuccessful; - private AtomicLong operationsFailed; - private AtomicLong operationsSuccessful; - private long endTime; - private long startTime; - - AbfsOperationMetrics(long startTime) { - this.startTime = startTime; - this.bytesFailed = new AtomicLong(); - this.bytesSuccessful = new AtomicLong(); - this.operationsFailed = new AtomicLong(); - this.operationsSuccessful = new AtomicLong(); - } - } } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java index 7303e833418..52a46bc7469 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java @@ -19,10 +19,12 @@ package org.apache.hadoop.fs.azurebfs.services; import java.net.HttpURLConnection; +import java.util.concurrent.locks.ReentrantLock; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import org.apache.hadoop.fs.azurebfs.AbfsConfiguration; import org.apache.hadoop.fs.azurebfs.AbfsStatistic; import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations; @@ -38,35 +40,89 @@ import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations; * and sleeps just enough to minimize errors, allowing optimal ingress and/or * egress throughput. */ -public final class AbfsClientThrottlingIntercept { +public final class AbfsClientThrottlingIntercept implements AbfsThrottlingIntercept { private static final Logger LOG = LoggerFactory.getLogger( AbfsClientThrottlingIntercept.class); private static final String RANGE_PREFIX = "bytes="; - private static AbfsClientThrottlingIntercept singleton = null; - private AbfsClientThrottlingAnalyzer readThrottler = null; - private AbfsClientThrottlingAnalyzer writeThrottler = null; - private static boolean isAutoThrottlingEnabled = false; + private static AbfsClientThrottlingIntercept singleton; // singleton, initialized in static initialization block + private static final ReentrantLock LOCK = new ReentrantLock(); + private final AbfsClientThrottlingAnalyzer readThrottler; + private final AbfsClientThrottlingAnalyzer writeThrottler; + private final String accountName; // Hide default constructor - private AbfsClientThrottlingIntercept() { - readThrottler = new AbfsClientThrottlingAnalyzer("read"); - writeThrottler = new AbfsClientThrottlingAnalyzer("write"); + public AbfsClientThrottlingIntercept(String accountName, AbfsConfiguration abfsConfiguration) { + this.accountName = accountName; + this.readThrottler = setAnalyzer("read " + accountName, abfsConfiguration); + this.writeThrottler = setAnalyzer("write " + accountName, abfsConfiguration); + LOG.debug("Client-side throttling is enabled for the ABFS file system for the account : {}", accountName); } - public static synchronized void initializeSingleton(boolean enableAutoThrottling) { - if (!enableAutoThrottling) { - return; - } + // Hide default constructor + private AbfsClientThrottlingIntercept(AbfsConfiguration abfsConfiguration) { + //Account name is kept as empty as same instance is shared across all accounts + this.accountName = ""; + this.readThrottler = setAnalyzer("read", abfsConfiguration); + this.writeThrottler = setAnalyzer("write", abfsConfiguration); + 
LOG.debug("Client-side throttling is enabled for the ABFS file system using singleton intercept"); + } + + /** + * Sets the analyzer for the intercept. + * @param name Name of the analyzer. + * @param abfsConfiguration The configuration. + * @return AbfsClientThrottlingAnalyzer instance. + */ + private AbfsClientThrottlingAnalyzer setAnalyzer(String name, AbfsConfiguration abfsConfiguration) { + return new AbfsClientThrottlingAnalyzer(name, abfsConfiguration); + } + + /** + * Returns the analyzer for read operations. + * @return AbfsClientThrottlingAnalyzer for read. + */ + AbfsClientThrottlingAnalyzer getReadThrottler() { + return readThrottler; + } + + /** + * Returns the analyzer for write operations. + * @return AbfsClientThrottlingAnalyzer for write. + */ + AbfsClientThrottlingAnalyzer getWriteThrottler() { + return writeThrottler; + } + + /** + * Creates a singleton object of the AbfsClientThrottlingIntercept. + * which is shared across all filesystem instances. + * @param abfsConfiguration configuration set. + * @return singleton object of intercept. + */ + static AbfsClientThrottlingIntercept initializeSingleton(AbfsConfiguration abfsConfiguration) { if (singleton == null) { - singleton = new AbfsClientThrottlingIntercept(); - isAutoThrottlingEnabled = true; - LOG.debug("Client-side throttling is enabled for the ABFS file system."); + LOCK.lock(); + try { + if (singleton == null) { + singleton = new AbfsClientThrottlingIntercept(abfsConfiguration); + LOG.debug("Client-side throttling is enabled for the ABFS file system."); + } + } finally { + LOCK.unlock(); + } } + return singleton; } - static void updateMetrics(AbfsRestOperationType operationType, - AbfsHttpOperation abfsHttpOperation) { - if (!isAutoThrottlingEnabled || abfsHttpOperation == null) { + /** + * Updates the metrics for successful and failed read and write operations. + * @param operationType Only applicable for read and write operations. + * @param abfsHttpOperation Used for status code and data transferred. + */ + @Override + public void updateMetrics(AbfsRestOperationType operationType, + AbfsHttpOperation abfsHttpOperation) { + if (abfsHttpOperation == null) { return; } @@ -82,7 +138,7 @@ public final class AbfsClientThrottlingIntercept { case Append: contentLength = abfsHttpOperation.getBytesSent(); if (contentLength > 0) { - singleton.writeThrottler.addBytesTransferred(contentLength, + writeThrottler.addBytesTransferred(contentLength, isFailedOperation); } break; @@ -90,7 +146,7 @@ public final class AbfsClientThrottlingIntercept { String range = abfsHttpOperation.getConnection().getRequestProperty(HttpHeaderConfigurations.RANGE); contentLength = getContentLengthIfKnown(range); if (contentLength > 0) { - singleton.readThrottler.addBytesTransferred(contentLength, + readThrottler.addBytesTransferred(contentLength, isFailedOperation); } break; @@ -104,21 +160,18 @@ public final class AbfsClientThrottlingIntercept { * uses this to suspend the request, if necessary, to minimize errors and * maximize throughput. 
*/ - static void sendingRequest(AbfsRestOperationType operationType, + @Override + public void sendingRequest(AbfsRestOperationType operationType, AbfsCounters abfsCounters) { - if (!isAutoThrottlingEnabled) { - return; - } - switch (operationType) { case ReadFile: - if (singleton.readThrottler.suspendIfNecessary() + if (readThrottler.suspendIfNecessary() && abfsCounters != null) { abfsCounters.incrementCounter(AbfsStatistic.READ_THROTTLES, 1); } break; case Append: - if (singleton.writeThrottler.suspendIfNecessary() + if (writeThrottler.suspendIfNecessary() && abfsCounters != null) { abfsCounters.incrementCounter(AbfsStatistic.WRITE_THROTTLES, 1); } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java index 553ccdcbc0a..fdeaf701775 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java @@ -50,6 +50,7 @@ import static java.lang.Math.min; import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_KB; import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.STREAM_ID_LEN; +import static org.apache.hadoop.fs.azurebfs.constants.InternalConstants.CAPABILITY_SAFE_READAHEAD; import static org.apache.hadoop.util.StringUtils.toLowerCase; /** @@ -137,7 +138,7 @@ public class AbfsInputStream extends FSInputStream implements CanUnbuffer, this.tolerateOobAppends = abfsInputStreamContext.isTolerateOobAppends(); this.eTag = eTag; this.readAheadRange = abfsInputStreamContext.getReadAheadRange(); - this.readAheadEnabled = true; + this.readAheadEnabled = abfsInputStreamContext.isReadAheadEnabled(); this.alwaysReadBufferSize = abfsInputStreamContext.shouldReadBufferSizeAlways(); this.bufferedPreadDisabled = abfsInputStreamContext @@ -745,6 +746,11 @@ public class AbfsInputStream extends FSInputStream implements CanUnbuffer, return buffer; } + @VisibleForTesting + public boolean isReadAheadEnabled() { + return readAheadEnabled; + } + @VisibleForTesting public int getReadAheadRange() { return readAheadRange; @@ -823,11 +829,12 @@ public class AbfsInputStream extends FSInputStream implements CanUnbuffer, @Override public String toString() { final StringBuilder sb = new StringBuilder(super.toString()); + sb.append("AbfsInputStream@(").append(this.hashCode()).append("){"); + sb.append("[" + CAPABILITY_SAFE_READAHEAD + "]"); if (streamStatistics != null) { - sb.append("AbfsInputStream@(").append(this.hashCode()).append("){"); - sb.append(streamStatistics.toString()); - sb.append("}"); + sb.append(", ").append(streamStatistics); } + sb.append("}"); return sb.toString(); } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamContext.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamContext.java index ae69cde6efa..e258958b1a1 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamContext.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamContext.java @@ -35,6 +35,8 @@ public class AbfsInputStreamContext extends AbfsStreamContext { private boolean tolerateOobAppends; + private boolean isReadAheadEnabled = true; + private boolean 
alwaysReadBufferSize; private int readAheadBlockSize; @@ -72,6 +74,12 @@ public class AbfsInputStreamContext extends AbfsStreamContext { return this; } + public AbfsInputStreamContext isReadAheadEnabled( + final boolean isReadAheadEnabled) { + this.isReadAheadEnabled = isReadAheadEnabled; + return this; + } + public AbfsInputStreamContext withReadAheadRange( final int readAheadRange) { this.readAheadRange = readAheadRange; @@ -141,6 +149,10 @@ public class AbfsInputStreamContext extends AbfsStreamContext { return tolerateOobAppends; } + public boolean isReadAheadEnabled() { + return isReadAheadEnabled; + } + public int getReadAheadRange() { return readAheadRange; } diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/CopyRequest.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsNoOpThrottlingIntercept.java similarity index 61% rename from hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/CopyRequest.java rename to hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsNoOpThrottlingIntercept.java index c25a630cc29..6b84e583c33 100644 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/CopyRequest.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsNoOpThrottlingIntercept.java @@ -16,26 +16,22 @@ * limitations under the License. */ -package org.apache.hadoop.fs.swift.http; +package org.apache.hadoop.fs.azurebfs.services; -import org.apache.http.client.methods.HttpEntityEnclosingRequestBase; +final class AbfsNoOpThrottlingIntercept implements AbfsThrottlingIntercept { -/** - * Implementation for SwiftRestClient to make copy requests. - * COPY is a method that came with WebDAV (RFC2518), and is not something that - * can be handled by all proxies en-route to a filesystem. - */ -class CopyRequest extends HttpEntityEnclosingRequestBase { + public static final AbfsNoOpThrottlingIntercept INSTANCE = new AbfsNoOpThrottlingIntercept(); - CopyRequest() { - super(); + private AbfsNoOpThrottlingIntercept() { } - /** - * @return http method name - */ @Override - public String getMethod() { - return "COPY"; + public void updateMetrics(final AbfsRestOperationType operationType, + final AbfsHttpOperation abfsHttpOperation) { } -} \ No newline at end of file + + @Override + public void sendingRequest(final AbfsRestOperationType operationType, + final AbfsCounters abfsCounters) { + } +} diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOperationMetrics.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOperationMetrics.java new file mode 100644 index 00000000000..2e53367d39f --- /dev/null +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOperationMetrics.java @@ -0,0 +1,139 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.util.concurrent.atomic.AtomicLong; + +/** + * Stores Abfs operation metrics during each analysis period. + */ +class AbfsOperationMetrics { + + /** + * No of bytes which could not be transferred due to a failed operation. + */ + private final AtomicLong bytesFailed; + + /** + * No of bytes successfully transferred during a successful operation. + */ + private final AtomicLong bytesSuccessful; + + /** + * Total no of failed operations. + */ + private final AtomicLong operationsFailed; + + /** + * Total no of successful operations. + */ + private final AtomicLong operationsSuccessful; + + /** + * Time when collection of metrics ended. + */ + private long endTime; + + /** + * Time when the collection of metrics started. + */ + private final long startTime; + + AbfsOperationMetrics(long startTime) { + this.startTime = startTime; + this.bytesFailed = new AtomicLong(); + this.bytesSuccessful = new AtomicLong(); + this.operationsFailed = new AtomicLong(); + this.operationsSuccessful = new AtomicLong(); + } + + /** + * + * @return bytes failed to transfer. + */ + AtomicLong getBytesFailed() { + return bytesFailed; + } + + /** + * + * @return bytes successfully transferred. + */ + AtomicLong getBytesSuccessful() { + return bytesSuccessful; + } + + /** + * + * @return no of operations failed. + */ + AtomicLong getOperationsFailed() { + return operationsFailed; + } + + /** + * + * @return no of successful operations. + */ + AtomicLong getOperationsSuccessful() { + return operationsSuccessful; + } + + /** + * + * @return end time of metric collection. + */ + long getEndTime() { + return endTime; + } + + /** + * + * @param endTime sets the end time. + */ + void setEndTime(final long endTime) { + this.endTime = endTime; + } + + /** + * + * @return start time of metric collection. + */ + long getStartTime() { + return startTime; + } + + void addBytesFailed(long bytes) { + this.getBytesFailed().addAndGet(bytes); + } + + void addBytesSuccessful(long bytes) { + this.getBytesSuccessful().addAndGet(bytes); + } + + void incrementOperationsFailed() { + this.getOperationsFailed().incrementAndGet(); + } + + void incrementOperationsSuccessful() { + this.getOperationsSuccessful().incrementAndGet(); + } + +} + diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java index 74b267d563e..00da9b66013 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java @@ -45,6 +45,8 @@ public class AbfsRestOperation { private final AbfsRestOperationType operationType; // Blob FS client, which has the credentials, retry policy, and logs. 
private final AbfsClient client; + // Return intercept instance + private final AbfsThrottlingIntercept intercept; // the HTTP method (PUT, PATCH, POST, GET, HEAD, or DELETE) private final String method; // full URL including query parameters @@ -145,6 +147,7 @@ public class AbfsRestOperation { || AbfsHttpConstants.HTTP_METHOD_PATCH.equals(method)); this.sasToken = sasToken; this.abfsCounters = client.getAbfsCounters(); + this.intercept = client.getIntercept(); } /** @@ -241,7 +244,8 @@ public class AbfsRestOperation { */ private boolean executeHttpOperation(final int retryCount, TracingContext tracingContext) throws AzureBlobFileSystemException { - AbfsHttpOperation httpOperation = null; + AbfsHttpOperation httpOperation; + try { // initialize the HTTP request and open the connection httpOperation = new AbfsHttpOperation(url, method, requestHeaders); @@ -278,8 +282,7 @@ public class AbfsRestOperation { // dump the headers AbfsIoUtils.dumpHeadersToDebugLog("Request Headers", httpOperation.getConnection().getRequestProperties()); - AbfsClientThrottlingIntercept.sendingRequest(operationType, abfsCounters); - + intercept.sendingRequest(operationType, abfsCounters); if (hasRequestBody) { // HttpUrlConnection requires httpOperation.sendRequest(buffer, bufferOffset, bufferLength); @@ -317,7 +320,7 @@ public class AbfsRestOperation { return false; } finally { - AbfsClientThrottlingIntercept.updateMetrics(operationType, httpOperation); + intercept.updateMetrics(operationType, httpOperation); } LOG.debug("HttpRequest: {}: {}", operationType, httpOperation); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timeline/TimelineVersionWatcher.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsThrottlingIntercept.java similarity index 53% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timeline/TimelineVersionWatcher.java rename to hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsThrottlingIntercept.java index b00f13a0aba..57b5095bb32 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timeline/TimelineVersionWatcher.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsThrottlingIntercept.java @@ -16,32 +16,34 @@ * limitations under the License. */ -package org.apache.hadoop.yarn.server.timeline; +package org.apache.hadoop.fs.azurebfs.services; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; -import org.junit.rules.TestWatcher; -import org.junit.runner.Description; +/** + * An interface for Abfs Throttling Interface. + */ @InterfaceAudience.Private @InterfaceStability.Unstable -public class TimelineVersionWatcher extends TestWatcher { - static final float DEFAULT_TIMELINE_VERSION = 1.0f; - private TimelineVersion version; - - @Override - protected void starting(Description description) { - version = description.getAnnotation(TimelineVersion.class); - } +public interface AbfsThrottlingIntercept { /** - * @return the version number of timeline server for the current test (using - * timeline server v1.0 by default) + * Updates the metrics for successful and failed read and write operations. + * @param operationType Only applicable for read and write operations. 
+ * @param abfsHttpOperation Used for status code and data transferred. */ - public float getTimelineVersion() { - if(version == null) { - return DEFAULT_TIMELINE_VERSION; - } - return version.value(); - } + void updateMetrics(AbfsRestOperationType operationType, + AbfsHttpOperation abfsHttpOperation); + + /** + * Called before the request is sent. Client-side throttling + * uses this to suspend the request, if necessary, to minimize errors and + * maximize throughput. + * @param operationType Only applicable for read and write operations. + * @param abfsCounters Used for counters. + */ + void sendingRequest(AbfsRestOperationType operationType, + AbfsCounters abfsCounters); + } diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsThrottlingInterceptFactory.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsThrottlingInterceptFactory.java new file mode 100644 index 00000000000..279b7a318ca --- /dev/null +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsThrottlingInterceptFactory.java @@ -0,0 +1,102 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.util.ArrayList; +import java.util.List; + +import org.apache.hadoop.fs.azurebfs.AbfsConfiguration; +import org.apache.hadoop.util.WeakReferenceMap; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Class to get an instance of throttling intercept class per account. + */ +final class AbfsThrottlingInterceptFactory { + + private AbfsThrottlingInterceptFactory() { + } + + private static AbfsConfiguration abfsConfig; + + /** + * List of references notified of loss. + */ + private static List lostReferences = new ArrayList<>(); + + private static final Logger LOG = LoggerFactory.getLogger( + AbfsThrottlingInterceptFactory.class); + + /** + * Map which stores instance of ThrottlingIntercept class per account. + */ + private static WeakReferenceMap + interceptMap = new WeakReferenceMap<>( + AbfsThrottlingInterceptFactory::factory, + AbfsThrottlingInterceptFactory::referenceLost); + + /** + * Returns instance of throttling intercept. + * @param accountName Account name. + * @return instance of throttling intercept. + */ + private static AbfsClientThrottlingIntercept factory(final String accountName) { + return new AbfsClientThrottlingIntercept(accountName, abfsConfig); + } + + /** + * Reference lost callback. + * @param accountName key lost. + */ + private static void referenceLost(String accountName) { + lostReferences.add(accountName); + } + + /** + * Returns an instance of AbfsThrottlingIntercept. + * + * @param accountName The account for which we need instance of throttling intercept. 
+ @param abfsConfiguration The object of abfsconfiguration class. + * @return Instance of AbfsThrottlingIntercept. + */ + static synchronized AbfsThrottlingIntercept getInstance(String accountName, + AbfsConfiguration abfsConfiguration) { + abfsConfig = abfsConfiguration; + AbfsThrottlingIntercept intercept; + if (!abfsConfiguration.isAutoThrottlingEnabled()) { + return AbfsNoOpThrottlingIntercept.INSTANCE; + } + // If singleton is enabled use a static instance of the intercept class for all accounts + if (!abfsConfiguration.accountThrottlingEnabled()) { + intercept = AbfsClientThrottlingIntercept.initializeSingleton( + abfsConfiguration); + } else { + // Return the instance from the map + intercept = interceptMap.get(accountName); + if (intercept == null) { + intercept = new AbfsClientThrottlingIntercept(accountName, + abfsConfiguration); + interceptMap.put(accountName, intercept); + } + } + return intercept; + } +} diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java index 317aaf545a1..031545f57a1 100644 --- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java @@ -101,6 +101,7 @@ final class ReadBufferManager { // hide instance constructor private ReadBufferManager() { + LOGGER.trace("Creating readbuffer manager with HADOOP-18546 patch"); } @@ -544,7 +545,6 @@ final class ReadBufferManager { LOGGER.debug("Purging stale buffers for AbfsInputStream {} ", stream); readAheadQueue.removeIf(readBuffer -> readBuffer.getStream() == stream); purgeList(stream, completedReadList); - purgeList(stream, inProgressList); } /** @@ -642,4 +642,9 @@ final class ReadBufferManager { freeList.clear(); completedReadList.add(buf); } + + @VisibleForTesting + int getNumBuffers() { + return NUM_BUFFERS; + } } diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/AcceptAllFilter.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimerFunctionality.java similarity index 74% rename from hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/AcceptAllFilter.java rename to hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimerFunctionality.java index 16c9da25777..bf7da69ec49 100644 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/AcceptAllFilter.java +++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimerFunctionality.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,17 +15,12 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -package org.apache.hadoop.fs.swift; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.PathFilter; +package org.apache.hadoop.fs.azurebfs.services; -/** - * A path filter that accepts everything - */ -public class AcceptAllFilter implements PathFilter { - @Override - public boolean accept(Path file) { - return true; - } +public enum TimerFunctionality { + RESUME, + + SUSPEND } + diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md index 35d36055604..31498df1790 100644 --- a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md +++ b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md @@ -767,6 +767,12 @@ Hflush() being the only documented API that can provide persistent data transfer, Flush() also attempting to persist buffered data will lead to performance issues. + +### Account level throttling Options + +`fs.azure.account.operation.idle.timeout`: This value specifies the time after which the timer for the analyzer (read or +write) should be paused until no new request is made again. The default value for the same is 60 seconds. + ### HNS Check Options Config `fs.azure.account.hns.enabled` provides an option to specify whether the storage account is HNS enabled or not. In case the config is not provided, @@ -877,6 +883,9 @@ when there are too many writes from the same process. tuned with this config considering each queued request holds a buffer. Set the value 3 or 4 times the value set for s.azure.write.max.concurrent.requests. +`fs.azure.analysis.period`: The time after which sleep duration is recomputed after analyzing metrics. The default value +for the same is 10 seconds. + ### Security Options `fs.azure.always.use.https`: Enforces to use HTTPS instead of HTTP when the flag is made true. Irrespective of the flag, `AbfsClient` will use HTTPS if the secure diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/index.md b/hadoop-tools/hadoop-azure/src/site/markdown/index.md index 2af6b498a27..595353896d1 100644 --- a/hadoop-tools/hadoop-azure/src/site/markdown/index.md +++ b/hadoop-tools/hadoop-azure/src/site/markdown/index.md @@ -435,7 +435,7 @@ The service is expected to return a response in JSON format: ### Delegation token support in WASB -Delegation token support support can be enabled in WASB using the following configuration: +Delegation token support can be enabled in WASB using the following configuration: ```xml @@ -507,7 +507,7 @@ The cache is maintained at a filesystem object level. ``` -The maximum number of entries that that cache can hold can be customized using the following setting: +The maximum number of entries that the cache can hold can be customized using the following setting: ``` fs.azure.authorization.caching.maxentries diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md b/hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md index 933f86be3e8..e249a7bd6a9 100644 --- a/hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md +++ b/hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md @@ -602,26 +602,76 @@ various test combinations, it will: 2. Run tests for all combinations 3. Summarize results across all the test combination runs. -As a pre-requisite step, fill config values for test accounts and credentials -needed for authentication in `src/test/resources/azure-auth-keys.xml.template` -and rename as `src/test/resources/azure-auth-keys.xml`. +Below are the pre-requiste steps to follow: +1. 
Copy -**To add a new test combination:** Templates for mandatory test combinations -for PR validation are present in `dev-support/testrun-scripts/runtests.sh`. -If a new one needs to be added, add a combination set within -`dev-support/testrun-scripts/runtests.sh` similar to the ones already defined -and -1. Provide a new combination name -2. Update properties and values array which need to be effective for the test -combination -3. Call generateconfigs + ./src/test/resources/azure-auth-keys.xml.template + TO + ./src/test/resources/azure-auth-keys.xml + Update the account names to be used in the test run for the HNS and non-HNS + combinations in the 2 properties present in the xml (account name should be + without the domain part), namely + + fs.azure.hnsTestAccountName + fs.azure.nonHnsTestAccountName + azure-auth-keys.xml is listed in .gitignore, so any accidental account name leak is prevented. + +``` +XInclude is supported, so for extra security secrets may be +kept out of the source tree then referenced through an XInclude element: + + +``` + +2. Create account config files (one config file per account) in the folder: + + ./src/test/resources/accountSettings/ + Follow the instructions at the start of the template file + + accountName_settings.xml.template + within the accountSettings folder while creating the account config file. + New files created in the accountSettings folder are listed in .gitignore to + prevent accidental cred leaks. **To run PR validation:** Running command -* `dev-support/testrun-scripts/runtests.sh` will generate configurations for -each of the combinations defined and run tests for all the combinations. -* `dev-support/testrun-scripts/runtests.sh -c {combinationname}` Specific -combinations can be provided with -c option. If combinations are provided -with -c option, tests for only those combinations will be run. +* `dev-support/testrun-scripts/runtests.sh` will prompt as below: +```bash +Choose action: +[Note - SET_ACTIVE_TEST_CONFIG will help activate the config for IDE/single test class runs] +1) SET_ACTIVE_TEST_CONFIG 4) SET_OR_CHANGE_TEST_ACCOUNT +2) RUN_TEST 5) PRINT_LOG4J_LOG_PATHS_FROM_LAST_RUN +3) CLEAN_UP_OLD_TEST_CONTAINERS +#? 2 +``` +Enter 1: To set the active combination for IDE test runs/single mvn test class runs. + +Enter 2: To choose the combination for a full mvn test suite run. + +Enter 3: To clean up auto-generated test containers left on the account by +abruptly ended test runs. + +Enter 4: To create/modify the config file that decides the account to use for a specific test combination. + +Enter 5: To print the log4j paths of the last test run. + +On the next prompt, the current list of combinations to choose from is provided. +Sample for the RUN_TEST action: +```bash +Enter parallel test run process count [default - 8]: 4 +Set the active test combination to run the action: +1) HNS-OAuth 3) nonHNS-SharedKey 5) AllCombinationsTestRun +2) HNS-SharedKey 4) AppendBlob-HNS-OAuth 6) Quit +#? 1 + +Combination specific property setting: [ key=fs.azure.account.auth.type , value=OAuth ] + +Activated [src/test/resources/abfs-combination-test-configs.xml] - for account: snvijayacontracttest for combination HNS-OAuth +Running test for combination HNS-OAuth on account snvijayacontracttest [ProcessCount=4] +Test run report can be seen in dev-support/testlogs/2022-10-07_05-23-22/Test-Logs-HNS-OAuth.txt +``` + +Provide the inputs requested for the action chosen at the first prompt. **Test logs:** Test runs will create a folder within dev-support/testlogs to save the test logs.
Folder name will be the test start timestamp. The mvn verify @@ -632,25 +682,18 @@ consolidated results of all the combination runs will be saved into a file as Test-Results.log in the same folder. When run for PR validation, the consolidated test results needs to be pasted into the PR comment section. -**To generate config for use in IDE:** Running command with -a (activate) option -`dev-support/testrun-scripts/runtests.sh -a {combination name}` will update -the effective config relevant for the specific test combination. Hence the same -config files used by the mvn test runs can be used for IDE without any manual -updates needed within config file. +**To add a new test combination:** Templates for mandatory test combinations +for PR validation are present in `dev-support/testrun-scripts/runtests.sh`. +If a new one needs to be added, add a combination to +`dev-support/testrun-scripts/runtests.sh`. +(Refer to current active combinations within +`SECTION: COMBINATION DEFINITIONS AND TRIGGER` and +`SECTION: TEST COMBINATION METHODS` in the script). -**Other command line options:** -* -a Specify the combination name which needs to be -activated. This is to be used to generate config for use in IDE. -* -c Specify the combination name for test runs. If this -config is specified, tests for only the specified combinations will run. All -combinations of tests will be running if this config is not specified. -* -t ABFS mvn tests are run in parallel mode. Tests by default -are run with 8 thread count. It can be changed by providing -t +**Test Configuration Details:** -In order to test ABFS, please add the following configuration to your -`src/test/resources/azure-auth-keys.xml` file. Note that the ABFS tests include -compatibility tests which require WASB credentials, in addition to the ABFS -credentials. + Note that the ABFS tests include compatibility tests which require WASB + credentials, in addition to the ABFS credentials. ```xml diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsReadWriteAndSeek.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsReadWriteAndSeek.java index 5bd6eaff42e..beada775ae8 100644 --- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsReadWriteAndSeek.java +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsReadWriteAndSeek.java @@ -32,7 +32,6 @@ import org.apache.hadoop.fs.azurebfs.constants.FSOperationType; import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream; import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream; import org.apache.hadoop.fs.azurebfs.utils.TracingHeaderValidator; -import org.apache.hadoop.fs.statistics.IOStatisticsLogging; import org.apache.hadoop.fs.statistics.IOStatisticsSource; import static org.apache.hadoop.fs.CommonConfigurationKeys.IOSTATISTICS_LOGGING_LEVEL_INFO; @@ -40,6 +39,7 @@ import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.A import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.DEFAULT_READ_BUFFER_SIZE; import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.MAX_BUFFER_SIZE; import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.MIN_BUFFER_SIZE; +import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.logIOStatisticsAtLevel; /** * Test read, write and seek. 
@@ -50,18 +50,27 @@ import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.M public class ITestAbfsReadWriteAndSeek extends AbstractAbfsScaleTest { private static final String TEST_PATH = "/testfile"; - @Parameterized.Parameters(name = "Size={0}") + /** + * Parameterize on read buffer size and readahead. + * For test performance, a full x*y test matrix is not used. + * @return the test parameters + */ + @Parameterized.Parameters(name = "Size={0}-readahead={1}") public static Iterable sizes() { - return Arrays.asList(new Object[][]{{MIN_BUFFER_SIZE}, - {DEFAULT_READ_BUFFER_SIZE}, - {APPENDBLOB_MAX_WRITE_BUFFER_SIZE}, - {MAX_BUFFER_SIZE}}); + return Arrays.asList(new Object[][]{{MIN_BUFFER_SIZE, true}, + {DEFAULT_READ_BUFFER_SIZE, false}, + {DEFAULT_READ_BUFFER_SIZE, true}, + {APPENDBLOB_MAX_WRITE_BUFFER_SIZE, false}, + {MAX_BUFFER_SIZE, true}}); } private final int size; + private final boolean readaheadEnabled; - public ITestAbfsReadWriteAndSeek(final int size) throws Exception { + public ITestAbfsReadWriteAndSeek(final int size, + final boolean readaheadEnabled) throws Exception { this.size = size; + this.readaheadEnabled = readaheadEnabled; } @Test @@ -74,6 +83,7 @@ public class ITestAbfsReadWriteAndSeek extends AbstractAbfsScaleTest { final AbfsConfiguration abfsConfiguration = fs.getAbfsStore().getAbfsConfiguration(); abfsConfiguration.setWriteBufferSize(bufferSize); abfsConfiguration.setReadBufferSize(bufferSize); + abfsConfiguration.setReadAheadEnabled(readaheadEnabled); final byte[] b = new byte[2 * bufferSize]; new Random().nextBytes(b); @@ -85,7 +95,7 @@ public class ITestAbfsReadWriteAndSeek extends AbstractAbfsScaleTest { } finally{ stream.close(); } - IOStatisticsLogging.logIOStatisticsAtLevel(LOG, IOSTATISTICS_LOGGING_LEVEL_INFO, stream); + logIOStatisticsAtLevel(LOG, IOSTATISTICS_LOGGING_LEVEL_INFO, stream); final byte[] readBuffer = new byte[2 * bufferSize]; int result; @@ -109,7 +119,7 @@ public class ITestAbfsReadWriteAndSeek extends AbstractAbfsScaleTest { inputStream.seek(0); result = inputStream.read(readBuffer, 0, bufferSize); } - IOStatisticsLogging.logIOStatisticsAtLevel(LOG, IOSTATISTICS_LOGGING_LEVEL_INFO, statisticsSource); + logIOStatisticsAtLevel(LOG, IOSTATISTICS_LOGGING_LEVEL_INFO, statisticsSource); assertNotEquals("data read in final read()", -1, result); assertArrayEquals(readBuffer, b); @@ -121,6 +131,7 @@ public class ITestAbfsReadWriteAndSeek extends AbstractAbfsScaleTest { final AbfsConfiguration abfsConfiguration = fs.getAbfsStore().getAbfsConfiguration(); int bufferSize = MIN_BUFFER_SIZE; abfsConfiguration.setReadBufferSize(bufferSize); + abfsConfiguration.setReadAheadEnabled(readaheadEnabled); final byte[] b = new byte[bufferSize * 10]; new Random().nextBytes(b); @@ -132,8 +143,10 @@ public class ITestAbfsReadWriteAndSeek extends AbstractAbfsScaleTest { ((AbfsOutputStream) stream.getWrappedStream()) .getStreamID())); stream.write(b); + logIOStatisticsAtLevel(LOG, IOSTATISTICS_LOGGING_LEVEL_INFO, stream); } + final byte[] readBuffer = new byte[4 * bufferSize]; int result; fs.registerListener( @@ -146,6 +159,7 @@ public class ITestAbfsReadWriteAndSeek extends AbstractAbfsScaleTest { ((AbfsInputStream) inputStream.getWrappedStream()) .getStreamID())); result = inputStream.read(readBuffer, 0, bufferSize*4); + logIOStatisticsAtLevel(LOG, IOSTATISTICS_LOGGING_LEVEL_INFO, inputStream); } fs.registerListener(null); } diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java index edc3930607c..b164689ef80 100644 --- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java @@ -479,4 +479,17 @@ public class ITestAzureBlobFileSystemDelegationSAS extends AbstractAbfsIntegrati "r--r-----", fileStatus.getPermission().toString()); } + + @Test + public void testSASQuesMarkPrefix() throws Exception { + AbfsConfiguration testConfig = this.getConfiguration(); + // the SAS Token Provider is changed + testConfig.set(FS_AZURE_SAS_TOKEN_PROVIDER_TYPE, "org.apache.hadoop.fs.azurebfs.extensions.MockWithPrefixSASTokenProvider"); + + AzureBlobFileSystem testFs = (AzureBlobFileSystem) FileSystem.newInstance(getRawConfiguration()); + Path testFile = new Path("/testSASPrefixQuesMark"); + + // the creation of this filesystem should work correctly even when a SAS Token is generated with a ? prefix + testFs.create(testFile).close(); + } } diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestFileSystemInitialization.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestFileSystemInitialization.java index 8b60dd801cb..f7d4a5b7a83 100644 --- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestFileSystemInitialization.java +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestFileSystemInitialization.java @@ -20,14 +20,22 @@ package org.apache.hadoop.fs.azurebfs; import java.net.URI; +import org.assertj.core.api.Assertions; import org.junit.Test; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.azurebfs.constants.FileSystemUriSchemes; import org.apache.hadoop.fs.azurebfs.services.AuthType; +import static org.apache.hadoop.fs.CommonPathCapabilities.ETAGS_AVAILABLE; +import static org.apache.hadoop.fs.CommonPathCapabilities.ETAGS_PRESERVED_IN_RENAME; +import static org.apache.hadoop.fs.CommonPathCapabilities.FS_ACLS; +import static org.apache.hadoop.fs.azurebfs.constants.InternalConstants.CAPABILITY_SAFE_READAHEAD; +import static org.junit.Assume.assumeTrue; + /** * Test AzureBlobFileSystem initialization. */ @@ -74,4 +82,28 @@ public class ITestFileSystemInitialization extends AbstractAbfsIntegrationTest { assertNotNull("working directory", fs.getWorkingDirectory()); } } + + @Test + public void testFileSystemCapabilities() throws Throwable { + final AzureBlobFileSystem fs = getFileSystem(); + + final Path p = new Path("}"); + // etags always present + Assertions.assertThat(fs.hasPathCapability(p, ETAGS_AVAILABLE)) + .describedAs("path capability %s in %s", ETAGS_AVAILABLE, fs) + .isTrue(); + // readahead always correct + Assertions.assertThat(fs.hasPathCapability(p, CAPABILITY_SAFE_READAHEAD)) + .describedAs("path capability %s in %s", CAPABILITY_SAFE_READAHEAD, fs) + .isTrue(); + + // etags-over-rename and ACLs are either both true or both false. 
+ final boolean etagsAcrossRename = fs.hasPathCapability(p, ETAGS_PRESERVED_IN_RENAME); + final boolean acls = fs.hasPathCapability(p, FS_ACLS); + Assertions.assertThat(etagsAcrossRename) + .describedAs("capabilities %s=%s and %s=%s in %s", + ETAGS_PRESERVED_IN_RENAME, etagsAcrossRename, + FS_ACLS, acls, fs) + .isEqualTo(acls); + } } diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestTracingContext.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestTracingContext.java index cf1a89dd1ea..b91a3e2208b 100644 --- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestTracingContext.java +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestTracingContext.java @@ -130,10 +130,10 @@ public class TestTracingContext extends AbstractAbfsIntegrationTest { testClasses.put(new ITestAzureBlobFileSystemListStatus(), //liststatus ITestAzureBlobFileSystemListStatus.class.getMethod("testListPath")); - testClasses.put(new ITestAbfsReadWriteAndSeek(MIN_BUFFER_SIZE), //open, + testClasses.put(new ITestAbfsReadWriteAndSeek(MIN_BUFFER_SIZE, true), //open, // read, write ITestAbfsReadWriteAndSeek.class.getMethod("testReadAheadRequestID")); - testClasses.put(new ITestAbfsReadWriteAndSeek(MIN_BUFFER_SIZE), //read (bypassreadahead) + testClasses.put(new ITestAbfsReadWriteAndSeek(MIN_BUFFER_SIZE, false), //read (bypassreadahead) ITestAbfsReadWriteAndSeek.class .getMethod("testReadAndWriteWithDifferentBufferSizesAndSeek")); testClasses.put(new ITestAzureBlobFileSystemAppend(), //append diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/constants/TestConfigurationKeys.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/constants/TestConfigurationKeys.java index 565eb38c4f7..9e40f22d231 100644 --- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/constants/TestConfigurationKeys.java +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/constants/TestConfigurationKeys.java @@ -24,6 +24,9 @@ package org.apache.hadoop.fs.azurebfs.constants; public final class TestConfigurationKeys { public static final String FS_AZURE_ACCOUNT_NAME = "fs.azure.account.name"; public static final String FS_AZURE_ABFS_ACCOUNT_NAME = "fs.azure.abfs.account.name"; + public static final String FS_AZURE_ABFS_ACCOUNT1_NAME = "fs.azure.abfs.account1.name"; + public static final String FS_AZURE_ENABLE_AUTOTHROTTLING = "fs.azure.enable.autothrottling"; + public static final String FS_AZURE_ANALYSIS_PERIOD = "fs.azure.analysis.period"; public static final String FS_AZURE_ACCOUNT_KEY = "fs.azure.account.key"; public static final String FS_AZURE_CONTRACT_TEST_URI = "fs.contract.test.fs.abfs"; public static final String FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT = "fs.azure.test.namespace.enabled"; diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftException.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockWithPrefixSASTokenProvider.java similarity index 52% rename from hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftException.java rename to hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockWithPrefixSASTokenProvider.java index eba674fee5d..ed701c4669c 100644 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftException.java +++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockWithPrefixSASTokenProvider.java @@ -16,28 +16,25 @@ * limitations under the License. */ -package org.apache.hadoop.fs.swift.exceptions; +package org.apache.hadoop.fs.azurebfs.extensions; import java.io.IOException; -/** - * A Swift-specific exception -subclasses exist - * for various specific problems. - */ -public class SwiftException extends IOException { - public SwiftException() { - super(); - } +public class MockWithPrefixSASTokenProvider extends MockSASTokenProvider { - public SwiftException(String message) { - super(message); - } - - public SwiftException(String message, Throwable cause) { - super(message, cause); - } - - public SwiftException(Throwable cause) { - super(cause); - } -} + /** + * Function to return an already generated SAS Token with a '?' prefix + * @param accountName the name of the storage account. + * @param fileSystem the name of the fileSystem. + * @param path the file or directory path. + * @param operation the operation to be performed on the path. + * @return + * @throws IOException + */ + @Override + public String getSASToken(String accountName, String fileSystem, String path, + String operation) throws IOException { + String token = super.getSASToken(accountName, fileSystem, path, operation); + return "?" + token; + } +} \ No newline at end of file diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestReadBufferManager.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestReadBufferManager.java index 705cc2530d3..a57430fa808 100644 --- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestReadBufferManager.java +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestReadBufferManager.java @@ -25,6 +25,7 @@ import java.util.Random; import java.util.concurrent.Callable; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; +import java.util.concurrent.TimeUnit; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FSDataInputStream; @@ -43,9 +44,24 @@ import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_READ_AHEAD_QUEUE_DEPTH; import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.MIN_BUFFER_SIZE; import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_MB; +import static org.apache.hadoop.fs.azurebfs.constants.InternalConstants.CAPABILITY_SAFE_READAHEAD; +import static org.apache.hadoop.test.LambdaTestUtils.eventually; public class ITestReadBufferManager extends AbstractAbfsIntegrationTest { + /** + * Time before the JUnit test times out for eventually() clauses + * to fail. This copes with slow network connections and debugging + * sessions, yet still allows for tests to fail with meaningful + * messages. + */ + public static final int TIMEOUT_OFFSET = 5 * 60_000; + + /** + * Interval between eventually preobes. 
+ */ + public static final int PROBE_INTERVAL_MILLIS = 1_000; + public ITestReadBufferManager() throws Exception { } @@ -60,6 +76,11 @@ public class ITestReadBufferManager extends AbstractAbfsIntegrationTest { } ExecutorService executorService = Executors.newFixedThreadPool(4); AzureBlobFileSystem fs = getABFSWithReadAheadConfig(); + // verify that the fs has the capability to validate the fix + Assertions.assertThat(fs.hasPathCapability(new Path("/"), CAPABILITY_SAFE_READAHEAD)) + .describedAs("path capability %s in %s", CAPABILITY_SAFE_READAHEAD, fs) + .isTrue(); + try { for (int i = 0; i < 4; i++) { final String fileName = methodName.getMethodName() + i; @@ -74,17 +95,16 @@ public class ITestReadBufferManager extends AbstractAbfsIntegrationTest { } } finally { executorService.shutdown(); + // wait for all tasks to finish + executorService.awaitTermination(1, TimeUnit.MINUTES); } ReadBufferManager bufferManager = ReadBufferManager.getBufferManager(); - assertListEmpty("CompletedList", bufferManager.getCompletedReadListCopy()); - assertListEmpty("InProgressList", bufferManager.getInProgressCopiedList()); + // readahead queue is empty assertListEmpty("ReadAheadQueue", bufferManager.getReadAheadQueueCopy()); - Assertions.assertThat(bufferManager.getFreeListCopy()) - .describedAs("After closing all streams free list contents should match with " + freeList) - .hasSize(numBuffers) - .containsExactlyInAnyOrderElementsOf(freeList); - + // verify the in progress list eventually empties out. + eventually(getTestTimeoutMillis() - TIMEOUT_OFFSET, PROBE_INTERVAL_MILLIS, () -> + assertListEmpty("InProgressList", bufferManager.getInProgressCopiedList())); } private void assertListEmpty(String listName, List list) { @@ -116,22 +136,18 @@ public class ITestReadBufferManager extends AbstractAbfsIntegrationTest { try { iStream2 = (AbfsInputStream) fs.open(testFilePath).getWrappedStream(); iStream2.read(); - // After closing stream1, none of the buffers associated with stream1 should be present. - assertListDoesnotContainBuffersForIstream(bufferManager.getInProgressCopiedList(), iStream1); - assertListDoesnotContainBuffersForIstream(bufferManager.getCompletedReadListCopy(), iStream1); + // After closing stream1, no queued buffers of stream1 should be present + // assertions can't be made about the state of the other lists as it is + // too prone to race conditions. assertListDoesnotContainBuffersForIstream(bufferManager.getReadAheadQueueCopy(), iStream1); } finally { // closing the stream later. IOUtils.closeStream(iStream2); } - // After closing stream2, none of the buffers associated with stream2 should be present. - assertListDoesnotContainBuffersForIstream(bufferManager.getInProgressCopiedList(), iStream2); - assertListDoesnotContainBuffersForIstream(bufferManager.getCompletedReadListCopy(), iStream2); + // After closing stream2, no queued buffers of stream2 should be present. assertListDoesnotContainBuffersForIstream(bufferManager.getReadAheadQueueCopy(), iStream2); - // After closing both the streams, all lists should be empty. - assertListEmpty("CompletedList", bufferManager.getCompletedReadListCopy()); - assertListEmpty("InProgressList", bufferManager.getInProgressCopiedList()); + // After closing both the streams, read queue should be empty. 
assertListEmpty("ReadAheadQueue", bufferManager.getReadAheadQueueCopy()); } diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java index 0a1dca7e7d8..08eb3adc926 100644 --- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java @@ -306,6 +306,11 @@ public final class TestAbfsClient { when(client.getAccessToken()).thenCallRealMethod(); when(client.getSharedKeyCredentials()).thenCallRealMethod(); when(client.createDefaultHeaders()).thenCallRealMethod(); + when(client.getAbfsConfiguration()).thenReturn(abfsConfig); + when(client.getIntercept()).thenReturn( + AbfsThrottlingInterceptFactory.getInstance( + abfsConfig.getAccountName().substring(0, + abfsConfig.getAccountName().indexOf(DOT)), abfsConfig)); // override baseurl client = TestAbfsClient.setAbfsClientField(client, "abfsConfiguration", diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClientThrottlingAnalyzer.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClientThrottlingAnalyzer.java index 3f680e49930..22649cd190d 100644 --- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClientThrottlingAnalyzer.java +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClientThrottlingAnalyzer.java @@ -18,9 +18,15 @@ package org.apache.hadoop.fs.azurebfs.services; +import java.io.IOException; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.azurebfs.AbfsConfiguration; import org.apache.hadoop.fs.contract.ContractTestUtils; import org.junit.Test; +import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_ANALYSIS_PERIOD; +import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.TEST_CONFIGURATION_FILE_NAME; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; @@ -33,6 +39,15 @@ public class TestAbfsClientThrottlingAnalyzer { + ANALYSIS_PERIOD / 10; private static final long MEGABYTE = 1024 * 1024; private static final int MAX_ACCEPTABLE_PERCENT_DIFFERENCE = 20; + private AbfsConfiguration abfsConfiguration; + + public TestAbfsClientThrottlingAnalyzer() throws IOException, IllegalAccessException { + final Configuration configuration = new Configuration(); + configuration.addResource(TEST_CONFIGURATION_FILE_NAME); + configuration.setInt(FS_AZURE_ANALYSIS_PERIOD, 1000); + this.abfsConfiguration = new AbfsConfiguration(configuration, + "dummy"); + } private void sleep(long milliseconds) { try { @@ -82,8 +97,7 @@ public class TestAbfsClientThrottlingAnalyzer { @Test public void testNoMetricUpdatesThenNoWaiting() { AbfsClientThrottlingAnalyzer analyzer = new AbfsClientThrottlingAnalyzer( - "test", - ANALYSIS_PERIOD); + "test", abfsConfiguration); validate(0, analyzer.getSleepDuration()); sleep(ANALYSIS_PERIOD_PLUS_10_PERCENT); validate(0, analyzer.getSleepDuration()); @@ -96,8 +110,7 @@ public class TestAbfsClientThrottlingAnalyzer { @Test public void testOnlySuccessThenNoWaiting() { AbfsClientThrottlingAnalyzer analyzer = new AbfsClientThrottlingAnalyzer( - "test", - ANALYSIS_PERIOD); + "test", abfsConfiguration); analyzer.addBytesTransferred(8 * MEGABYTE, false); 
validate(0, analyzer.getSleepDuration()); sleep(ANALYSIS_PERIOD_PLUS_10_PERCENT); @@ -112,8 +125,7 @@ public class TestAbfsClientThrottlingAnalyzer { @Test public void testOnlyErrorsAndWaiting() { AbfsClientThrottlingAnalyzer analyzer = new AbfsClientThrottlingAnalyzer( - "test", - ANALYSIS_PERIOD); + "test", abfsConfiguration); validate(0, analyzer.getSleepDuration()); analyzer.addBytesTransferred(4 * MEGABYTE, true); sleep(ANALYSIS_PERIOD_PLUS_10_PERCENT); @@ -132,8 +144,7 @@ public class TestAbfsClientThrottlingAnalyzer { @Test public void testSuccessAndErrorsAndWaiting() { AbfsClientThrottlingAnalyzer analyzer = new AbfsClientThrottlingAnalyzer( - "test", - ANALYSIS_PERIOD); + "test", abfsConfiguration); validate(0, analyzer.getSleepDuration()); analyzer.addBytesTransferred(8 * MEGABYTE, false); analyzer.addBytesTransferred(2 * MEGABYTE, true); @@ -157,8 +168,7 @@ public class TestAbfsClientThrottlingAnalyzer { @Test public void testManySuccessAndErrorsAndWaiting() { AbfsClientThrottlingAnalyzer analyzer = new AbfsClientThrottlingAnalyzer( - "test", - ANALYSIS_PERIOD); + "test", abfsConfiguration); validate(0, analyzer.getSleepDuration()); final int numberOfRequests = 20; for (int i = 0; i < numberOfRequests; i++) { diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java index b5ae9b73784..0395c4183b9 100644 --- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java @@ -82,6 +82,12 @@ public class TestAbfsInputStream extends REDUCED_READ_BUFFER_AGE_THRESHOLD * 10; // 30 sec private static final int ALWAYS_READ_BUFFER_SIZE_TEST_FILE_SIZE = 16 * ONE_MB; + @Override + public void teardown() throws Exception { + super.teardown(); + ReadBufferManager.getBufferManager().testResetReadBufferManager(); + } + private AbfsRestOperation getMockRestOp() { AbfsRestOperation op = mock(AbfsRestOperation.class); AbfsHttpOperation httpOp = mock(AbfsHttpOperation.class); @@ -493,6 +499,69 @@ public class TestAbfsInputStream extends checkEvictedStatus(inputStream, 0, true); } + /** + * This test expects InProgressList is not purged by the inputStream close. + */ + @Test + public void testStreamPurgeDuringReadAheadCallExecuting() throws Exception { + AbfsClient client = getMockAbfsClient(); + AbfsRestOperation successOp = getMockRestOp(); + final Long serverCommunicationMockLatency = 3_000L; + final Long readBufferTransferToInProgressProbableTime = 1_000L; + final Integer readBufferQueuedCount = 3; + + Mockito.doAnswer(invocationOnMock -> { + //sleeping thread to mock the network latency from client to backend. 
+ Thread.sleep(serverCommunicationMockLatency); + return successOp; + }) + .when(client) + .read(any(String.class), any(Long.class), any(byte[].class), + any(Integer.class), any(Integer.class), any(String.class), + any(String.class), any(TracingContext.class)); + + final ReadBufferManager readBufferManager + = ReadBufferManager.getBufferManager(); + + final int readBufferTotal = readBufferManager.getNumBuffers(); + final int expectedFreeListBufferCount = readBufferTotal + - readBufferQueuedCount; + + try (AbfsInputStream inputStream = getAbfsInputStream(client, + "testSuccessfulReadAhead.txt")) { + // As this is try-with-resources block, the close() method of the created + // abfsInputStream object shall be called on the end of the block. + queueReadAheads(inputStream); + + //Sleeping to give ReadBufferWorker to pick the readBuffers for processing. + Thread.sleep(readBufferTransferToInProgressProbableTime); + + Assertions.assertThat(readBufferManager.getInProgressCopiedList()) + .describedAs(String.format("InProgressList should have %d elements", + readBufferQueuedCount)) + .hasSize(readBufferQueuedCount); + Assertions.assertThat(readBufferManager.getFreeListCopy()) + .describedAs(String.format("FreeList should have %d elements", + expectedFreeListBufferCount)) + .hasSize(expectedFreeListBufferCount); + Assertions.assertThat(readBufferManager.getCompletedReadListCopy()) + .describedAs("CompletedList should have 0 elements") + .hasSize(0); + } + + Assertions.assertThat(readBufferManager.getInProgressCopiedList()) + .describedAs(String.format("InProgressList should have %d elements", + readBufferQueuedCount)) + .hasSize(readBufferQueuedCount); + Assertions.assertThat(readBufferManager.getFreeListCopy()) + .describedAs(String.format("FreeList should have %d elements", + expectedFreeListBufferCount)) + .hasSize(expectedFreeListBufferCount); + Assertions.assertThat(readBufferManager.getCompletedReadListCopy()) + .describedAs("CompletedList should have 0 elements") + .hasSize(0); + } + /** * This test expects ReadAheadManager to throw exception if the read ahead * thread had failed within the last thresholdAgeMilliseconds. 
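The retry-policy tests that follow exercise the throttling switches documented in the abfs.md hunk earlier in this patch. A minimal sketch, assuming the documented keys (`fs.azure.enable.autothrottling`, `fs.azure.analysis.period`, `fs.azure.account.operation.idle.timeout`) and millisecond units as used by the new tests; the account URI, class name and values are placeholders, and the separate account-level throttling switch is referenced only through a constant in this patch, so its key name is not shown:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Illustrative sketch only: sets the throttling-related keys documented
 * in abfs.md before opening an ABFS filesystem. Not part of this patch.
 */
public class AbfsThrottlingConfigSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // client-side auto-throttling switch
    conf.setBoolean("fs.azure.enable.autothrottling", true);
    // interval at which metrics are analyzed and sleep duration recomputed
    conf.setInt("fs.azure.analysis.period", 10_000);
    // pause an account's analyzer timers after this much inactivity
    conf.setInt("fs.azure.account.operation.idle.timeout", 60_000);

    // placeholder container/account URI
    try (FileSystem fs = FileSystem.get(
        new URI("abfs://container@account.dfs.core.windows.net/"), conf)) {
      fs.getFileStatus(new Path("/"));
    }
  }
}
```

In a deployment these keys would normally be set in core-site.xml rather than in code.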
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestExponentialRetryPolicy.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestExponentialRetryPolicy.java index 0f8dc55aa14..a1fc4e138d6 100644 --- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestExponentialRetryPolicy.java +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestExponentialRetryPolicy.java @@ -18,13 +18,35 @@ package org.apache.hadoop.fs.azurebfs.services; +import static java.net.HttpURLConnection.HTTP_INTERNAL_ERROR; + import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_BACKOFF_INTERVAL; import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_MAX_BACKOFF_INTERVAL; import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_MAX_IO_RETRIES; import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_MIN_BACKOFF_INTERVAL; +import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ACCOUNT_LEVEL_THROTTLING_ENABLED; +import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ENABLE_AUTOTHROTTLING; +import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.MIN_BUFFER_SIZE; +import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_ABFS_ACCOUNT1_NAME; +import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_ACCOUNT_NAME; +import static org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.TEST_CONFIGURATION_FILE_NAME; +import static org.junit.Assume.assumeTrue; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +import org.apache.hadoop.fs.FSDataInputStream; + +import org.assertj.core.api.Assertions; +import org.junit.Assume; +import org.mockito.Mockito; + +import java.net.URI; import java.util.Random; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem; import org.junit.Assert; import org.junit.Test; @@ -41,6 +63,9 @@ public class TestExponentialRetryPolicy extends AbstractAbfsIntegrationTest { private final int noRetryCount = 0; private final int retryCount = new Random().nextInt(maxRetryCount); private final int retryCountBeyondMax = maxRetryCount + 1; + private static final String TEST_PATH = "/testfile"; + private static final double MULTIPLYING_FACTOR = 1.5; + private static final int ANALYSIS_PERIOD = 10000; public TestExponentialRetryPolicy() throws Exception { @@ -67,6 +92,173 @@ public class TestExponentialRetryPolicy extends AbstractAbfsIntegrationTest { testMaxIOConfig(abfsConfig); } + @Test + public void testThrottlingIntercept() throws Exception { + AzureBlobFileSystem fs = getFileSystem(); + final Configuration configuration = new Configuration(); + configuration.addResource(TEST_CONFIGURATION_FILE_NAME); + configuration.setBoolean(FS_AZURE_ENABLE_AUTOTHROTTLING, false); + + // On disabling throttling AbfsNoOpThrottlingIntercept object is returned + AbfsConfiguration abfsConfiguration = new AbfsConfiguration(configuration, + "dummy.dfs.core.windows.net"); + AbfsThrottlingIntercept intercept; + AbfsClient abfsClient = TestAbfsClient.createTestClientFromCurrentContext(fs.getAbfsStore().getClient(), abfsConfiguration); + intercept = abfsClient.getIntercept(); + Assertions.assertThat(intercept) + .describedAs("AbfsNoOpThrottlingIntercept 
instance expected") + .isInstanceOf(AbfsNoOpThrottlingIntercept.class); + + configuration.setBoolean(FS_AZURE_ENABLE_AUTOTHROTTLING, true); + configuration.setBoolean(FS_AZURE_ACCOUNT_LEVEL_THROTTLING_ENABLED, true); + // On disabling throttling AbfsClientThrottlingIntercept object is returned + AbfsConfiguration abfsConfiguration1 = new AbfsConfiguration(configuration, + "dummy1.dfs.core.windows.net"); + AbfsClient abfsClient1 = TestAbfsClient.createTestClientFromCurrentContext(fs.getAbfsStore().getClient(), abfsConfiguration1); + intercept = abfsClient1.getIntercept(); + Assertions.assertThat(intercept) + .describedAs("AbfsClientThrottlingIntercept instance expected") + .isInstanceOf(AbfsClientThrottlingIntercept.class); + } + + @Test + public void testCreateMultipleAccountThrottling() throws Exception { + Configuration config = new Configuration(getRawConfiguration()); + String accountName = config.get(FS_AZURE_ACCOUNT_NAME); + if (accountName == null) { + // check if accountName is set using different config key + accountName = config.get(FS_AZURE_ABFS_ACCOUNT1_NAME); + } + assumeTrue("Not set: " + FS_AZURE_ABFS_ACCOUNT1_NAME, + accountName != null && !accountName.isEmpty()); + + Configuration rawConfig1 = new Configuration(); + rawConfig1.addResource(TEST_CONFIGURATION_FILE_NAME); + + AbfsRestOperation successOp = mock(AbfsRestOperation.class); + AbfsHttpOperation http500Op = mock(AbfsHttpOperation.class); + when(http500Op.getStatusCode()).thenReturn(HTTP_INTERNAL_ERROR); + when(successOp.getResult()).thenReturn(http500Op); + + AbfsConfiguration configuration = Mockito.mock(AbfsConfiguration.class); + when(configuration.getAnalysisPeriod()).thenReturn(ANALYSIS_PERIOD); + when(configuration.isAutoThrottlingEnabled()).thenReturn(true); + when(configuration.accountThrottlingEnabled()).thenReturn(false); + + AbfsThrottlingIntercept instance1 = AbfsThrottlingInterceptFactory.getInstance(accountName, configuration); + String accountName1 = config.get(FS_AZURE_ABFS_ACCOUNT1_NAME); + + assumeTrue("Not set: " + FS_AZURE_ABFS_ACCOUNT1_NAME, + accountName1 != null && !accountName1.isEmpty()); + + AbfsThrottlingIntercept instance2 = AbfsThrottlingInterceptFactory.getInstance(accountName1, configuration); + //if singleton is enabled, for different accounts both the instances should return same value + Assertions.assertThat(instance1) + .describedAs( + "if singleton is enabled, for different accounts both the instances should return same value") + .isEqualTo(instance2); + + when(configuration.accountThrottlingEnabled()).thenReturn(true); + AbfsThrottlingIntercept instance3 = AbfsThrottlingInterceptFactory.getInstance(accountName, configuration); + AbfsThrottlingIntercept instance4 = AbfsThrottlingInterceptFactory.getInstance(accountName1, configuration); + AbfsThrottlingIntercept instance5 = AbfsThrottlingInterceptFactory.getInstance(accountName, configuration); + //if singleton is not enabled, for different accounts instances should return different value + Assertions.assertThat(instance3) + .describedAs( + "iff singleton is not enabled, for different accounts instances should return different value") + .isNotEqualTo(instance4); + + //if singleton is not enabled, for same accounts instances should return same value + Assertions.assertThat(instance3) + .describedAs( + "if singleton is not enabled, for same accounts instances should return same value") + .isEqualTo(instance5); + } + + @Test + public void testOperationOnAccountIdle() throws Exception { + //Get the filesystem. 
+ AzureBlobFileSystem fs = getFileSystem(); + AbfsClient client = fs.getAbfsStore().getClient(); + AbfsConfiguration configuration1 = client.getAbfsConfiguration(); + Assume.assumeTrue(configuration1.isAutoThrottlingEnabled()); + Assume.assumeTrue(configuration1.accountThrottlingEnabled()); + + AbfsClientThrottlingIntercept accountIntercept + = (AbfsClientThrottlingIntercept) client.getIntercept(); + final byte[] b = new byte[2 * MIN_BUFFER_SIZE]; + new Random().nextBytes(b); + + Path testPath = path(TEST_PATH); + + //Do an operation on the filesystem. + try (FSDataOutputStream stream = fs.create(testPath)) { + stream.write(b); + } + + //Don't perform any operation on the account. + int sleepTime = (int) ((getAbfsConfig().getAccountOperationIdleTimeout()) * MULTIPLYING_FACTOR); + Thread.sleep(sleepTime); + + try (FSDataInputStream streamRead = fs.open(testPath)) { + streamRead.read(b); + } + + //Perform operations on another account. + AzureBlobFileSystem fs1 = new AzureBlobFileSystem(); + Configuration config = new Configuration(getRawConfiguration()); + String accountName1 = config.get(FS_AZURE_ABFS_ACCOUNT1_NAME); + assumeTrue("Not set: " + FS_AZURE_ABFS_ACCOUNT1_NAME, + accountName1 != null && !accountName1.isEmpty()); + final String abfsUrl1 = this.getFileSystemName() + "12" + "@" + accountName1; + URI defaultUri1 = null; + defaultUri1 = new URI("abfss", abfsUrl1, null, null, null); + fs1.initialize(defaultUri1, getRawConfiguration()); + AbfsClient client1 = fs1.getAbfsStore().getClient(); + AbfsClientThrottlingIntercept accountIntercept1 + = (AbfsClientThrottlingIntercept) client1.getIntercept(); + try (FSDataOutputStream stream1 = fs1.create(testPath)) { + stream1.write(b); + } + + //Verify the write analyzer for first account is idle but the read analyzer is not idle. + Assertions.assertThat(accountIntercept.getWriteThrottler() + .getIsOperationOnAccountIdle() + .get()) + .describedAs("Write analyzer for first account should be idle the first time") + .isTrue(); + + Assertions.assertThat( + accountIntercept.getReadThrottler() + .getIsOperationOnAccountIdle() + .get()) + .describedAs("Read analyzer for first account should not be idle") + .isFalse(); + + //Verify the write analyzer for second account is not idle. + Assertions.assertThat( + accountIntercept1.getWriteThrottler() + .getIsOperationOnAccountIdle() + .get()) + .describedAs("Write analyzer for second account should not be idle") + .isFalse(); + + //Again perform an operation on the first account. + try (FSDataOutputStream stream2 = fs.create(testPath)) { + stream2.write(b); + } + + //Verify the write analyzer on first account is not idle. + Assertions.assertThat( + + accountIntercept.getWriteThrottler() + .getIsOperationOnAccountIdle() + .get()) + .describedAs( + "Write analyzer for first account should not be idle second time") + .isFalse(); + } + @Test public void testAbfsConfigConstructor() throws Exception { // Ensure we choose expected values that are not defaults diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/CleanupTestContainers.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/CleanupTestContainers.java new file mode 100644 index 00000000000..b8272319ab8 --- /dev/null +++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/CleanupTestContainers.java @@ -0,0 +1,74 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.utils; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import com.microsoft.azure.storage.CloudStorageAccount; +import com.microsoft.azure.storage.blob.CloudBlobClient; +import com.microsoft.azure.storage.blob.CloudBlobContainer; +import com.microsoft.azure.storage.StorageCredentials; +import com.microsoft.azure.storage.StorageCredentialsAccountAndKey; + +import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest; +import org.apache.hadoop.fs.azurebfs.AbfsConfiguration; + +/** + * This looks like a test, but it is really a command to invoke to + * clean up containers created in other test runs. + * + */ +public class CleanupTestContainers extends AbstractAbfsIntegrationTest { + private static final Logger LOG = LoggerFactory.getLogger(CleanupTestContainers.class); + private static final String CONTAINER_PREFIX = "abfs-testcontainer-"; + + public CleanupTestContainers() throws Exception { + } + + @org.junit.Test + public void testDeleteContainers() throws Throwable { + int count = 0; + AbfsConfiguration abfsConfig = getAbfsStore(getFileSystem()).getAbfsConfiguration(); + String accountName = abfsConfig.getAccountName().split("\\.")[0]; + LOG.debug("Deleting test containers in account - {}", abfsConfig.getAccountName()); + + String accountKey = abfsConfig.getStorageAccountKey(); + if ((accountKey == null) || (accountKey.isEmpty())) { + LOG.debug("Clean up not possible. 
Account ket not present in config"); + } + final StorageCredentials credentials; + credentials = new StorageCredentialsAccountAndKey( + accountName, accountKey); + CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, true); + CloudBlobClient blobClient = storageAccount.createCloudBlobClient(); + Iterable containers + = blobClient.listContainers(CONTAINER_PREFIX); + for (CloudBlobContainer container : containers) { + LOG.info("Container {} URI {}", + container.getName(), + container.getUri()); + if (container.deleteIfExists()) { + count++; + LOG.info("Current deleted test containers count - #{}", count); + } + } + LOG.info("Summary: Deleted {} test containers", count); + } +} diff --git a/hadoop-tools/hadoop-azure/src/test/resources/accountSettings/accountName_settings.xml.template b/hadoop-tools/hadoop-azure/src/test/resources/accountSettings/accountName_settings.xml.template new file mode 100644 index 00000000000..062b2f4bf3a --- /dev/null +++ b/hadoop-tools/hadoop-azure/src/test/resources/accountSettings/accountName_settings.xml.template @@ -0,0 +1,185 @@ + + + + + + + + fs.azure.abfs.account.name + ACCOUNTNAME.dfs.core.windows.net + + + fs.contract.test.fs.abfs + abfs://CONTAINER_NAME@ACCOUNTNAME.dfs.core.windows.net + + + fs.contract.test.fs.abfss + abfss://CONTAINER_NAME@ACCOUNTNAME.dfs.core.windows.net + + + fs.contract.test.fs.wasb + wasb://CONTAINER_NAME@ACCOUNTNAME.blob.core.windows.net + + + fs.azure.wasb.account.name + ACCOUNTNAME.blob.core.windows.net + + + fs.azure.scale.test.enabled + true + + + fs.azure.test.namespace.enabled.ACCOUNTNAME.dfs.core.windows.net + IS_NAMESPACE_ENABLED + + + fs.azure.test.namespace.enabled + IS_NAMESPACE_ENABLED + + + fs.azure.account.hns.enabled.ACCOUNTNAME.dfs.core.windows.net + IS_NAMESPACE_ENABLED + + + fs.azure.account.hns.enabled + IS_NAMESPACE_ENABLED + + + fs.azure.account.key.ACCOUNTNAME.dfs.core.windows.net + ACCOUNT_KEY + + + fs.azure.account.key.ACCOUNTNAME.blob.core.windows.net + ACCOUNT_KEY + + + fs.azure.account.key.ACCOUNTNAME.dfs.core.windows.net + ACCOUNT_KEY + + + fs.azure.account.key.ACCOUNTNAME.blob.core.windows.net + ACCOUNT_KEY + + + + + fs.azure.account.oauth2.client.endpoint.ACCOUNTNAME.dfs.core.windows.net + https://login.microsoftonline.com/SUPERUSER_TENANT_ID/oauth2/token + + + fs.azure.account.oauth2.client.endpoint + https://login.microsoftonline.com/SUPERUSER_TENANT_ID/oauth2/token + + + fs.azure.account.oauth.provider.type.ACCOUNTNAME.dfs.core.windows.net + org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider + + + fs.azure.account.oauth.provider.type + org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider + + + fs.azure.account.oauth2.client.id.ACCOUNTNAME.dfs.core.windows.net + SUPERUSER_CLIENT_ID + + + fs.azure.account.oauth2.client.id + SUPERUSER_CLIENT_ID + + + fs.azure.account.oauth2.client.secret.ACCOUNTNAME.dfs.core.windows.net + SUPERUSER_CLIENT_SECRET + + + fs.azure.account.oauth2.client.secret + SUPERUSER_CLIENT_SECRET + + + + + fs.azure.enable.check.access + true + + + fs.azure.account.test.oauth2.client.id + NO_RBAC_USER_CLIENT_ID + + + fs.azure.account.test.oauth2.client.secret + NO_RBAC_USER_CLIENT_SECRET + + + fs.azure.check.access.testuser.guid + NO_RBAC_USER_OID + + + + + fs.azure.account.oauth2.contributor.client.id + CONTRIBUTOR_RBAC_USER_CLIENT_ID + + + fs.azure.account.oauth2.contributor.client.secret + CONTRIBUTOR_RBAC_USER_CLIENT_SECRET + + + + + fs.azure.account.oauth2.reader.client.id + READER_RBAC_USER_CLIENT_ID + + + 
fs.azure.account.oauth2.reader.client.secret + READER_RBAC_USER_CLIENT_ID + + diff --git a/hadoop-tools/hadoop-azure/src/test/resources/azure-auth-keys.xml.template b/hadoop-tools/hadoop-azure/src/test/resources/azure-auth-keys.xml.template index 12dbbfab479..636816551dd 100644 --- a/hadoop-tools/hadoop-azure/src/test/resources/azure-auth-keys.xml.template +++ b/hadoop-tools/hadoop-azure/src/test/resources/azure-auth-keys.xml.template @@ -14,162 +14,16 @@ --> - - - - - - - fs.azure.account.auth.type - SharedKey - - - - - - fs.azure.account.key.{ABFS_ACCOUNT_NAME}.dfs.core.windows.net - {ACCOUNT_ACCESS_KEY} - Account access key - - - - fs.azure.account.oauth.provider.type.{ABFS_ACCOUNT_NAME}.dfs.core.windows.net - - org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider - OAuth token provider implementation class - - - - fs.azure.account.oauth2.client.endpoint.{ABFS_ACCOUNT_NAME}.dfs.core.windows.net - - https://login.microsoftonline.com/{TENANTID}/oauth2/token - Token end point, this can be found through Azure portal - - - - - fs.azure.account.oauth2.client.id.{ABFS_ACCOUNT_NAME}.dfs.core.windows.net - - {client id} - AAD client id. - - - - fs.azure.account.oauth2.client.secret.{ABFS_ACCOUNT_NAME}.dfs.core.windows.net - - {client secret} - AAD client secret - - - - - fs.contract.test.fs.abfs - abfs://{CONTAINER_NAME}@{ACCOUNT_NAME}.dfs.core.windows.net - - - fs.contract.test.fs.abfss - abfss://{CONTAINER_NAME}@{ACCOUNT_NAME}.dfs.core.windows.net - - - - - fs.azure.wasb.account.name - {WASB_ACCOUNT_NAME}.blob.core.windows.net - - - fs.azure.account.key.{WASB_ACCOUNT_NAME}.blob.core.windows.net - WASB account key - - - fs.contract.test.fs.wasb - wasb://{WASB_FILESYSTEM}@{WASB_ACCOUNT_NAME}.blob.core.windows.net - - - - - - fs.azure.account.oauth2.contributor.client.id - {Client id of SP with RBAC Storage Blob Data Contributor} - - - fs.azure.account.oauth2.contributor.client.secret - {Client secret of SP with RBAC Storage Blob Data Contributor} - - - fs.azure.account.oauth2.reader.client.id - {Client id of SP with RBAC Storage Blob Data Reader} - - - fs.azure.account.oauth2.reader.client.secret - {Client secret of SP with RBAC Storage Blob Data Reader} - - - - + - fs.azure.account.test.oauth2.client.id - {client id} - The client id(app id) for the app created on step 1 - - - - fs.azure.account.test.oauth2.client.secret - {client secret} - -The client secret(application's secret) for the app created on step 1 - - - - fs.azure.check.access.testuser.guid - {guid} - The guid fetched on step 2 - - - fs.azure.account.oauth2.client.endpoint.{account name}.dfs.core -.windows.net - https://login.microsoftonline.com/{TENANTID}/oauth2/token - -Token end point. This can be found through Azure portal. As part of CheckAccess -test cases. The access will be tested for an FS instance created with the -above mentioned client credentials. So this configuration is necessary to -create the test FS instance. - + fs.azure.hnsTestAccountName + - - fs.azure.test.appendblob.enabled - false - If made true, tests will be running under the assumption that - append blob is enabled and the root directory and contract test root - directory will be part of the append blob directories. Should be false for - non-HNS accounts. 
- + fs.azure.nonHnsTestAccountName + diff --git a/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml b/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml index 24ffeb5d107..6730021974e 100644 --- a/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml +++ b/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml @@ -28,11 +28,8 @@ false - - fs.azure.test.namespace.enabled - true - - + + fs.azure.abfs.latency.track false @@ -43,9 +40,6 @@ true - - - fs.azure.user.agent.prefix @@ -59,7 +53,40 @@ STORE THE CONFIGURATION PROPERTIES WITHIN IT. TO PREVENT ACCIDENTAL LEAKS OF YOUR STORAGE ACCOUNT CREDENTIALS, THIS FILE IS LISTED IN .gitignore TO PREVENT YOU FROM INCLUDING - IT IN PATCHES OR COMMITS. --> + IT IN PATCHES OR COMMITS. + + TEST SCRIPT RUNS: + ================ + FOR EASIER TEST RUNS, TEST RUNS FOR VARIOUS COMBINATIONS CAN BE + TRIGGERED OVER SCRIPT: + ./dev-support/testrun-scripts/runtests.sh + (FROM hadoop-azure ROOT PROJECT PATH) + + TO USE THE TEST SCRIPT, + 1. COPY + ./src/test/resources/azure-auth-keys.xml.template + TO + ./src/test/resources/azure-auth-keys.xml + UPDATE ACCOUNT NAMES THAT SHOULD BE USED IN THE TEST RUN + FOR HNS AND NON-HNS COMBINATIONS IN THE 2 PROPERTIES + PRESENT IN THE XML, NAMELY + fs.azure.hnsTestAccountName and + fs.azure.nonHnsTestAccountName + (ACCOUNT NAME SHOULD BE WITHOUT DOMAIN) + + azure-auth-keys.xml IS LISTED IN .gitignore, SO ANY + ACCIDENTAL ACCOUNT NAME LEAK IS PREVENTED. + + 2. CREATE ACCOUNT CONFIG FILES (ONE CONFIG FILE + PER ACCOUNT) IN FOLDER: + ./src/test/resources/accountSettings/ + + FOLLOW INSTRUCTIONS IN THE START OF THE TEMPLATE FILE + accountName_settings.xml.template + WITHIN accountSettings FOLDER WHILE CREATING ACCOUNT CONFIG FILE. + + NEW FILES CREATED IN FOLDER accountSettings IS LISTED IN .gitignore + TO PREVENT ACCIDENTAL CRED LEAKS. 
--> diff --git a/hadoop-tools/hadoop-benchmark/src/main/java/org/apache/hadoop/benchmark/VectoredReadBenchmark.java b/hadoop-tools/hadoop-benchmark/src/main/java/org/apache/hadoop/benchmark/VectoredReadBenchmark.java index 631842f78e2..5df46c36786 100644 --- a/hadoop-tools/hadoop-benchmark/src/main/java/org/apache/hadoop/benchmark/VectoredReadBenchmark.java +++ b/hadoop-tools/hadoop-benchmark/src/main/java/org/apache/hadoop/benchmark/VectoredReadBenchmark.java @@ -169,7 +169,7 @@ public class VectoredReadBenchmark { FileRangeCallback(AsynchronousFileChannel channel, long offset, int length, Joiner joiner, ByteBuffer buffer) { - super(offset, length); + super(offset, length, null); this.channel = channel; this.joiner = joiner; this.buffer = buffer; diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java index 8c2bc82d3fc..4ba05794a09 100644 --- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java +++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java @@ -152,9 +152,18 @@ public class CopyCommitter extends FileOutputCommitter { } private void cleanupTempFiles(JobContext context) { - try { - Configuration conf = context.getConfiguration(); + Configuration conf = context.getConfiguration(); + final boolean directWrite = conf.getBoolean( + DistCpOptionSwitch.DIRECT_WRITE.getConfigLabel(), false); + final boolean append = conf.getBoolean( + DistCpOptionSwitch.APPEND.getConfigLabel(), false); + final boolean useTempTarget = !append && !directWrite; + if (!useTempTarget) { + return; + } + + try { Path targetWorkPath = new Path(conf.get(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH)); FileSystem targetFS = targetWorkPath.getFileSystem(conf); diff --git a/hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm b/hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm index a351ec5a376..a86e41c6668 100644 --- a/hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm +++ b/hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm @@ -581,7 +581,7 @@ $H3 MapReduce and other side-effects $H3 DistCp and Object Stores -DistCp works with Object Stores such as Amazon S3, Azure WASB and OpenStack Swift. +DistCp works with Object Stores such as Amazon S3, Azure ABFS and Google GCS. Prequisites @@ -624,7 +624,7 @@ And to use `-update` to only copy changed files. 
```bash hadoop distcp -update -numListstatusThreads 20 \ - swift://history.cluster1/2016 \ + s3a://history/2016 \ hdfs://nn1:8020/history/2016 ``` diff --git a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java index 599f3ec2db6..bda80a3d25e 100644 --- a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java +++ b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java @@ -18,6 +18,7 @@ package org.apache.hadoop.tools.mapred; +import org.apache.hadoop.fs.contract.ContractTestUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.hadoop.conf.Configuration; @@ -580,6 +581,76 @@ public class TestCopyCommitter { } } + @Test + public void testCommitWithCleanupTempFiles() throws IOException { + testCommitWithCleanup(true, false); + testCommitWithCleanup(false, true); + testCommitWithCleanup(true, true); + testCommitWithCleanup(false, false); + } + + private void testCommitWithCleanup(boolean append, boolean directWrite)throws IOException { + TaskAttemptContext taskAttemptContext = getTaskAttemptContext(config); + JobID jobID = taskAttemptContext.getTaskAttemptID().getJobID(); + JobContext jobContext = new JobContextImpl( + taskAttemptContext.getConfiguration(), + jobID); + Configuration conf = jobContext.getConfiguration(); + + String sourceBase; + String targetBase; + FileSystem fs = null; + try { + fs = FileSystem.get(conf); + sourceBase = "/tmp1/" + rand.nextLong(); + targetBase = "/tmp1/" + rand.nextLong(); + + DistCpOptions options = new DistCpOptions.Builder( + Collections.singletonList(new Path(sourceBase)), + new Path("/out")) + .withAppend(append) + .withSyncFolder(true) + .withDirectWrite(directWrite) + .build(); + options.appendToConf(conf); + + DistCpContext context = new DistCpContext(options); + context.setTargetPathExists(false); + + + conf.set(CONF_LABEL_TARGET_WORK_PATH, targetBase); + conf.set(CONF_LABEL_TARGET_FINAL_PATH, targetBase); + + Path tempFilePath = getTempFile(targetBase, taskAttemptContext); + createDirectory(fs, tempFilePath); + + OutputCommitter committer = new CopyCommitter( + null, taskAttemptContext); + committer.commitJob(jobContext); + + if (append || directWrite) { + ContractTestUtils.assertPathExists(fs, "Temp files should not be cleanup with append or direct option", + tempFilePath); + } else { + ContractTestUtils.assertPathDoesNotExist( + fs, + "Temp files should be clean up without append or direct option", + tempFilePath); + } + } finally { + TestDistCpUtils.delete(fs, "/tmp1"); + TestDistCpUtils.delete(fs, "/meta"); + } + } + + private Path getTempFile(String targetWorkPath, TaskAttemptContext taskAttemptContext) { + Path tempFile = new Path(targetWorkPath, ".distcp.tmp." + + taskAttemptContext.getTaskAttemptID().toString() + + "." + System.currentTimeMillis()); + LOG.info("Creating temp file: {}", tempFile); + return tempFile; + } + /** * Create a source file and its DistCp working files with different checksum * to test the checksum validation for copying blocks in parallel. 
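Per the `CopyCommitter` change above, temp-file cleanup is now skipped whenever `-append` or `-direct` is in effect, since those modes never write `.distcp.tmp.*` files under the target work path. A minimal sketch of a programmatic direct-write copy that takes this early-return path; the source and target paths are placeholders, not taken from this patch:

```java
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

/**
 * Illustrative sketch only: a direct-write DistCp run, the case in which
 * CopyCommitter#cleanupTempFiles now returns early.
 */
public class DirectWriteDistCpSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistCpOptions options = new DistCpOptions.Builder(
        Collections.singletonList(new Path("hdfs://nn1:8020/history/2016")),
        new Path("s3a://history/2016"))
        .withDirectWrite(true)   // write straight to the final target paths
        .withSyncFolder(true)    // -update semantics
        .build();
    new DistCp(conf, options).execute();
  }
}
```

The equivalent command line is `hadoop distcp -update -direct hdfs://nn1:8020/history/2016 s3a://history/2016`.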
diff --git a/hadoop-tools/hadoop-openstack/dev-support/findbugs-exclude.xml b/hadoop-tools/hadoop-openstack/dev-support/findbugs-exclude.xml deleted file mode 100644 index cfb75c73081..00000000000 --- a/hadoop-tools/hadoop-openstack/dev-support/findbugs-exclude.xml +++ /dev/null @@ -1,34 +0,0 @@ - - - - - - - - - - - - - - - - - - - diff --git a/hadoop-tools/hadoop-openstack/pom.xml b/hadoop-tools/hadoop-openstack/pom.xml index 1577de28505..a3f0e748454 100644 --- a/hadoop-tools/hadoop-openstack/pom.xml +++ b/hadoop-tools/hadoop-openstack/pom.xml @@ -26,9 +26,10 @@ 3.4.0-SNAPSHOT Apache Hadoop OpenStack support - This module contains code to support integration with OpenStack. - Currently this consists of a filesystem client to read data from - and write data to an OpenStack Swift object store. + This module used to contain code to support integration with OpenStack. + It has been deleted as unsupported; the JAR is still published so as to + not break applications which declare an explicit maven/ivy/SBT dependency + on the module. jar @@ -37,32 +38,6 @@ true
- - - tests-off - - - src/test/resources/auth-keys.xml - - - - true - - - - tests-on - - - src/test/resources/auth-keys.xml - - - - false - - - - - @@ -70,77 +45,11 @@ spotbugs-maven-plugin true - ${basedir}/dev-support/findbugs-exclude.xml - Max - - org.apache.maven.plugins - maven-dependency-plugin - - - deplist - compile - - list - - - - ${project.basedir}/target/hadoop-tools-deps/${project.artifactId}.tools-optional.txt - - - - - - - org.apache.hadoop - hadoop-common - compile - - - javax.enterprise - cdi-api - - - - - org.apache.hadoop - hadoop-common - test - test-jar - - - org.apache.hadoop - hadoop-annotations - compile - - - org.apache.httpcomponents - httpcore - - - commons-logging - commons-logging - compile - - - - junit - junit - provided - - - com.fasterxml.jackson.core - jackson-annotations - - - com.fasterxml.jackson.core - jackson-databind - - diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyAuthenticationRequest.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyAuthenticationRequest.java deleted file mode 100644 index e25d17d2fb8..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyAuthenticationRequest.java +++ /dev/null @@ -1,66 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth; - -import com.fasterxml.jackson.annotation.JsonProperty; - -/** - * Class that represents authentication request to Openstack Keystone. - * Contains basic authentication information. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. 
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS - */ -public class ApiKeyAuthenticationRequest extends AuthenticationRequest { - /** - * Credentials for login - */ - private ApiKeyCredentials apiKeyCredentials; - - /** - * API key auth - * @param tenantName tenant - * @param apiKeyCredentials credentials - */ - public ApiKeyAuthenticationRequest(String tenantName, ApiKeyCredentials apiKeyCredentials) { - this.tenantName = tenantName; - this.apiKeyCredentials = apiKeyCredentials; - } - - /** - * @return credentials for login into Keystone - */ - @JsonProperty("RAX-KSKEY:apiKeyCredentials") - public ApiKeyCredentials getApiKeyCredentials() { - return apiKeyCredentials; - } - - /** - * @param apiKeyCredentials credentials for login into Keystone - */ - public void setApiKeyCredentials(ApiKeyCredentials apiKeyCredentials) { - this.apiKeyCredentials = apiKeyCredentials; - } - - @Override - public String toString() { - return "Auth as " + - "tenant '" + tenantName + "' " - + apiKeyCredentials; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyCredentials.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyCredentials.java deleted file mode 100644 index 412ce81daa3..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/ApiKeyCredentials.java +++ /dev/null @@ -1,87 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth; - - -/** - * Describes credentials to log in Swift using Keystone authentication. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. - */ -public class ApiKeyCredentials { - /** - * user login - */ - private String username; - - /** - * user password - */ - private String apikey; - - /** - * default constructor - */ - public ApiKeyCredentials() { - } - - /** - * @param username user login - * @param apikey user api key - */ - public ApiKeyCredentials(String username, String apikey) { - this.username = username; - this.apikey = apikey; - } - - /** - * @return user api key - */ - public String getApiKey() { - return apikey; - } - - /** - * @param apikey user api key - */ - public void setApiKey(String apikey) { - this.apikey = apikey; - } - - /** - * @return login - */ - public String getUsername() { - return username; - } - - /** - * @param username login - */ - public void setUsername(String username) { - this.username = username; - } - - @Override - public String toString() { - return "user " + - "'" + username + '\'' + - " with key of length " + ((apikey == null) ? 
0 : apikey.length()); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequest.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequest.java deleted file mode 100644 index a2a3b55e76f..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequest.java +++ /dev/null @@ -1,57 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth; - -/** - * Class that represents authentication request to Openstack Keystone. - * Contains basic authentication information. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. - */ -public class AuthenticationRequest { - - /** - * tenant name - */ - protected String tenantName; - - public AuthenticationRequest() { - } - - /** - * @return tenant name for Keystone authorization - */ - public String getTenantName() { - return tenantName; - } - - /** - * @param tenantName tenant name for authorization - */ - public void setTenantName(String tenantName) { - this.tenantName = tenantName; - } - - @Override - public String toString() { - return "AuthenticationRequest{" + - "tenantName='" + tenantName + '\'' + - '}'; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequestWrapper.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequestWrapper.java deleted file mode 100644 index f30e90dad38..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationRequestWrapper.java +++ /dev/null @@ -1,59 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth; - -/** - * This class is used for correct hierarchy mapping of - * Keystone authentication model and java code. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. 
- * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. - */ -public class AuthenticationRequestWrapper { - /** - * authentication request - */ - private AuthenticationRequest auth; - - /** - * default constructor used for json parsing - */ - public AuthenticationRequestWrapper() { - } - - /** - * @param auth authentication requests - */ - public AuthenticationRequestWrapper(AuthenticationRequest auth) { - this.auth = auth; - } - - /** - * @return authentication request - */ - public AuthenticationRequest getAuth() { - return auth; - } - - /** - * @param auth authentication request - */ - public void setAuth(AuthenticationRequest auth) { - this.auth = auth; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationResponse.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationResponse.java deleted file mode 100644 index f09ec0c5fb9..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationResponse.java +++ /dev/null @@ -1,69 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth; - -import org.apache.hadoop.fs.swift.auth.entities.AccessToken; -import org.apache.hadoop.fs.swift.auth.entities.Catalog; -import org.apache.hadoop.fs.swift.auth.entities.User; - -import java.util.List; - -/** - * Response from KeyStone deserialized into AuthenticationResponse class. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. 
- */ -public class AuthenticationResponse { - private Object metadata; - private List serviceCatalog; - private User user; - private AccessToken token; - - public Object getMetadata() { - return metadata; - } - - public void setMetadata(Object metadata) { - this.metadata = metadata; - } - - public List getServiceCatalog() { - return serviceCatalog; - } - - public void setServiceCatalog(List serviceCatalog) { - this.serviceCatalog = serviceCatalog; - } - - public User getUser() { - return user; - } - - public void setUser(User user) { - this.user = user; - } - - public AccessToken getToken() { - return token; - } - - public void setToken(AccessToken token) { - this.token = token; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeyStoneAuthRequest.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeyStoneAuthRequest.java deleted file mode 100644 index c3abbac88f4..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeyStoneAuthRequest.java +++ /dev/null @@ -1,59 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth; - -/** - * Class that represents authentication to OpenStack Keystone. - * Contains basic authentication information. - * Used when {@link ApiKeyAuthenticationRequest} is not applicable. - * (problem with different Keystone installations/versions/modifications) - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. 
- */ -public class KeyStoneAuthRequest extends AuthenticationRequest { - - /** - * Credentials for Keystone authentication - */ - private KeystoneApiKeyCredentials apiAccessKeyCredentials; - - /** - * @param tenant Keystone tenant name for authentication - * @param apiAccessKeyCredentials Credentials for authentication - */ - public KeyStoneAuthRequest(String tenant, KeystoneApiKeyCredentials apiAccessKeyCredentials) { - this.apiAccessKeyCredentials = apiAccessKeyCredentials; - this.tenantName = tenant; - } - - public KeystoneApiKeyCredentials getApiAccessKeyCredentials() { - return apiAccessKeyCredentials; - } - - public void setApiAccessKeyCredentials(KeystoneApiKeyCredentials apiAccessKeyCredentials) { - this.apiAccessKeyCredentials = apiAccessKeyCredentials; - } - - @Override - public String toString() { - return "KeyStoneAuthRequest as " + - "tenant '" + tenantName + "' " - + apiAccessKeyCredentials; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeystoneApiKeyCredentials.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeystoneApiKeyCredentials.java deleted file mode 100644 index 75202b3a6d2..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/KeystoneApiKeyCredentials.java +++ /dev/null @@ -1,66 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth; - -/** - * Class for Keystone authentication. - * Used when {@link ApiKeyCredentials} is not applicable - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. - */ -public class KeystoneApiKeyCredentials { - - /** - * User access key - */ - private String accessKey; - - /** - * User access secret - */ - private String secretKey; - - public KeystoneApiKeyCredentials(String accessKey, String secretKey) { - this.accessKey = accessKey; - this.secretKey = secretKey; - } - - public String getAccessKey() { - return accessKey; - } - - public void setAccessKey(String accessKey) { - this.accessKey = accessKey; - } - - public String getSecretKey() { - return secretKey; - } - - public void setSecretKey(String secretKey) { - this.secretKey = secretKey; - } - - @Override - public String toString() { - return "user " + - "'" + accessKey + '\'' + - " with key of length " + ((secretKey == null) ? 
0 : secretKey.length()); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordAuthenticationRequest.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordAuthenticationRequest.java deleted file mode 100644 index ee519f3f8da..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordAuthenticationRequest.java +++ /dev/null @@ -1,62 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth; - -/** - * Class that represents authentication request to Openstack Keystone. - * Contains basic authentication information. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. - */ -public class PasswordAuthenticationRequest extends AuthenticationRequest { - /** - * Credentials for login - */ - private PasswordCredentials passwordCredentials; - - /** - * @param tenantName tenant - * @param passwordCredentials password credentials - */ - public PasswordAuthenticationRequest(String tenantName, PasswordCredentials passwordCredentials) { - this.tenantName = tenantName; - this.passwordCredentials = passwordCredentials; - } - - /** - * @return credentials for login into Keystone - */ - public PasswordCredentials getPasswordCredentials() { - return passwordCredentials; - } - - /** - * @param passwordCredentials credentials for login into Keystone - */ - public void setPasswordCredentials(PasswordCredentials passwordCredentials) { - this.passwordCredentials = passwordCredentials; - } - - @Override - public String toString() { - return "Authenticate as " + - "tenant '" + tenantName + "' " - + passwordCredentials; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordCredentials.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordCredentials.java deleted file mode 100644 index 40d8c77feb4..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordCredentials.java +++ /dev/null @@ -1,86 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth; - - -/** - * Describes credentials to log in Swift using Keystone authentication. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. - */ -public class PasswordCredentials { - /** - * user login - */ - private String username; - - /** - * user password - */ - private String password; - - /** - * default constructor - */ - public PasswordCredentials() { - } - - /** - * @param username user login - * @param password user password - */ - public PasswordCredentials(String username, String password) { - this.username = username; - this.password = password; - } - - /** - * @return user password - */ - public String getPassword() { - return password; - } - - /** - * @param password user password - */ - public void setPassword(String password) { - this.password = password; - } - - /** - * @return login - */ - public String getUsername() { - return username; - } - - /** - * @param username login - */ - public void setUsername(String username) { - this.username = username; - } - - @Override - public String toString() { - return "PasswordCredentials{username='" + username + "'}"; - } -} - diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/Roles.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/Roles.java deleted file mode 100644 index 57f2fa6d451..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/Roles.java +++ /dev/null @@ -1,97 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth; - -/** - * Describes user roles in Openstack system. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. 
- */ -public class Roles { - /** - * role name - */ - private String name; - - /** - * This field user in RackSpace auth model - */ - private String id; - - /** - * This field user in RackSpace auth model - */ - private String description; - - /** - * Service id used in HP public Cloud - */ - private String serviceId; - - /** - * Service id used in HP public Cloud - */ - private String tenantId; - - /** - * @return role name - */ - public String getName() { - return name; - } - - /** - * @param name role name - */ - public void setName(String name) { - this.name = name; - } - - public String getId() { - return id; - } - - public void setId(String id) { - this.id = id; - } - - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - - public String getServiceId() { - return serviceId; - } - - public void setServiceId(String serviceId) { - this.serviceId = serviceId; - } - - public String getTenantId() { - return tenantId; - } - - public void setTenantId(String tenantId) { - this.tenantId = tenantId; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/AccessToken.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/AccessToken.java deleted file mode 100644 index b38d4660e5a..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/AccessToken.java +++ /dev/null @@ -1,107 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth.entities; - -import com.fasterxml.jackson.annotation.JsonIgnoreProperties; - -/** - * Access token representation of Openstack Keystone authentication. - * Class holds token id, tenant and expiration time. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. - * - * Example: - *
- * "token" : {
- *   "RAX-AUTH:authenticatedBy" : [ "APIKEY" ],
- *   "expires" : "2013-07-12T05:19:24.685-05:00",
- *   "id" : "8bbea4215113abdab9d4c8fb0d37",
- *   "tenant" : { "id" : "01011970",
- *   "name" : "77777"
- *   }
- *  }
- * 
- */ -@JsonIgnoreProperties(ignoreUnknown = true) - -public class AccessToken { - /** - * token expiration time - */ - private String expires; - /** - * token id - */ - private String id; - /** - * tenant name for whom id is attached - */ - private Tenant tenant; - - /** - * @return token expiration time - */ - public String getExpires() { - return expires; - } - - /** - * @param expires the token expiration time - */ - public void setExpires(String expires) { - this.expires = expires; - } - - /** - * @return token value - */ - public String getId() { - return id; - } - - /** - * @param id token value - */ - public void setId(String id) { - this.id = id; - } - - /** - * @return tenant authenticated in Openstack Keystone - */ - public Tenant getTenant() { - return tenant; - } - - /** - * @param tenant tenant authenticated in Openstack Keystone - */ - public void setTenant(Tenant tenant) { - this.tenant = tenant; - } - - @Override - public String toString() { - return "AccessToken{" + - "id='" + id + '\'' + - ", tenant=" + tenant + - ", expires='" + expires + '\'' + - '}'; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Catalog.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Catalog.java deleted file mode 100644 index 76e161b0642..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Catalog.java +++ /dev/null @@ -1,107 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth.entities; - -import com.fasterxml.jackson.annotation.JsonIgnoreProperties; - -import java.util.List; - -/** - * Describes Openstack Swift REST endpoints. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. - */ -@JsonIgnoreProperties(ignoreUnknown = true) - -public class Catalog { - /** - * List of valid swift endpoints - */ - private List endpoints; - /** - * endpoint links are additional information description - * which aren't used in Hadoop and Swift integration scope - */ - private List endpoints_links; - /** - * Openstack REST service name. In our case name = "keystone" - */ - private String name; - - /** - * Type of REST service. 
In our case type = "identity" - */ - private String type; - - /** - * @return List of endpoints - */ - public List getEndpoints() { - return endpoints; - } - - /** - * @param endpoints list of endpoints - */ - public void setEndpoints(List endpoints) { - this.endpoints = endpoints; - } - - /** - * @return list of endpoint links - */ - public List getEndpoints_links() { - return endpoints_links; - } - - /** - * @param endpoints_links list of endpoint links - */ - public void setEndpoints_links(List endpoints_links) { - this.endpoints_links = endpoints_links; - } - - /** - * @return name of Openstack REST service - */ - public String getName() { - return name; - } - - /** - * @param name of Openstack REST service - */ - public void setName(String name) { - this.name = name; - } - - /** - * @return type of Openstack REST service - */ - public String getType() { - return type; - } - - /** - * @param type of REST service - */ - public void setType(String type) { - this.type = type; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Endpoint.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Endpoint.java deleted file mode 100644 index b1cbf2acc7b..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Endpoint.java +++ /dev/null @@ -1,194 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth.entities; - -import com.fasterxml.jackson.annotation.JsonIgnoreProperties; - -import java.net.URI; - -/** - * Openstack Swift endpoint description. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. 
- */ -@JsonIgnoreProperties(ignoreUnknown = true) - -public class Endpoint { - - /** - * endpoint id - */ - private String id; - - /** - * Keystone admin URL - */ - private URI adminURL; - - /** - * Keystone internal URL - */ - private URI internalURL; - - /** - * public accessible URL - */ - private URI publicURL; - - /** - * public accessible URL#2 - */ - private URI publicURL2; - - /** - * Openstack region name - */ - private String region; - - /** - * This field is used in RackSpace authentication model - */ - private String tenantId; - - /** - * This field user in RackSpace auth model - */ - private String versionId; - - /** - * This field user in RackSpace auth model - */ - private String versionInfo; - - /** - * This field user in RackSpace auth model - */ - private String versionList; - - - /** - * @return endpoint id - */ - public String getId() { - return id; - } - - /** - * @param id endpoint id - */ - public void setId(String id) { - this.id = id; - } - - /** - * @return Keystone admin URL - */ - public URI getAdminURL() { - return adminURL; - } - - /** - * @param adminURL Keystone admin URL - */ - public void setAdminURL(URI adminURL) { - this.adminURL = adminURL; - } - - /** - * @return internal Keystone - */ - public URI getInternalURL() { - return internalURL; - } - - /** - * @param internalURL Keystone internal URL - */ - public void setInternalURL(URI internalURL) { - this.internalURL = internalURL; - } - - /** - * @return public accessible URL - */ - public URI getPublicURL() { - return publicURL; - } - - /** - * @param publicURL public URL - */ - public void setPublicURL(URI publicURL) { - this.publicURL = publicURL; - } - - public URI getPublicURL2() { - return publicURL2; - } - - public void setPublicURL2(URI publicURL2) { - this.publicURL2 = publicURL2; - } - - /** - * @return Openstack region name - */ - public String getRegion() { - return region; - } - - /** - * @param region Openstack region name - */ - public void setRegion(String region) { - this.region = region; - } - - public String getTenantId() { - return tenantId; - } - - public void setTenantId(String tenantId) { - this.tenantId = tenantId; - } - - public String getVersionId() { - return versionId; - } - - public void setVersionId(String versionId) { - this.versionId = versionId; - } - - public String getVersionInfo() { - return versionInfo; - } - - public void setVersionInfo(String versionInfo) { - this.versionInfo = versionInfo; - } - - public String getVersionList() { - return versionList; - } - - public void setVersionList(String versionList) { - this.versionList = versionList; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Tenant.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Tenant.java deleted file mode 100644 index 405d2c85368..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/Tenant.java +++ /dev/null @@ -1,107 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth.entities; - -import com.fasterxml.jackson.annotation.JsonIgnoreProperties; - -/** - * Tenant is abstraction in Openstack which describes all account - * information and user privileges in system. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. - */ -@JsonIgnoreProperties(ignoreUnknown = true) -public class Tenant { - - /** - * tenant id - */ - private String id; - - /** - * tenant short description which Keystone returns - */ - private String description; - - /** - * boolean enabled user account or no - */ - private boolean enabled; - - /** - * tenant human readable name - */ - private String name; - - /** - * @return tenant name - */ - public String getName() { - return name; - } - - /** - * @param name tenant name - */ - public void setName(String name) { - this.name = name; - } - - /** - * @return true if account enabled and false otherwise - */ - public boolean isEnabled() { - return enabled; - } - - /** - * @param enabled enable or disable - */ - public void setEnabled(boolean enabled) { - this.enabled = enabled; - } - - /** - * @return account short description - */ - public String getDescription() { - return description; - } - - /** - * @param description set account description - */ - public void setDescription(String description) { - this.description = description; - } - - /** - * @return set tenant id - */ - public String getId() { - return id; - } - - /** - * @param id tenant id - */ - public void setId(String id) { - this.id = id; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/User.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/User.java deleted file mode 100644 index da3bac20f2b..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/entities/User.java +++ /dev/null @@ -1,132 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.auth.entities; - -import com.fasterxml.jackson.annotation.JsonIgnoreProperties; -import org.apache.hadoop.fs.swift.auth.Roles; - -import java.util.List; - -/** - * Describes user entity in Keystone - * In different Swift installations User is represented differently. 
- * To avoid any JSON deserialization failures this entity is ignored. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. - */ -@JsonIgnoreProperties(ignoreUnknown = true) -public class User { - - /** - * user id in Keystone - */ - private String id; - - /** - * user human readable name - */ - private String name; - - /** - * user roles in Keystone - */ - private List roles; - - /** - * links to user roles - */ - private List roles_links; - - /** - * human readable username in Keystone - */ - private String username; - - /** - * @return user id - */ - public String getId() { - return id; - } - - /** - * @param id user id - */ - public void setId(String id) { - this.id = id; - } - - - /** - * @return user name - */ - public String getName() { - return name; - } - - - /** - * @param name user name - */ - public void setName(String name) { - this.name = name; - } - - /** - * @return user roles - */ - public List getRoles() { - return roles; - } - - /** - * @param roles sets user roles - */ - public void setRoles(List roles) { - this.roles = roles; - } - - /** - * @return user roles links - */ - public List getRoles_links() { - return roles_links; - } - - /** - * @param roles_links user roles links - */ - public void setRoles_links(List roles_links) { - this.roles_links = roles_links; - } - - /** - * @return username - */ - public String getUsername() { - return username; - } - - /** - * @param username human readable user name - */ - public void setUsername(String username) { - this.username = username; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftAuthenticationFailedException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftAuthenticationFailedException.java deleted file mode 100644 index fdb9a3973ad..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftAuthenticationFailedException.java +++ /dev/null @@ -1,48 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift.exceptions; - -import org.apache.http.HttpResponse; - -import java.net.URI; - -/** - * An exception raised when an authentication request was rejected - */ -public class SwiftAuthenticationFailedException extends SwiftInvalidResponseException { - - public SwiftAuthenticationFailedException(String message, - int statusCode, - String operation, - URI uri) { - super(message, statusCode, operation, uri); - } - - public SwiftAuthenticationFailedException(String message, - String operation, - URI uri, - HttpResponse resp) { - super(message, operation, uri, resp); - } - - @Override - public String exceptionTitle() { - return "Authentication Failure"; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftBadRequestException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftBadRequestException.java deleted file mode 100644 index f5b2abde0a9..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftBadRequestException.java +++ /dev/null @@ -1,49 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.exceptions; - -import org.apache.http.HttpResponse; - -import java.net.URI; - -/** - * Thrown to indicate that data locality can't be calculated or requested path is incorrect. - * Data locality can't be calculated if Openstack Swift version is old. - */ -public class SwiftBadRequestException extends SwiftInvalidResponseException { - - public SwiftBadRequestException(String message, - String operation, - URI uri, - HttpResponse resp) { - super(message, operation, uri, resp); - } - - public SwiftBadRequestException(String message, - int statusCode, - String operation, - URI uri) { - super(message, statusCode, operation, uri); - } - - @Override - public String exceptionTitle() { - return "BadRequest"; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConfigurationException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConfigurationException.java deleted file mode 100644 index 3651f2e0505..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConfigurationException.java +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.exceptions; - -/** - * Exception raised to indicate there is some problem with how the Swift FS - * is configured - */ -public class SwiftConfigurationException extends SwiftException { - public SwiftConfigurationException(String message) { - super(message); - } - - public SwiftConfigurationException(String message, Throwable cause) { - super(message, cause); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInternalStateException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInternalStateException.java deleted file mode 100644 index 0f3e5d98849..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInternalStateException.java +++ /dev/null @@ -1,38 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.exceptions; - -/** - * The internal state of the Swift client is wrong -presumably a sign - * of some bug - */ -public class SwiftInternalStateException extends SwiftException { - - public SwiftInternalStateException(String message) { - super(message); - } - - public SwiftInternalStateException(String message, Throwable cause) { - super(message, cause); - } - - public SwiftInternalStateException(Throwable cause) { - super(cause); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInvalidResponseException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInvalidResponseException.java deleted file mode 100644 index e90e57519b9..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInvalidResponseException.java +++ /dev/null @@ -1,118 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.exceptions; - -import org.apache.hadoop.fs.swift.util.HttpResponseUtils; -import org.apache.http.HttpResponse; - -import java.io.IOException; -import java.net.URI; - -/** - * Exception raised when the HTTP code is invalid. The status code, - * method name and operation URI are all in the response. - */ -public class SwiftInvalidResponseException extends SwiftConnectionException { - - public final int statusCode; - public final String operation; - public final URI uri; - public final String body; - - public SwiftInvalidResponseException(String message, - int statusCode, - String operation, - URI uri) { - super(message); - this.statusCode = statusCode; - this.operation = operation; - this.uri = uri; - this.body = ""; - } - - public SwiftInvalidResponseException(String message, - String operation, - URI uri, - HttpResponse resp) { - super(message); - this.statusCode = resp.getStatusLine().getStatusCode(); - this.operation = operation; - this.uri = uri; - String bodyAsString; - try { - bodyAsString = HttpResponseUtils.getResponseBodyAsString(resp); - if (bodyAsString == null) { - bodyAsString = ""; - } - } catch (IOException e) { - bodyAsString = ""; - } - this.body = bodyAsString; - } - - public int getStatusCode() { - return statusCode; - } - - public String getOperation() { - return operation; - } - - public URI getUri() { - return uri; - } - - public String getBody() { - return body; - } - - /** - * Override point: title of an exception -this is used in the - * toString() method. - * @return the new exception title - */ - public String exceptionTitle() { - return "Invalid Response"; - } - - /** - * Build a description that includes the exception title, the URI, - * the message, the status code -and any body of the response - * @return the string value for display - */ - @Override - public String toString() { - StringBuilder msg = new StringBuilder(); - msg.append(exceptionTitle()); - msg.append(": "); - msg.append(getMessage()); - msg.append(" "); - msg.append(operation); - msg.append(" "); - msg.append(uri); - msg.append(" => "); - msg.append(statusCode); - if (body != null && !body.isEmpty()) { - msg.append(" : "); - msg.append(body); - } - - return msg.toString(); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftJsonMarshallingException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftJsonMarshallingException.java deleted file mode 100644 index 0b078d7f433..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftJsonMarshallingException.java +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.exceptions; - -/** - * Exception raised when the J/O mapping fails. - */ -public class SwiftJsonMarshallingException extends SwiftException { - - public SwiftJsonMarshallingException(String message) { - super(message); - } - - public SwiftJsonMarshallingException(String message, Throwable cause) { - super(message, cause); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftOperationFailedException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftOperationFailedException.java deleted file mode 100644 index 8f78f70f44b..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftOperationFailedException.java +++ /dev/null @@ -1,35 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.exceptions; - -/** - * Used to relay exceptions upstream from the inner implementation - * to the public API, where it is downgraded to a log+failure. - * Making it visible internally aids testing - */ -public class SwiftOperationFailedException extends SwiftException { - - public SwiftOperationFailedException(String message) { - super(message); - } - - public SwiftOperationFailedException(String message, Throwable cause) { - super(message, cause); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftThrottledRequestException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftThrottledRequestException.java deleted file mode 100644 index 1e7ca67d1b0..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftThrottledRequestException.java +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.exceptions; - -import org.apache.http.HttpResponse; - -import java.net.URI; - -/** - * Exception raised if a Swift endpoint returned a HTTP response indicating - * the caller is being throttled. - */ -public class SwiftThrottledRequestException extends - SwiftInvalidResponseException { - public SwiftThrottledRequestException(String message, - String operation, - URI uri, - HttpResponse resp) { - super(message, operation, uri, resp); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftUnsupportedFeatureException.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftUnsupportedFeatureException.java deleted file mode 100644 index b7e011c59ab..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftUnsupportedFeatureException.java +++ /dev/null @@ -1,30 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.exceptions; - -/** - * Exception raised on an unsupported feature in the FS API -such as - * append() - */ -public class SwiftUnsupportedFeatureException extends SwiftException { - - public SwiftUnsupportedFeatureException(String message) { - super(message); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/ExceptionDiags.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/ExceptionDiags.java deleted file mode 100644 index d159caa6690..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/ExceptionDiags.java +++ /dev/null @@ -1,98 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.http; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import java.io.IOException; -import java.lang.reflect.Constructor; -import java.net.ConnectException; -import java.net.NoRouteToHostException; -import java.net.SocketTimeoutException; -import java.net.UnknownHostException; - -/** - * Variant of Hadoop NetUtils exception wrapping with URI awareness and - * available in branch-1 too. - */ -public class ExceptionDiags { - private static final Logger LOG = - LoggerFactory.getLogger(ExceptionDiags.class); - - /** text to point users elsewhere: {@value} */ - private static final String FOR_MORE_DETAILS_SEE - = " For more details see: "; - /** text included in wrapped exceptions if the host is null: {@value} */ - public static final String UNKNOWN_HOST = "(unknown)"; - /** Base URL of the Hadoop Wiki: {@value} */ - public static final String HADOOP_WIKI = "http://wiki.apache.org/hadoop/"; - - /** - * Take an IOException and a URI, wrap it where possible with - * something that includes the URI - * - * @param dest target URI - * @param operation operation - * @param exception the caught exception. - * @return an exception to throw - */ - public static IOException wrapException(final String dest, - final String operation, - final IOException exception) { - String action = operation + " " + dest; - String xref = null; - - if (exception instanceof ConnectException) { - xref = "ConnectionRefused"; - } else if (exception instanceof UnknownHostException) { - xref = "UnknownHost"; - } else if (exception instanceof SocketTimeoutException) { - xref = "SocketTimeout"; - } else if (exception instanceof NoRouteToHostException) { - xref = "NoRouteToHost"; - } - String msg = action - + " failed on exception: " - + exception; - if (xref != null) { - msg = msg + ";" + see(xref); - } - return wrapWithMessage(exception, msg); - } - - private static String see(final String entry) { - return FOR_MORE_DETAILS_SEE + HADOOP_WIKI + entry; - } - - @SuppressWarnings("unchecked") - private static T wrapWithMessage( - T exception, String msg) { - Class clazz = exception.getClass(); - try { - Constructor ctor = - clazz.getConstructor(String.class); - Throwable t = ctor.newInstance(msg); - return (T) (t.initCause(exception)); - } catch (Throwable e) { - return exception; - } - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpInputStreamWithRelease.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpInputStreamWithRelease.java deleted file mode 100644 index bd025aca1b8..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpInputStreamWithRelease.java +++ /dev/null @@ -1,234 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.http; - -import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException; -import org.apache.hadoop.fs.swift.util.SwiftUtils; -import org.apache.http.HttpResponse; -import org.apache.http.client.methods.HttpRequestBase; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import java.io.ByteArrayInputStream; -import java.io.EOFException; -import java.io.IOException; -import java.io.InputStream; -import java.net.URI; - -/** - * This replaces the input stream release class from JetS3t and AWS; - * # Failures in the constructor are relayed up instead of simply logged. - * # it is set up to be more robust at teardown - * # release logic is thread safe - * Note that the thread safety of the inner stream contains no thread - * safety guarantees -this stream is not to be read across streams. - * The thread safety logic here is to ensure that even if somebody ignores - * that rule, the release code does not get entered twice -and that - * any release in one thread is picked up by read operations in all others. - */ -public class HttpInputStreamWithRelease extends InputStream { - - private static final Logger LOG = - LoggerFactory.getLogger(HttpInputStreamWithRelease.class); - private final URI uri; - private HttpRequestBase req; - private HttpResponse resp; - //flag to say the stream is released -volatile so that read operations - //pick it up even while unsynchronized. - private volatile boolean released; - //volatile flag to verify that data is consumed. - private volatile boolean dataConsumed; - private InputStream inStream; - /** - * In debug builds, this is filled in with the construction-time - * stack, which is then included in logs from the finalize(), method. - */ - private final Exception constructionStack; - - /** - * Why the stream is closed - */ - private String reasonClosed = "unopened"; - - public HttpInputStreamWithRelease(URI uri, HttpRequestBase req, - HttpResponse resp) throws IOException { - this.uri = uri; - this.req = req; - this.resp = resp; - constructionStack = LOG.isDebugEnabled() ? new Exception("stack") : null; - if (req == null) { - throw new IllegalArgumentException("Null 'request' parameter "); - } - try { - inStream = resp.getEntity().getContent(); - } catch (IOException e) { - inStream = new ByteArrayInputStream(new byte[]{}); - throw releaseAndRethrow("getResponseBodyAsStream() in constructor -" + e, e); - } - } - - @Override - public void close() throws IOException { - release("close()", null); - } - - /** - * Release logic - * @param reason reason for release (used in debug messages) - * @param ex exception that is a cause -null for non-exceptional releases - * @return true if the release took place here - * @throws IOException if the abort or close operations failed. 
- */ - private synchronized boolean release(String reason, Exception ex) throws - IOException { - if (!released) { - reasonClosed = reason; - try { - LOG.debug("Releasing connection to {}: {}", uri, reason, ex); - if (req != null) { - if (!dataConsumed) { - req.abort(); - } - req.releaseConnection(); - } - if (inStream != null) { - //this guard may seem un-needed, but a stack trace seen - //on the JetS3t predecessor implied that it - //is useful - inStream.close(); - } - return true; - } finally { - //if something went wrong here, we do not want the release() operation - //to try and do anything in advance. - released = true; - dataConsumed = true; - } - } else { - return false; - } - } - - /** - * Release the method, using the exception as a cause - * @param operation operation that failed - * @param ex the exception which triggered it. - * @return the exception to throw - */ - private IOException releaseAndRethrow(String operation, IOException ex) { - try { - release(operation, ex); - } catch (IOException ioe) { - LOG.debug("Exception during release: {}", operation, ioe); - //make this the exception if there was none before - if (ex == null) { - ex = ioe; - } - } - return ex; - } - - /** - * Assume that the connection is not released: throws an exception if it is - * @throws SwiftConnectionClosedException - */ - private synchronized void assumeNotReleased() throws SwiftConnectionClosedException { - if (released || inStream == null) { - throw new SwiftConnectionClosedException(reasonClosed); - } - } - - @Override - public int available() throws IOException { - assumeNotReleased(); - try { - return inStream.available(); - } catch (IOException e) { - throw releaseAndRethrow("available() failed -" + e, e); - } - } - - @Override - public int read() throws IOException { - assumeNotReleased(); - int read = 0; - try { - read = inStream.read(); - } catch (EOFException e) { - LOG.debug("EOF exception", e); - read = -1; - } catch (IOException e) { - throw releaseAndRethrow("read()", e); - } - if (read < 0) { - dataConsumed = true; - release("read() -all data consumed", null); - } - return read; - } - - @Override - public int read(byte[] b, int off, int len) throws IOException { - SwiftUtils.validateReadArgs(b, off, len); - if (len == 0) { - return 0; - } - //if the stream is already closed, then report an exception. 
- assumeNotReleased(); - //now read in a buffer, reacting differently to different operations - int read; - try { - read = inStream.read(b, off, len); - } catch (EOFException e) { - LOG.debug("EOF exception", e); - read = -1; - } catch (IOException e) { - throw releaseAndRethrow("read(b, off, " + len + ")", e); - } - if (read < 0) { - dataConsumed = true; - release("read() -all data consumed", null); - } - return read; - } - - /** - * Finalizer does release the stream, but also logs at WARN level - * including the URI at fault - */ - @Override - protected void finalize() { - try { - if (release("finalize()", constructionStack)) { - LOG.warn("input stream of {}" + - " not closed properly -cleaned up in finalize()", uri); - } - } catch (Exception e) { - //swallow anything that failed here - LOG.warn("Exception while releasing {} in finalizer", uri, e); - } - } - - @Override - public String toString() { - return "HttpInputStreamWithRelease working with " + uri - +" released=" + released - +" dataConsumed=" + dataConsumed; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java deleted file mode 100644 index f6917d3ffae..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java +++ /dev/null @@ -1,225 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.http; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException; - -import java.net.URI; -import java.util.Properties; - -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.*; - -/** - * This class implements the binding logic between Hadoop configurations - * and the swift rest client. - *
- * The swift rest client takes a Properties instance containing - * the string values it uses to bind to a swift endpoint. - *
- * This class extracts the values for a specific filesystem endpoint - * and then builds an appropriate Properties file. - */ -public final class RestClientBindings { - private static final Logger LOG = - LoggerFactory.getLogger(RestClientBindings.class); - - public static final String E_INVALID_NAME = "Invalid swift hostname '%s':" + - " hostname must in form container.service"; - - /** - * Public for testing : build the full prefix for use in resolving - * configuration items - * - * @param service service to use - * @return the prefix string without any trailing "." - */ - public static String buildSwiftInstancePrefix(String service) { - return SWIFT_SERVICE_PREFIX + service; - } - - /** - * Raise an exception for an invalid service name - * - * @param hostname hostname that was being parsed - * @return an exception to throw - */ - private static SwiftConfigurationException invalidName(String hostname) { - return new SwiftConfigurationException( - String.format(E_INVALID_NAME, hostname)); - } - - /** - * Get the container name from the hostname -the single element before the - * first "." in the hostname - * - * @param hostname hostname to split - * @return the container - * @throws SwiftConfigurationException - */ - public static String extractContainerName(String hostname) throws - SwiftConfigurationException { - int i = hostname.indexOf("."); - if (i <= 0) { - throw invalidName(hostname); - } - return hostname.substring(0, i); - } - - public static String extractContainerName(URI uri) throws - SwiftConfigurationException { - return extractContainerName(uri.getHost()); - } - - /** - * Get the service name from a longer hostname string - * - * @param hostname hostname - * @return the separated out service name - * @throws SwiftConfigurationException if the hostname was invalid - */ - public static String extractServiceName(String hostname) throws - SwiftConfigurationException { - int i = hostname.indexOf("."); - if (i <= 0) { - throw invalidName(hostname); - } - String service = hostname.substring(i + 1); - if (service.isEmpty() || service.contains(".")) { - //empty service contains dots in -not currently supported - throw invalidName(hostname); - } - return service; - } - - public static String extractServiceName(URI uri) throws - SwiftConfigurationException { - return extractServiceName(uri.getHost()); - } - - /** - * Build a properties instance bound to the configuration file -using - * the filesystem URI as the source of the information. - * - * @param fsURI filesystem URI - * @param conf configuration - * @return a properties file with the instance-specific properties extracted - * and bound to the swift client properties. 
- * @throws SwiftConfigurationException if the configuration is invalid - */ - public static Properties bind(URI fsURI, Configuration conf) throws - SwiftConfigurationException { - String host = fsURI.getHost(); - if (host == null || host.isEmpty()) { - //expect shortnames -> conf names - throw invalidName(host); - } - - String container = extractContainerName(host); - String service = extractServiceName(host); - - //build filename schema - String prefix = buildSwiftInstancePrefix(service); - if (LOG.isDebugEnabled()) { - LOG.debug("Filesystem " + fsURI - + " is using configuration keys " + prefix); - } - Properties props = new Properties(); - props.setProperty(SWIFT_SERVICE_PROPERTY, service); - props.setProperty(SWIFT_CONTAINER_PROPERTY, container); - copy(conf, prefix + DOT_AUTH_URL, props, SWIFT_AUTH_PROPERTY, true); - copy(conf, prefix + DOT_USERNAME, props, SWIFT_USERNAME_PROPERTY, true); - copy(conf, prefix + DOT_APIKEY, props, SWIFT_APIKEY_PROPERTY, false); - copy(conf, prefix + DOT_PASSWORD, props, SWIFT_PASSWORD_PROPERTY, - props.contains(SWIFT_APIKEY_PROPERTY) ? true : false); - copy(conf, prefix + DOT_TENANT, props, SWIFT_TENANT_PROPERTY, false); - copy(conf, prefix + DOT_REGION, props, SWIFT_REGION_PROPERTY, false); - copy(conf, prefix + DOT_HTTP_PORT, props, SWIFT_HTTP_PORT_PROPERTY, false); - copy(conf, prefix + - DOT_HTTPS_PORT, props, SWIFT_HTTPS_PORT_PROPERTY, false); - - copyBool(conf, prefix + DOT_PUBLIC, props, SWIFT_PUBLIC_PROPERTY, false); - copyBool(conf, prefix + DOT_LOCATION_AWARE, props, - SWIFT_LOCATION_AWARE_PROPERTY, false); - - return props; - } - - /** - * Extract a boolean value from the configuration and copy it to the - * properties instance. - * @param conf source configuration - * @param confKey key in the configuration file - * @param props destination property set - * @param propsKey key in the property set - * @param defVal default value - */ - private static void copyBool(Configuration conf, - String confKey, - Properties props, - String propsKey, - boolean defVal) { - boolean b = conf.getBoolean(confKey, defVal); - props.setProperty(propsKey, Boolean.toString(b)); - } - - private static void set(Properties props, String key, String optVal) { - if (optVal != null) { - props.setProperty(key, optVal); - } - } - - /** - * Copy a (trimmed) property from the configuration file to the properties file. - *
- * If marked as required and not found in the configuration, an - * exception is raised. - * If not required -and missing- then the property will not be set. - * In this case, if the property is already in the Properties instance, - * it will remain untouched. - * - * @param conf source configuration - * @param confKey key in the configuration file - * @param props destination property set - * @param propsKey key in the property set - * @param required is the property required - * @throws SwiftConfigurationException if the property is required but was - * not found in the configuration instance. - */ - public static void copy(Configuration conf, String confKey, Properties props, - String propsKey, - boolean required) throws SwiftConfigurationException { - //TODO: replace. version compatibility issue conf.getTrimmed fails with NoSuchMethodError - String val = conf.get(confKey); - if (val != null) { - val = val.trim(); - } - if (required && val == null) { - throw new SwiftConfigurationException( - "Missing mandatory configuration option: " - + - confKey); - } - set(props, propsKey, val); - } - - -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftProtocolConstants.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftProtocolConstants.java deleted file mode 100644 index a01f32c18b2..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftProtocolConstants.java +++ /dev/null @@ -1,270 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.http; - -import org.apache.hadoop.util.VersionInfo; - -/** - * Constants used in the Swift REST protocol, - * and in the properties used to configure the {@link SwiftRestClient}. 
- */ -public class SwiftProtocolConstants { - /** - * Swift-specific header for authentication: {@value} - */ - public static final String HEADER_AUTH_KEY = "X-Auth-Token"; - - /** - * Default port used by Swift for HTTP - */ - public static final int SWIFT_HTTP_PORT = 8080; - - /** - * Default port used by Swift Auth for HTTPS - */ - public static final int SWIFT_HTTPS_PORT = 443; - - /** HTTP standard {@value} header */ - public static final String HEADER_RANGE = "Range"; - - /** HTTP standard {@value} header */ - public static final String HEADER_DESTINATION = "Destination"; - - /** HTTP standard {@value} header */ - public static final String HEADER_LAST_MODIFIED = "Last-Modified"; - - /** HTTP standard {@value} header */ - public static final String HEADER_CONTENT_LENGTH = "Content-Length"; - - /** HTTP standard {@value} header */ - public static final String HEADER_CONTENT_RANGE = "Content-Range"; - - /** - * Patten for range headers - */ - public static final String SWIFT_RANGE_HEADER_FORMAT_PATTERN = "bytes=%d-%d"; - - /** - * section in the JSON catalog provided after auth listing the swift FS: - * {@value} - */ - public static final String SERVICE_CATALOG_SWIFT = "swift"; - /** - * section in the JSON catalog provided after auth listing the cloud files; - * this is an alternate catalog entry name - * {@value} - */ - public static final String SERVICE_CATALOG_CLOUD_FILES = "cloudFiles"; - /** - * section in the JSON catalog provided after auth listing the object store; - * this is an alternate catalog entry name - * {@value} - */ - public static final String SERVICE_CATALOG_OBJECT_STORE = "object-store"; - - /** - * entry in the swift catalog defining the prefix used to talk to objects - * {@value} - */ - public static final String SWIFT_OBJECT_AUTH_ENDPOINT = - "/object_endpoint/"; - /** - * Swift-specific header: object manifest used in the final upload - * of a multipart operation: {@value} - */ - public static final String X_OBJECT_MANIFEST = "X-Object-Manifest"; - /** - * Swift-specific header -#of objects in a container: {@value} - */ - public static final String X_CONTAINER_OBJECT_COUNT = - "X-Container-Object-Count"; - /** - * Swift-specific header: no. of bytes used in a container {@value} - */ - public static final String X_CONTAINER_BYTES_USED = "X-Container-Bytes-Used"; - - /** - * Header to set when requesting the latest version of a file: : {@value} - */ - public static final String X_NEWEST = "X-Newest"; - - /** - * throttled response sent by some endpoints. - */ - public static final int SC_THROTTLED_498 = 498; - /** - * W3C recommended status code for throttled operations - */ - public static final int SC_TOO_MANY_REQUESTS_429 = 429; - - public static final String FS_SWIFT = "fs.swift"; - - /** - * Prefix for all instance-specific values in the configuration: {@value} - */ - public static final String SWIFT_SERVICE_PREFIX = FS_SWIFT + ".service."; - - /** - * timeout for all connections: {@value} - */ - public static final String SWIFT_CONNECTION_TIMEOUT = - FS_SWIFT + ".connect.timeout"; - - /** - * timeout for all connections: {@value} - */ - public static final String SWIFT_SOCKET_TIMEOUT = - FS_SWIFT + ".socket.timeout"; - - /** - * the default socket timeout in millis {@value}. - * This controls how long the connection waits for responses from - * servers. 
- */ - public static final int DEFAULT_SOCKET_TIMEOUT = 60000; - - /** - * connection retry count for all connections: {@value} - */ - public static final String SWIFT_RETRY_COUNT = - FS_SWIFT + ".connect.retry.count"; - - /** - * delay in millis between bulk (delete, rename, copy operations: {@value} - */ - public static final String SWIFT_THROTTLE_DELAY = - FS_SWIFT + ".connect.throttle.delay"; - - /** - * the default throttle delay in millis {@value} - */ - public static final int DEFAULT_THROTTLE_DELAY = 0; - - /** - * blocksize for all filesystems: {@value} - */ - public static final String SWIFT_BLOCKSIZE = - FS_SWIFT + ".blocksize"; - - /** - * the default blocksize for filesystems in KB: {@value} - */ - public static final int DEFAULT_SWIFT_BLOCKSIZE = 32 * 1024; - - /** - * partition size for all filesystems in KB: {@value} - */ - public static final String SWIFT_PARTITION_SIZE = - FS_SWIFT + ".partsize"; - - /** - * The default partition size for uploads: {@value} - */ - public static final int DEFAULT_SWIFT_PARTITION_SIZE = 4608*1024; - - /** - * request size for reads in KB: {@value} - */ - public static final String SWIFT_REQUEST_SIZE = - FS_SWIFT + ".requestsize"; - - /** - * The default request size for reads: {@value} - */ - public static final int DEFAULT_SWIFT_REQUEST_SIZE = 64; - - - public static final String HEADER_USER_AGENT="User-Agent"; - - /** - * The user agent sent in requests. - */ - public static final String SWIFT_USER_AGENT= "Apache Hadoop Swift Client " - + VersionInfo.getBuildVersion(); - - /** - * Key for passing the service name as a property -not read from the - * configuration : {@value} - */ - public static final String DOT_SERVICE = ".SERVICE-NAME"; - - /** - * Key for passing the container name as a property -not read from the - * configuration : {@value} - */ - public static final String DOT_CONTAINER = ".CONTAINER-NAME"; - - public static final String DOT_AUTH_URL = ".auth.url"; - public static final String DOT_TENANT = ".tenant"; - public static final String DOT_USERNAME = ".username"; - public static final String DOT_PASSWORD = ".password"; - public static final String DOT_HTTP_PORT = ".http.port"; - public static final String DOT_HTTPS_PORT = ".https.port"; - public static final String DOT_REGION = ".region"; - public static final String DOT_PROXY_HOST = ".proxy.host"; - public static final String DOT_PROXY_PORT = ".proxy.port"; - public static final String DOT_LOCATION_AWARE = ".location-aware"; - public static final String DOT_APIKEY = ".apikey"; - public static final String DOT_USE_APIKEY = ".useApikey"; - - /** - * flag to say use public URL - */ - public static final String DOT_PUBLIC = ".public"; - - public static final String SWIFT_SERVICE_PROPERTY = FS_SWIFT + DOT_SERVICE; - public static final String SWIFT_CONTAINER_PROPERTY = FS_SWIFT + DOT_CONTAINER; - - public static final String SWIFT_AUTH_PROPERTY = FS_SWIFT + DOT_AUTH_URL; - public static final String SWIFT_TENANT_PROPERTY = FS_SWIFT + DOT_TENANT; - public static final String SWIFT_USERNAME_PROPERTY = FS_SWIFT + DOT_USERNAME; - public static final String SWIFT_PASSWORD_PROPERTY = FS_SWIFT + DOT_PASSWORD; - public static final String SWIFT_APIKEY_PROPERTY = FS_SWIFT + DOT_APIKEY; - public static final String SWIFT_HTTP_PORT_PROPERTY = FS_SWIFT + DOT_HTTP_PORT; - public static final String SWIFT_HTTPS_PORT_PROPERTY = FS_SWIFT - + DOT_HTTPS_PORT; - public static final String SWIFT_REGION_PROPERTY = FS_SWIFT + DOT_REGION; - public static final String SWIFT_PUBLIC_PROPERTY = FS_SWIFT + 
DOT_PUBLIC; - - public static final String SWIFT_USE_API_KEY_PROPERTY = FS_SWIFT + DOT_USE_APIKEY; - - public static final String SWIFT_LOCATION_AWARE_PROPERTY = FS_SWIFT + - DOT_LOCATION_AWARE; - - public static final String SWIFT_PROXY_HOST_PROPERTY = FS_SWIFT + DOT_PROXY_HOST; - public static final String SWIFT_PROXY_PORT_PROPERTY = FS_SWIFT + DOT_PROXY_PORT; - public static final String HTTP_ROUTE_DEFAULT_PROXY = - "http.route.default-proxy"; - /** - * Topology to return when a block location is requested - */ - public static final String TOPOLOGY_PATH = "/swift/unknown"; - /** - * Block location to return when a block location is requested - */ - public static final String BLOCK_LOCATION = "/default-rack/swift"; - /** - * Default number of attempts to retry a connect request: {@value} - */ - static final int DEFAULT_RETRY_COUNT = 3; - /** - * Default timeout in milliseconds for connection requests: {@value} - */ - static final int DEFAULT_CONNECT_TIMEOUT = 15000; -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java deleted file mode 100644 index cf6bf9b972a..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java +++ /dev/null @@ -1,1879 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift.http; - -import org.apache.hadoop.fs.swift.util.HttpResponseUtils; -import org.apache.http.Header; -import org.apache.http.HttpHost; -import org.apache.http.HttpResponse; -import org.apache.http.HttpStatus; -import org.apache.http.client.HttpClient; -import org.apache.http.client.config.RequestConfig; -import org.apache.http.client.methods.HttpDelete; -import org.apache.http.client.methods.HttpGet; -import org.apache.http.client.methods.HttpHead; -import org.apache.http.client.methods.HttpPost; -import org.apache.http.client.methods.HttpPut; -import org.apache.http.client.methods.HttpRequestBase; -import org.apache.http.client.methods.HttpUriRequest; -import org.apache.http.config.SocketConfig; -import org.apache.http.entity.ContentType; -import org.apache.http.entity.InputStreamEntity; -import org.apache.http.entity.StringEntity; -import org.apache.http.impl.client.CloseableHttpClient; -import org.apache.http.impl.client.DefaultHttpRequestRetryHandler; -import org.apache.http.impl.client.HttpClientBuilder; -import org.apache.http.message.BasicHeader; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.swift.auth.ApiKeyAuthenticationRequest; -import org.apache.hadoop.fs.swift.auth.ApiKeyCredentials; -import org.apache.hadoop.fs.swift.auth.AuthenticationRequest; -import org.apache.hadoop.fs.swift.auth.AuthenticationRequestWrapper; -import org.apache.hadoop.fs.swift.auth.AuthenticationResponse; -import org.apache.hadoop.fs.swift.auth.AuthenticationWrapper; -import org.apache.hadoop.fs.swift.auth.KeyStoneAuthRequest; -import org.apache.hadoop.fs.swift.auth.KeystoneApiKeyCredentials; -import org.apache.hadoop.fs.swift.auth.PasswordAuthenticationRequest; -import org.apache.hadoop.fs.swift.auth.PasswordCredentials; -import org.apache.hadoop.fs.swift.auth.entities.AccessToken; -import org.apache.hadoop.fs.swift.auth.entities.Catalog; -import org.apache.hadoop.fs.swift.auth.entities.Endpoint; -import org.apache.hadoop.fs.swift.exceptions.SwiftAuthenticationFailedException; -import org.apache.hadoop.fs.swift.exceptions.SwiftBadRequestException; -import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException; -import org.apache.hadoop.fs.swift.exceptions.SwiftException; -import org.apache.hadoop.fs.swift.exceptions.SwiftInternalStateException; -import org.apache.hadoop.fs.swift.exceptions.SwiftInvalidResponseException; -import org.apache.hadoop.fs.swift.exceptions.SwiftThrottledRequestException; -import org.apache.hadoop.fs.swift.util.Duration; -import org.apache.hadoop.fs.swift.util.DurationStats; -import org.apache.hadoop.fs.swift.util.DurationStatsTable; -import org.apache.hadoop.fs.swift.util.JSONUtil; -import org.apache.hadoop.fs.swift.util.SwiftObjectPath; -import org.apache.hadoop.fs.swift.util.SwiftUtils; - -import java.io.EOFException; -import java.io.FileNotFoundException; -import java.io.IOException; -import java.io.InputStream; -import java.io.UnsupportedEncodingException; -import java.net.URI; -import java.net.URISyntaxException; -import java.net.URLEncoder; -import java.util.List; -import java.util.Properties; - -import static org.apache.http.HttpStatus.*; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.*; - -/** - * This implements the client-side of the Swift REST API. - * - * The core actions put, get and query data in the Swift object store, - * after authenticating the client. 
- * - * Logging: - * - * Logging at DEBUG level displays detail about the actions of this - * client, including HTTP requests and responses -excluding authentication - * details. - */ -public final class SwiftRestClient { - private static final Logger LOG = - LoggerFactory.getLogger(SwiftRestClient.class); - - /** - * Header that says "use newest version" -ensures that - * the query doesn't pick up older versions served by - * an eventually consistent filesystem (except in the special case - * of a network partition, at which point no guarantees about - * consistency can be made. - */ - public static final Header NEWEST = - new BasicHeader(SwiftProtocolConstants.X_NEWEST, "true"); - - /** - * the authentication endpoint as supplied in the configuration. - */ - private final URI authUri; - - /** - * Swift region. Some OpenStack installations has more than one region. - * In this case user can specify the region with which Hadoop will be working - */ - private final String region; - - /** - * tenant name. - */ - private final String tenant; - - /** - * username name. - */ - private final String username; - - /** - * user password. - */ - private final String password; - - /** - * user api key. - */ - private final String apiKey; - - /** - * The authentication request used to authenticate with Swift. - */ - private final AuthenticationRequest authRequest; - - /** - * This auth request is similar to @see authRequest, - * with one difference: it has another json representation when - * authRequest one is not applicable. - */ - private AuthenticationRequest keystoneAuthRequest; - - private boolean useKeystoneAuthentication = false; - - /** - * The container this client is working with. - */ - private final String container; - private final String serviceDescription; - - /** - * Access token (Secret). - */ - private AccessToken token; - - /** - * Endpoint for swift operations, obtained after authentication. - */ - private URI endpointURI; - - /** - * URI under which objects can be found. - * This is set when the user is authenticated -the URI - * is returned in the body of the success response. - */ - private URI objectLocationURI; - - /** - * The name of the service provider. - */ - private final String serviceProvider; - - /** - * Should the public swift endpoint be used, rather than the in-cluster one? - */ - private final boolean usePublicURL; - - /** - * Number of times to retry a connection. - */ - private final int retryCount; - - /** - * How long (in milliseconds) should a connection be attempted. - */ - private final int connectTimeout; - - /** - * How long (in milliseconds) should a connection be attempted. - */ - private final int socketTimeout; - - /** - * How long (in milliseconds) between bulk operations. - */ - private final int throttleDelay; - - /** - * the name of a proxy host (can be null, in which case there is no proxy). - */ - private String proxyHost; - - /** - * The port of a proxy. This is ignored if {@link #proxyHost} is null. - */ - private int proxyPort; - - /** - * Flag to indicate whether or not the client should - * query for file location data. - */ - private final boolean locationAware; - - private final int partSizeKB; - /** - * The blocksize of this FS - */ - private final int blocksizeKB; - private final int bufferSizeKB; - - private final DurationStatsTable durationStats = new DurationStatsTable(); - /** - * objects query endpoint. This is synchronized - * to handle a simultaneous update of all auth data in one - * go. 
- */ - private synchronized URI getEndpointURI() { - return endpointURI; - } - - /** - * token for Swift communication. - */ - private synchronized AccessToken getToken() { - return token; - } - - /** - * Setter of authentication and endpoint details. - * Being synchronized guarantees that all three fields are set up together. - * It is up to the reader to read all three fields in their own - * synchronized block to be sure that they are all consistent. - * - * @param endpoint endpoint URI - * @param objectLocation object location URI - * @param authToken auth token - */ - private void setAuthDetails(URI endpoint, - URI objectLocation, - AccessToken authToken) { - if (LOG.isDebugEnabled()) { - LOG.debug(String.format("setAuth: endpoint=%s; objectURI=%s; token=%s", - endpoint, objectLocation, authToken)); - } - synchronized (this) { - endpointURI = endpoint; - objectLocationURI = objectLocation; - token = authToken; - } - } - - - /** - * Base class for all Swift REST operations. - * - * @param request - * @param result - */ - private static abstract class HttpRequestProcessor - { - public final M createRequest(String uri) throws IOException { - final M req = doCreateRequest(uri); - setup(req); - return req; - } - - /** - * Override it to return some result after request is executed. - */ - public abstract R extractResult(M req, HttpResponse resp) - throws IOException; - - /** - * Factory method to create a REST method against the given URI. - * - * @param uri target - * @return method to invoke - */ - protected abstract M doCreateRequest(String uri) throws IOException; - - /** - * Override port to set up the request before it is executed. - */ - protected void setup(M req) throws IOException { - } - - /** - * Override point: what are the status codes that this operation supports? - * - * @return an array with the permitted status code(s) - */ - protected int[] getAllowedStatusCodes() { - return new int[]{ - SC_OK, - SC_CREATED, - SC_ACCEPTED, - SC_NO_CONTENT, - SC_PARTIAL_CONTENT, - }; - } - } - - private static abstract class GetRequestProcessor - extends HttpRequestProcessor { - @Override - protected final HttpGet doCreateRequest(String uri) { - return new HttpGet(uri); - } - } - - private static abstract class PostRequestProcessor - extends HttpRequestProcessor { - @Override - protected final HttpPost doCreateRequest(String uri) { - return new HttpPost(uri); - } - } - - /** - * There's a special type for auth messages, so that low-level - * message handlers can react to auth failures differently from everything - * else. - */ - private static final class AuthPostRequest extends HttpPost { - private AuthPostRequest(String uri) { - super(uri); - } - } - - /** - * Generate an auth message. - * @param response - */ - private static abstract class AuthRequestProcessor - extends HttpRequestProcessor { - @Override - protected final AuthPostRequest doCreateRequest(String uri) { - return new AuthPostRequest(uri); - } - } - - private static abstract class PutRequestProcessor - extends HttpRequestProcessor { - @Override - protected final HttpPut doCreateRequest(String uri) { - return new HttpPut(uri); - } - - /** - * Override point: what are the status codes that this operation supports? - * - * @return the list of status codes to accept - */ - @Override - protected int[] getAllowedStatusCodes() { - return new int[]{ - SC_OK, - SC_CREATED, - SC_NO_CONTENT, - SC_ACCEPTED, - }; - } - } - - /** - * Create operation. 
- * - * @param result type - */ - private static abstract class CopyRequestProcessor - extends HttpRequestProcessor { - @Override - protected final CopyRequest doCreateRequest(String uri) - throws SwiftException { - CopyRequest copy = new CopyRequest(); - try { - copy.setURI(new URI(uri)); - } catch (URISyntaxException e) { - throw new SwiftException("Failed to create URI from: " + uri); - } - return copy; - } - - /** - * The only allowed status code is 201:created. - * @return an array with the permitted status code(s) - */ - @Override - protected int[] getAllowedStatusCodes() { - return new int[]{ - SC_CREATED - }; - } - } - - /** - * Delete operation. - * - * @param - */ - private static abstract class DeleteRequestProcessor - extends HttpRequestProcessor { - @Override - protected final HttpDelete doCreateRequest(String uri) { - return new HttpDelete(uri); - } - - @Override - protected int[] getAllowedStatusCodes() { - return new int[]{ - SC_OK, - SC_ACCEPTED, - SC_NO_CONTENT, - SC_NOT_FOUND - }; - } - } - - private static abstract class HeadRequestProcessor - extends HttpRequestProcessor { - @Override - protected final HttpHead doCreateRequest(String uri) { - return new HttpHead(uri); - } - } - - - /** - * Create a Swift Rest Client instance. - * - * @param filesystemURI filesystem URI - * @param conf The configuration to use to extract the binding - * @throws SwiftConfigurationException the configuration is not valid for - * defining a rest client against the service - */ - private SwiftRestClient(URI filesystemURI, - Configuration conf) - throws SwiftConfigurationException { - Properties props = RestClientBindings.bind(filesystemURI, conf); - String stringAuthUri = getOption(props, SWIFT_AUTH_PROPERTY); - username = getOption(props, SWIFT_USERNAME_PROPERTY); - password = props.getProperty(SWIFT_PASSWORD_PROPERTY); - apiKey = props.getProperty(SWIFT_APIKEY_PROPERTY); - //optional - region = props.getProperty(SWIFT_REGION_PROPERTY); - //tenant is optional - tenant = props.getProperty(SWIFT_TENANT_PROPERTY); - //service is used for diagnostics - serviceProvider = props.getProperty(SWIFT_SERVICE_PROPERTY); - container = props.getProperty(SWIFT_CONTAINER_PROPERTY); - String isPubProp = props.getProperty(SWIFT_PUBLIC_PROPERTY, "false"); - usePublicURL = "true".equals(isPubProp); - - if (apiKey == null && password == null) { - throw new SwiftConfigurationException( - "Configuration for " + filesystemURI +" must contain either " - + SWIFT_PASSWORD_PROPERTY + " or " - + SWIFT_APIKEY_PROPERTY); - } - //create the (reusable) authentication request - if (password != null) { - authRequest = new PasswordAuthenticationRequest(tenant, - new PasswordCredentials( - username, - password)); - } else { - authRequest = new ApiKeyAuthenticationRequest(tenant, - new ApiKeyCredentials( - username, apiKey)); - keystoneAuthRequest = new KeyStoneAuthRequest(tenant, - new KeystoneApiKeyCredentials(username, apiKey)); - } - locationAware = "true".equals( - props.getProperty(SWIFT_LOCATION_AWARE_PROPERTY, "false")); - - //now read in properties that are shared across all connections - - //connection and retries - try { - retryCount = conf.getInt(SWIFT_RETRY_COUNT, DEFAULT_RETRY_COUNT); - connectTimeout = conf.getInt(SWIFT_CONNECTION_TIMEOUT, - DEFAULT_CONNECT_TIMEOUT); - socketTimeout = conf.getInt(SWIFT_SOCKET_TIMEOUT, - DEFAULT_SOCKET_TIMEOUT); - - throttleDelay = conf.getInt(SWIFT_THROTTLE_DELAY, - DEFAULT_THROTTLE_DELAY); - - //proxy options - proxyHost = conf.get(SWIFT_PROXY_HOST_PROPERTY); - proxyPort = 
conf.getInt(SWIFT_PROXY_PORT_PROPERTY, 8080); - - blocksizeKB = conf.getInt(SWIFT_BLOCKSIZE, - DEFAULT_SWIFT_BLOCKSIZE); - if (blocksizeKB <= 0) { - throw new SwiftConfigurationException("Invalid blocksize set in " - + SWIFT_BLOCKSIZE - + ": " + blocksizeKB); - } - partSizeKB = conf.getInt(SWIFT_PARTITION_SIZE, - DEFAULT_SWIFT_PARTITION_SIZE); - if (partSizeKB <=0) { - throw new SwiftConfigurationException("Invalid partition size set in " - + SWIFT_PARTITION_SIZE - + ": " + partSizeKB); - } - - bufferSizeKB = conf.getInt(SWIFT_REQUEST_SIZE, - DEFAULT_SWIFT_REQUEST_SIZE); - if (bufferSizeKB <=0) { - throw new SwiftConfigurationException("Invalid buffer size set in " - + SWIFT_REQUEST_SIZE - + ": " + bufferSizeKB); - } - } catch (NumberFormatException e) { - //convert exceptions raised parsing ints and longs into - // SwiftConfigurationException instances - throw new SwiftConfigurationException(e.toString(), e); - } - //everything you need for diagnostics. The password is omitted. - serviceDescription = String.format( - "Service={%s} container={%s} uri={%s}" - + " tenant={%s} user={%s} region={%s}" - + " publicURL={%b}" - + " location aware={%b}" - + " partition size={%d KB}, buffer size={%d KB}" - + " block size={%d KB}" - + " connect timeout={%d}, retry count={%d}" - + " socket timeout={%d}" - + " throttle delay={%d}" - , - serviceProvider, - container, - stringAuthUri, - tenant, - username, - region != null ? region : "(none)", - usePublicURL, - locationAware, - partSizeKB, - bufferSizeKB, - blocksizeKB, - connectTimeout, - retryCount, - socketTimeout, - throttleDelay - ); - if (LOG.isDebugEnabled()) { - LOG.debug(serviceDescription); - } - try { - this.authUri = new URI(stringAuthUri); - } catch (URISyntaxException e) { - throw new SwiftConfigurationException("The " + SWIFT_AUTH_PROPERTY - + " property was incorrect: " - + stringAuthUri, e); - } - } - - /** - * Get a mandatory configuration option. - * - * @param props property set - * @param key key - * @return value of the configuration - * @throws SwiftConfigurationException if there was no match for the key - */ - private static String getOption(Properties props, String key) throws - SwiftConfigurationException { - String val = props.getProperty(key); - if (val == null) { - throw new SwiftConfigurationException("Undefined property: " + key); - } - return val; - } - - /** - * Make an HTTP GET request to Swift to get a range of data in the object. - * - * @param path path to object - * @param offset offset from file beginning - * @param length file length - * @return The input stream -which must be closed afterwards. - * @throws IOException Problems - * @throws SwiftException swift specific error - * @throws FileNotFoundException path is not there - */ - public HttpBodyContent getData(SwiftObjectPath path, - long offset, - long length) throws IOException { - if (offset < 0) { - throw new SwiftException("Invalid offset: " + offset - + " in getDataAsInputStream( path=" + path - + ", offset=" + offset - + ", length =" + length + ")"); - } - if (length <= 0) { - throw new SwiftException("Invalid length: " + length - + " in getDataAsInputStream( path="+ path - + ", offset=" + offset - + ", length ="+ length + ")"); - } - - final String range = String.format(SWIFT_RANGE_HEADER_FORMAT_PATTERN, - offset, - offset + length - 1); - if (LOG.isDebugEnabled()) { - LOG.debug("getData:" + range); - } - - return getData(path, - new BasicHeader(HEADER_RANGE, range), - SwiftRestClient.NEWEST); - } - - /** - * Returns object length. 
- * - * @param uri file URI - * @return object length - * @throws SwiftException on swift-related issues - * @throws IOException on network/IO problems - */ - public long getContentLength(URI uri) throws IOException { - preRemoteCommand("getContentLength"); - return perform("getContentLength", uri, new HeadRequestProcessor() { - @Override - public Long extractResult(HttpHead req, HttpResponse resp) - throws IOException { - return HttpResponseUtils.getContentLength(resp); - } - - @Override - protected void setup(HttpHead req) throws IOException { - super.setup(req); - req.addHeader(NEWEST); - } - }); - } - - /** - * Get the length of the remote object. - * @param path object to probe - * @return the content length - * @throws IOException on any failure - */ - public long getContentLength(SwiftObjectPath path) throws IOException { - return getContentLength(pathToURI(path)); - } - - /** - * Get the path contents as an input stream. - * Warning: this input stream must be closed to avoid - * keeping Http connections open. - * - * @param path path to file - * @param requestHeaders http headers - * @return byte[] file data or null if the object was not found - * @throws IOException on IO Faults - * @throws FileNotFoundException if there is nothing at the path - */ - public HttpBodyContent getData(SwiftObjectPath path, - final Header... requestHeaders) - throws IOException { - preRemoteCommand("getData"); - return doGet(pathToURI(path), - requestHeaders); - } - - /** - * Returns object location as byte[]. - * - * @param path path to file - * @param requestHeaders http headers - * @return byte[] file data or null if the object was not found - * @throws IOException on IO Faults - */ - public byte[] getObjectLocation(SwiftObjectPath path, - final Header... requestHeaders) throws IOException { - if (!isLocationAware()) { - //if the filesystem is not location aware, do not ask for this information - return null; - } - preRemoteCommand("getObjectLocation"); - try { - return perform("getObjectLocation", pathToObjectLocation(path), - new GetRequestProcessor() { - @Override - protected int[] getAllowedStatusCodes() { - return new int[]{ - SC_OK, - SC_FORBIDDEN, - SC_NO_CONTENT - }; - } - - @Override - public byte[] extractResult(HttpGet req, HttpResponse resp) throws - IOException { - - //TODO: remove SC_NO_CONTENT if it depends on Swift versions - int statusCode = resp.getStatusLine().getStatusCode(); - if (statusCode == SC_NOT_FOUND - || statusCode == SC_FORBIDDEN - || statusCode == SC_NO_CONTENT - || resp.getEntity().getContent() == null) { - return null; - } - final InputStream responseBodyAsStream = - resp.getEntity().getContent(); - final byte[] locationData = new byte[1024]; - - return responseBodyAsStream.read(locationData) > 0 ? - locationData : null; - } - - @Override - protected void setup(HttpGet req) - throws SwiftInternalStateException { - setHeaders(req, requestHeaders); - } - }); - } catch (IOException e) { - LOG.warn("Failed to get the location of " + path + ": " + e, e); - return null; - } - } - - /** - * Create the URI needed to query the location of an object. 
- * @param path object path to retrieve information about - * @return the URI for the location operation - * @throws SwiftException if the URI could not be constructed - */ - private URI pathToObjectLocation(SwiftObjectPath path) throws SwiftException { - URI uri; - String dataLocationURI = objectLocationURI.toString(); - try { - if (path.toString().startsWith("/")) { - dataLocationURI = dataLocationURI.concat(path.toUriPath()); - } else { - dataLocationURI = dataLocationURI.concat("/").concat(path.toUriPath()); - } - - uri = new URI(dataLocationURI); - } catch (URISyntaxException e) { - throw new SwiftException(e); - } - return uri; - } - - /** - * Find objects under a prefix. - * - * @param path path prefix - * @param requestHeaders optional request headers - * @return byte[] file data or null if the object was not found - * @throws IOException on IO Faults - * @throws FileNotFoundException if nothing is at the end of the URI -that is, - * the directory is empty - */ - public byte[] findObjectsByPrefix(SwiftObjectPath path, - final Header... requestHeaders) throws IOException { - preRemoteCommand("findObjectsByPrefix"); - URI uri; - String dataLocationURI = getEndpointURI().toString(); - try { - String object = path.getObject(); - if (object.startsWith("/")) { - object = object.substring(1); - } - object = encodeUrl(object); - dataLocationURI = dataLocationURI.concat("/") - .concat(path.getContainer()) - .concat("/?prefix=") - .concat(object) - ; - uri = new URI(dataLocationURI); - } catch (URISyntaxException e) { - throw new SwiftException("Bad URI: " + dataLocationURI, e); - } - - return perform("findObjectsByPrefix", uri, - new GetRequestProcessor() { - @Override - public byte[] extractResult(HttpGet req, HttpResponse resp) - throws IOException { - if (resp.getStatusLine().getStatusCode() == SC_NOT_FOUND) { - //no result - throw new FileNotFoundException("Not found " + req.getURI()); - } - return HttpResponseUtils.getResponseBody(resp); - } - - @Override - protected int[] getAllowedStatusCodes() { - return new int[]{ - SC_OK, - SC_NOT_FOUND - }; - } - - @Override - protected void setup(HttpGet req) throws SwiftInternalStateException { - setHeaders(req, requestHeaders); - } - }); - } - - /** - * Find objects in a directory. - * - * @param path path prefix - * @param requestHeaders optional request headers - * @return byte[] file data or null if the object was not found - * @throws IOException on IO Faults - * @throws FileNotFoundException if nothing is at the end of the URI -that is, - * the directory is empty - */ - public byte[] listDeepObjectsInDirectory(SwiftObjectPath path, - boolean listDeep, - final Header... requestHeaders) - throws IOException { - preRemoteCommand("listDeepObjectsInDirectory"); - - String endpoint = getEndpointURI().toString(); - StringBuilder dataLocationURI = new StringBuilder(); - dataLocationURI.append(endpoint); - String object = path.getObject(); - if (object.startsWith("/")) { - object = object.substring(1); - } - if (!object.endsWith("/")) { - object = object.concat("/"); - } - - if (object.equals("/")) { - object = ""; - } - - dataLocationURI = dataLocationURI.append("/") - .append(path.getContainer()) - .append("/?prefix=") - .append(object) - .append("&format=json"); - - //in listing deep set param to false - if (listDeep == false) { - dataLocationURI.append("&delimiter=/"); - } - - return findObjects(dataLocationURI.toString(), requestHeaders); - } - - /** - * Find objects in a location. 
- * @param location URI - * @param requestHeaders optional request headers - * @return the body of te response - * @throws IOException IO problems - */ - private byte[] findObjects(String location, final Header[] requestHeaders) - throws IOException { - URI uri; - preRemoteCommand("findObjects"); - try { - uri = new URI(location); - } catch (URISyntaxException e) { - throw new SwiftException("Bad URI: " + location, e); - } - - return perform("findObjects", uri, - new GetRequestProcessor() { - @Override - public byte[] extractResult(HttpGet req, HttpResponse resp) - throws IOException { - if (resp.getStatusLine().getStatusCode() == SC_NOT_FOUND) { - //no result - throw new FileNotFoundException("Not found " + req.getURI()); - } - return HttpResponseUtils.getResponseBody(resp); - } - - @Override - protected int[] getAllowedStatusCodes() { - return new int[]{ - SC_OK, - SC_NOT_FOUND - }; - } - - @Override - protected void setup(HttpGet req) throws SwiftInternalStateException { - setHeaders(req, requestHeaders); - } - }); - } - - /** - * Copy an object. This is done by sending a COPY method to the filesystem - * which is required to handle this WebDAV-level extension to the - * base HTTP operations. - * - * @param src source path - * @param dst destination path - * @param headers any headers - * @return true if the status code was considered successful - * @throws IOException on IO Faults - */ - public boolean copyObject(SwiftObjectPath src, final SwiftObjectPath dst, - final Header... headers) throws IOException { - - preRemoteCommand("copyObject"); - - return perform("copy", pathToURI(src), - new CopyRequestProcessor() { - @Override - public Boolean extractResult(CopyRequest req, HttpResponse resp) - throws IOException { - return resp.getStatusLine().getStatusCode() != SC_NOT_FOUND; - } - - @Override - protected void setup(CopyRequest req) throws - SwiftInternalStateException { - setHeaders(req, headers); - req.addHeader(HEADER_DESTINATION, dst.toUriPath()); - } - }); - } - - /** - * Uploads file as Input Stream to Swift. - * The data stream will be closed after the request. - * - * @param path path to Swift - * @param data object data - * @param length length of data - * @param requestHeaders http headers - * @throws IOException on IO Faults - */ - public void upload(SwiftObjectPath path, - final InputStream data, - final long length, - final Header... requestHeaders) - throws IOException { - preRemoteCommand("upload"); - - try { - perform("upload", pathToURI(path), new PutRequestProcessor() { - @Override - public byte[] extractResult(HttpPut req, HttpResponse resp) - throws IOException { - return HttpResponseUtils.getResponseBody(resp); - } - - @Override - protected void setup(HttpPut req) throws - SwiftInternalStateException { - req.setEntity(new InputStreamEntity(data, length)); - setHeaders(req, requestHeaders); - } - }); - } finally { - data.close(); - } - - } - - - /** - * Deletes object from swift. - * The result is true if this operation did the deletion. - * - * @param path path to file - * @param requestHeaders http headers - * @throws IOException on IO Faults - */ - public boolean delete(SwiftObjectPath path, final Header... 
requestHeaders) throws IOException { - preRemoteCommand("delete"); - - return perform("", pathToURI(path), new DeleteRequestProcessor() { - @Override - public Boolean extractResult(HttpDelete req, HttpResponse resp) - throws IOException { - return resp.getStatusLine().getStatusCode() == SC_NO_CONTENT; - } - - @Override - protected void setup(HttpDelete req) throws - SwiftInternalStateException { - setHeaders(req, requestHeaders); - } - }); - } - - /** - * Issue a head request. - * @param reason reason -used in logs - * @param path path to query - * @param requestHeaders request header - * @return the response headers. This may be an empty list - * @throws IOException IO problems - * @throws FileNotFoundException if there is nothing at the end - */ - public Header[] headRequest(String reason, - SwiftObjectPath path, - final Header... requestHeaders) - throws IOException { - - preRemoteCommand("headRequest: "+ reason); - return perform(reason, pathToURI(path), - new HeadRequestProcessor() { - @Override - public Header[] extractResult(HttpHead req, HttpResponse resp) - throws IOException { - if (resp.getStatusLine().getStatusCode() == SC_NOT_FOUND) { - throw new FileNotFoundException("Not Found " + req.getURI()); - } - return resp.getAllHeaders(); - } - - @Override - protected void setup(HttpHead req) throws - SwiftInternalStateException { - setHeaders(req, requestHeaders); - } - }); - } - - /** - * Issue a put request. - * @param path path - * @param requestHeaders optional headers - * @return the HTTP response - * @throws IOException any problem - */ - public int putRequest(SwiftObjectPath path, final Header... requestHeaders) - throws IOException { - - preRemoteCommand("putRequest"); - return perform(pathToURI(path), new PutRequestProcessor() { - - @Override - public Integer extractResult(HttpPut req, HttpResponse resp) - throws IOException { - return resp.getStatusLine().getStatusCode(); - } - - @Override - protected void setup(HttpPut req) throws - SwiftInternalStateException { - setHeaders(req, requestHeaders); - } - }); - } - - /** - * Authenticate to Openstack Keystone. - * As well as returning the access token, the member fields {@link #token}, - * {@link #endpointURI} and {@link #objectLocationURI} are set up for re-use. - *

- * This method is re-entrant -if more than one thread attempts to authenticate - * neither will block -but the field values with have those of the last caller. - * - * @return authenticated access token - */ - public AccessToken authenticate() throws IOException { - final AuthenticationRequest authenticationRequest; - if (useKeystoneAuthentication) { - authenticationRequest = keystoneAuthRequest; - } else { - authenticationRequest = authRequest; - } - - LOG.debug("started authentication"); - return perform("authentication", - authUri, - new AuthenticationPost(authenticationRequest)); - } - - private final class AuthenticationPost extends - AuthRequestProcessor { - final AuthenticationRequest authenticationRequest; - - private AuthenticationPost(AuthenticationRequest authenticationRequest) { - this.authenticationRequest = authenticationRequest; - } - - @Override - protected void setup(AuthPostRequest req) throws IOException { - req.setEntity(getAuthenticationRequst(authenticationRequest)); - } - - /** - * specification says any of the 2xxs are OK, so list all - * the standard ones - * @return a set of 2XX status codes. - */ - @Override - protected int[] getAllowedStatusCodes() { - return new int[]{ - SC_OK, - SC_BAD_REQUEST, - SC_CREATED, - SC_ACCEPTED, - SC_NON_AUTHORITATIVE_INFORMATION, - SC_NO_CONTENT, - SC_RESET_CONTENT, - SC_PARTIAL_CONTENT, - SC_MULTI_STATUS, - SC_UNAUTHORIZED //if request unauthorized, try another method - }; - } - - @Override - public AccessToken extractResult(AuthPostRequest req, HttpResponse resp) - throws IOException { - //initial check for failure codes leading to authentication failures - if (resp.getStatusLine().getStatusCode() == SC_BAD_REQUEST) { - throw new SwiftAuthenticationFailedException( - authenticationRequest.toString(), "POST", authUri, resp); - } - - final AuthenticationResponse access = - JSONUtil.toObject(HttpResponseUtils.getResponseBodyAsString(resp), - AuthenticationWrapper.class).getAccess(); - final List serviceCatalog = access.getServiceCatalog(); - //locate the specific service catalog that defines Swift; variations - //in the name of this add complexity to the search - StringBuilder catList = new StringBuilder(); - StringBuilder regionList = new StringBuilder(); - - //these fields are all set together at the end of the operation - URI endpointURI = null; - URI objectLocation; - Endpoint swiftEndpoint = null; - AccessToken accessToken; - - for (Catalog catalog : serviceCatalog) { - String name = catalog.getName(); - String type = catalog.getType(); - String descr = String.format("[%s: %s]; ", name, type); - catList.append(descr); - if (LOG.isDebugEnabled()) { - LOG.debug("Catalog entry " + descr); - } - if (name.equals(SERVICE_CATALOG_SWIFT) - || name.equals(SERVICE_CATALOG_CLOUD_FILES) - || type.equals(SERVICE_CATALOG_OBJECT_STORE)) { - //swift is found - if (LOG.isDebugEnabled()) { - LOG.debug("Found swift catalog as " + name + " => " + type); - } - //now go through the endpoints - for (Endpoint endpoint : catalog.getEndpoints()) { - String endpointRegion = endpoint.getRegion(); - URI publicURL = endpoint.getPublicURL(); - URI internalURL = endpoint.getInternalURL(); - descr = String.format("[%s => %s / %s]; ", - endpointRegion, - publicURL, - internalURL); - regionList.append(descr); - if (LOG.isDebugEnabled()) { - LOG.debug("Endpoint " + descr); - } - if (region == null || endpointRegion.equals(region)) { - endpointURI = usePublicURL ? 
publicURL : internalURL; - swiftEndpoint = endpoint; - break; - } - } - } - } - if (endpointURI == null) { - String message = "Could not find swift service from auth URL " - + authUri - + " and region '" + region + "'. " - + "Categories: " + catList - + ((regionList.length() > 0) ? - ("regions: " + regionList) - : "No regions"); - throw new SwiftInvalidResponseException(message, - SC_OK, - "authenticating", - authUri); - - } - - - accessToken = access.getToken(); - String path = SWIFT_OBJECT_AUTH_ENDPOINT - + swiftEndpoint.getTenantId(); - String host = endpointURI.getHost(); - try { - objectLocation = new URI(endpointURI.getScheme(), - null, - host, - endpointURI.getPort(), - path, - null, - null); - } catch (URISyntaxException e) { - throw new SwiftException("object endpoint URI is incorrect: " - + endpointURI - + " + " + path, - e); - } - setAuthDetails(endpointURI, objectLocation, accessToken); - - if (LOG.isDebugEnabled()) { - LOG.debug("authenticated against " + endpointURI); - } - createDefaultContainer(); - return accessToken; - } - } - - private StringEntity getAuthenticationRequst( - AuthenticationRequest authenticationRequest) throws IOException { - final String data = JSONUtil.toJSON(new AuthenticationRequestWrapper( - authenticationRequest)); - if (LOG.isDebugEnabled()) { - LOG.debug("Authenticating with " + authenticationRequest); - } - return new StringEntity(data, ContentType.create("application/json", - "UTF-8")); - } - - /** - * create default container if it doesn't exist for Hadoop Swift integration. - * non-reentrant, as this should only be needed once. - * - * @throws IOException IO problems. - */ - private synchronized void createDefaultContainer() throws IOException { - createContainer(container); - } - - /** - * Create a container -if it already exists, do nothing. - * - * @param containerName the container name - * @throws IOException IO problems - * @throws SwiftBadRequestException invalid container name - * @throws SwiftInvalidResponseException error from the server - */ - public void createContainer(String containerName) throws IOException { - SwiftObjectPath objectPath = new SwiftObjectPath(containerName, ""); - try { - //see if the data is there - headRequest("createContainer", objectPath, NEWEST); - } catch (FileNotFoundException ex) { - int status = 0; - try { - status = putRequest(objectPath); - } catch (FileNotFoundException e) { - //triggered by a very bad container name. - //re-insert the 404 result into the status - status = SC_NOT_FOUND; - } - if (status == SC_BAD_REQUEST) { - throw new SwiftBadRequestException( - "Bad request -authentication failure or bad container name?", - status, - "PUT", - null); - } - if (!isStatusCodeExpected(status, - SC_OK, - SC_CREATED, - SC_ACCEPTED, - SC_NO_CONTENT)) { - throw new SwiftInvalidResponseException("Couldn't create container " - + containerName + - " for storing data in Swift." + - " Try to create container " + - containerName + " manually ", - status, - "PUT", - null); - } else { - throw ex; - } - } - } - - /** - * Trigger an initial auth operation if some of the needed - * fields are missing. - * - * @throws IOException on problems - */ - private void authIfNeeded() throws IOException { - if (getEndpointURI() == null) { - authenticate(); - } - } - - /** - * Pre-execution actions to be performed by methods. Currently this - *

 - * <ul>
 - *   <li>Logs the operation at TRACE</li>
 - *   <li>Authenticates the client -if needed</li>
 - * </ul>
- * @throws IOException - */ - private void preRemoteCommand(String operation) throws IOException { - if (LOG.isTraceEnabled()) { - LOG.trace("Executing " + operation); - } - authIfNeeded(); - } - - - /** - * Performs the HTTP request, validates the response code and returns - * the received data. HTTP Status codes are converted into exceptions. - * - * @param uri URI to source - * @param processor HttpMethodProcessor - * @param method - * @param result type - * @return result of HTTP request - * @throws IOException IO problems - * @throws SwiftBadRequestException the status code indicated "Bad request" - * @throws SwiftInvalidResponseException the status code is out of range - * for the action (excluding 404 responses) - * @throws SwiftInternalStateException the internal state of this client - * is invalid - * @throws FileNotFoundException a 404 response was returned - */ - private R perform(URI uri, - HttpRequestProcessor processor) - throws IOException, - SwiftBadRequestException, - SwiftInternalStateException, - SwiftInvalidResponseException, - FileNotFoundException { - return perform("",uri, processor); - } - - /** - * Performs the HTTP request, validates the response code and returns - * the received data. HTTP Status codes are converted into exceptions. - * @param reason why is this operation taking place. Used for statistics - * @param uri URI to source - * @param processor HttpMethodProcessor - * @param method - * @param result type - * @return result of HTTP request - * @throws IOException IO problems - * @throws SwiftBadRequestException the status code indicated "Bad request" - * @throws SwiftInvalidResponseException the status code is out of range - * for the action (excluding 404 responses) - * @throws SwiftInternalStateException the internal state of this client - * is invalid - * @throws FileNotFoundException a 404 response was returned - */ - private R perform(String reason, URI uri, - HttpRequestProcessor processor) - throws IOException, SwiftBadRequestException, SwiftInternalStateException, - SwiftInvalidResponseException, FileNotFoundException { - checkNotNull(uri); - checkNotNull(processor); - - final M req = processor.createRequest(uri.toString()); - req.addHeader(HEADER_USER_AGENT, SWIFT_USER_AGENT); - //retry policy - HttpClientBuilder clientBuilder = HttpClientBuilder.create(); - clientBuilder.setRetryHandler( - new DefaultHttpRequestRetryHandler(retryCount, false)); - RequestConfig.Builder requestConfigBuilder = - RequestConfig.custom().setConnectTimeout(connectTimeout); - if (proxyHost != null) { - requestConfigBuilder.setProxy(new HttpHost(proxyHost, proxyPort)); - } - clientBuilder.setDefaultRequestConfig(requestConfigBuilder.build()); - clientBuilder.setDefaultSocketConfig( - SocketConfig.custom().setSoTimeout(socketTimeout).build()); - Duration duration = new Duration(); - boolean success = false; - HttpResponse resp; - try { - // client should not be closed in this method because - // the connection can be used later - CloseableHttpClient client = clientBuilder.build(); - int statusCode = 0; - try { - resp = exec(client, req); - statusCode = checkNotNull(resp.getStatusLine().getStatusCode()); - } catch (IOException e) { - //rethrow with extra diagnostics and wiki links - throw ExceptionDiags.wrapException(uri.toString(), req.getMethod(), e); - } - - //look at the response and see if it was valid or not. - //Valid is more than a simple 200; even 404 "not found" is considered - //valid -which it is for many methods. 
- - //validate the allowed status code for this operation - int[] allowedStatusCodes = processor.getAllowedStatusCodes(); - boolean validResponse = isStatusCodeExpected(statusCode, - allowedStatusCodes); - - if (!validResponse) { - IOException ioe = buildException(uri, req, resp, statusCode); - throw ioe; - } - - R r = processor.extractResult(req, resp); - success = true; - return r; - } catch (IOException e) { - //release the connection -always - req.releaseConnection(); - throw e; - } finally { - duration.finished(); - durationStats.add(req.getMethod() + " " + reason, duration, success); - } - } - - /** - * Build an exception from a failed operation. This can include generating - * specific exceptions (e.g. FileNotFound), as well as the default - * {@link SwiftInvalidResponseException}. - * - * @param uri URI for operation - * @param resp operation that failed - * @param statusCode status code - * @param method type - * @return an exception to throw - */ - private IOException buildException( - URI uri, M req, HttpResponse resp, int statusCode) { - IOException fault; - - //log the failure @debug level - String errorMessage = String.format("Method %s on %s failed, status code: %d," + - " status line: %s", - req.getMethod(), - uri, - statusCode, - resp.getStatusLine() - ); - if (LOG.isDebugEnabled()) { - LOG.debug(errorMessage); - } - //send the command - switch (statusCode) { - case SC_NOT_FOUND: - fault = new FileNotFoundException("Operation " + req.getMethod() - + " on " + uri); - break; - - case SC_BAD_REQUEST: - //bad HTTP request - fault = new SwiftBadRequestException("Bad request against " + uri, - req.getMethod(), uri, resp); - break; - - case SC_REQUESTED_RANGE_NOT_SATISFIABLE: - //out of range - StringBuilder errorText = new StringBuilder( - resp.getStatusLine().getReasonPhrase()); - //get the requested length - Header requestContentLen = req.getFirstHeader(HEADER_CONTENT_LENGTH); - if (requestContentLen != null) { - errorText.append(" requested ").append(requestContentLen.getValue()); - } - //and the result - Header availableContentRange = resp.getFirstHeader(HEADER_CONTENT_RANGE); - - if (availableContentRange != null) { - errorText.append(" available ") - .append(availableContentRange.getValue()); - } - fault = new EOFException(errorText.toString()); - break; - - case SC_UNAUTHORIZED: - //auth failure; should only happen on the second attempt - fault = new SwiftAuthenticationFailedException( - "Operation not authorized- current access token =" + getToken(), - req.getMethod(), - uri, - resp); - break; - - case SwiftProtocolConstants.SC_TOO_MANY_REQUESTS_429: - case SwiftProtocolConstants.SC_THROTTLED_498: - //response code that may mean the client is being throttled - fault = new SwiftThrottledRequestException( - "Client is being throttled: too many requests", - req.getMethod(), - uri, - resp); - break; - - default: - //return a generic invalid HTTP response - fault = new SwiftInvalidResponseException( - errorMessage, - req.getMethod(), - uri, - resp); - } - - return fault; - } - - /** - * Exec a GET request and return the input stream of the response. - * - * @param uri URI to GET - * @param requestHeaders request headers - * @return the input stream. This must be closed to avoid log errors - * @throws IOException - */ - private HttpBodyContent doGet(final URI uri, final Header... 
requestHeaders) throws IOException { - return perform("", uri, new GetRequestProcessor() { - @Override - public HttpBodyContent extractResult(HttpGet req, HttpResponse resp) - throws IOException { - return new HttpBodyContent( - new HttpInputStreamWithRelease(uri, req, resp), - HttpResponseUtils.getContentLength(resp)); - } - - @Override - protected void setup(HttpGet req) throws - SwiftInternalStateException { - setHeaders(req, requestHeaders); - } - }); - } - - /** - * Create an instance against a specific FS URI. - * - * @param filesystemURI filesystem to bond to - * @param config source of configuration data - * @return REST client instance - * @throws IOException on instantiation problems - */ - public static SwiftRestClient getInstance(URI filesystemURI, - Configuration config) throws IOException { - return new SwiftRestClient(filesystemURI, config); - } - - - /** - * Converts Swift path to URI to make request. - * This is public for unit testing - * - * @param path path to object - * @param endpointURI domain url e.g. http://domain.com - * @return valid URI for object - * @throws SwiftException - */ - public static URI pathToURI(SwiftObjectPath path, - URI endpointURI) throws SwiftException { - checkNotNull(endpointURI, "Null Endpoint -client is not authenticated"); - - String dataLocationURI = endpointURI.toString(); - try { - - dataLocationURI = SwiftUtils.joinPaths(dataLocationURI, encodeUrl(path.toUriPath())); - return new URI(dataLocationURI); - } catch (URISyntaxException e) { - throw new SwiftException("Failed to create URI from " + dataLocationURI, e); - } - } - - /** - * Encode the URL. This extends {@link URLEncoder#encode(String, String)} - * with a replacement of + with %20. - * @param url URL string - * @return an encoded string - * @throws SwiftException if the URL cannot be encoded - */ - private static String encodeUrl(String url) throws SwiftException { - if (url.matches(".*\\s+.*")) { - try { - url = URLEncoder.encode(url, "UTF-8"); - url = url.replace("+", "%20"); - } catch (UnsupportedEncodingException e) { - throw new SwiftException("failed to encode URI", e); - } - } - - return url; - } - - /** - * Convert a swift path to a URI relative to the current endpoint. - * - * @param path path - * @return an path off the current endpoint URI. - * @throws SwiftException - */ - private URI pathToURI(SwiftObjectPath path) throws SwiftException { - return pathToURI(path, getEndpointURI()); - } - - /** - * Add the headers to the method, and the auth token (which must be set). - * @param method method to update - * @param requestHeaders the list of headers - * @throws SwiftInternalStateException not yet authenticated - */ - private void setHeaders(HttpUriRequest method, Header[] requestHeaders) - throws SwiftInternalStateException { - for (Header header : requestHeaders) { - method.addHeader(header); - } - setAuthToken(method, getToken()); - } - - - /** - * Set the auth key header of the method to the token ID supplied. - * - * @param method method - * @param accessToken access token - * @throws SwiftInternalStateException if the client is not yet authenticated - */ - private void setAuthToken(HttpUriRequest method, AccessToken accessToken) - throws SwiftInternalStateException { - checkNotNull(accessToken,"Not authenticated"); - method.addHeader(HEADER_AUTH_KEY, accessToken.getId()); - } - - /** - * Execute a method in a new HttpClient instance. If the auth failed, - * authenticate then retry the method. 
- * - * @param req request to exec - * @param client client to use - * @param Request type - * @return the status code - * @throws IOException on any failure - */ - private HttpResponse exec(HttpClient client, M req) - throws IOException { - HttpResponse resp = execWithDebugOutput(req, client); - int statusCode = resp.getStatusLine().getStatusCode(); - if ((statusCode == HttpStatus.SC_UNAUTHORIZED - || statusCode == HttpStatus.SC_BAD_REQUEST) - && req instanceof AuthPostRequest - && !useKeystoneAuthentication) { - if (LOG.isDebugEnabled()) { - LOG.debug("Operation failed with status " + statusCode - + " attempting keystone auth"); - } - //if rackspace key authentication failed - try custom Keystone authentication - useKeystoneAuthentication = true; - final AuthPostRequest authentication = (AuthPostRequest) req; - //replace rackspace auth with keystone one - authentication.setEntity(getAuthenticationRequst(keystoneAuthRequest)); - resp = execWithDebugOutput(req, client); - } - - if (statusCode == HttpStatus.SC_UNAUTHORIZED ) { - //unauthed -or the auth uri rejected it. - - if (req instanceof AuthPostRequest) { - //unauth response from the AUTH URI itself. - throw new SwiftAuthenticationFailedException(authRequest.toString(), - "auth", - authUri, - resp); - } - //any other URL: try again - if (LOG.isDebugEnabled()) { - LOG.debug("Reauthenticating"); - } - //re-auth, this may recurse into the same dir - authenticate(); - if (LOG.isDebugEnabled()) { - LOG.debug("Retrying original request"); - } - resp = execWithDebugOutput(req, client); - } - return resp; - } - - /** - * Execute the request with the request and response logged at debug level. - * @param req request to execute - * @param client client to use - * @param method type - * @return the status code - * @throws IOException any failure reported by the HTTP client. - */ - private HttpResponse execWithDebugOutput(M req, - HttpClient client) throws IOException { - if (LOG.isDebugEnabled()) { - StringBuilder builder = new StringBuilder( - req.getMethod() + " " + req.getURI() + "\n"); - for (Header header : req.getAllHeaders()) { - builder.append(header.toString()); - } - LOG.debug(builder.toString()); - } - HttpResponse resp = client.execute(req); - if (LOG.isDebugEnabled()) { - LOG.debug("Status code = " + resp.getStatusLine().getStatusCode()); - } - return resp; - } - - /** - * Ensures that an object reference passed as a parameter to the calling - * method is not null. - * - * @param reference an object reference - * @return the non-null reference that was validated - * @throws NullPointerException if {@code reference} is null - */ - private static T checkNotNull(T reference) throws - SwiftInternalStateException { - return checkNotNull(reference, "Null Reference"); - } - - private static T checkNotNull(T reference, String message) throws - SwiftInternalStateException { - if (reference == null) { - throw new SwiftInternalStateException(message); - } - return reference; - } - - /** - * Check for a status code being expected -takes a list of expected values - * - * @param status received status - * @param expected expected value - * @return true if status is an element of [expected] - */ - private boolean isStatusCodeExpected(int status, int... 
expected) { - for (int code : expected) { - if (status == code) { - return true; - } - } - return false; - } - - - @Override - public String toString() { - return "Swift client: " + serviceDescription; - } - - /** - * Get the region which this client is bound to - * @return the region - */ - public String getRegion() { - return region; - } - - /** - * Get the tenant to which this client is bound - * @return the tenant - */ - public String getTenant() { - return tenant; - } - - /** - * Get the username this client identifies itself as - * @return the username - */ - public String getUsername() { - return username; - } - - /** - * Get the container to which this client is bound - * @return the container - */ - public String getContainer() { - return container; - } - - /** - * Is this client bound to a location aware Swift blobstore - * -that is, can you query for the location of partitions? - * @return true iff the location of multipart file uploads - * can be determined. - */ - public boolean isLocationAware() { - return locationAware; - } - - /** - * Get the blocksize of this filesystem - * @return a blocksize > 0 - */ - public long getBlocksizeKB() { - return blocksizeKB; - } - - /** - * Get the partition size in KB. - * @return the partition size - */ - public int getPartSizeKB() { - return partSizeKB; - } - - /** - * Get the buffer size in KB. - * @return the buffer size wanted for reads - */ - public int getBufferSizeKB() { - return bufferSizeKB; - } - - public int getProxyPort() { - return proxyPort; - } - - public String getProxyHost() { - return proxyHost; - } - - public int getRetryCount() { - return retryCount; - } - - public int getConnectTimeout() { - return connectTimeout; - } - - public boolean isUsePublicURL() { - return usePublicURL; - } - - public int getThrottleDelay() { - return throttleDelay; - } - - /** - * Get the current operation statistics. - * @return a snapshot of the statistics - */ - - public List getOperationStatistics() { - return durationStats.getDurationStatistics(); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/package.html b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/package.html deleted file mode 100644 index ad900f90d06..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/package.html +++ /dev/null @@ -1,81 +0,0 @@ - - - - - - Swift Filesystem Client for Apache Hadoop - - - -

- Swift Filesystem Client for Apache Hadoop
-
- Introduction
-
- This package provides support in Apache Hadoop for the OpenStack Swift
- Key-Value store, allowing client applications -including MR Jobs- to
- read and write data in Swift.
-
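- As a rough illustration only (this sketch is not taken from the removed sources),
- a client application might read an object through the standard Hadoop FileSystem
- API; the container name "data", service name "myprovider" and object path below
- are hypothetical placeholders:
-
-   import java.io.IOException;
-   import java.net.URI;
-
-   import org.apache.hadoop.conf.Configuration;
-   import org.apache.hadoop.fs.FSDataInputStream;
-   import org.apache.hadoop.fs.FileSystem;
-   import org.apache.hadoop.fs.Path;
-   import org.apache.hadoop.io.IOUtils;
-
-   public class SwiftReadExample {
-     public static void main(String[] args) throws IOException {
-       Configuration conf = new Configuration();
-       // The swift://container.service/ authority selects the Swift filesystem
-       // implementation and the settings configured for the "myprovider" service.
-       FileSystem fs = FileSystem.get(URI.create("swift://data.myprovider/"), conf);
-       try (FSDataInputStream in = fs.open(new Path("/logs/part-00000"))) {
-         // Copy the object's bytes to stdout; 4096 is the copy buffer size.
-         IOUtils.copyBytes(in, System.out, 4096, false);
-       }
-     }
-   }
-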
- Design Goals
-
-   1. Give clients access to SwiftFS files, similar to S3n:
-   2. maybe: support a Swift Block store -- at least until Swift's
-      support for >5GB files has stabilized.
-   3. Support for data-locality if the Swift FS provides file location information
-   4. Support access to multiple Swift filesystems in the same client/task.
-   5. Authenticate using the Keystone APIs.
-   6. Avoid dependency on unmaintained libraries.
-

- Supporting multiple Swift Filesystems
-
- -The goal of supporting multiple swift filesystems simultaneously changes how -clusters are named and authenticated. In Hadoop's S3 and S3N filesystems, the "bucket" into -which objects are stored is directly named in the URL, such as -s3n://bucket/object1. The Hadoop configuration contains a -single set of login credentials for S3 (username and key), which are used to -authenticate the HTTP operations. - -For swift, we need to know not only which "container" name, but which credentials -to use to authenticate with it -and which URL to use for authentication. - -This has led to a different design pattern from S3, as instead of simple bucket names, -the hostname of an S3 container is two-level, the name of the service provider -being the second path: swift://bucket.service/ - -The service portion of this domain name is used as a reference into -the client settings -and so identify the service provider of that container. - - -
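- As a minimal sketch of that naming convention (the URI below is a made-up
- example, not one from the removed sources), the two-level hostname splits into
- a container and a service name, and the service name is then the key used to
- look up that provider's client settings:
-
-   import java.net.URI;
-
-   public class SwiftUriNaming {
-     public static void main(String[] args) {
-       // Hypothetical swift URI: "data" is the container, "myprovider" the service.
-       URI uri = URI.create("swift://data.myprovider/path/to/object");
-       String host = uri.getHost();                 // "data.myprovider"
-       int dot = host.indexOf('.');
-       String container = host.substring(0, dot);   // "data"
-       String service = host.substring(dot + 1);    // "myprovider"
-       System.out.println("container=" + container + ", service=" + service);
-       // The service name selects the per-provider settings, e.g. configuration
-       // keys of the form fs.swift.service.myprovider.* in the client configuration.
-     }
-   }
-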

- Testing
-
- - - diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/StrictBufferedFSInputStream.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/StrictBufferedFSInputStream.java deleted file mode 100644 index 794219f31a4..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/StrictBufferedFSInputStream.java +++ /dev/null @@ -1,49 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.snative; - -import org.apache.hadoop.fs.BufferedFSInputStream; -import org.apache.hadoop.fs.FSExceptionMessages; -import org.apache.hadoop.fs.FSInputStream; -import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException; - -import java.io.EOFException; -import java.io.IOException; - -/** - * Add stricter compliance with the evolving FS specifications - */ -public class StrictBufferedFSInputStream extends BufferedFSInputStream { - - public StrictBufferedFSInputStream(FSInputStream in, - int size) { - super(in, size); - } - - @Override - public void seek(long pos) throws IOException { - if (pos < 0) { - throw new EOFException(FSExceptionMessages.NEGATIVE_SEEK); - } - if (in == null) { - throw new SwiftConnectionClosedException(FSExceptionMessages.STREAM_IS_CLOSED); - } - super.seek(pos); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftFileStatus.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftFileStatus.java deleted file mode 100644 index 725cae1e3b8..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftFileStatus.java +++ /dev/null @@ -1,102 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift.snative; - -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.permission.FsPermission; - -/** - * A subclass of {@link FileStatus} that contains the - * Swift-specific rules of when a file is considered to be a directory. - */ -public class SwiftFileStatus extends FileStatus { - - public SwiftFileStatus() { - } - - public SwiftFileStatus(long length, - boolean isdir, - int block_replication, - long blocksize, long modification_time, Path path) { - super(length, isdir, block_replication, blocksize, modification_time, path); - } - - public SwiftFileStatus(long length, - boolean isdir, - int block_replication, - long blocksize, - long modification_time, - long access_time, - FsPermission permission, - String owner, String group, Path path) { - super(length, isdir, block_replication, blocksize, modification_time, - access_time, permission, owner, group, path); - } - - //HDFS2+ only - - public SwiftFileStatus(long length, - boolean isdir, - int block_replication, - long blocksize, - long modification_time, - long access_time, - FsPermission permission, - String owner, String group, Path symlink, Path path) { - super(length, isdir, block_replication, blocksize, modification_time, - access_time, permission, owner, group, symlink, path); - } - - /** - * Declare that the path represents a directory, which in the - * SwiftNativeFileSystem means "is a directory or a 0 byte file" - * - * @return true if the status is considered to be a file - */ - @Override - public boolean isDirectory() { - return super.isDirectory() || getLen() == 0; - } - - /** - * A entry is a file if it is not a directory. - * By implementing it and not marking as an override this - * subclass builds and runs in both Hadoop versions. - * @return the opposite value to {@link #isDirectory()} - */ - @Override - public boolean isFile() { - return !this.isDirectory(); - } - - @Override - public String toString() { - StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append("{ "); - sb.append("path=").append(getPath()); - sb.append("; isDirectory=").append(isDirectory()); - sb.append("; length=").append(getLen()); - sb.append("; blocksize=").append(getBlockSize()); - sb.append("; modification_time=").append(getModificationTime()); - sb.append("}"); - return sb.toString(); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java deleted file mode 100644 index 560eadd9309..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java +++ /dev/null @@ -1,761 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.snative; - -import org.apache.hadoop.security.UserGroupInformation; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.classification.InterfaceAudience; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.BlockLocation; -import org.apache.hadoop.fs.CreateFlag; -import org.apache.hadoop.fs.FSDataInputStream; -import org.apache.hadoop.fs.FSDataOutputStream; -import org.apache.hadoop.fs.FileAlreadyExistsException; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.ParentNotDirectoryException; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.permission.FsPermission; -import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException; -import org.apache.hadoop.fs.swift.exceptions.SwiftOperationFailedException; -import org.apache.hadoop.fs.swift.exceptions.SwiftUnsupportedFeatureException; -import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants; -import org.apache.hadoop.fs.swift.util.DurationStats; -import org.apache.hadoop.fs.swift.util.SwiftObjectPath; -import org.apache.hadoop.fs.swift.util.SwiftUtils; -import org.apache.hadoop.util.Progressable; - -import java.io.FileNotFoundException; -import java.io.IOException; -import java.io.OutputStream; -import java.net.URI; -import java.util.ArrayList; -import java.util.EnumSet; -import java.util.List; - -/** - * Swift file system implementation. Extends Hadoop FileSystem - */ -public class SwiftNativeFileSystem extends FileSystem { - - /** filesystem prefix: {@value} */ - public static final String SWIFT = "swift"; - private static final Logger LOG = - LoggerFactory.getLogger(SwiftNativeFileSystem.class); - - /** - * path to user work directory for storing temporary files - */ - private Path workingDir; - - /** - * Swift URI - */ - private URI uri; - - /** - * reference to swiftFileSystemStore - */ - private SwiftNativeFileSystemStore store; - - /** - * Default constructor for Hadoop - */ - public SwiftNativeFileSystem() { - // set client in initialize() - } - - /** - * This constructor used for testing purposes - */ - public SwiftNativeFileSystem(SwiftNativeFileSystemStore store) { - this.store = store; - } - - /** - * This is for testing - * @return the inner store class - */ - public SwiftNativeFileSystemStore getStore() { - return store; - } - - @Override - public String getScheme() { - return SWIFT; - } - - /** - * default class initialization. - * - * @param fsuri path to Swift - * @param conf Hadoop configuration - * @throws IOException - */ - @Override - public void initialize(URI fsuri, Configuration conf) throws IOException { - super.initialize(fsuri, conf); - - setConf(conf); - if (store == null) { - store = new SwiftNativeFileSystemStore(); - } - this.uri = fsuri; - String username; - try { - username = UserGroupInformation.getCurrentUser().getShortUserName(); - } catch (IOException ex) { - LOG.warn("Unable to get user name. 
Fall back to system property " + - "user.name", ex); - username = System.getProperty("user.name"); - } - this.workingDir = new Path("/user", username) - .makeQualified(uri, new Path(username)); - if (LOG.isDebugEnabled()) { - LOG.debug("Initializing SwiftNativeFileSystem against URI " + uri - + " and working dir " + workingDir); - } - store.initialize(uri, conf); - LOG.debug("SwiftFileSystem initialized"); - } - - /** - * @return path to Swift - */ - @Override - public URI getUri() { - - return uri; - } - - @Override - public String toString() { - return "Swift FileSystem " + store; - } - - /** - * Path to user working directory - * - * @return Hadoop path - */ - @Override - public Path getWorkingDirectory() { - return workingDir; - } - - /** - * @param dir user working directory - */ - @Override - public void setWorkingDirectory(Path dir) { - workingDir = makeAbsolute(dir); - if (LOG.isDebugEnabled()) { - LOG.debug("SwiftFileSystem.setWorkingDirectory to " + dir); - } - } - - /** - * Return a file status object that represents the path. - * - * @param path The path we want information from - * @return a FileStatus object - */ - @Override - public FileStatus getFileStatus(Path path) throws IOException { - Path absolutePath = makeAbsolute(path); - return store.getObjectMetadata(absolutePath); - } - - /** - * The blocksize of this filesystem is set by the property - * SwiftProtocolConstants.SWIFT_BLOCKSIZE;the default is the value of - * SwiftProtocolConstants.DEFAULT_SWIFT_BLOCKSIZE; - * @return the blocksize for this FS. - */ - @Override - public long getDefaultBlockSize() { - return store.getBlocksize(); - } - - /** - * The blocksize for this filesystem. - * @see #getDefaultBlockSize() - * @param f path of file - * @return the blocksize for the path - */ - @Override - public long getDefaultBlockSize(Path f) { - return store.getBlocksize(); - } - - @Override - public long getBlockSize(Path path) throws IOException { - return store.getBlocksize(); - } - - @Override - @SuppressWarnings("deprecation") - public boolean isFile(Path f) throws IOException { - try { - FileStatus fileStatus = getFileStatus(f); - return !SwiftUtils.isDirectory(fileStatus); - } catch (FileNotFoundException e) { - return false; // f does not exist - } - } - - @SuppressWarnings("deprecation") - @Override - public boolean isDirectory(Path f) throws IOException { - - try { - FileStatus fileStatus = getFileStatus(f); - return SwiftUtils.isDirectory(fileStatus); - } catch (FileNotFoundException e) { - return false; // f does not exist - } - } - - /** - * Override getCononicalServiceName because we don't support token in Swift - */ - @Override - public String getCanonicalServiceName() { - // Does not support Token - return null; - } - - /** - * Return an array containing hostnames, offset and size of - * portions of the given file. For a nonexistent - * file or regions, null will be returned. - *

- * This call is most helpful with DFS, where it returns - * hostnames of machines that contain the given file. - *

- * The FileSystem will simply return an elt containing 'localhost'. - */ - @Override - public BlockLocation[] getFileBlockLocations(FileStatus file, - long start, - long len) throws IOException { - //argument checks - if (file == null) { - return null; - } - - if (start < 0 || len < 0) { - throw new IllegalArgumentException("Negative start or len parameter" + - " to getFileBlockLocations"); - } - if (file.getLen() <= start) { - return new BlockLocation[0]; - } - - // Check if requested file in Swift is more than 5Gb. In this case - // each block has its own location -which may be determinable - // from the Swift client API, depending on the remote server - final FileStatus[] listOfFileBlocks = store.listSubPaths(file.getPath(), - false, - true); - List locations = new ArrayList(); - if (listOfFileBlocks.length > 1) { - for (FileStatus fileStatus : listOfFileBlocks) { - if (SwiftObjectPath.fromPath(uri, fileStatus.getPath()) - .equals(SwiftObjectPath.fromPath(uri, file.getPath()))) { - continue; - } - locations.addAll(store.getObjectLocation(fileStatus.getPath())); - } - } else { - locations = store.getObjectLocation(file.getPath()); - } - - if (locations.isEmpty()) { - LOG.debug("No locations returned for " + file.getPath()); - //no locations were returned for the object - //fall back to the superclass - - String[] name = {SwiftProtocolConstants.BLOCK_LOCATION}; - String[] host = { "localhost" }; - String[] topology={SwiftProtocolConstants.TOPOLOGY_PATH}; - return new BlockLocation[] { - new BlockLocation(name, host, topology,0, file.getLen()) - }; - } - - final String[] names = new String[locations.size()]; - final String[] hosts = new String[locations.size()]; - int i = 0; - for (URI location : locations) { - hosts[i] = location.getHost(); - names[i] = location.getAuthority(); - i++; - } - return new BlockLocation[]{ - new BlockLocation(names, hosts, 0, file.getLen()) - }; - } - - /** - * Create the parent directories. - * As an optimization, the entire hierarchy of parent - * directories is Not polled. Instead - * the tree is walked up from the last to the first, - * creating directories until one that exists is found. - * - * This strategy means if a file is created in an existing directory, - * one quick poll suffices. - * - * There is a big assumption here: that all parent directories of an existing - * directory also exists. - * @param path path to create. 
- * @param permission to apply to files - * @return true if the operation was successful - * @throws IOException on a problem - */ - @Override - public boolean mkdirs(Path path, FsPermission permission) throws IOException { - if (LOG.isDebugEnabled()) { - LOG.debug("SwiftFileSystem.mkdirs: " + path); - } - Path directory = makeAbsolute(path); - - //build a list of paths to create - List paths = new ArrayList(); - while (shouldCreate(directory)) { - //this directory needs creation, add to the list - paths.add(0, directory); - //now see if the parent needs to be created - directory = directory.getParent(); - } - - //go through the list of directories to create - for (Path p : paths) { - if (isNotRoot(p)) { - //perform a mkdir operation without any polling of - //the far end first - forceMkdir(p); - } - } - - //if an exception was not thrown, this operation is considered - //a success - return true; - } - - private boolean isNotRoot(Path absolutePath) { - return !isRoot(absolutePath); - } - - private boolean isRoot(Path absolutePath) { - return absolutePath.getParent() == null; - } - - /** - * internal implementation of directory creation. - * - * @param path path to file - * @return boolean file is created; false: no need to create - * @throws IOException if specified path is file instead of directory - */ - private boolean mkdir(Path path) throws IOException { - Path directory = makeAbsolute(path); - boolean shouldCreate = shouldCreate(directory); - if (shouldCreate) { - forceMkdir(directory); - } - return shouldCreate; - } - - /** - * Should mkdir create this directory? - * If the directory is root : false - * If the entry exists and is a directory: false - * If the entry exists and is a file: exception - * else: true - * @param directory path to query - * @return true iff the directory should be created - * @throws IOException IO problems - * @throws ParentNotDirectoryException if the path references a file - */ - private boolean shouldCreate(Path directory) throws IOException { - FileStatus fileStatus; - boolean shouldCreate; - if (isRoot(directory)) { - //its the base dir, bail out immediately - return false; - } - try { - //find out about the path - fileStatus = getFileStatus(directory); - - if (!SwiftUtils.isDirectory(fileStatus)) { - //if it's a file, raise an error - throw new ParentNotDirectoryException( - String.format("%s: can't mkdir since it exists and is not a directory: %s", - directory, fileStatus)); - } else { - //path exists, and it is a directory - if (LOG.isDebugEnabled()) { - LOG.debug("skipping mkdir(" + directory + ") as it exists already"); - } - shouldCreate = false; - } - } catch (FileNotFoundException e) { - shouldCreate = true; - } - return shouldCreate; - } - - /** - * mkdir of a directory -irrespective of what was there underneath. - * There are no checks for the directory existing, there not - * being a path there, etc. etc. Those are assumed to have - * taken place already - * @param absolutePath path to create - * @throws IOException IO problems - */ - private void forceMkdir(Path absolutePath) throws IOException { - if (LOG.isDebugEnabled()) { - LOG.debug("Making dir '" + absolutePath + "' in Swift"); - } - //file is not found: it must be created - store.createDirectory(absolutePath); - } - - /** - * List the statuses of the files/directories in the given path if the path is - * a directory. 
- * - * @param path given path - * @return the statuses of the files/directories in the given path - * @throws IOException - */ - @Override - public FileStatus[] listStatus(Path path) throws IOException { - if (LOG.isDebugEnabled()) { - LOG.debug("SwiftFileSystem.listStatus for: " + path); - } - return store.listSubPaths(makeAbsolute(path), false, true); - } - - /** - * This optional operation is not supported - */ - @Override - public FSDataOutputStream append(Path f, int bufferSize, Progressable progress) - throws IOException { - LOG.debug("SwiftFileSystem.append"); - throw new SwiftUnsupportedFeatureException("Not supported: append()"); - } - - /** - * @param permission Currently ignored. - */ - @Override - public FSDataOutputStream create(Path file, FsPermission permission, - boolean overwrite, int bufferSize, - short replication, long blockSize, - Progressable progress) - throws IOException { - LOG.debug("SwiftFileSystem.create"); - - FileStatus fileStatus = null; - Path absolutePath = makeAbsolute(file); - try { - fileStatus = getFileStatus(absolutePath); - } catch (FileNotFoundException e) { - //the file isn't there. - } - - if (fileStatus != null) { - //the path exists -action depends on whether or not it is a directory, - //and what the overwrite policy is. - - //What is clear at this point is that if the entry exists, there's - //no need to bother creating any parent entries - if (fileStatus.isDirectory()) { - //here someone is trying to create a file over a directory - -/* we can't throw an exception here as there is no easy way to distinguish - a file from the dir - - throw new SwiftPathExistsException("Cannot create a file over a directory:" - + file); - */ - if (LOG.isDebugEnabled()) { - LOG.debug("Overwriting either an empty file or a directory"); - } - } - if (overwrite) { - //overwrite set -> delete the object. - store.delete(absolutePath, true); - } else { - throw new FileAlreadyExistsException("Path exists: " + file); - } - } else { - // destination does not exist -trigger creation of the parent - Path parent = file.getParent(); - if (parent != null) { - if (!mkdirs(parent)) { - throw new SwiftOperationFailedException( - "Mkdirs failed to create " + parent); - } - } - } - - SwiftNativeOutputStream out = createSwiftOutputStream(file); - return new FSDataOutputStream(out, statistics); - } - - /** - * Create the swift output stream - * @param path path to write to - * @return the new file - * @throws IOException - */ - protected SwiftNativeOutputStream createSwiftOutputStream(Path path) throws - IOException { - long partSizeKB = getStore().getPartsizeKB(); - return new SwiftNativeOutputStream(getConf(), - getStore(), - path.toUri().toString(), - partSizeKB); - } - - /** - * Opens an FSDataInputStream at the indicated Path. - * - * @param path the file name to open - * @param bufferSize the size of the buffer to be used. - * @return the input stream - * @throws FileNotFoundException if the file is not found - * @throws IOException any IO problem - */ - @Override - public FSDataInputStream open(Path path, int bufferSize) throws IOException { - int bufferSizeKB = getStore().getBufferSizeKB(); - long readBlockSize = bufferSizeKB * 1024L; - return open(path, bufferSize, readBlockSize); - } - - /** - * Low-level operation to also set the block size for this operation - * @param path the file name to open - * @param bufferSize the size of the buffer to be used. - * @param readBlockSize how big should the read block/buffer size be? 
- * @return the input stream - * @throws FileNotFoundException if the file is not found - * @throws IOException any IO problem - */ - public FSDataInputStream open(Path path, - int bufferSize, - long readBlockSize) throws IOException { - if (readBlockSize <= 0) { - throw new SwiftConfigurationException("Bad remote buffer size"); - } - Path absolutePath = makeAbsolute(path); - return new FSDataInputStream( - new StrictBufferedFSInputStream( - new SwiftNativeInputStream(store, - statistics, - absolutePath, - readBlockSize), - bufferSize)); - } - - /** - * Renames Path src to Path dst. On swift this uses copy-and-delete - * and is not atomic. - * - * @param src path - * @param dst path - * @return true if directory renamed, false otherwise - * @throws IOException on problems - */ - @Override - public boolean rename(Path src, Path dst) throws IOException { - - try { - store.rename(makeAbsolute(src), makeAbsolute(dst)); - //success - return true; - } catch (SwiftOperationFailedException - | FileAlreadyExistsException - | FileNotFoundException - | ParentNotDirectoryException e) { - //downgrade to a failure - LOG.debug("rename({}, {}) failed",src, dst, e); - return false; - } - } - - - /** - * Delete a file or directory - * - * @param path the path to delete. - * @param recursive if path is a directory and set to - * true, the directory is deleted else throws an exception if the - * directory is not empty - * case of a file the recursive can be set to either true or false. - * @return true if the object was deleted - * @throws IOException IO problems - */ - @Override - public boolean delete(Path path, boolean recursive) throws IOException { - try { - return store.delete(path, recursive); - } catch (FileNotFoundException e) { - //base path was not found. - return false; - } - } - - /** - * Delete a file. - * This method is abstract in Hadoop 1.x; in 2.x+ it is non-abstract - * and deprecated - */ - @Override - public boolean delete(Path f) throws IOException { - return delete(f, true); - } - - /** - * Makes path absolute - * - * @param path path to file - * @return absolute path - */ - protected Path makeAbsolute(Path path) { - if (path.isAbsolute()) { - return path; - } - return new Path(workingDir, path); - } - - /** - * Get the current operation statistics - * @return a snapshot of the statistics - */ - public List getOperationStatistics() { - return store.getOperationStatistics(); - } - - /** - * Low level method to do a deep listing of all entries, not stopping - * at the next directory entry. This is to let tests be confident that - * recursive deletes really are working. - * @param path path to recurse down - * @param newest ask for the newest data, potentially slower than not. 
- * @return a potentially empty array of file status - * @throws IOException any problem - */ - @InterfaceAudience.Private - public FileStatus[] listRawFileStatus(Path path, boolean newest) throws IOException { - return store.listSubPaths(makeAbsolute(path), true, newest); - } - - /** - * Get the number of partitions written by an output stream - * This is for testing - * @param outputStream output stream - * @return the #of partitions written by that stream - */ - @InterfaceAudience.Private - public static int getPartitionsWritten(FSDataOutputStream outputStream) { - SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream); - return snos.getPartitionsWritten(); - } - - private static SwiftNativeOutputStream getSwiftNativeOutputStream( - FSDataOutputStream outputStream) { - OutputStream wrappedStream = outputStream.getWrappedStream(); - return (SwiftNativeOutputStream) wrappedStream; - } - - /** - * Get the size of partitions written by an output stream - * This is for testing - * - * @param outputStream output stream - * @return partition size in bytes - */ - @InterfaceAudience.Private - public static long getPartitionSize(FSDataOutputStream outputStream) { - SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream); - return snos.getFilePartSize(); - } - - /** - * Get the the number of bytes written to an output stream - * This is for testing - * - * @param outputStream output stream - * @return partition size in bytes - */ - @InterfaceAudience.Private - public static long getBytesWritten(FSDataOutputStream outputStream) { - SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream); - return snos.getBytesWritten(); - } - - /** - * Get the the number of bytes uploaded by an output stream - * to the swift cluster. - * This is for testing - * - * @param outputStream output stream - * @return partition size in bytes - */ - @InterfaceAudience.Private - public static long getBytesUploaded(FSDataOutputStream outputStream) { - SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream); - return snos.getBytesUploaded(); - } - - /** - * {@inheritDoc} - * @throws FileNotFoundException if the parent directory is not present -or - * is not a directory. - */ - @Override - public FSDataOutputStream createNonRecursive(Path path, - FsPermission permission, - EnumSet flags, - int bufferSize, - short replication, - long blockSize, - Progressable progress) throws IOException { - Path parent = path.getParent(); - if (parent != null) { - // expect this to raise an exception if there is no parent - if (!getFileStatus(parent).isDirectory()) { - throw new FileAlreadyExistsException("Not a directory: " + parent); - } - } - return create(path, permission, - flags.contains(CreateFlag.OVERWRITE), bufferSize, - replication, blockSize, progress); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java deleted file mode 100644 index 5e480090092..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java +++ /dev/null @@ -1,986 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.fs.swift.snative; - -import com.fasterxml.jackson.databind.type.CollectionType; - -import org.apache.http.Header; -import org.apache.http.HttpStatus; -import org.apache.http.message.BasicHeader; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FileAlreadyExistsException; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.ParentNotDirectoryException; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException; -import org.apache.hadoop.fs.swift.exceptions.SwiftException; -import org.apache.hadoop.fs.swift.exceptions.SwiftInvalidResponseException; -import org.apache.hadoop.fs.swift.exceptions.SwiftOperationFailedException; -import org.apache.hadoop.fs.swift.http.HttpBodyContent; -import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants; -import org.apache.hadoop.fs.swift.http.SwiftRestClient; -import org.apache.hadoop.fs.swift.util.DurationStats; -import org.apache.hadoop.fs.swift.util.JSONUtil; -import org.apache.hadoop.fs.swift.util.SwiftObjectPath; -import org.apache.hadoop.fs.swift.util.SwiftUtils; - -import java.io.ByteArrayInputStream; -import java.io.FileNotFoundException; -import java.io.IOException; -import java.io.InputStream; -import java.io.InterruptedIOException; -import java.net.URI; -import java.net.URISyntaxException; -import java.nio.charset.Charset; -import java.text.ParseException; -import java.text.SimpleDateFormat; -import java.util.ArrayList; -import java.util.Collection; -import java.util.Collections; -import java.util.LinkedList; -import java.util.List; -import java.util.regex.Matcher; -import java.util.regex.Pattern; - -/** - * File system store implementation. - * Makes REST requests, parses data from responses - */ -public class SwiftNativeFileSystemStore { - private static final Pattern URI_PATTERN = Pattern.compile("\"\\S+?\""); - private static final String PATTERN = "EEE, d MMM yyyy hh:mm:ss zzz"; - private static final Logger LOG = - LoggerFactory.getLogger(SwiftNativeFileSystemStore.class); - private URI uri; - private SwiftRestClient swiftRestClient; - - /** - * Initalize the filesystem store -this creates the REST client binding. - * - * @param fsURI URI of the filesystem, which is used to map to the filesystem-specific - * options in the configuration file - * @param configuration configuration - * @throws IOException on any failure. 
- */ - public void initialize(URI fsURI, Configuration configuration) throws IOException { - this.uri = fsURI; - this.swiftRestClient = SwiftRestClient.getInstance(fsURI, configuration); - } - - @Override - public String toString() { - return "SwiftNativeFileSystemStore with " - + swiftRestClient; - } - - /** - * Get the default blocksize of this (bound) filesystem - * @return the blocksize returned for all FileStatus queries, - * which is used by the MapReduce splitter. - */ - public long getBlocksize() { - return 1024L * swiftRestClient.getBlocksizeKB(); - } - - public long getPartsizeKB() { - return swiftRestClient.getPartSizeKB(); - } - - public int getBufferSizeKB() { - return swiftRestClient.getBufferSizeKB(); - } - - public int getThrottleDelay() { - return swiftRestClient.getThrottleDelay(); - } - /** - * Upload a file/input stream of a specific length. - * - * @param path destination path in the swift filesystem - * @param inputStream input data. This is closed afterwards, always - * @param length length of the data - * @throws IOException on a problem - */ - public void uploadFile(Path path, InputStream inputStream, long length) - throws IOException { - swiftRestClient.upload(toObjectPath(path), inputStream, length); - } - - /** - * Upload part of a larger file. - * - * @param path destination path - * @param partNumber item number in the path - * @param inputStream input data - * @param length length of the data - * @throws IOException on a problem - */ - public void uploadFilePart(Path path, int partNumber, - InputStream inputStream, long length) - throws IOException { - - String stringPath = path.toUri().toString(); - String partitionFilename = SwiftUtils.partitionFilenameFromNumber( - partNumber); - if (stringPath.endsWith("/")) { - stringPath = stringPath.concat(partitionFilename); - } else { - stringPath = stringPath.concat("/").concat(partitionFilename); - } - - swiftRestClient.upload( - new SwiftObjectPath(toDirPath(path).getContainer(), stringPath), - inputStream, - length); - } - - /** - * Tell the Swift server to expect a multi-part upload by submitting - * a 0-byte file with the X-Object-Manifest header - * - * @param path path of final final - * @throws IOException - */ - public void createManifestForPartUpload(Path path) throws IOException { - String pathString = toObjectPath(path).toString(); - if (!pathString.endsWith("/")) { - pathString = pathString.concat("/"); - } - if (pathString.startsWith("/")) { - pathString = pathString.substring(1); - } - - swiftRestClient.upload(toObjectPath(path), - new ByteArrayInputStream(new byte[0]), - 0, - new BasicHeader(SwiftProtocolConstants.X_OBJECT_MANIFEST, pathString)); - } - - /** - * Get the metadata of an object - * - * @param path path - * @return file metadata. -or null if no headers were received back from the server. - * @throws IOException on a problem - * @throws FileNotFoundException if there is nothing at the end - */ - public SwiftFileStatus getObjectMetadata(Path path) throws IOException { - return getObjectMetadata(path, true); - } - - /** - * Get the HTTP headers, in case you really need the low-level - * metadata - * @param path path to probe - * @param newest newest or oldest? 
- * @return the header list - * @throws IOException IO problem - * @throws FileNotFoundException if there is nothing at the end - */ - public Header[] getObjectHeaders(Path path, boolean newest) - throws IOException, FileNotFoundException { - SwiftObjectPath objectPath = toObjectPath(path); - return stat(objectPath, newest); - } - - /** - * Get the metadata of an object - * - * @param path path - * @param newest flag to say "set the newest header", otherwise take any entry - * @return file metadata. -or null if no headers were received back from the server. - * @throws IOException on a problem - * @throws FileNotFoundException if there is nothing at the end - */ - public SwiftFileStatus getObjectMetadata(Path path, boolean newest) - throws IOException, FileNotFoundException { - - SwiftObjectPath objectPath = toObjectPath(path); - final Header[] headers = stat(objectPath, newest); - //no headers is treated as a missing file - if (headers.length == 0) { - throw new FileNotFoundException("Not Found " + path.toUri()); - } - - boolean isDir = false; - long length = 0; - long lastModified = 0 ; - for (Header header : headers) { - String headerName = header.getName(); - if (headerName.equals(SwiftProtocolConstants.X_CONTAINER_OBJECT_COUNT) || - headerName.equals(SwiftProtocolConstants.X_CONTAINER_BYTES_USED)) { - length = 0; - isDir = true; - } - if (SwiftProtocolConstants.HEADER_CONTENT_LENGTH.equals(headerName)) { - length = Long.parseLong(header.getValue()); - } - if (SwiftProtocolConstants.HEADER_LAST_MODIFIED.equals(headerName)) { - final SimpleDateFormat simpleDateFormat = new SimpleDateFormat(PATTERN); - try { - lastModified = simpleDateFormat.parse(header.getValue()).getTime(); - } catch (ParseException e) { - throw new SwiftException("Failed to parse " + header.toString(), e); - } - } - } - if (lastModified == 0) { - lastModified = System.currentTimeMillis(); - } - - Path correctSwiftPath = getCorrectSwiftPath(path); - return new SwiftFileStatus(length, - isDir, - 1, - getBlocksize(), - lastModified, - correctSwiftPath); - } - - private Header[] stat(SwiftObjectPath objectPath, boolean newest) throws - IOException { - Header[] headers; - if (newest) { - headers = swiftRestClient.headRequest("getObjectMetadata-newest", - objectPath, SwiftRestClient.NEWEST); - } else { - headers = swiftRestClient.headRequest("getObjectMetadata", - objectPath); - } - return headers; - } - - /** - * Get the object as an input stream - * - * @param path object path - * @return the input stream -this must be closed to terminate the connection - * @throws IOException IO problems - * @throws FileNotFoundException path doesn't resolve to an object - */ - public HttpBodyContent getObject(Path path) throws IOException { - return swiftRestClient.getData(toObjectPath(path), - SwiftRestClient.NEWEST); - } - - /** - * Get the input stream starting from a specific point. - * - * @param path path to object - * @param byteRangeStart starting point - * @param length no. of bytes - * @return an input stream that must be closed - * @throws IOException IO problems - */ - public HttpBodyContent getObject(Path path, long byteRangeStart, long length) - throws IOException { - return swiftRestClient.getData( - toObjectPath(path), byteRangeStart, length); - } - - /** - * List a directory. - * This is O(n) for the number of objects in this path. 
- * - * - * - * @param path working path - * @param listDeep ask for all the data - * @param newest ask for the newest data - * @return Collection of file statuses - * @throws IOException IO problems - * @throws FileNotFoundException if the path does not exist - */ - private List listDirectory(SwiftObjectPath path, - boolean listDeep, - boolean newest) throws IOException { - final byte[] bytes; - final ArrayList files = new ArrayList(); - final Path correctSwiftPath = getCorrectSwiftPath(path); - try { - bytes = swiftRestClient.listDeepObjectsInDirectory(path, listDeep); - } catch (FileNotFoundException e) { - if (LOG.isDebugEnabled()) { - LOG.debug("" + - "File/Directory not found " + path); - } - if (SwiftUtils.isRootDir(path)) { - return Collections.emptyList(); - } else { - throw e; - } - } catch (SwiftInvalidResponseException e) { - //bad HTTP error code - if (e.getStatusCode() == HttpStatus.SC_NO_CONTENT) { - //this can come back on a root list if the container is empty - if (SwiftUtils.isRootDir(path)) { - return Collections.emptyList(); - } else { - //NO_CONTENT returned on something other than the root directory; - //see if it is there, and convert to empty list or not found - //depending on whether the entry exists. - FileStatus stat = getObjectMetadata(correctSwiftPath, newest); - - if (stat.isDirectory()) { - //it's an empty directory. state that - return Collections.emptyList(); - } else { - //it's a file -return that as the status - files.add(stat); - return files; - } - } - } else { - //a different status code: rethrow immediately - throw e; - } - } - - final CollectionType collectionType = JSONUtil.getJsonMapper().getTypeFactory(). - constructCollectionType(List.class, SwiftObjectFileStatus.class); - - final List fileStatusList = JSONUtil.toObject( - new String(bytes, Charset.forName("UTF-8")), collectionType); - - //this can happen if user lists file /data/files/file - //in this case swift will return empty array - if (fileStatusList.isEmpty()) { - SwiftFileStatus objectMetadata = getObjectMetadata(correctSwiftPath, - newest); - if (objectMetadata.isFile()) { - files.add(objectMetadata); - } - - return files; - } - - for (SwiftObjectFileStatus status : fileStatusList) { - if (status.getName() != null) { - files.add(new SwiftFileStatus(status.getBytes(), - status.getBytes() == 0, - 1, - getBlocksize(), - status.getLast_modified().getTime(), - getCorrectSwiftPath(new Path(status.getName())))); - } - } - - return files; - } - - /** - * List all elements in this directory - * - * - * - * @param path path to work with - * @param recursive do a recursive get - * @param newest ask for the newest, or can some out of date data work? - * @return the file statuses, or an empty array if there are no children - * @throws IOException on IO problems - * @throws FileNotFoundException if the path is nonexistent - */ - public FileStatus[] listSubPaths(Path path, - boolean recursive, - boolean newest) throws IOException { - final Collection fileStatuses; - fileStatuses = listDirectory(toDirPath(path), recursive, newest); - return fileStatuses.toArray(new FileStatus[fileStatuses.size()]); - } - - /** - * Create a directory - * - * @param path path - * @throws IOException - */ - public void createDirectory(Path path) throws IOException { - innerCreateDirectory(toDirPath(path)); - } - - /** - * The inner directory creation option. This only creates - * the dir at the given path, not any parent dirs. 
- * @param swiftObjectPath swift object path at which a 0-byte blob should be - * put - * @throws IOException IO problems - */ - private void innerCreateDirectory(SwiftObjectPath swiftObjectPath) - throws IOException { - - swiftRestClient.putRequest(swiftObjectPath); - } - - private SwiftObjectPath toDirPath(Path path) throws - SwiftConfigurationException { - return SwiftObjectPath.fromPath(uri, path, false); - } - - private SwiftObjectPath toObjectPath(Path path) throws - SwiftConfigurationException { - return SwiftObjectPath.fromPath(uri, path); - } - - /** - * Try to find the specific server(s) on which the data lives - * @param path path to probe - * @return a possibly empty list of locations - * @throws IOException on problems determining the locations - */ - public List getObjectLocation(Path path) throws IOException { - final byte[] objectLocation; - objectLocation = swiftRestClient.getObjectLocation(toObjectPath(path)); - if (objectLocation == null || objectLocation.length == 0) { - //no object location, return an empty list - return new LinkedList(); - } - return extractUris(new String(objectLocation, Charset.forName("UTF-8")), path); - } - - /** - * deletes object from Swift - * - * @param path path to delete - * @return true if the path was deleted by this specific operation. - * @throws IOException on a failure - */ - public boolean deleteObject(Path path) throws IOException { - SwiftObjectPath swiftObjectPath = toObjectPath(path); - if (!SwiftUtils.isRootDir(swiftObjectPath)) { - return swiftRestClient.delete(swiftObjectPath); - } else { - if (LOG.isDebugEnabled()) { - LOG.debug("Not deleting root directory entry"); - } - return true; - } - } - - /** - * deletes a directory from Swift. This is not recursive - * - * @param path path to delete - * @return true if the path was deleted by this specific operation -or - * the path was root and not acted on. - * @throws IOException on a failure - */ - public boolean rmdir(Path path) throws IOException { - return deleteObject(path); - } - - /** - * Does the object exist - * - * @param path object path - * @return true if the metadata of an object could be retrieved - * @throws IOException IO problems other than FileNotFound, which - * is downgraded to an object does not exist return code - */ - public boolean objectExists(Path path) throws IOException { - return objectExists(toObjectPath(path)); - } - - /** - * Does the object exist - * - * @param path swift object path - * @return true if the metadata of an object could be retrieved - * @throws IOException IO problems other than FileNotFound, which - * is downgraded to an object does not exist return code - */ - public boolean objectExists(SwiftObjectPath path) throws IOException { - try { - Header[] headers = swiftRestClient.headRequest("objectExists", - path, - SwiftRestClient.NEWEST); - //no headers is treated as a missing file - return headers.length != 0; - } catch (FileNotFoundException e) { - return false; - } - } - - /** - * Rename through copy-and-delete. this is a consequence of the - * Swift filesystem using the path as the hash - * into the Distributed Hash Table, "the ring" of filenames. - *
- * Because of the nature of the operation, it is not atomic. - * - * @param src source file/dir - * @param dst destination - * @throws IOException IO failure - * @throws SwiftOperationFailedException if the rename failed - * @throws FileNotFoundException if the source directory is missing, or - * the parent directory of the destination - */ - public void rename(Path src, Path dst) - throws FileNotFoundException, SwiftOperationFailedException, IOException { - if (LOG.isDebugEnabled()) { - LOG.debug("mv " + src + " " + dst); - } - boolean renamingOnToSelf = src.equals(dst); - - SwiftObjectPath srcObject = toObjectPath(src); - SwiftObjectPath destObject = toObjectPath(dst); - - if (SwiftUtils.isRootDir(srcObject)) { - throw new SwiftOperationFailedException("cannot rename root dir"); - } - - final SwiftFileStatus srcMetadata; - srcMetadata = getObjectMetadata(src); - SwiftFileStatus dstMetadata; - try { - dstMetadata = getObjectMetadata(dst); - } catch (FileNotFoundException e) { - //destination does not exist. - LOG.debug("Destination does not exist"); - dstMetadata = null; - } - - //check to see if the destination parent directory exists - Path srcParent = src.getParent(); - Path dstParent = dst.getParent(); - //skip the overhead of a HEAD call if the src and dest share the same - //parent dir (in which case the dest dir exists), or the destination - //directory is root, in which case it must also exist - if (dstParent != null && !dstParent.equals(srcParent)) { - SwiftFileStatus fileStatus; - try { - fileStatus = getObjectMetadata(dstParent); - } catch (FileNotFoundException e) { - //destination parent doesn't exist; bail out - LOG.debug("destination parent directory " + dstParent + " doesn't exist"); - throw e; - } - if (!fileStatus.isDir()) { - throw new ParentNotDirectoryException(dstParent.toString()); - } - } - - boolean destExists = dstMetadata != null; - boolean destIsDir = destExists && SwiftUtils.isDirectory(dstMetadata); - //calculate the destination - SwiftObjectPath destPath; - - //enum the child entries and everything underneath - List childStats = listDirectory(srcObject, true, true); - boolean srcIsFile = !srcMetadata.isDirectory(); - if (srcIsFile) { - - //source is a simple file OR a partitioned file - // outcomes: - // #1 dest exists and is file: fail - // #2 dest exists and is dir: destination path becomes under dest dir - // #3 dest does not exist: use dest as name - if (destExists) { - - if (destIsDir) { - //outcome #2 -move to subdir of dest - destPath = toObjectPath(new Path(dst, src.getName())); - } else { - //outcome #1 dest it's a file: fail if different - if (!renamingOnToSelf) { - throw new FileAlreadyExistsException( - "cannot rename a file over one that already exists"); - } else { - //is mv self self where self is a file. this becomes a no-op - LOG.debug("Renaming file onto self: no-op => success"); - return; - } - } - } else { - //outcome #3 -new entry - destPath = toObjectPath(dst); - } - int childCount = childStats.size(); - //here there is one of: - // - a single object ==> standard file - // -> - if (childCount == 0) { - copyThenDeleteObject(srcObject, destPath); - } else { - //do the copy - SwiftUtils.debug(LOG, "Source file appears to be partitioned." 
+ - " copying file and deleting children"); - - copyObject(srcObject, destPath); - for (FileStatus stat : childStats) { - SwiftUtils.debug(LOG, "Deleting partitioned file %s ", stat); - deleteObject(stat.getPath()); - } - - swiftRestClient.delete(srcObject); - } - } else { - - //here the source exists and is a directory - // outcomes (given we know the parent dir exists if we get this far) - // #1 destination is a file: fail - // #2 destination is a directory: create a new dir under that one - // #3 destination doesn't exist: create a new dir with that name - // #3 and #4 are only allowed if the dest path is not == or under src - - - if (destExists && !destIsDir) { - // #1 destination is a file: fail - throw new FileAlreadyExistsException( - "the source is a directory, but not the destination"); - } - Path targetPath; - if (destExists) { - // #2 destination is a directory: create a new dir under that one - targetPath = new Path(dst, src.getName()); - } else { - // #3 destination doesn't exist: create a new dir with that name - targetPath = dst; - } - SwiftObjectPath targetObjectPath = toObjectPath(targetPath); - //final check for any recursive operations - if (srcObject.isEqualToOrParentOf(targetObjectPath)) { - //you can't rename a directory onto itself - throw new SwiftOperationFailedException( - "cannot move a directory under itself"); - } - - - LOG.info("mv " + srcObject + " " + targetPath); - - logDirectory("Directory to copy ", srcObject, childStats); - - // iterative copy of everything under the directory. - // by listing all children this can be done iteratively - // rather than recursively -everything in this list is either a file - // or a 0-byte-len file pretending to be a directory. - String srcURI = src.toUri().toString(); - int prefixStripCount = srcURI.length() + 1; - for (FileStatus fileStatus : childStats) { - Path copySourcePath = fileStatus.getPath(); - String copySourceURI = copySourcePath.toUri().toString(); - - String copyDestSubPath = copySourceURI.substring(prefixStripCount); - - Path copyDestPath = new Path(targetPath, copyDestSubPath); - if (LOG.isTraceEnabled()) { - //trace to debug some low-level rename path problems; retained - //in case they ever come back. - LOG.trace("srcURI=" + srcURI - + "; copySourceURI=" + copySourceURI - + "; copyDestSubPath=" + copyDestSubPath - + "; copyDestPath=" + copyDestPath); - } - SwiftObjectPath copyDestination = toObjectPath(copyDestPath); - - try { - copyThenDeleteObject(toObjectPath(copySourcePath), - copyDestination); - } catch (FileNotFoundException e) { - LOG.info("Skipping rename of " + copySourcePath); - } - //add a throttle delay - throttle(); - } - //now rename self. 
If missing, create the dest directory and warn - if (!SwiftUtils.isRootDir(srcObject)) { - try { - copyThenDeleteObject(srcObject, - targetObjectPath); - } catch (FileNotFoundException e) { - //create the destination directory - LOG.warn("Source directory deleted during rename", e); - innerCreateDirectory(destObject); - } - } - } - } - - /** - * Debug action to dump directory statuses to the debug log - * - * @param message explanation - * @param objectPath object path (can be null) - * @param statuses listing output - */ - private void logDirectory(String message, SwiftObjectPath objectPath, - Iterable statuses) { - - if (LOG.isDebugEnabled()) { - LOG.debug(message + ": listing of " + objectPath); - for (FileStatus fileStatus : statuses) { - LOG.debug(fileStatus.getPath().toString()); - } - } - } - - public void copy(Path srcKey, Path dstKey) throws IOException { - SwiftObjectPath srcObject = toObjectPath(srcKey); - SwiftObjectPath destObject = toObjectPath(dstKey); - swiftRestClient.copyObject(srcObject, destObject); - } - - - /** - * Copy an object then, if the copy worked, delete it. - * If the copy failed, the source object is not deleted. - * - * @param srcObject source object path - * @param destObject destination object path - * @throws IOException IO problems - - */ - private void copyThenDeleteObject(SwiftObjectPath srcObject, - SwiftObjectPath destObject) throws - IOException { - - - //do the copy - copyObject(srcObject, destObject); - //getting here means the copy worked - swiftRestClient.delete(srcObject); - } - /** - * Copy an object - * @param srcObject source object path - * @param destObject destination object path - * @throws IOException IO problems - */ - private void copyObject(SwiftObjectPath srcObject, - SwiftObjectPath destObject) throws - IOException { - if (srcObject.isEqualToOrParentOf(destObject)) { - throw new SwiftException( - "Can't copy " + srcObject + " onto " + destObject); - } - //do the copy - boolean copySucceeded = swiftRestClient.copyObject(srcObject, destObject); - if (!copySucceeded) { - throw new SwiftException("Copy of " + srcObject + " to " - + destObject + "failed"); - } - } - - /** - * Take a Hadoop path and return one which uses the URI prefix and authority - * of this FS. It doesn't make a relative path absolute - * @param path path in - * @return path with a URI bound to this FS - * @throws SwiftException URI cannot be created. - */ - public Path getCorrectSwiftPath(Path path) throws - SwiftException { - try { - final URI fullUri = new URI(uri.getScheme(), - uri.getAuthority(), - path.toUri().getPath(), - null, - null); - - return new Path(fullUri); - } catch (URISyntaxException e) { - throw new SwiftException("Specified path " + path + " is incorrect", e); - } - } - - /** - * Builds a hadoop-Path from a swift path, inserting the URI authority - * of this FS instance - * @param path swift object path - * @return Hadoop path - * @throws SwiftException if the URI couldn't be created. 
- */ - private Path getCorrectSwiftPath(SwiftObjectPath path) throws - SwiftException { - try { - final URI fullUri = new URI(uri.getScheme(), - uri.getAuthority(), - path.getObject(), - null, - null); - - return new Path(fullUri); - } catch (URISyntaxException e) { - throw new SwiftException("Specified path " + path + " is incorrect", e); - } - } - - - /** - * extracts URIs from json - * @param json json to parse - * @param path path (used in exceptions) - * @return URIs - * @throws SwiftOperationFailedException on any problem parsing the JSON - */ - public static List extractUris(String json, Path path) throws - SwiftOperationFailedException { - final Matcher matcher = URI_PATTERN.matcher(json); - final List result = new ArrayList(); - while (matcher.find()) { - final String s = matcher.group(); - final String uri = s.substring(1, s.length() - 1); - try { - URI createdUri = URI.create(uri); - result.add(createdUri); - } catch (IllegalArgumentException e) { - //failure to create the URI, which means this is bad JSON. Convert - //to an exception with useful text - throw new SwiftOperationFailedException( - String.format( - "could not convert \"%s\" into a URI." + - " source: %s " + - " first JSON: %s", - uri, path, json.substring(0, 256))); - } - } - return result; - } - - /** - * Insert a throttled wait if the throttle delay > 0 - * @throws InterruptedIOException if interrupted during sleep - */ - public void throttle() throws InterruptedIOException { - int throttleDelay = getThrottleDelay(); - if (throttleDelay > 0) { - try { - Thread.sleep(throttleDelay); - } catch (InterruptedException e) { - //convert to an IOE - throw (InterruptedIOException) new InterruptedIOException(e.toString()) - .initCause(e); - } - } - } - - /** - * Get the current operation statistics - * @return a snapshot of the statistics - */ - public List getOperationStatistics() { - return swiftRestClient.getOperationStatistics(); - } - - - /** - * Delete the entire tree. This is an internal one with slightly different - * behavior: if an entry is missing, a {@link FileNotFoundException} is - * raised. This lets the caller distinguish a file not found with - * other reasons for failure, so handles race conditions in recursive - * directory deletes better. - *
- * The problem being addressed is: caller A requests a recursive directory - * of directory /dir ; caller B requests a delete of a file /dir/file, - * between caller A enumerating the files contents, and requesting a delete - * of /dir/file. We want to recognise the special case - * "directed file is no longer there" and not convert that into a failure - * - * @param absolutePath the path to delete. - * @param recursive if path is a directory and set to - * true, the directory is deleted else throws an exception if the - * directory is not empty - * case of a file the recursive can be set to either true or false. - * @return true if the object was deleted - * @throws IOException IO problems - * @throws FileNotFoundException if a file/dir being deleted is not there - - * this includes entries below the specified path, (if the path is a dir - * and recursive is true) - */ - public boolean delete(Path absolutePath, boolean recursive) throws IOException { - Path swiftPath = getCorrectSwiftPath(absolutePath); - SwiftUtils.debug(LOG, "Deleting path '%s' recursive=%b", - absolutePath, - recursive); - boolean askForNewest = true; - SwiftFileStatus fileStatus = getObjectMetadata(swiftPath, askForNewest); - - //ask for the file/dir status, but don't demand the newest, as we - //don't mind if the directory has changed - //list all entries under this directory. - //this will throw FileNotFoundException if the file isn't there - FileStatus[] statuses = listSubPaths(absolutePath, true, askForNewest); - if (statuses == null) { - //the directory went away during the non-atomic stages of the operation. - // Return false as it was not this thread doing the deletion. - SwiftUtils.debug(LOG, "Path '%s' has no status -it has 'gone away'", - absolutePath, - recursive); - return false; - } - int filecount = statuses.length; - SwiftUtils.debug(LOG, "Path '%s' %d status entries'", - absolutePath, - filecount); - - if (filecount == 0) { - //it's an empty directory or a path - rmdir(absolutePath); - return true; - } - - if (LOG.isDebugEnabled()) { - SwiftUtils.debug(LOG, "%s", SwiftUtils.fileStatsToString(statuses, "\n")); - } - - if (filecount == 1 && swiftPath.equals(statuses[0].getPath())) { - // 1 entry => simple file and it is the target - //simple file: delete it - SwiftUtils.debug(LOG, "Deleting simple file %s", absolutePath); - deleteObject(absolutePath); - return true; - } - - //>1 entry implies directory with children. Run through them, - // but first check for the recursive flag and reject it *unless it looks - // like a partitioned file (len > 0 && has children) - if (!fileStatus.isDirectory()) { - LOG.debug("Multiple child entries but entry has data: assume partitioned"); - } else if (!recursive) { - //if there are children, unless this is a recursive operation, fail immediately - throw new SwiftOperationFailedException("Directory " + fileStatus - + " is not empty: " - + SwiftUtils.fileStatsToString( - statuses, "; ")); - } - - //delete the entries. including ourselves. - for (FileStatus entryStatus : statuses) { - Path entryPath = entryStatus.getPath(); - try { - boolean deleted = deleteObject(entryPath); - if (!deleted) { - SwiftUtils.debug(LOG, "Failed to delete entry '%s'; continuing", - entryPath); - } - } catch (FileNotFoundException e) { - //the path went away -race conditions. - //do not fail, as the outcome is still OK. 
- SwiftUtils.debug(LOG, "Path '%s' is no longer present; continuing", - entryPath); - } - throttle(); - } - //now delete self - SwiftUtils.debug(LOG, "Deleting base entry %s", absolutePath); - deleteObject(absolutePath); - - return true; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeInputStream.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeInputStream.java deleted file mode 100644 index bce7325c980..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeInputStream.java +++ /dev/null @@ -1,385 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.snative; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.fs.FSExceptionMessages; -import org.apache.hadoop.fs.FSInputStream; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException; -import org.apache.hadoop.fs.swift.exceptions.SwiftException; -import org.apache.hadoop.fs.swift.http.HttpBodyContent; -import org.apache.hadoop.fs.swift.http.HttpInputStreamWithRelease; -import org.apache.hadoop.fs.swift.util.SwiftUtils; - -import java.io.EOFException; -import java.io.IOException; - -/** - * The input stream from remote Swift blobs. - * The class attempts to be buffer aware, and react to a forward seek operation - * by trying to scan ahead through the current block of data to find it. - * This accelerates some operations that do a lot of seek()/read() actions, - * including work (such as in the MR engine) that do a seek() immediately after - * an open(). - */ -class SwiftNativeInputStream extends FSInputStream { - - private static final Logger LOG = - LoggerFactory.getLogger(SwiftNativeInputStream.class); - - /** - * range requested off the server: {@value} - */ - private final long bufferSize; - - /** - * File nativeStore instance - */ - private final SwiftNativeFileSystemStore nativeStore; - - /** - * Hadoop statistics. Used to get info about number of reads, writes, etc. 
- */ - private final FileSystem.Statistics statistics; - - /** - * Data input stream - */ - private HttpInputStreamWithRelease httpStream; - - /** - * File path - */ - private final Path path; - - /** - * Current position - */ - private long pos = 0; - - /** - * Length of the file picked up at start time - */ - private long contentLength = -1; - - /** - * Why the stream is closed - */ - private String reasonClosed = "unopened"; - - /** - * Offset in the range requested last - */ - private long rangeOffset = 0; - - public SwiftNativeInputStream(SwiftNativeFileSystemStore storeNative, - FileSystem.Statistics statistics, Path path, long bufferSize) - throws IOException { - this.nativeStore = storeNative; - this.statistics = statistics; - this.path = path; - if (bufferSize <= 0) { - throw new IllegalArgumentException("Invalid buffer size"); - } - this.bufferSize = bufferSize; - //initial buffer fill - this.httpStream = storeNative.getObject(path).getInputStream(); - //fillBuffer(0); - } - - /** - * Move to a new position within the file relative to where the pointer is now. - * Always call from a synchronized clause - * @param offset offset - */ - private synchronized void incPos(int offset) { - pos += offset; - rangeOffset += offset; - SwiftUtils.trace(LOG, "Inc: pos=%d bufferOffset=%d", pos, rangeOffset); - } - - /** - * Update the start of the buffer; always call from a sync'd clause - * @param seekPos position sought. - * @param contentLength content length provided by response (may be -1) - */ - private synchronized void updateStartOfBufferPosition(long seekPos, - long contentLength) { - //reset the seek pointer - pos = seekPos; - //and put the buffer offset to 0 - rangeOffset = 0; - this.contentLength = contentLength; - SwiftUtils.trace(LOG, "Move: pos=%d; bufferOffset=%d; contentLength=%d", - pos, - rangeOffset, - contentLength); - } - - @Override - public synchronized int read() throws IOException { - verifyOpen(); - int result = -1; - try { - result = httpStream.read(); - } catch (IOException e) { - String msg = "IOException while reading " + path - + ": " +e + ", attempting to reopen."; - LOG.debug(msg, e); - if (reopenBuffer()) { - result = httpStream.read(); - } - } - if (result != -1) { - incPos(1); - } - if (statistics != null && result != -1) { - statistics.incrementBytesRead(1); - } - return result; - } - - @Override - public synchronized int read(byte[] b, int off, int len) throws IOException { - SwiftUtils.debug(LOG, "read(buffer, %d, %d)", off, len); - SwiftUtils.validateReadArgs(b, off, len); - if (len == 0) { - return 0; - } - int result = -1; - try { - verifyOpen(); - result = httpStream.read(b, off, len); - } catch (IOException e) { - //other IO problems are viewed as transient and re-attempted - LOG.info("Received IOException while reading '" + path + - "', attempting to reopen: " + e); - LOG.debug("IOE on read()" + e, e); - if (reopenBuffer()) { - result = httpStream.read(b, off, len); - } - } - if (result > 0) { - incPos(result); - if (statistics != null) { - statistics.incrementBytesRead(result); - } - } - - return result; - } - - /** - * Re-open the buffer - * @return true iff more data could be added to the buffer - * @throws IOException if not - */ - private boolean reopenBuffer() throws IOException { - innerClose("reopening buffer to trigger refresh"); - boolean success = false; - try { - fillBuffer(pos); - success = true; - } catch (EOFException eof) { - //the EOF has been reached - this.reasonClosed = "End of file"; - } - return success; - } - - /** - * close 
the stream. After this the stream is not usable -unless and until - * it is re-opened (which can happen on some of the buffer ops) - * This method is thread-safe and idempotent. - * - * @throws IOException on IO problems. - */ - @Override - public synchronized void close() throws IOException { - innerClose("closed"); - } - - private void innerClose(String reason) throws IOException { - try { - if (httpStream != null) { - reasonClosed = reason; - if (LOG.isDebugEnabled()) { - LOG.debug("Closing HTTP input stream : " + reason); - } - httpStream.close(); - } - } finally { - httpStream = null; - } - } - - /** - * Assume that the connection is not closed: throws an exception if it is - * @throws SwiftConnectionClosedException - */ - private void verifyOpen() throws SwiftConnectionClosedException { - if (httpStream == null) { - throw new SwiftConnectionClosedException(reasonClosed); - } - } - - @Override - public synchronized String toString() { - return "SwiftNativeInputStream" + - " position=" + pos - + " buffer size = " + bufferSize - + " " - + (httpStream != null ? httpStream.toString() - : (" no input stream: " + reasonClosed)); - } - - /** - * Treats any finalize() call without the input stream being closed - * as a serious problem, logging at error level - * @throws Throwable n/a - */ - @Override - protected void finalize() throws Throwable { - if (httpStream != null) { - LOG.error( - "Input stream is leaking handles by not being closed() properly: " - + httpStream.toString()); - } - } - - /** - * Read through the specified number of bytes. - * The implementation iterates a byte a time, which may seem inefficient - * compared to the read(bytes[]) method offered by input streams. - * However, if you look at the code that implements that method, it comes - * down to read() one char at a time -only here the return value is discarded. - * - *
- * This is a no-op if the stream is closed - * @param bytes number of bytes to read. - * @throws IOException IO problems - * @throws SwiftException if a read returned -1. - */ - private int chompBytes(long bytes) throws IOException { - int count = 0; - if (httpStream != null) { - int result; - for (long i = 0; i < bytes; i++) { - result = httpStream.read(); - if (result < 0) { - throw new SwiftException("Received error code while chomping input"); - } - count ++; - incPos(1); - } - } - return count; - } - - /** - * Seek to an offset. If the data is already in the buffer, move to it - * @param targetPos target position - * @throws IOException on any problem - */ - @Override - public synchronized void seek(long targetPos) throws IOException { - if (targetPos < 0) { - throw new EOFException( - FSExceptionMessages.NEGATIVE_SEEK); - } - //there's some special handling of near-local data - //as the seek can be omitted if it is in/adjacent - long offset = targetPos - pos; - if (LOG.isDebugEnabled()) { - LOG.debug("Seek to " + targetPos + "; current pos =" + pos - + "; offset="+offset); - } - if (offset == 0) { - LOG.debug("seek is no-op"); - return; - } - - if (offset < 0) { - LOG.debug("seek is backwards"); - } else if ((rangeOffset + offset < bufferSize)) { - //if the seek is in range of that requested, scan forwards - //instead of closing and re-opening a new HTTP connection - SwiftUtils.debug(LOG, - "seek is within current stream" - + "; pos= %d ; targetPos=%d; " - + "offset= %d ; bufferOffset=%d", - pos, targetPos, offset, rangeOffset); - try { - LOG.debug("chomping "); - chompBytes(offset); - } catch (IOException e) { - //this is assumed to be recoverable with a seek -or more likely to fail - LOG.debug("while chomping ",e); - } - if (targetPos - pos == 0) { - LOG.trace("chomping successful"); - return; - } - LOG.trace("chomping failed"); - } else { - if (LOG.isDebugEnabled()) { - LOG.debug("Seek is beyond buffer size of " + bufferSize); - } - } - - innerClose("seeking to " + targetPos); - fillBuffer(targetPos); - } - - /** - * Fill the buffer from the target position - * If the target position == current position, the - * read still goes ahead; this is a way of handling partial read failures - * @param targetPos target position - * @throws IOException IO problems on the read - */ - private void fillBuffer(long targetPos) throws IOException { - long length = targetPos + bufferSize; - SwiftUtils.debug(LOG, "Fetching %d bytes starting at %d", length, targetPos); - HttpBodyContent blob = nativeStore.getObject(path, targetPos, length); - httpStream = blob.getInputStream(); - updateStartOfBufferPosition(targetPos, blob.getContentLength()); - } - - @Override - public synchronized long getPos() throws IOException { - return pos; - } - - /** - * This FS doesn't explicitly support multiple data sources, so - * return false here. 
- * @param targetPos the desired target position - * @return true if a new source of the data has been set up - * as the source of future reads - * @throws IOException IO problems - */ - @Override - public boolean seekToNewSource(long targetPos) throws IOException { - return false; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeOutputStream.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeOutputStream.java deleted file mode 100644 index ac49a8a6495..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeOutputStream.java +++ /dev/null @@ -1,389 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.snative; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException; -import org.apache.hadoop.fs.swift.exceptions.SwiftException; -import org.apache.hadoop.fs.swift.exceptions.SwiftInternalStateException; -import org.apache.hadoop.fs.swift.util.SwiftUtils; - -import java.io.BufferedOutputStream; -import java.io.File; -import java.io.FileInputStream; -import java.io.FileOutputStream; -import java.io.IOException; -import java.io.OutputStream; - -/** - * Output stream, buffers data on local disk. - * Writes to Swift on the close() method, unless the - * file is significantly large that it is being written as partitions. - * In this case, the first partition is written on the first write that puts - * data over the partition, as may later writes. The close() then causes - * the final partition to be written, along with a partition manifest. 
- */ -class SwiftNativeOutputStream extends OutputStream { - public static final int ATTEMPT_LIMIT = 3; - private long filePartSize; - private static final Logger LOG = - LoggerFactory.getLogger(SwiftNativeOutputStream.class); - private Configuration conf; - private String key; - private File backupFile; - private OutputStream backupStream; - private SwiftNativeFileSystemStore nativeStore; - private boolean closed; - private int partNumber; - private long blockOffset; - private long bytesWritten; - private long bytesUploaded; - private boolean partUpload = false; - final byte[] oneByte = new byte[1]; - - /** - * Create an output stream - * @param conf configuration to use - * @param nativeStore native store to write through - * @param key the key to write - * @param partSizeKB the partition size - * @throws IOException - */ - @SuppressWarnings("IOResourceOpenedButNotSafelyClosed") - public SwiftNativeOutputStream(Configuration conf, - SwiftNativeFileSystemStore nativeStore, - String key, - long partSizeKB) throws IOException { - this.conf = conf; - this.key = key; - this.backupFile = newBackupFile(); - this.nativeStore = nativeStore; - this.backupStream = new BufferedOutputStream(new FileOutputStream(backupFile)); - this.partNumber = 1; - this.blockOffset = 0; - this.filePartSize = 1024L * partSizeKB; - } - - private File newBackupFile() throws IOException { - File dir = new File(conf.get("hadoop.tmp.dir")); - if (!dir.mkdirs() && !dir.exists()) { - throw new SwiftException("Cannot create Swift buffer directory: " + dir); - } - File result = File.createTempFile("output-", ".tmp", dir); - result.deleteOnExit(); - return result; - } - - /** - * Flush the local backing stream. - * This does not trigger a flush of data to the remote blobstore. - * @throws IOException - */ - @Override - public void flush() throws IOException { - backupStream.flush(); - } - - /** - * check that the output stream is open - * - * @throws SwiftException if it is not - */ - private synchronized void verifyOpen() throws SwiftException { - if (closed) { - throw new SwiftConnectionClosedException(); - } - } - - /** - * Close the stream. This will trigger the upload of all locally cached - * data to the remote blobstore. - * @throws IOException IO problems uploading the data. - */ - @Override - public synchronized void close() throws IOException { - if (closed) { - return; - } - - try { - closed = true; - //formally declare as closed. - backupStream.close(); - backupStream = null; - Path keypath = new Path(key); - if (partUpload) { - partUpload(true); - nativeStore.createManifestForPartUpload(keypath); - } else { - uploadOnClose(keypath); - } - } finally { - delete(backupFile); - backupFile = null; - } - assert backupStream == null: "backup stream has been reopened"; - } - - /** - * Upload a file when closed, either in one go, or, if the file is - * already partitioned, by uploading the remaining partition and a manifest. 
- * @param keypath key as a path - * @throws IOException IO Problems - */ - private void uploadOnClose(Path keypath) throws IOException { - boolean uploadSuccess = false; - int attempt = 0; - while (!uploadSuccess) { - try { - ++attempt; - bytesUploaded += uploadFileAttempt(keypath, attempt); - uploadSuccess = true; - } catch (IOException e) { - LOG.info("Upload failed " + e, e); - if (attempt > ATTEMPT_LIMIT) { - throw e; - } - } - } -} - - @SuppressWarnings("IOResourceOpenedButNotSafelyClosed") - private long uploadFileAttempt(Path keypath, int attempt) throws IOException { - long uploadLen = backupFile.length(); - SwiftUtils.debug(LOG, "Closing write of file %s;" + - " localfile=%s of length %d - attempt %d", - key, - backupFile, - uploadLen, - attempt); - - nativeStore.uploadFile(keypath, - new FileInputStream(backupFile), - uploadLen); - return uploadLen; - } - - @Override - protected void finalize() throws Throwable { - if(!closed) { - LOG.warn("stream not closed"); - } - if (backupFile != null) { - LOG.warn("Leaking backing file " + backupFile); - } - } - - private void delete(File file) { - if (file != null) { - SwiftUtils.debug(LOG, "deleting %s", file); - if (!file.delete()) { - LOG.warn("Could not delete " + file); - } - } - } - - @Override - public void write(int b) throws IOException { - //insert to a one byte array - oneByte[0] = (byte) b; - //then delegate to the array writing routine - write(oneByte, 0, 1); - } - - @Override - public synchronized void write(byte[] buffer, int offset, int len) throws - IOException { - //validate args - if (offset < 0 || len < 0 || (offset + len) > buffer.length) { - throw new IndexOutOfBoundsException("Invalid offset/length for write"); - } - //validate the output stream - verifyOpen(); - SwiftUtils.debug(LOG, " write(offset=%d, len=%d)", offset, len); - - // if the size of file is greater than the partition limit - while (blockOffset + len >= filePartSize) { - // - then partition the blob and upload as many partitions - // are needed. - //how many bytes to write for this partition. - int subWriteLen = (int) (filePartSize - blockOffset); - if (subWriteLen < 0 || subWriteLen > len) { - throw new SwiftInternalStateException("Invalid subwrite len: " - + subWriteLen - + " -buffer len: " + len); - } - writeToBackupStream(buffer, offset, subWriteLen); - //move the offset along and length down - offset += subWriteLen; - len -= subWriteLen; - //now upload the partition that has just been filled up - // (this also sets blockOffset=0) - partUpload(false); - } - //any remaining data is now written - writeToBackupStream(buffer, offset, len); - } - - /** - * Write to the backup stream. - * Guarantees: - *
- *   1. backupStream is open
- *   2. blockOffset + len < filePartSize
- * @param buffer buffer to write - * @param offset offset in buffer - * @param len length of write. - * @throws IOException backup stream write failing - */ - private void writeToBackupStream(byte[] buffer, int offset, int len) throws - IOException { - assert len >= 0 : "remainder to write is negative"; - SwiftUtils.debug(LOG," writeToBackupStream(offset=%d, len=%d)", offset, len); - if (len == 0) { - //no remainder -downgrade to no-op - return; - } - - //write the new data out to the backup stream - backupStream.write(buffer, offset, len); - //increment the counters - blockOffset += len; - bytesWritten += len; - } - - /** - * Upload a single partition. This deletes the local backing-file, - * and re-opens it to create a new one. - * @param closingUpload is this the final upload of an upload - * @throws IOException on IO problems - */ - @SuppressWarnings("IOResourceOpenedButNotSafelyClosed") - private void partUpload(boolean closingUpload) throws IOException { - if (backupStream != null) { - backupStream.close(); - } - - if (closingUpload && partUpload && backupFile.length() == 0) { - //skipping the upload if - // - it is close time - // - the final partition is 0 bytes long - // - one part has already been written - SwiftUtils.debug(LOG, "skipping upload of 0 byte final partition"); - delete(backupFile); - } else { - partUpload = true; - boolean uploadSuccess = false; - int attempt = 0; - while(!uploadSuccess) { - try { - ++attempt; - bytesUploaded += uploadFilePartAttempt(attempt); - uploadSuccess = true; - } catch (IOException e) { - LOG.info("Upload failed " + e, e); - if (attempt > ATTEMPT_LIMIT) { - throw e; - } - } - } - delete(backupFile); - partNumber++; - blockOffset = 0; - if (!closingUpload) { - //if not the final upload, create a new output stream - backupFile = newBackupFile(); - backupStream = - new BufferedOutputStream(new FileOutputStream(backupFile)); - } - } - } - - @SuppressWarnings("IOResourceOpenedButNotSafelyClosed") - private long uploadFilePartAttempt(int attempt) throws IOException { - long uploadLen = backupFile.length(); - SwiftUtils.debug(LOG, "Uploading part %d of file %s;" + - " localfile=%s of length %d - attempt %d", - partNumber, - key, - backupFile, - uploadLen, - attempt); - nativeStore.uploadFilePart(new Path(key), - partNumber, - new FileInputStream(backupFile), - uploadLen); - return uploadLen; - } - - /** - * Get the file partition size - * @return the partition size - */ - long getFilePartSize() { - return filePartSize; - } - - /** - * Query the number of partitions written - * This is intended for testing - * @return the of partitions already written to the remote FS - */ - synchronized int getPartitionsWritten() { - return partNumber - 1; - } - - /** - * Get the number of bytes written to the output stream. - * This should always be less than or equal to bytesUploaded. - * @return the number of bytes written to this stream - */ - long getBytesWritten() { - return bytesWritten; - } - - /** - * Get the number of bytes uploaded to remote Swift cluster. 
- * bytesUploaded -bytesWritten = the number of bytes left to upload - * @return the number of bytes written to the remote endpoint - */ - long getBytesUploaded() { - return bytesUploaded; - } - - @Override - public String toString() { - return "SwiftNativeOutputStream{" + - ", key='" + key + '\'' + - ", backupFile=" + backupFile + - ", closed=" + closed + - ", filePartSize=" + filePartSize + - ", partNumber=" + partNumber + - ", blockOffset=" + blockOffset + - ", partUpload=" + partUpload + - ", nativeStore=" + nativeStore + - ", bytesWritten=" + bytesWritten + - ", bytesUploaded=" + bytesUploaded + - '}'; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftObjectFileStatus.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftObjectFileStatus.java deleted file mode 100644 index ca8adc6244c..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftObjectFileStatus.java +++ /dev/null @@ -1,115 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.snative; - -import java.util.Date; - -/** - * Java mapping of Swift JSON file status. - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. 
- */ - -class SwiftObjectFileStatus { - private long bytes; - private String content_type; - private String hash; - private Date last_modified; - private String name; - private String subdir; - - SwiftObjectFileStatus() { - } - - SwiftObjectFileStatus(long bytes, String content_type, String hash, - Date last_modified, String name) { - this.bytes = bytes; - this.content_type = content_type; - this.hash = hash; - this.last_modified = last_modified; - this.name = name; - } - - public long getBytes() { - return bytes; - } - - public void setBytes(long bytes) { - this.bytes = bytes; - } - - public String getContent_type() { - return content_type; - } - - public void setContent_type(String content_type) { - this.content_type = content_type; - } - - public String getHash() { - return hash; - } - - public void setHash(String hash) { - this.hash = hash; - } - - public Date getLast_modified() { - return last_modified; - } - - public void setLast_modified(Date last_modified) { - this.last_modified = last_modified; - } - - public String getName() { - return pathToRootPath(name); - } - - public void setName(String name) { - this.name = name; - } - - public String getSubdir() { - return pathToRootPath(subdir); - } - - public void setSubdir(String subdir) { - this.subdir = subdir; - } - - /** - * If path doesn't starts with '/' - * method will concat '/' - * - * @param path specified path - * @return root path string - */ - private String pathToRootPath(String path) { - if (path == null) { - return null; - } - - if (path.startsWith("/")) { - return path; - } - - return "/".concat(path); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/Duration.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/Duration.java deleted file mode 100644 index 3071f946824..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/Duration.java +++ /dev/null @@ -1,57 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift.util; - -public class Duration { - - private final long started; - private long finished; - - public Duration() { - started = time(); - finished = started; - } - - private long time() { - return System.currentTimeMillis(); - } - - public void finished() { - finished = time(); - } - - public String getDurationString() { - return humanTime(value()); - } - - public static String humanTime(long time) { - long seconds = (time / 1000); - long minutes = (seconds / 60); - return String.format("%d:%02d:%03d", minutes, seconds % 60, time % 1000); - } - - @Override - public String toString() { - return getDurationString(); - } - - public long value() { - return finished -started; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStats.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStats.java deleted file mode 100644 index 734cf8b6dc1..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStats.java +++ /dev/null @@ -1,154 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.util; - -/** - * Build ongoing statistics from duration data - */ -public class DurationStats { - - final String operation; - int n; - long sum; - long min; - long max; - double mean, m2; - - /** - * Construct statistics for a given operation. - * @param operation operation - */ - public DurationStats(String operation) { - this.operation = operation; - reset(); - } - - /** - * construct from another stats entry; - * all value are copied. 
- * @param that the source statistics - */ - public DurationStats(DurationStats that) { - operation = that.operation; - n = that.n; - sum = that.sum; - min = that.min; - max = that.max; - mean = that.mean; - m2 = that.m2; - } - - /** - * Add a duration - * @param duration the new duration - */ - public void add(Duration duration) { - add(duration.value()); - } - - /** - * Add a number - * @param x the number - */ - public void add(long x) { - n++; - sum += x; - double delta = x - mean; - mean += delta / n; - m2 += delta * (x - mean); - if (x < min) { - min = x; - } - if (x > max) { - max = x; - } - } - - /** - * Reset the data - */ - public void reset() { - n = 0; - sum = 0; - sum = 0; - min = 10000000; - max = 0; - mean = 0; - m2 = 0; - } - - /** - * Get the number of entries sampled - * @return the number of durations added - */ - public int getCount() { - return n; - } - - /** - * Get the sum of all durations - * @return all the durations - */ - public long getSum() { - return sum; - } - - /** - * Get the arithmetic mean of the aggregate statistics - * @return the arithmetic mean - */ - public double getArithmeticMean() { - return mean; - } - - /** - * Variance, sigma^2 - * @return variance, or, if no samples are there, 0. - */ - public double getVariance() { - return n > 0 ? (m2 / (n - 1)) : 0; - } - - /** - * Get the std deviation, sigma - * @return the stddev, 0 may mean there are no samples. - */ - public double getDeviation() { - double variance = getVariance(); - return (variance > 0) ? Math.sqrt(variance) : 0; - } - - /** - * Covert to a useful string - * @return a human readable summary - */ - @Override - public String toString() { - return String.format( - "%s count=%d total=%.3fs mean=%.3fs stddev=%.3fs min=%.3fs max=%.3fs", - operation, - n, - sum / 1000.0, - mean / 1000.0, - getDeviation() / 1000000.0, - min / 1000.0, - max / 1000.0); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStatsTable.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStatsTable.java deleted file mode 100644 index 58f8f0b641d..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/DurationStatsTable.java +++ /dev/null @@ -1,77 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.fs.swift.util; - -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -/** - * Build a duration stats table to which you can add statistics. 
- * Designed to be multithreaded - */ -public class DurationStatsTable { - - private Map statsTable - = new HashMap(6); - - /** - * Add an operation - * @param operation operation name - * @param duration duration - */ - public void add(String operation, Duration duration, boolean success) { - DurationStats durationStats; - String key = operation; - if (!success) { - key += "-FAIL"; - } - synchronized (this) { - durationStats = statsTable.get(key); - if (durationStats == null) { - durationStats = new DurationStats(key); - statsTable.put(key, durationStats); - } - } - synchronized (durationStats) { - durationStats.add(duration); - } - } - - /** - * Get the current duration statistics - * @return a snapshot of the statistics - */ - public synchronized List getDurationStatistics() { - List results = new ArrayList(statsTable.size()); - for (DurationStats stat: statsTable.values()) { - results.add(new DurationStats(stat)); - } - return results; - } - - /** - * reset the values of the statistics. This doesn't delete them, merely zeroes them. - */ - public synchronized void reset() { - for (DurationStats stat : statsTable.values()) { - stat.reset(); - } - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/HttpResponseUtils.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/HttpResponseUtils.java deleted file mode 100644 index 1cc340d83d9..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/HttpResponseUtils.java +++ /dev/null @@ -1,121 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.util; - -import java.io.ByteArrayOutputStream; -import java.io.IOException; -import java.io.InputStream; - -import org.apache.http.Header; -import org.apache.http.HttpResponse; -import org.apache.http.util.EncodingUtils; - -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.HEADER_CONTENT_LENGTH; - -/** - * Utility class for parsing HttpResponse. This class is implemented like - * {@code org.apache.commons.httpclient.HttpMethodBase.java} in httpclient 3.x. - */ -public abstract class HttpResponseUtils { - - /** - * Returns the response body of the HTTPResponse, if any, as an array of bytes. - * If response body is not available or cannot be read, returns null - * - * Note: This will cause the entire response body to be buffered in memory. A - * malicious server may easily exhaust all the VM memory. It is strongly - * recommended, to use getResponseAsStream if the content length of the - * response is unknown or reasonably large. 
- * - * @param resp HttpResponse - * @return The response body - * @throws IOException If an I/O (transport) problem occurs while obtaining - * the response body. - */ - public static byte[] getResponseBody(HttpResponse resp) throws IOException { - try(InputStream instream = resp.getEntity().getContent()) { - if (instream != null) { - long contentLength = resp.getEntity().getContentLength(); - if (contentLength > Integer.MAX_VALUE) { - //guard integer cast from overflow - throw new IOException("Content too large to be buffered: " - + contentLength +" bytes"); - } - ByteArrayOutputStream outstream = new ByteArrayOutputStream( - contentLength > 0 ? (int) contentLength : 4*1024); - byte[] buffer = new byte[4096]; - int len; - while ((len = instream.read(buffer)) > 0) { - outstream.write(buffer, 0, len); - } - outstream.close(); - return outstream.toByteArray(); - } - } - return null; - } - - /** - * Returns the response body of the HTTPResponse, if any, as a {@link String}. - * If response body is not available or cannot be read, returns null - * The string conversion on the data is done using UTF-8. - * - * Note: This will cause the entire response body to be buffered in memory. A - * malicious server may easily exhaust all the VM memory. It is strongly - * recommended, to use getResponseAsStream if the content length of the - * response is unknown or reasonably large. - * - * @param resp HttpResponse - * @return The response body. - * @throws IOException If an I/O (transport) problem occurs while obtaining - * the response body. - */ - public static String getResponseBodyAsString(HttpResponse resp) - throws IOException { - byte[] rawdata = getResponseBody(resp); - if (rawdata != null) { - return EncodingUtils.getString(rawdata, "UTF-8"); - } else { - return null; - } - } - - /** - * Return the length (in bytes) of the response body, as specified in a - * Content-Length header. - * - *
- * Return -1 when the content-length is unknown. - *
- * - * @param resp HttpResponse - * @return content length, if Content-Length header is available. - * 0 indicates that the request has no body. - * If Content-Length header is not present, the method - * returns -1. - */ - public static long getContentLength(HttpResponse resp) { - Header header = resp.getFirstHeader(HEADER_CONTENT_LENGTH); - if (header == null) { - return -1; - } else { - return Long.parseLong(header.getValue()); - } - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/JSONUtil.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/JSONUtil.java deleted file mode 100644 index fee7e7f5697..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/JSONUtil.java +++ /dev/null @@ -1,124 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.util; - -import com.fasterxml.jackson.core.JsonGenerationException; -import com.fasterxml.jackson.core.type.TypeReference; -import com.fasterxml.jackson.databind.JsonMappingException; -import com.fasterxml.jackson.databind.ObjectMapper; -import com.fasterxml.jackson.databind.type.CollectionType; -import org.apache.hadoop.fs.swift.exceptions.SwiftJsonMarshallingException; - -import java.io.IOException; -import java.io.StringWriter; -import java.io.Writer; - - -public class JSONUtil { - private static ObjectMapper jsonMapper = new ObjectMapper(); - - /** - * Private constructor. - */ - private JSONUtil() { - } - - /** - * Converting object to JSON string. If errors appears throw - * MeshinException runtime exception. - * - * @param object The object to convert. - * @return The JSON string representation. - * @throws IOException IO issues - * @throws SwiftJsonMarshallingException failure to generate JSON - */ - public static String toJSON(Object object) throws - IOException { - Writer json = new StringWriter(); - try { - jsonMapper.writeValue(json, object); - return json.toString(); - } catch (JsonGenerationException | JsonMappingException e) { - throw new SwiftJsonMarshallingException(e.toString(), e); - } - } - - /** - * Convert string representation to object. If errors appears throw - * Exception runtime exception. - * - * @param value The JSON string. - * @param klazz The class to convert. - * @return The Object of the given class. 
- */ - public static T toObject(String value, Class klazz) throws - IOException { - try { - return jsonMapper.readValue(value, klazz); - } catch (JsonGenerationException e) { - throw new SwiftJsonMarshallingException(e.toString() - + " source: " + value, - e); - } catch (JsonMappingException e) { - throw new SwiftJsonMarshallingException(e.toString() - + " source: " + value, - e); - } - } - - /** - * @param value json string - * @param typeReference class type reference - * @param type - * @return deserialized T object - */ - @SuppressWarnings("unchecked") - public static T toObject(String value, - final TypeReference typeReference) - throws IOException { - try { - return (T)jsonMapper.readValue(value, typeReference); - } catch (JsonGenerationException | JsonMappingException e) { - throw new SwiftJsonMarshallingException("Error generating response", e); - } - } - - /** - * @param value json string - * @param collectionType class describing how to deserialize collection of objects - * @param type - * @return deserialized T object - */ - @SuppressWarnings("unchecked") - public static T toObject(String value, - final CollectionType collectionType) - throws IOException { - try { - return (T)jsonMapper.readValue(value, collectionType); - } catch (JsonGenerationException | JsonMappingException e) { - throw new SwiftJsonMarshallingException(e.toString() - + " source: " + value, - e); - } - } - - public static ObjectMapper getJsonMapper() { - return jsonMapper; - } -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftObjectPath.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftObjectPath.java deleted file mode 100644 index 791509a9e03..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftObjectPath.java +++ /dev/null @@ -1,187 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift.util; - -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException; -import org.apache.hadoop.fs.swift.http.RestClientBindings; - -import java.net.URI; -import java.util.regex.Pattern; - -/** - * Swift hierarchy mapping of (container, path) - */ -public final class SwiftObjectPath { - private static final Pattern PATH_PART_PATTERN = Pattern.compile(".*/AUTH_\\w*/"); - - /** - * Swift container - */ - private final String container; - - /** - * swift object - */ - private final String object; - - private final String uriPath; - - /** - * Build an instance from a (host, object) pair - * - * @param container container name - * @param object object ref underneath the container - */ - public SwiftObjectPath(String container, String object) { - - if (object == null) { - throw new IllegalArgumentException("object name can't be null"); - } - - this.container = container; - this.object = URI.create(object).getPath(); - uriPath = buildUriPath(); - } - - public String getContainer() { - return container; - } - - public String getObject() { - return object; - } - - @Override - public boolean equals(Object o) { - if (this == o) return true; - if (!(o instanceof SwiftObjectPath)) return false; - final SwiftObjectPath that = (SwiftObjectPath) o; - return this.toUriPath().equals(that.toUriPath()); - } - - @Override - public int hashCode() { - int result = container.hashCode(); - result = 31 * result + object.hashCode(); - return result; - } - - private String buildUriPath() { - return SwiftUtils.joinPaths(container, object); - } - - public String toUriPath() { - return uriPath; - } - - @Override - public String toString() { - return toUriPath(); - } - - /** - * Test for the object matching a path, ignoring the container - * value. - * - * @param path path string - * @return true iff the object's name matches the path - */ - public boolean objectMatches(String path) { - return object.equals(path); - } - - - /** - * Query to see if the possibleChild object is a child path of this. - * object. - * - * The test is done by probing for the path of the this object being - * at the start of the second -with a trailing slash, and both - * containers being equal - * - * @param possibleChild possible child dir - * @return true iff the possibleChild is under this object - */ - public boolean isEqualToOrParentOf(SwiftObjectPath possibleChild) { - String origPath = toUriPath(); - String path = origPath; - if (!path.endsWith("/")) { - path = path + "/"; - } - String childPath = possibleChild.toUriPath(); - return childPath.equals(origPath) || childPath.startsWith(path); - } - - /** - * Create a path tuple of (container, path), where the container is - * chosen from the host of the URI. - * - * @param uri uri to start from - * @param path path underneath - * @return a new instance. - * @throws SwiftConfigurationException if the URI host doesn't parse into - * container.service - */ - public static SwiftObjectPath fromPath(URI uri, - Path path) - throws SwiftConfigurationException { - return fromPath(uri, path, false); - } - - /** - * Create a path tuple of (container, path), where the container is - * chosen from the host of the URI. - * A trailing slash can be added to the path. This is the point where - * these /-es need to be appended, because when you construct a {@link Path} - * instance, {@link Path#normalizePath(String, String)} is called - * -which strips off any trailing slash. 
- * - * @param uri uri to start from - * @param path path underneath - * @param addTrailingSlash should a trailing slash be added if there isn't one. - * @return a new instance. - * @throws SwiftConfigurationException if the URI host doesn't parse into - * container.service - */ - public static SwiftObjectPath fromPath(URI uri, - Path path, - boolean addTrailingSlash) - throws SwiftConfigurationException { - - String url = - path.toUri().getPath().replaceAll(PATH_PART_PATTERN.pattern(), ""); - //add a trailing slash if needed - if (addTrailingSlash && !url.endsWith("/")) { - url += "/"; - } - - String container = uri.getHost(); - if (container == null) { - //no container, not good: replace with "" - container = ""; - } else if (container.contains(".")) { - //its a container.service URI. Strip the container - container = RestClientBindings.extractContainerName(container); - } - return new SwiftObjectPath(container, url); - } - - -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java deleted file mode 100644 index 2e3abce251a..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java +++ /dev/null @@ -1,547 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift.util; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FSDataInputStream; -import org.apache.hadoop.fs.FSDataOutputStream; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException; -import org.junit.internal.AssumptionViolatedException; - -import java.io.FileNotFoundException; -import java.io.IOException; -import java.net.URI; -import java.net.URISyntaxException; -import java.util.Properties; - -/** - * Utilities used across test cases - */ -public class SwiftTestUtils extends org.junit.Assert { - - private static final Logger LOG = - LoggerFactory.getLogger(SwiftTestUtils.class); - - public static final String TEST_FS_SWIFT = "test.fs.swift.name"; - public static final String IO_FILE_BUFFER_SIZE = "io.file.buffer.size"; - - /** - * Get the test URI - * @param conf configuration - * @throws SwiftConfigurationException missing parameter or bad URI - */ - public static URI getServiceURI(Configuration conf) throws - SwiftConfigurationException { - String instance = conf.get(TEST_FS_SWIFT); - if (instance == null) { - throw new SwiftConfigurationException( - "Missing configuration entry " + TEST_FS_SWIFT); - } - try { - return new URI(instance); - } catch (URISyntaxException e) { - throw new SwiftConfigurationException("Bad URI: " + instance); - } - } - - public static boolean hasServiceURI(Configuration conf) { - String instance = conf.get(TEST_FS_SWIFT); - return instance != null; - } - - /** - * Assert that a property in the property set matches the expected value - * @param props property set - * @param key property name - * @param expected expected value. If null, the property must not be in the set - */ - public static void assertPropertyEquals(Properties props, - String key, - String expected) { - String val = props.getProperty(key); - if (expected == null) { - assertNull("Non null property " + key + " = " + val, val); - } else { - assertEquals("property " + key + " = " + val, - expected, - val); - } - } - - /** - * - * Write a file and read it in, validating the result. Optional flags control - * whether file overwrite operations should be enabled, and whether the - * file should be deleted afterwards. - * - * If there is a mismatch between what was written and what was expected, - * a small range of bytes either side of the first error are logged to aid - * diagnosing what problem occurred -whether it was a previous file - * or a corrupting of the current file. This assumes that two - * sequential runs to the same path use datasets with different character - * moduli. - * - * @param fs filesystem - * @param path path to write to - * @param len length of data - * @param overwrite should the create option allow overwrites? - * @param delete should the file be deleted afterwards? -with a verification - * that it worked. Deletion is not attempted if an assertion has failed - * earlier -it is not in a finally{} block. 
- * @throws IOException IO problems - */ - public static void writeAndRead(FileSystem fs, - Path path, - byte[] src, - int len, - int blocksize, - boolean overwrite, - boolean delete) throws IOException { - fs.mkdirs(path.getParent()); - - writeDataset(fs, path, src, len, blocksize, overwrite); - - byte[] dest = readDataset(fs, path, len); - - compareByteArrays(src, dest, len); - - if (delete) { - boolean deleted = fs.delete(path, false); - assertTrue("Deleted", deleted); - assertPathDoesNotExist(fs, "Cleanup failed", path); - } - } - - /** - * Write a file. - * Optional flags control - * whether file overwrite operations should be enabled - * @param fs filesystem - * @param path path to write to - * @param len length of data - * @param overwrite should the create option allow overwrites? - * @throws IOException IO problems - */ - public static void writeDataset(FileSystem fs, - Path path, - byte[] src, - int len, - int blocksize, - boolean overwrite) throws IOException { - assertTrue( - "Not enough data in source array to write " + len + " bytes", - src.length >= len); - FSDataOutputStream out = fs.create(path, - overwrite, - fs.getConf() - .getInt(IO_FILE_BUFFER_SIZE, - 4096), - (short) 1, - blocksize); - out.write(src, 0, len); - out.close(); - assertFileHasLength(fs, path, len); - } - - /** - * Read the file and convert to a byte dataset - * @param fs filesystem - * @param path path to read from - * @param len length of data to read - * @return the bytes - * @throws IOException IO problems - */ - public static byte[] readDataset(FileSystem fs, Path path, int len) - throws IOException { - FSDataInputStream in = fs.open(path); - byte[] dest = new byte[len]; - try { - in.readFully(0, dest); - } finally { - in.close(); - } - return dest; - } - - /** - * Assert that the array src[0..len] and dest[] are equal - * @param src source data - * @param dest actual - * @param len length of bytes to compare - */ - public static void compareByteArrays(byte[] src, - byte[] dest, - int len) { - assertEquals("Number of bytes read != number written", - len, dest.length); - int errors = 0; - int first_error_byte = -1; - for (int i = 0; i < len; i++) { - if (src[i] != dest[i]) { - if (errors == 0) { - first_error_byte = i; - } - errors++; - } - } - - if (errors > 0) { - String message = String.format(" %d errors in file of length %d", - errors, len); - LOG.warn(message); - // the range either side of the first error to print - // this is a purely arbitrary number, to aid user debugging - final int overlap = 10; - for (int i = Math.max(0, first_error_byte - overlap); - i < Math.min(first_error_byte + overlap, len); - i++) { - byte actual = dest[i]; - byte expected = src[i]; - String letter = toChar(actual); - String line = String.format("[%04d] %2x %s%n", i, actual, letter); - if (expected != actual) { - line = String.format("[%04d] %2x %s -expected %2x %s%n", - i, - actual, - letter, - expected, - toChar(expected)); - } - LOG.warn(line); - } - fail(message); - } - } - - /** - * Convert a byte to a character for printing. 
If the - * byte value is < 32 -and hence unprintable- the byte is - * returned as a two digit hex value - * @param b byte - * @return the printable character string - */ - public static String toChar(byte b) { - if (b >= 0x20) { - return Character.toString((char) b); - } else { - return String.format("%02x", b); - } - } - - public static String toChar(byte[] buffer) { - StringBuilder builder = new StringBuilder(buffer.length); - for (byte b : buffer) { - builder.append(toChar(b)); - } - return builder.toString(); - } - - public static byte[] toAsciiByteArray(String s) { - char[] chars = s.toCharArray(); - int len = chars.length; - byte[] buffer = new byte[len]; - for (int i = 0; i < len; i++) { - buffer[i] = (byte) (chars[i] & 0xff); - } - return buffer; - } - - public static void cleanupInTeardown(FileSystem fileSystem, - String cleanupPath) { - cleanup("TEARDOWN", fileSystem, cleanupPath); - } - - public static void cleanup(String action, - FileSystem fileSystem, - String cleanupPath) { - noteAction(action); - try { - if (fileSystem != null) { - fileSystem.delete(fileSystem.makeQualified(new Path(cleanupPath)), - true); - } - } catch (Exception e) { - LOG.error("Error deleting in "+ action + " - " + cleanupPath + ": " + e, e); - } - } - - public static void noteAction(String action) { - if (LOG.isDebugEnabled()) { - LOG.debug("============== "+ action +" ============="); - } - } - - /** - * downgrade a failure to a message and a warning, then an - * exception for the Junit test runner to mark as failed - * @param message text message - * @param failure what failed - * @throws AssumptionViolatedException always - */ - public static void downgrade(String message, Throwable failure) { - LOG.warn("Downgrading test " + message, failure); - AssumptionViolatedException ave = - new AssumptionViolatedException(failure, null); - throw ave; - } - - /** - * report an overridden test as unsupported - * @param message message to use in the text - * @throws AssumptionViolatedException always - */ - public static void unsupported(String message) { - throw new AssumptionViolatedException(message); - } - - /** - * report a test has been skipped for some reason - * @param message message to use in the text - * @throws AssumptionViolatedException always - */ - public static void skip(String message) { - throw new AssumptionViolatedException(message); - } - - - /** - * Make an assertion about the length of a file - * @param fs filesystem - * @param path path of the file - * @param expected expected length - * @throws IOException on File IO problems - */ - public static void assertFileHasLength(FileSystem fs, Path path, - int expected) throws IOException { - FileStatus status = fs.getFileStatus(path); - assertEquals( - "Wrong file length of file " + path + " status: " + status, - expected, - status.getLen()); - } - - /** - * Assert that a path refers to a directory - * @param fs filesystem - * @param path path of the directory - * @throws IOException on File IO problems - */ - public static void assertIsDirectory(FileSystem fs, - Path path) throws IOException { - FileStatus fileStatus = fs.getFileStatus(path); - assertIsDirectory(fileStatus); - } - - /** - * Assert that a path refers to a directory - * @param fileStatus stats to check - */ - public static void assertIsDirectory(FileStatus fileStatus) { - assertTrue("Should be a dir -but isn't: " + fileStatus, - fileStatus.isDirectory()); - } - - /** - * Write the text to a file, returning the converted byte array - * for use in validating the round trip - * 
@param fs filesystem - * @param path path of file - * @param text text to write - * @param overwrite should the operation overwrite any existing file? - * @return the read bytes - * @throws IOException on IO problems - */ - public static byte[] writeTextFile(FileSystem fs, - Path path, - String text, - boolean overwrite) throws IOException { - FSDataOutputStream stream = fs.create(path, overwrite); - byte[] bytes = new byte[0]; - if (text != null) { - bytes = toAsciiByteArray(text); - stream.write(bytes); - } - stream.close(); - return bytes; - } - - /** - * Touch a file: fails if it is already there - * @param fs filesystem - * @param path path - * @throws IOException IO problems - */ - public static void touch(FileSystem fs, - Path path) throws IOException { - fs.delete(path, true); - writeTextFile(fs, path, null, false); - } - - public static void assertDeleted(FileSystem fs, - Path file, - boolean recursive) throws IOException { - assertPathExists(fs, "about to be deleted file", file); - boolean deleted = fs.delete(file, recursive); - String dir = ls(fs, file.getParent()); - assertTrue("Delete failed on " + file + ": " + dir, deleted); - assertPathDoesNotExist(fs, "Deleted file", file); - } - - /** - * Read in "length" bytes, convert to an ascii string - * @param fs filesystem - * @param path path to read - * @param length #of bytes to read. - * @return the bytes read and converted to a string - * @throws IOException - */ - public static String readBytesToString(FileSystem fs, - Path path, - int length) throws IOException { - FSDataInputStream in = fs.open(path); - try { - byte[] buf = new byte[length]; - in.readFully(0, buf); - return toChar(buf); - } finally { - in.close(); - } - } - - public static String getDefaultWorkingDirectory() { - return "/user/" + System.getProperty("user.name"); - } - - public static String ls(FileSystem fileSystem, Path path) throws IOException { - return SwiftUtils.ls(fileSystem, path); - } - - public static String dumpStats(String pathname, FileStatus[] stats) { - return pathname + SwiftUtils.fileStatsToString(stats,"\n"); - } - - /** - /** - * Assert that a file exists and whose {@link FileStatus} entry - * declares that this is a file and not a symlink or directory. 
- * @param fileSystem filesystem to resolve path against - * @param filename name of the file - * @throws IOException IO problems during file operations - */ - public static void assertIsFile(FileSystem fileSystem, Path filename) throws - IOException { - assertPathExists(fileSystem, "Expected file", filename); - FileStatus status = fileSystem.getFileStatus(filename); - String fileInfo = filename + " " + status; - assertFalse("File claims to be a directory " + fileInfo, - status.isDirectory()); -/* disabled for Hadoop v1 compatibility - assertFalse("File claims to be a symlink " + fileInfo, - status.isSymlink()); -*/ - } - - /** - * Create a dataset for use in the tests; all data is in the range - * base to (base+modulo-1) inclusive - * @param len length of data - * @param base base of the data - * @param modulo the modulo - * @return the newly generated dataset - */ - public static byte[] dataset(int len, int base, int modulo) { - byte[] dataset = new byte[len]; - for (int i = 0; i < len; i++) { - dataset[i] = (byte) (base + (i % modulo)); - } - return dataset; - } - - /** - * Assert that a path exists -but make no assertions as to the - * type of that entry - * - * @param fileSystem filesystem to examine - * @param message message to include in the assertion failure message - * @param path path in the filesystem - * @throws IOException IO problems - */ - public static void assertPathExists(FileSystem fileSystem, String message, - Path path) throws IOException { - try { - fileSystem.getFileStatus(path); - } catch (FileNotFoundException e) { - //failure, report it - throw (IOException)new FileNotFoundException(message + ": not found " - + path + " in " + path.getParent() + ": " + e + " -- " - + ls(fileSystem, path.getParent())).initCause(e); - } - } - - /** - * Assert that a path does not exist - * - * @param fileSystem filesystem to examine - * @param message message to include in the assertion failure message - * @param path path in the filesystem - * @throws IOException IO problems - */ - public static void assertPathDoesNotExist(FileSystem fileSystem, - String message, - Path path) throws IOException { - try { - FileStatus status = fileSystem.getFileStatus(path); - fail(message + ": unexpectedly found " + path + " as " + status); - } catch (FileNotFoundException expected) { - //this is expected - - } - } - - - /** - * Assert that a FileSystem.listStatus on a dir finds the subdir/child entry - * @param fs filesystem - * @param dir directory to scan - * @param subdir full path to look for - * @throws IOException IO problems - */ - public static void assertListStatusFinds(FileSystem fs, - Path dir, - Path subdir) throws IOException { - FileStatus[] stats = fs.listStatus(dir); - boolean found = false; - StringBuilder builder = new StringBuilder(); - for (FileStatus stat : stats) { - builder.append(stat.toString()).append('\n'); - if (stat.getPath().equals(subdir)) { - found = true; - } - } - assertTrue("Path " + subdir - + " not found in directory " + dir + ":" + builder, - found); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftUtils.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftUtils.java deleted file mode 100644 index f218a80595a..00000000000 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftUtils.java +++ /dev/null @@ -1,216 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. 
See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.util; - -import org.slf4j.Logger; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.Path; - -import java.io.FileNotFoundException; -import java.io.IOException; - -/** - * Various utility classes for SwiftFS support - */ -public final class SwiftUtils { - - public static final String READ = "read(buffer, offset, length)"; - - /** - * Join two (non null) paths, inserting a forward slash between them - * if needed - * - * @param path1 first path - * @param path2 second path - * @return the combined path - */ - public static String joinPaths(String path1, String path2) { - StringBuilder result = - new StringBuilder(path1.length() + path2.length() + 1); - result.append(path1); - boolean insertSlash = true; - if (path1.endsWith("/")) { - insertSlash = false; - } else if (path2.startsWith("/")) { - insertSlash = false; - } - if (insertSlash) { - result.append("/"); - } - result.append(path2); - return result.toString(); - } - - /** - * This test contains the is-directory logic for Swift, so if - * changed there is only one place for it. - * - * @param fileStatus status to examine - * @return true if we consider this status to be representative of a - * directory. - */ - public static boolean isDirectory(FileStatus fileStatus) { - return fileStatus.isDirectory() || isFilePretendingToBeDirectory(fileStatus); - } - - /** - * Test for the entry being a file that is treated as if it is a - * directory - * - * @param fileStatus status - * @return true if it meets the rules for being a directory - */ - public static boolean isFilePretendingToBeDirectory(FileStatus fileStatus) { - return fileStatus.getLen() == 0; - } - - /** - * Predicate: Is a swift object referring to the root directory? - * @param swiftObject object to probe - * @return true iff the object refers to the root - */ - public static boolean isRootDir(SwiftObjectPath swiftObject) { - return swiftObject.objectMatches("") || swiftObject.objectMatches("/"); - } - - /** - * Sprintf() to the log iff the log is at debug level. If the log - * is not at debug level, the printf operation is skipped, so - * no time is spent generating the string. - * @param log log to use - * @param text text message - * @param args args arguments to the print statement - */ - public static void debug(Logger log, String text, Object... args) { - if (log.isDebugEnabled()) { - log.debug(String.format(text, args)); - } - } - - /** - * Log an exception (in text and trace) iff the log is at debug - * @param log Log to use - * @param text text message - * @param ex exception - */ - public static void debugEx(Logger log, String text, Exception ex) { - if (log.isDebugEnabled()) { - log.debug(text + ex, ex); - } - } - - /** - * Sprintf() to the log iff the log is at trace level. 
If the log - * is not at trace level, the printf operation is skipped, so - * no time is spent generating the string. - * @param log log to use - * @param text text message - * @param args args arguments to the print statement - */ - public static void trace(Logger log, String text, Object... args) { - if (log.isTraceEnabled()) { - log.trace(String.format(text, args)); - } - } - - /** - * Given a partition number, calculate the partition value. - * This is used in the SwiftNativeOutputStream, and is placed - * here for tests to be able to calculate the filename of - * a partition. - * @param partNumber part number - * @return a string to use as the filename - */ - public static String partitionFilenameFromNumber(int partNumber) { - return String.format("%06d", partNumber); - } - - /** - * List a a path to string - * @param fileSystem filesystem - * @param path directory - * @return a listing of the filestatuses of elements in the directory, one - * to a line, preceded by the full path of the directory - * @throws IOException connectivity problems - */ - public static String ls(FileSystem fileSystem, Path path) throws - IOException { - if (path == null) { - //surfaces when someone calls getParent() on something at the top of the path - return "/"; - } - FileStatus[] stats; - String pathtext = "ls " + path; - try { - stats = fileSystem.listStatus(path); - } catch (FileNotFoundException e) { - return pathtext + " -file not found"; - } catch (IOException e) { - return pathtext + " -failed: " + e; - } - return pathtext + fileStatsToString(stats, "\n"); - } - - /** - * Take an array of filestatus and convert to a string (prefixed w/ a [01] counter - * @param stats array of stats - * @param separator separator after every entry - * @return a stringified set - */ - public static String fileStatsToString(FileStatus[] stats, String separator) { - StringBuilder buf = new StringBuilder(stats.length * 128); - for (int i = 0; i < stats.length; i++) { - buf.append(String.format("[%02d] %s", i, stats[i])).append(separator); - } - return buf.toString(); - } - - /** - * Verify that the basic args to a read operation are valid; - * throws an exception if not -with meaningful text including - * @param buffer destination buffer - * @param off offset - * @param len number of bytes to read - * @throws NullPointerException null buffer - * @throws IndexOutOfBoundsException on any invalid range. 
- */ - public static void validateReadArgs(byte[] buffer, int off, int len) { - if (buffer == null) { - throw new NullPointerException("Null byte array in"+ READ); - } - if (off < 0 ) { - throw new IndexOutOfBoundsException("Negative buffer offset " - + off - + " in " + READ); - } - if (len < 0 ) { - throw new IndexOutOfBoundsException("Negative read length " - + len - + " in " + READ); - } - if (off > buffer.length) { - throw new IndexOutOfBoundsException("Buffer offset of " - + off - + "beyond buffer size of " - + buffer.length - + " in " + READ); - } - } -} diff --git a/hadoop-tools/hadoop-openstack/src/site/markdown/index.md b/hadoop-tools/hadoop-openstack/src/site/markdown/index.md deleted file mode 100644 index 1815f60c613..00000000000 --- a/hadoop-tools/hadoop-openstack/src/site/markdown/index.md +++ /dev/null @@ -1,549 +0,0 @@ - - -* [Hadoop OpenStack Support: Swift Object Store](#Hadoop_OpenStack_Support:_Swift_Object_Store) - * [Introduction](#Introduction) - * [Features](#Features) - * [Using the Hadoop Swift Filesystem Client](#Using_the_Hadoop_Swift_Filesystem_Client) - * [Concepts: services and containers](#Concepts:_services_and_containers) - * [Containers and Objects](#Containers_and_Objects) - * [Eventual Consistency](#Eventual_Consistency) - * [Non-atomic "directory" operations.](#Non-atomic_directory_operations.) - * [Working with Swift Object Stores in Hadoop](#Working_with_Swift_Object_Stores_in_Hadoop) - * [Swift Filesystem URIs](#Swift_Filesystem_URIs) - * [Installing](#Installing) - * [Configuring](#Configuring) - * [Example: Rackspace US, in-cluster access using API key](#Example:_Rackspace_US_in-cluster_access_using_API_key) - * [Example: Rackspace UK: remote access with password authentication](#Example:_Rackspace_UK:_remote_access_with_password_authentication) - * [Example: HP cloud service definition](#Example:_HP_cloud_service_definition) - * [General Swift Filesystem configuration options](#General_Swift_Filesystem_configuration_options) - * [Blocksize fs.swift.blocksize](#Blocksize_fs.swift.blocksize) - * [Partition size fs.swift.partsize](#Partition_size_fs.swift.partsize) - * [Request size fs.swift.requestsize](#Request_size_fs.swift.requestsize) - * [Connection timeout fs.swift.connect.timeout](#Connection_timeout_fs.swift.connect.timeout) - * [Connection timeout fs.swift.socket.timeout](#Connection_timeout_fs.swift.socket.timeout) - * [Connection Retry Count fs.swift.connect.retry.count](#Connection_Retry_Count_fs.swift.connect.retry.count) - * [Connection Throttle Delay fs.swift.connect.throttle.delay](#Connection_Throttle_Delay_fs.swift.connect.throttle.delay) - * [HTTP Proxy](#HTTP_Proxy) - * [Troubleshooting](#Troubleshooting) - * [ClassNotFoundException](#ClassNotFoundException) - * [Failure to Authenticate](#Failure_to_Authenticate) - * [Timeout connecting to the Swift Service](#Timeout_connecting_to_the_Swift_Service) - * [Warnings](#Warnings) - * [Limits](#Limits) - * [Testing the hadoop-openstack module](#Testing_the_hadoop-openstack_module) - -Hadoop OpenStack Support: Swift Object Store -============================================ - -Introduction ------------- - -[OpenStack](http://www.openstack.org/) is an open source cloud infrastructure which can be accessed from multiple public IaaS providers, and deployed privately. It offers infrastructure services such as VM hosting (Nova), authentication (Keystone) and storage of binary objects (Swift). 
- -This module enables Apache Hadoop applications -including MapReduce jobs, read and write data to and from instances of the [OpenStack Swift object store](http://www.openstack.org/software/openstack-storage/). - -To make it part of Apache Hadoop's default classpath, simply make sure that -HADOOP_OPTIONAL_TOOLS in hadoop-env.sh has 'hadoop-openstack' in the list. - -Features --------- - -* Read and write of data stored in a Swift object store - -* Support of a pseudo-hierachical file system (directories, subdirectories and - files) - -* Standard filesystem operations: `create`, `delete`, `mkdir`, - `ls`, `mv`, `stat`. - -* Can act as a source of data in a MapReduce job, or a sink. - -* Support for multiple OpenStack services, and multiple containers from a - single service. - -* Supports in-cluster and remote access to Swift data. - -* Supports OpenStack Keystone authentication with password or token. - -* Released under the Apache Software License - -* Tested against the Hadoop 3.x and 1.x branches, against multiple public - OpenStack clusters: Rackspace US, Rackspace UK, HP Cloud. - -* Tested against private OpenStack clusters, including scalability tests of - large file uploads. - -Using the Hadoop Swift Filesystem Client ----------------------------------------- - -### Concepts: services and containers - -OpenStack swift is an *Object Store*; also known as a *blobstore*. It stores arbitrary binary objects by name in a *container*. - -The Hadoop Swift filesystem library adds another concept, the *service*, which defines which Swift blobstore hosts a container -and how to connect to it. - -### Containers and Objects - -* Containers are created by users with accounts on the Swift filestore, and hold - *objects*. - -* Objects can be zero bytes long, or they can contain data. - -* Objects in the container can be up to 5GB; there is a special support for - larger files than this, which merges multiple objects in to one. - -* Each object is referenced by it's *name*; there is no notion of directories. - -* You can use any characters in an object name that can be 'URL-encoded'; the - maximum length of a name is 1034 characters -after URL encoding. - -* Names can have `/` characters in them, which are used to create the illusion of - a directory structure. For example `dir/dir2/name`. Even though this looks - like a directory, *it is still just a name*. There is no requirement to have - any entries in the container called `dir` or `dir/dir2` - -* That said. if the container has zero-byte objects that look like directory - names above other objects, they can pretend to be directories. Continuing the - example, a 0-byte object called `dir` would tell clients that it is a - directory while `dir/dir2` or `dir/dir2/name` were present. This creates an - illusion of containers holding a filesystem. - -Client applications talk to Swift over HTTP or HTTPS, reading, writing and deleting objects using standard HTTP operations (GET, PUT and DELETE, respectively). There is also a COPY operation, that creates a new object in the container, with a new name, containing the old data. There is no rename operation itself, objects need to be copied -then the original entry deleted. - -### Eventual Consistency - -The Swift Filesystem is \*eventually consistent\*: an operation on an object may not be immediately visible to that client, or other clients. 
This is a consequence of the goal of the filesystem: to span a set of machines, across multiple datacenters, in such a way that the data can still be available when many of them fail. (In contrast, the Hadoop HDFS filesystem is \*immediately consistent\*, but it does not span datacenters.) - -Eventual consistency can cause surprises for client applications that expect immediate consistency: after an object is deleted or overwritten, the object may still be visible -or the old data still retrievable. The Swift Filesystem client for Apache Hadoop attempts to handle this, in conjunction with the MapReduce engine, but there may be still be occasions when eventual consistency causes surprises. - -### Non-atomic "directory" operations. - -Hadoop expects some operations to be atomic, especially `rename()`, which is something the MapReduce layer relies on to commit the output of a job, renaming data from a temp directory to the final path. Because a rename is implemented as a copy of every blob under the directory's path, followed by a delete of the originals, the intermediate state of the operation will be visible to other clients. If two Reducer tasks to rename their temp directory to the final path, both operations may succeed, with the result that output directory contains mixed data. This can happen if MapReduce jobs are being run with *speculation* enabled and Swift used as the direct output of the MR job (it can also happen against Amazon S3). - -Other consequences of the non-atomic operations are: - -1. If a program is looking for the presence of the directory before acting - on the data -it may start prematurely. This can be avoided by using - other mechanisms to co-ordinate the programs, such as the presence of a file - that is written *after* any bulk directory operations. - -2. A `rename()` or `delete()` operation may include files added under - the source directory tree during the operation, may unintentionally delete - it, or delete the 0-byte swift entries that mimic directories and act - as parents for the files. Try to avoid doing this. - -The best ways to avoid all these problems is not using Swift as the filesystem between MapReduce jobs or other Hadoop workflows. It can act as a source of data, and a final destination, but it doesn't meet all of Hadoop's expectations of what a filesystem is -it's a *blobstore*. - -Working with Swift Object Stores in Hadoop ------------------------------------------- - -Once installed, the Swift FileSystem client can be used by any Hadoop application to read from or write to data stored in a Swift container. - -Data stored in Swift can be used as the direct input to a MapReduce job -simply use the `swift:` URL (see below) to declare the source of the data. - -This Swift Filesystem client is designed to work with multiple Swift object stores, both public and private. This allows the client to work with different clusters, reading and writing data to and from either of them. - -It can also work with the same object stores using multiple login details. - -These features are achieved by one basic concept: using a service name in the URI referring to a swift filesystem, and looking up all the connection and login details for that specific service. Different service names can be defined in the Hadoop XML configuration file, so defining different clusters, or providing different login details for the same object store(s). - -### Swift Filesystem URIs - -Hadoop uses URIs to refer to files within a filesystem. 
Some common examples are: - - local://etc/hosts - hdfs://cluster1/users/example/data/set1 - hdfs://cluster2.example.org:8020/users/example/data/set1 - -The Swift Filesystem Client adds a new URL type `swift`. In a Swift Filesystem URL, the hostname part of a URL identifies the container and the service to work with; the path the name of the object. Here are some examples - - swift://container.rackspace/my-object.csv - swift://data.hpcloud/data/set1 - swift://dmitry.privatecloud/out/results - -In the last two examples, the paths look like directories: it is not, they are simply the objects named `data/set1` and `out/results` respectively. - -### Installing - -The `hadoop-openstack` JAR must be on the classpath of the Hadoop program trying to talk to the Swift service. If installed in the classpath of the Hadoop MapReduce service, then all programs started by the MR engine will pick up the JAR automatically. This is the easiest way to give all Hadoop jobs access to Swift. - -Alternatively, the JAR can be included as one of the JAR files that an application uses. This lets the Hadoop jobs work with a Swift object store even if the Hadoop cluster is not pre-configured for this. - -The library also depends upon the Apache HttpComponents library, which must also be on the classpath. - -### Configuring - -To talk to a swift service, the user must must provide: - -1. The URL defining the container and the service. - -2. In the cluster/job configuration, the login details of that service. - -Multiple service definitions can co-exist in the same configuration file: just use different names for them. - -#### Example: Rackspace US, in-cluster access using API key - -This service definition is for use in a Hadoop cluster deployed within Rackspace's US infrastructure. - - - fs.swift.service.rackspace.auth.url - https://auth.api.rackspacecloud.com/v2.0/tokens - Rackspace US (multiregion) - - - - fs.swift.service.rackspace.username - user4 - - - - fs.swift.service.rackspace.region - DFW - - - - fs.swift.service.rackspace.apikey - fe806aa86dfffe2f6ed8 - - -Here the API key visible in the account settings API keys page is used to log in. No property for public/private access -the default is to use the private endpoint for Swift operations. - -This configuration also selects one of the regions, DFW, for its data. - -A reference to this service would use the `rackspace` service name: - - swift://hadoop-container.rackspace/ - -#### Example: Rackspace UK: remote access with password authentication - -This connects to Rackspace's UK ("LON") datacenter. - - - fs.swift.service.rackspaceuk.auth.url - https://lon.identity.api.rackspacecloud.com/v2.0/tokens - Rackspace UK - - - - fs.swift.service.rackspaceuk.username - user4 - - - - fs.swift.service.rackspaceuk.password - insert-password-here/value> - - - - fs.swift.service.rackspace.public - true - - -This is a public access point connection, using a password over an API key. - -A reference to this service would use the `rackspaceuk` service name: - - swift://hadoop-container.rackspaceuk/ - -Because the public endpoint is used, if this service definition is used within the London datacenter, all accesses will be billed at the public upload/download rates, *irrespective of where the Hadoop cluster is*. - -#### Example: HP cloud service definition - -Here is an example that connects to the HP Cloud object store. 
- - - fs.swift.service.hpcloud.auth.url - https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/tokens - - HP Cloud - - - - fs.swift.service.hpcloud.tenant - FE806AA86 - - - - fs.swift.service.hpcloud.username - FE806AA86DFFFE2F6ED8 - - - - fs.swift.service.hpcloud.password - secret-password-goes-here - - - - fs.swift.service.hpcloud.public - true - - -A reference to this service would use the `hpcloud` service name: - - swift://hadoop-container.hpcloud/ - -### General Swift Filesystem configuration options - -Some configuration options apply to the Swift client, independent of the specific Swift filesystem chosen. - -#### Blocksize fs.swift.blocksize - -Swift does not break up files into blocks, except in the special case of files over 5GB in length. Accordingly, there isn't a notion of a "block size" to define where the data is kept. - -Hadoop's MapReduce layer depends on files declaring their block size, so that it knows how to partition work. Too small a blocksize means that many mappers work on small pieces of data; too large a block size means that only a few mappers get started. - -The block size value reported by Swift, therefore, controls the basic workload partioning of the MapReduce engine -and can be an important parameter to tune for performance of the cluster. - -The property has a unit of kilobytes; the default value is `32*1024`: 32 MB - - - fs.swift.blocksize - 32768 - - -This blocksize has no influence on how files are stored in Swift; it only controls what the reported size of blocks are - a value used in Hadoop MapReduce to divide work. - -Note that the MapReduce engine's split logic can be tuned independently by setting the `mapred.min.split.size` and `mapred.max.split.size` properties, which can be done in specific job configurations. - - - mapred.min.split.size - 524288 - - - - mapred.max.split.size - 1048576 - - -In an Apache Pig script, these properties would be set as: - - mapred.min.split.size 524288 - mapred.max.split.size 1048576 - -#### Partition size fs.swift.partsize - -The Swift filesystem client breaks very large files into partitioned files, uploading each as it progresses, and writing any remaning data and an XML manifest when a partitioned file is closed. - -The partition size defaults to 4608 MB; 4.5GB, the maximum filesize that Swift can support. - -It is possible to set a smaller partition size, in the `fs.swift.partsize` option. This takes a value in KB. - - - fs.swift.partsize - 1024 - upload every MB - - -When should this value be changed from its default? - -While there is no need to ever change it for basic operation of the Swift filesystem client, it can be tuned - -* If a Swift filesystem is location aware, then breaking a file up into - smaller partitions scatters the data round the cluster. For best performance, - the property `fs.swift.blocksize` should be set to a smaller value than the - partition size of files. - -* When writing to an unpartitioned file, the entire write is done in the - `close()` operation. When a file is partitioned, the outstanding data to - be written whenever the outstanding amount of data is greater than the - partition size. This means that data will be written more incrementally - -#### Request size fs.swift.requestsize - -The Swift filesystem client reads files in HTTP GET operations, asking for a block of data at a time. - -The default value is 64KB. A larger value may be more efficient over faster networks, as it reduces the overhead of setting up the HTTP operation. 
- -However, if the file is read with many random accesses, requests for data will be made from different parts of the file -discarding some of the previously requested data. The benefits of larger request sizes may be wasted. - -The property `fs.swift.requestsize` sets the request size in KB. - - - fs.swift.requestsize - 128 - - -#### Connection timeout fs.swift.connect.timeout - -This sets the timeout in milliseconds to connect to a Swift service. - - - fs.swift.connect.timeout - 15000 - - -A shorter timeout means that connection failures are raised faster -but may trigger more false alarms. A longer timeout is more resilient to network problems -and may be needed when talking to remote filesystems. - -#### Connection timeout fs.swift.socket.timeout - -This sets the timeout in milliseconds to wait for data from a connected socket. - - - fs.swift.socket.timeout - 60000 - - -A shorter timeout means that connection failures are raised faster -but may trigger more false alarms. A longer timeout is more resilient to network problems -and may be needed when talking to remote filesystems. - -#### Connection Retry Count fs.swift.connect.retry.count - -This sets the number of times to try to connect to a service whenever an HTTP request is made. - - - fs.swift.connect.retry.count - 3 - - -The more retries, the more resilient it is to transient outages -and the less rapid it is at detecting and reporting server connectivity problems. - -#### Connection Throttle Delay fs.swift.connect.throttle.delay - -This property adds a delay between bulk file copy and delete operations, to prevent requests being throttled or blocked by the remote service - - - fs.swift.connect.throttle.delay - 0 - - -It is measured in milliseconds; "0" means do not add any delay. - -Throttling is enabled on the public endpoints of some Swift services. If `rename()` or `delete()` operations fail with `SwiftThrottledRequestException` exceptions, try setting this property. - -#### HTTP Proxy - -If the client can only access the Swift filesystem via a web proxy server, the client configuration must specify the proxy via the `fs.swift.connect.proxy.host` and `fs.swift.connect.proxy.port` properties. - - - fs.swift.proxy.host - web-proxy - - - - fs.swift.proxy.port - 8088 - - -If the host is declared, the proxy port must be set to a valid integer value. - -### Troubleshooting - -#### ClassNotFoundException - -The `hadoop-openstack` JAR -or any dependencies- may not be on your classpath. - -Make sure that the: -* JAR is installed on the servers in the cluster. -* 'hadoop-openstack' is on the HADOOP_OPTIONAL_TOOLS entry in hadoop-env.sh or that the job submission process uploads the JAR file to the distributed cache. - -#### Failure to Authenticate - -A `SwiftAuthenticationFailedException` is thrown when the client cannot authenticate with the OpenStack keystone server. This could be because the URL in the service definition is wrong, or because the supplied credentials are invalid. - -1. Check the authentication URL through `curl` or your browser - -2. Use a Swift client such as CyberDuck to validate your credentials - -3. If you have included a tenant ID, try leaving it out. Similarly, - try adding it if you had not included it. - -4. Try switching from API key authentication to password-based authentication, - by setting the password. - -5. Change your credentials. As with Amazon AWS clients, some credentials - don't seem to like going over the network. 
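A minimal sketch of item 4 above: switching a service definition from API-key to password authentication amounts to setting the password property for that service, following the `fs.swift.service.<name>.password` pattern of the `rackspaceuk` example earlier (the value below is a placeholder):

    <property>
      <name>fs.swift.service.rackspace.password</name>
      <value>insert-password-here</value>
    </property>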
- -#### Timeout connecting to the Swift Service - -This happens if the client application is running outside an OpenStack cluster, where it does not have access to the private hostname/IP address for filesystem operations. Set the `public` flag to true -but remember to set it to false for use in-cluster. - -### Warnings - -1. Do not share your login details with anyone, which means do not log the - details, or check the XML configuration files into any revision control system - to which you do not have exclusive access. - -2. Similarly, do not use your real account details in any - documentation \*or any bug reports submitted online\* - -3. Prefer the apikey authentication over passwords as it is easier - to revoke a key -and some service providers allow you to set - an automatic expiry date on a key when issued. - -4. Do not use the public service endpoint from within a public OpenStack - cluster, as it will run up large bills. - -5. Remember: it's not a real filesystem or hierarchical directory structure. - Some operations (directory rename and delete) take time and are not atomic or - isolated from other operations taking place. - -6. Append is not supported. - -7. Unix-style permissions are not supported. All accounts with write access to - a repository have unlimited access; the same goes for those with read access. - -8. In the public clouds, do not make the containers public unless you are happy - with anyone reading your data, and are prepared to pay the costs of their - downloads. - -### Limits - -* Maximum length of an object path: 1024 characters - -* Maximum size of a binary object: no absolute limit. Files \> 5GB are - partitioned into separate files in the native filesystem, and merged during - retrieval. *Warning:* the partitioned/large file support is the - most complex part of the Hadoop/Swift FS integration, and, along with - authentication, the most troublesome to support. - -### Testing the hadoop-openstack module - -The `hadoop-openstack` can be remotely tested against any public or private cloud infrastructure which supports the OpenStack Keystone authentication mechanism. It can also be tested against private OpenStack clusters. OpenStack Development teams are strongly encouraged to test the Hadoop swift filesystem client against any version of Swift that they are developing or deploying, to stress their cluster and to identify bugs early. - -The module comes with a large suite of JUnit tests -tests that are only executed if the source tree includes credentials to test against a specific cluster. - -After checking out the Hadoop source tree, create the file: - - hadoop-tools/hadoop-openstack/src/test/resources/auth-keys.xml - -Into this file, insert the credentials needed to bond to the test filesystem, as decribed above. - -Next set the property `test.fs.swift.name` to the URL of a swift container to test against. The tests expect exclusive access to this container -do not keep any other data on it, or expect it to be preserved. - - - test.fs.swift.name - swift://test.myswift/ - - -In the base hadoop directory, run: - - mvn clean install -DskipTests - -This builds a set of Hadoop JARs consistent with the `hadoop-openstack` module that is about to be tested. - -In the `hadoop-tools/hadoop-openstack` directory run - - mvn test -Dtest=TestSwiftRestClient - -This runs some simple tests which include authenticating against the remote swift service. If these tests fail, so will all the rest. If it does fail: check your authentication. 
- -Once this test succeeds, you can run the full test suite - - mvn test - -Be advised that these tests can take an hour or more, especially against a remote Swift service -or one that throttles bulk operations. - -Once the `auth-keys.xml` file is in place, the `mvn test` runs from the Hadoop source base directory will automatically run these OpenStack tests While this ensures that no regressions have occurred, it can also add significant time to test runs, and may run up bills, depending on who is providingthe Swift storage service. We recommend having a separate source tree set up purely for the Swift tests, and running it manually or by the CI tooling at a lower frequency than normal test runs. - -Finally: Apache Hadoop is an open source project. Contributions of code -including more tests- are very welcome. diff --git a/hadoop-tools/hadoop-openstack/src/site/resources/css/site.css b/hadoop-tools/hadoop-openstack/src/site/resources/css/site.css deleted file mode 100644 index f830baafa8c..00000000000 --- a/hadoop-tools/hadoop-openstack/src/site/resources/css/site.css +++ /dev/null @@ -1,30 +0,0 @@ -/* -* Licensed to the Apache Software Foundation (ASF) under one or more -* contributor license agreements. See the NOTICE file distributed with -* this work for additional information regarding copyright ownership. -* The ASF licenses this file to You under the Apache License, Version 2.0 -* (the "License"); you may not use this file except in compliance with -* the License. You may obtain a copy of the License at -* -* http://www.apache.org/licenses/LICENSE-2.0 -* -* Unless required by applicable law or agreed to in writing, software -* distributed under the License is distributed on an "AS IS" BASIS, -* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -* See the License for the specific language governing permissions and -* limitations under the License. -*/ -#banner { - height: 93px; - background: none; -} - -#bannerLeft img { - margin-left: 30px; - margin-top: 10px; -} - -#bannerRight img { - margin: 17px; -} - diff --git a/hadoop-tools/hadoop-openstack/src/site/site.xml b/hadoop-tools/hadoop-openstack/src/site/site.xml deleted file mode 100644 index e2941380e2a..00000000000 --- a/hadoop-tools/hadoop-openstack/src/site/site.xml +++ /dev/null @@ -1,46 +0,0 @@ - - - - - - - org.apache.maven.skins - maven-stylus-skin - ${maven-stylus-skin.version} - - - - - - - - - diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftFileSystemBaseTest.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftFileSystemBaseTest.java deleted file mode 100644 index 4361a06a949..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftFileSystemBaseTest.java +++ /dev/null @@ -1,400 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FSDataOutputStream; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.exceptions.SwiftOperationFailedException; -import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem; -import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore; -import org.apache.hadoop.fs.swift.util.DurationStats; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.After; -import org.junit.AfterClass; -import org.junit.Assert; -import org.junit.Assume; -import org.junit.Before; - -import java.io.FileNotFoundException; -import java.io.IOException; -import java.io.OutputStream; -import java.net.URI; -import java.net.URISyntaxException; -import java.util.List; - -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.assertPathExists; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.cleanupInTeardown; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.getServiceURI; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.noteAction; - -/** - * This is the base class for most of the Swift tests - */ -public class SwiftFileSystemBaseTest extends Assert implements - SwiftTestConstants { - - protected static final Logger LOG = - LoggerFactory.getLogger(SwiftFileSystemBaseTest.class); - protected SwiftNativeFileSystem fs; - protected static SwiftNativeFileSystem lastFs; - protected byte[] data = SwiftTestUtils.dataset(getBlockSize() * 2, 0, 255); - private Configuration conf; - - @Before - public void setUp() throws Exception { - noteAction("setup"); - final URI uri = getFilesystemURI(); - conf = createConfiguration(); - - fs = createSwiftFS(); - try { - fs.initialize(uri, conf); - } catch (IOException e) { - //FS init failed, set it to null so that teardown doesn't - //attempt to use it - fs = null; - throw e; - } - //remember the last FS - lastFs = fs; - noteAction("setup complete"); - } - - /** - * Configuration generator. May be overridden to inject - * some custom options - * @return a configuration with which to create FS instances - */ - protected Configuration createConfiguration() { - return new Configuration(); - } - - @After - public void tearDown() throws Exception { - cleanupInTeardown(fs, "/test"); - } - - @AfterClass - public static void classTearDown() throws Exception { - if (lastFs != null) { - List statistics = lastFs.getOperationStatistics(); - for (DurationStats stat : statistics) { - LOG.info(stat.toString()); - } - } - } - - /** - * Get the configuration used to set up the FS - * @return the configuration - */ - public Configuration getConf() { - return conf; - } - - /** - * Describe the test, combining some logging with details - * for people reading the code - * - * @param description test description - */ - protected void describe(String description) { - noteAction(description); - } - - protected URI getFilesystemURI() throws URISyntaxException, IOException { - return getServiceURI(createConfiguration()); - } - - protected SwiftNativeFileSystem createSwiftFS() throws IOException { - SwiftNativeFileSystem swiftNativeFileSystem = - new SwiftNativeFileSystem(); - return swiftNativeFileSystem; - } - - protected int getBlockSize() { - return 1024; - } - - /** - * Is rename supported? 
- * @return true - */ - protected boolean renameSupported() { - return true; - } - - /** - * assume in a test that rename is supported; - * skip it if not - */ - protected void assumeRenameSupported() { - Assume.assumeTrue(renameSupported()); - } - - /** - * Take an unqualified path, and qualify it w.r.t the - * current filesystem - * @param pathString source path - * @return a qualified path instance - */ - protected Path path(String pathString) { - return fs.makeQualified(new Path(pathString)); - } - - /** - * Get the filesystem - * @return the current FS - */ - public SwiftNativeFileSystem getFs() { - return fs; - } - - /** - * Create a file using the standard {@link #data} bytes. - * - * @param path path to write - * @throws IOException on any problem - */ - protected void createFile(Path path) throws IOException { - createFile(path, data); - } - - /** - * Create a file with the given data. - * - * @param path path to write - * @param sourceData source dataset - * @throws IOException on any problem - */ - protected void createFile(Path path, byte[] sourceData) throws IOException { - FSDataOutputStream out = fs.create(path); - out.write(sourceData, 0, sourceData.length); - out.close(); - } - - /** - * Create and then close a file - * @param path path to create - * @throws IOException on a failure - */ - protected void createEmptyFile(Path path) throws IOException { - FSDataOutputStream out = fs.create(path); - out.close(); - } - - /** - * Get the inner store -useful for lower level operations - * - * @return the store - */ - protected SwiftNativeFileSystemStore getStore() { - return fs.getStore(); - } - - /** - * Rename a path - * @param src source - * @param dst dest - * @param renameMustSucceed flag to say "this rename must exist" - * @param srcExists add assert that the source exists afterwards - * @param dstExists add assert the dest exists afterwards - * @throws IOException IO trouble - */ - protected void rename(Path src, Path dst, boolean renameMustSucceed, - boolean srcExists, boolean dstExists) throws IOException { - if (renameMustSucceed) { - renameToSuccess(src, dst, srcExists, dstExists); - } else { - renameToFailure(src, dst); - } - } - - /** - * Get a string describing the outcome of a rename, by listing the dest - * path and its parent along with some covering text - * @param src source path - * @param dst dest path - * @return a string for logs and exceptions - * @throws IOException IO problems - */ - private String getRenameOutcome(Path src, Path dst) throws IOException { - String lsDst = ls(dst); - Path parent = dst.getParent(); - String lsParent = parent != null ? 
ls(parent) : ""; - return " result of " + src + " => " + dst - + " - " + lsDst - + " \n" + lsParent; - } - - /** - * Rename, expecting an exception to be thrown - * - * @param src source - * @param dst dest - * @throws IOException a failure other than an - * expected SwiftRenameException or FileNotFoundException - */ - protected void renameToFailure(Path src, Path dst) throws IOException { - try { - getStore().rename(src, dst); - fail("Expected failure renaming " + src + " to " + dst - + "- but got success"); - } catch (SwiftOperationFailedException e) { - LOG.debug("Rename failed (expected):" + e); - } catch (FileNotFoundException e) { - LOG.debug("Rename failed (expected):" + e); - } - } - - /** - * Rename to success - * - * @param src source - * @param dst dest - * @param srcExists add assert that the source exists afterwards - * @param dstExists add assert the dest exists afterwards - * @throws SwiftOperationFailedException operation failure - * @throws IOException IO problems - */ - protected void renameToSuccess(Path src, Path dst, - boolean srcExists, boolean dstExists) - throws SwiftOperationFailedException, IOException { - getStore().rename(src, dst); - String outcome = getRenameOutcome(src, dst); - assertEquals("Source " + src + "exists: " + outcome, - srcExists, fs.exists(src)); - assertEquals("Destination " + dstExists + " exists" + outcome, - dstExists, fs.exists(dst)); - } - - /** - * List a path in the test FS - * @param path path to list - * @return the contents of the path/dir - * @throws IOException IO problems - */ - protected String ls(Path path) throws IOException { - return SwiftTestUtils.ls(fs, path); - } - - /** - * assert that a path exists - * @param message message to use in an assertion - * @param path path to probe - * @throws IOException IO problems - */ - public void assertExists(String message, Path path) throws IOException { - assertPathExists(fs, message, path); - } - - /** - * assert that a path does not - * @param message message to use in an assertion - * @param path path to probe - * @throws IOException IO problems - */ - public void assertPathDoesNotExist(String message, Path path) throws - IOException { - SwiftTestUtils.assertPathDoesNotExist(fs, message, path); - } - - /** - * Assert that a file exists and whose {@link FileStatus} entry - * declares that this is a file and not a symlink or directory. - * - * @param filename name of the file - * @throws IOException IO problems during file operations - */ - protected void assertIsFile(Path filename) throws IOException { - SwiftTestUtils.assertIsFile(fs, filename); - } - - /** - * Assert that a file exists and whose {@link FileStatus} entry - * declares that this is a file and not a symlink or directory. 
- * - * @throws IOException IO problems during file operations - */ - protected void mkdirs(Path path) throws IOException { - assertTrue("Failed to mkdir" + path, fs.mkdirs(path)); - } - - /** - * Assert that a delete succeeded - * @param path path to delete - * @param recursive recursive flag - * @throws IOException IO problems - */ - protected void assertDeleted(Path path, boolean recursive) throws IOException { - SwiftTestUtils.assertDeleted(fs, path, recursive); - } - - /** - * Assert that a value is not equal to the expected value - * @param message message if the two values are equal - * @param expected expected value - * @param actual actual value - */ - protected void assertNotEqual(String message, int expected, int actual) { - assertTrue(message, - actual != expected); - } - - /** - * Get the number of partitions written from the Swift Native FS APIs - * @param out output stream - * @return the number of partitioned files written by the stream - */ - protected int getPartitionsWritten(FSDataOutputStream out) { - return SwiftNativeFileSystem.getPartitionsWritten(out); - } - - /** - * Assert that the no. of partitions written matches expectations - * @param action operation (for use in the assertions) - * @param out output stream - * @param expected expected no. of partitions - */ - protected void assertPartitionsWritten(String action, FSDataOutputStream out, - long expected) { - OutputStream nativeStream = out.getWrappedStream(); - int written = getPartitionsWritten(out); - if(written !=expected) { - Assert.fail(action + ": " + - TestSwiftFileSystemPartitionedUploads.WRONG_PARTITION_COUNT - + " + expected: " + expected + " actual: " + written - + " -- " + nativeStream); - } - } - - /** - * Assert that the result value == -1; which implies - * that a read was successful - * @param text text to include in a message (usually the operation) - * @param result read result to validate - */ - protected void assertMinusOne(String text, int result) { - assertEquals(text + " wrong read result " + result, -1, result); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftTestConstants.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftTestConstants.java deleted file mode 100644 index 6948cf92fa6..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/SwiftTestConstants.java +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift; - -/** - * Hard coded constants for the test timeouts - */ -public interface SwiftTestConstants { - /** - * Timeout for swift tests: {@value} - */ - int SWIFT_TEST_TIMEOUT = 5 * 60 * 1000; - - /** - * Timeout for tests performing bulk operations: {@value} - */ - int SWIFT_BULK_IO_TEST_TIMEOUT = 12 * 60 * 1000; -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestFSMainOperationsSwift.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestFSMainOperationsSwift.java deleted file mode 100644 index b595f1c2d14..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestFSMainOperationsSwift.java +++ /dev/null @@ -1,372 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.fs.swift; - - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FSMainOperationsBaseTest; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; -import static org.apache.hadoop.fs.swift.SwiftTestConstants.SWIFT_TEST_TIMEOUT; -import java.io.IOException; -import java.net.URI; - -public class TestFSMainOperationsSwift extends FSMainOperationsBaseTest { - - @Override - @Before - public void setUp() throws Exception { - Configuration conf = new Configuration(); - //small blocksize for faster remote tests - conf.setInt(SwiftProtocolConstants.SWIFT_BLOCKSIZE, 2); - URI serviceURI = SwiftTestUtils.getServiceURI(conf); - fSys = FileSystem.get(serviceURI, conf); - super.setUp(); - } - - private Path wd = null; - - @Override - protected FileSystem createFileSystem() throws Exception { - return fSys; - } - - @Override - protected Path getDefaultWorkingDirectory() throws IOException { - if (wd == null) { - wd = fSys.getWorkingDirectory(); - } - return wd; - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testWDAbsolute() throws IOException { - Path absoluteDir = getTestRootPath(fSys, "test/existingDir"); - fSys.mkdirs(absoluteDir); - fSys.setWorkingDirectory(absoluteDir); - Assert.assertEquals(absoluteDir, fSys.getWorkingDirectory()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testListStatusThrowsExceptionForUnreadableDir() { - SwiftTestUtils.skip("unsupported"); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusThrowsExceptionForUnreadableDir() { - SwiftTestUtils.skip("unsupported"); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testFsStatus() throws Exception { - 
super.testFsStatus(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testWorkingDirectory() throws Exception { - super.testWorkingDirectory(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testMkdirs() throws Exception { - super.testMkdirs(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testMkdirsFailsForSubdirectoryOfExistingFile() throws Exception { - super.testMkdirsFailsForSubdirectoryOfExistingFile(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGetFileStatusThrowsExceptionForNonExistentFile() throws - Exception { - super.testGetFileStatusThrowsExceptionForNonExistentFile(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testListStatusThrowsExceptionForNonExistentFile() throws - Exception { - super.testListStatusThrowsExceptionForNonExistentFile(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testListStatus() throws Exception { - super.testListStatus(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testListStatusFilterWithNoMatches() throws Exception { - super.testListStatusFilterWithNoMatches(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testListStatusFilterWithSomeMatches() throws Exception { - super.testListStatusFilterWithSomeMatches(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusNonExistentFile() throws Exception { - super.testGlobStatusNonExistentFile(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusWithNoMatchesInPath() throws Exception { - super.testGlobStatusWithNoMatchesInPath(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusSomeMatchesInDirectories() throws Exception { - super.testGlobStatusSomeMatchesInDirectories(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusWithMultipleWildCardMatches() throws Exception { - super.testGlobStatusWithMultipleWildCardMatches(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusWithMultipleMatchesOfSingleChar() throws Exception { - super.testGlobStatusWithMultipleMatchesOfSingleChar(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusFilterWithEmptyPathResults() throws Exception { - super.testGlobStatusFilterWithEmptyPathResults(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusFilterWithSomePathMatchesAndTrivialFilter() throws - Exception { - super.testGlobStatusFilterWithSomePathMatchesAndTrivialFilter(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusFilterWithMultipleWildCardMatchesAndTrivialFilter() throws - Exception { - super.testGlobStatusFilterWithMultipleWildCardMatchesAndTrivialFilter(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusFilterWithMultiplePathMatchesAndNonTrivialFilter() throws - Exception { - super.testGlobStatusFilterWithMultiplePathMatchesAndNonTrivialFilter(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusFilterWithNoMatchingPathsAndNonTrivialFilter() throws - Exception { - super.testGlobStatusFilterWithNoMatchingPathsAndNonTrivialFilter(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGlobStatusFilterWithMultiplePathWildcardsAndNonTrivialFilter() throws - Exception { - 
super.testGlobStatusFilterWithMultiplePathWildcardsAndNonTrivialFilter(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testWriteReadAndDeleteEmptyFile() throws Exception { - super.testWriteReadAndDeleteEmptyFile(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testWriteReadAndDeleteHalfABlock() throws Exception { - super.testWriteReadAndDeleteHalfABlock(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testWriteReadAndDeleteOneBlock() throws Exception { - super.testWriteReadAndDeleteOneBlock(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testWriteReadAndDeleteOneAndAHalfBlocks() throws Exception { - super.testWriteReadAndDeleteOneAndAHalfBlocks(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testWriteReadAndDeleteTwoBlocks() throws Exception { - super.testWriteReadAndDeleteTwoBlocks(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testOverwrite() throws IOException { - super.testOverwrite(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testWriteInNonExistentDirectory() throws IOException { - super.testWriteInNonExistentDirectory(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testDeleteNonExistentFile() throws IOException { - super.testDeleteNonExistentFile(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testDeleteRecursively() throws IOException { - super.testDeleteRecursively(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testDeleteEmptyDirectory() throws IOException { - super.testDeleteEmptyDirectory(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameNonExistentPath() throws Exception { - super.testRenameNonExistentPath(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameFileToNonExistentDirectory() throws Exception { - super.testRenameFileToNonExistentDirectory(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameFileToDestinationWithParentFile() throws Exception { - super.testRenameFileToDestinationWithParentFile(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameFileToExistingParent() throws Exception { - super.testRenameFileToExistingParent(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameFileToItself() throws Exception { - super.testRenameFileToItself(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameFileAsExistingFile() throws Exception { - super.testRenameFileAsExistingFile(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameFileAsExistingDirectory() throws Exception { - super.testRenameFileAsExistingDirectory(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameDirectoryToItself() throws Exception { - super.testRenameDirectoryToItself(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameDirectoryToNonExistentParent() throws Exception { - super.testRenameDirectoryToNonExistentParent(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameDirectoryAsNonExistentDirectory() throws Exception { - super.testRenameDirectoryAsNonExistentDirectory(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameDirectoryAsEmptyDirectory() throws Exception { - super.testRenameDirectoryAsEmptyDirectory(); - } - - 
@Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameDirectoryAsNonEmptyDirectory() throws Exception { - super.testRenameDirectoryAsNonEmptyDirectory(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testRenameDirectoryAsFile() throws Exception { - super.testRenameDirectoryAsFile(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testInputStreamClosedTwice() throws IOException { - super.testInputStreamClosedTwice(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testOutputStreamClosedTwice() throws IOException { - super.testOutputStreamClosedTwice(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testGetWrappedInputStream() throws IOException { - super.testGetWrappedInputStream(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - @Override - public void testCopyToLocalWithUseRawLocalFileSystemOption() throws - Exception { - super.testCopyToLocalWithUseRawLocalFileSystemOption(); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestLogResources.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestLogResources.java deleted file mode 100644 index 99c6962cb48..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestLogResources.java +++ /dev/null @@ -1,63 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.junit.Test; - -import java.net.URL; - -/** - * This test just debugs which log resources are being picked up - */ -public class TestLogResources implements SwiftTestConstants { - protected static final Logger LOG = - LoggerFactory.getLogger(TestLogResources.class); - - private void printf(String format, Object... 
args) { - String msg = String.format(format, args); - System.out.printf(msg + "\n"); - LOG.info(msg); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testWhichLog4JPropsFile() throws Throwable { - locateResource("log4j.properties"); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testWhichLog4JXMLFile() throws Throwable { - locateResource("log4j.XML"); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testCommonsLoggingProps() throws Throwable { - locateResource("commons-logging.properties"); - } - - private void locateResource(String resource) { - URL url = this.getClass().getClassLoader().getResource(resource); - if (url != null) { - printf("resource %s is at %s", resource, url); - } else { - printf("resource %s is not on the classpath", resource); - } - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestReadPastBuffer.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestReadPastBuffer.java deleted file mode 100644 index c195bffc513..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestReadPastBuffer.java +++ /dev/null @@ -1,163 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FSDataInputStream; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.apache.hadoop.io.IOUtils; -import org.junit.After; -import org.junit.Test; - -/** - * Seek tests verify that - *
- * <ol>
- *   <li>When you seek on a 0 byte file to byte (0), it's not an error.</li>
- *   <li>When you seek past the end of a file, it's an error that should
- *   raise -what- EOFException?</li>
- *   <li>when you seek forwards, you get new data</li>
- *   <li>when you seek backwards, you get the previous data</li>
- *   <li>That this works for big multi-MB files as well as small ones.</li>
- * </ol>
- * These may seem "obvious", but the more the input streams try to be clever - * about offsets and buffering, the more likely it is that seek() will start - * to get confused. - */ -public class TestReadPastBuffer extends SwiftFileSystemBaseTest { - protected static final Logger LOG = - LoggerFactory.getLogger(TestReadPastBuffer.class); - public static final int SWIFT_READ_BLOCKSIZE = 4096; - public static final int SEEK_FILE_LEN = SWIFT_READ_BLOCKSIZE * 2; - - private Path testPath; - private Path readFile; - private Path zeroByteFile; - private FSDataInputStream instream; - - - /** - * Get a configuration which a small blocksize reported to callers - * @return a configuration for this test - */ - @Override - public Configuration getConf() { - Configuration conf = super.getConf(); - /* - * set to 4KB - */ - conf.setInt(SwiftProtocolConstants.SWIFT_BLOCKSIZE, SWIFT_READ_BLOCKSIZE); - return conf; - } - - /** - * Setup creates dirs under test/hadoop - * - * @throws Exception - */ - @Override - public void setUp() throws Exception { - super.setUp(); - byte[] block = SwiftTestUtils.dataset(SEEK_FILE_LEN, 0, 255); - - //delete the test directory - testPath = path("/test"); - readFile = new Path(testPath, "TestReadPastBuffer.txt"); - createFile(readFile, block); - } - - @After - public void cleanFile() { - IOUtils.closeStream(instream); - instream = null; - } - - /** - * Create a config with a 1KB request size - * @return a config - */ - @Override - protected Configuration createConfiguration() { - Configuration conf = super.createConfiguration(); - conf.set(SwiftProtocolConstants.SWIFT_REQUEST_SIZE, "1"); - return conf; - } - - /** - * Seek past the buffer then read - * @throws Throwable problems - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testSeekAndReadPastEndOfFile() throws Throwable { - instream = fs.open(readFile); - assertEquals(0, instream.getPos()); - //expect that seek to 0 works - //go just before the end - instream.seek(SEEK_FILE_LEN - 2); - assertTrue("Premature EOF", instream.read() != -1); - assertTrue("Premature EOF", instream.read() != -1); - assertMinusOne("read past end of file", instream.read()); - } - - /** - * Seek past the buffer and attempt a read(buffer) - * @throws Throwable failures - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testSeekBulkReadPastEndOfFile() throws Throwable { - instream = fs.open(readFile); - assertEquals(0, instream.getPos()); - //go just before the end - instream.seek(SEEK_FILE_LEN - 1); - byte[] buffer = new byte[1]; - int result = instream.read(buffer, 0, 1); - //next byte is expected to fail - result = instream.read(buffer, 0, 1); - assertMinusOne("read past end of file", result); - //and this one - result = instream.read(buffer, 0, 1); - assertMinusOne("read past end of file", result); - - //now do an 0-byte read and expect it to - //to be checked first - result = instream.read(buffer, 0, 0); - assertEquals("EOF checks coming before read range check", 0, result); - - } - - - - /** - * Read past the buffer size byte by byte and verify that it refreshed - * @throws Throwable - */ - @Test - public void testReadPastBufferSize() throws Throwable { - instream = fs.open(readFile); - - while (instream.read() != -1); - //here we have gone past the end of a file and its buffer. 
Now try again - assertMinusOne("reading after the (large) file was read: "+ instream, - instream.read()); - } -} - diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSeek.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSeek.java deleted file mode 100644 index 51fa92a2eb3..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSeek.java +++ /dev/null @@ -1,260 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FSDataInputStream; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException; -import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.apache.hadoop.io.IOUtils; -import org.junit.After; -import org.junit.Test; - -import java.io.EOFException; -import java.io.IOException; - -/** - * Seek tests verify that - *
- * <ol>
- *   <li>When you seek on a 0 byte file to byte (0), it's not an error.</li>
- *   <li>When you seek past the end of a file, it's an error that should
- *   raise -what- EOFException?</li>
- *   <li>when you seek forwards, you get new data</li>
- *   <li>when you seek backwards, you get the previous data</li>
- *   <li>That this works for big multi-MB files as well as small ones.</li>
- * </ol>
- * These may seem "obvious", but the more the input streams try to be clever - * about offsets and buffering, the more likely it is that seek() will start - * to get confused. - */ -public class TestSeek extends SwiftFileSystemBaseTest { - protected static final Logger LOG = - LoggerFactory.getLogger(TestSeek.class); - public static final int SMALL_SEEK_FILE_LEN = 256; - - private Path testPath; - private Path smallSeekFile; - private Path zeroByteFile; - private FSDataInputStream instream; - - /** - * Setup creates dirs under test/hadoop - * - * @throws Exception - */ - @Override - public void setUp() throws Exception { - super.setUp(); - //delete the test directory - testPath = path("/test"); - smallSeekFile = new Path(testPath, "seekfile.txt"); - zeroByteFile = new Path(testPath, "zero.txt"); - byte[] block = SwiftTestUtils.dataset(SMALL_SEEK_FILE_LEN, 0, 255); - //this file now has a simple rule: offset => value - createFile(smallSeekFile, block); - createEmptyFile(zeroByteFile); - } - - @After - public void cleanFile() { - IOUtils.closeStream(instream); - instream = null; - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testSeekZeroByteFile() throws Throwable { - instream = fs.open(zeroByteFile); - assertEquals(0, instream.getPos()); - //expect initial read to fai; - int result = instream.read(); - assertMinusOne("initial byte read", result); - byte[] buffer = new byte[1]; - //expect that seek to 0 works - instream.seek(0); - //reread, expect same exception - result = instream.read(); - assertMinusOne("post-seek byte read", result); - result = instream.read(buffer, 0, 1); - assertMinusOne("post-seek buffer read", result); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testBlockReadZeroByteFile() throws Throwable { - instream = fs.open(zeroByteFile); - assertEquals(0, instream.getPos()); - //expect that seek to 0 works - byte[] buffer = new byte[1]; - int result = instream.read(buffer, 0, 1); - assertMinusOne("block read zero byte file", result); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testSeekReadClosedFile() throws Throwable { - instream = fs.open(smallSeekFile); - instream.close(); - try { - instream.seek(0); - } catch (SwiftConnectionClosedException e) { - //expected a closed file - } - try { - instream.read(); - } catch (IOException e) { - //expected a closed file - } - try { - byte[] buffer = new byte[1]; - int result = instream.read(buffer, 0, 1); - } catch (IOException e) { - //expected a closed file - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testNegativeSeek() throws Throwable { - instream = fs.open(smallSeekFile); - assertEquals(0, instream.getPos()); - try { - instream.seek(-1); - long p = instream.getPos(); - LOG.warn("Seek to -1 returned a position of " + p); - int result = instream.read(); - fail( - "expected an exception, got data " + result + " at a position of " + p); - } catch (IOException e) { - //bad seek -expected - } - assertEquals(0, instream.getPos()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testSeekFile() throws Throwable { - instream = fs.open(smallSeekFile); - assertEquals(0, instream.getPos()); - //expect that seek to 0 works - instream.seek(0); - int result = instream.read(); - assertEquals(0, result); - assertEquals(1, instream.read()); - assertEquals(2, instream.getPos()); - assertEquals(2, instream.read()); - assertEquals(3, instream.getPos()); - instream.seek(128); - assertEquals(128, instream.getPos()); - assertEquals(128, instream.read()); - instream.seek(63); - 
assertEquals(63, instream.read()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testSeekAndReadPastEndOfFile() throws Throwable { - instream = fs.open(smallSeekFile); - assertEquals(0, instream.getPos()); - //expect that seek to 0 works - //go just before the end - instream.seek(SMALL_SEEK_FILE_LEN - 2); - assertTrue("Premature EOF", instream.read() != -1); - assertTrue("Premature EOF", instream.read() != -1); - assertMinusOne("read past end of file", instream.read()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testSeekAndPastEndOfFileThenReseekAndRead() throws Throwable { - instream = fs.open(smallSeekFile); - //go just before the end. This may or may not fail; it may be delayed until the - //read - try { - instream.seek(SMALL_SEEK_FILE_LEN); - //if this doesn't trigger, then read() is expected to fail - assertMinusOne("read after seeking past EOF", instream.read()); - } catch (EOFException expected) { - //here an exception was raised in seek - } - instream.seek(1); - assertTrue("Premature EOF", instream.read() != -1); - } - - @Override - protected Configuration createConfiguration() { - Configuration conf = super.createConfiguration(); - conf.set(SwiftProtocolConstants.SWIFT_REQUEST_SIZE, "1"); - return conf; - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testSeekBigFile() throws Throwable { - Path testSeekFile = new Path(testPath, "bigseekfile.txt"); - byte[] block = SwiftTestUtils.dataset(65536, 0, 255); - createFile(testSeekFile, block); - instream = fs.open(testSeekFile); - assertEquals(0, instream.getPos()); - //expect that seek to 0 works - instream.seek(0); - int result = instream.read(); - assertEquals(0, result); - assertEquals(1, instream.read()); - assertEquals(2, instream.read()); - - //do seek 32KB ahead - instream.seek(32768); - assertEquals("@32768", block[32768], (byte) instream.read()); - instream.seek(40000); - assertEquals("@40000", block[40000], (byte) instream.read()); - instream.seek(8191); - assertEquals("@8191", block[8191], (byte) instream.read()); - instream.seek(0); - assertEquals("@0", 0, (byte) instream.read()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testPositionedBulkReadDoesntChangePosition() throws Throwable { - Path testSeekFile = new Path(testPath, "bigseekfile.txt"); - byte[] block = SwiftTestUtils.dataset(65536, 0, 255); - createFile(testSeekFile, block); - instream = fs.open(testSeekFile); - instream.seek(39999); - assertTrue(-1 != instream.read()); - assertEquals (40000, instream.getPos()); - - byte[] readBuffer = new byte[256]; - instream.read(128, readBuffer, 0, readBuffer.length); - //have gone back - assertEquals(40000, instream.getPos()); - //content is the same too - assertEquals("@40000", block[40000], (byte) instream.read()); - //now verify the picked up data - for (int i = 0; i < 256; i++) { - assertEquals("@" + i, block[i + 128], readBuffer[i]); - } - } - - /** - * work out the expected byte from a specific offset - * @param offset offset in the file - * @return the value - */ - int expectedByte(int offset) { - return offset & 0xff; - } -} - diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftConfig.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftConfig.java deleted file mode 100644 index 0212b4d9c65..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftConfig.java +++ /dev/null @@ -1,194 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under 
one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.fs.swift; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.swift.http.SwiftRestClient; -import org.junit.Assert; -import org.junit.Test; - -import java.io.IOException; -import java.net.URI; -import java.net.URISyntaxException; - -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_AUTH_URL; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_LOCATION_AWARE; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_PASSWORD; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_TENANT; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_USERNAME; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_BLOCKSIZE; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_CONNECTION_TIMEOUT; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_PARTITION_SIZE; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_PROXY_HOST_PROPERTY; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_PROXY_PORT_PROPERTY; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_RETRY_COUNT; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_SERVICE_PREFIX; - -/** - * Test the swift service-specific configuration binding features - */ -public class TestSwiftConfig extends Assert { - - - public static final String SERVICE = "openstack"; - - @Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class) - public void testEmptyUrl() throws Exception { - final Configuration configuration = new Configuration(); - - set(configuration, DOT_TENANT, "tenant"); - set(configuration, DOT_USERNAME, "username"); - set(configuration, DOT_PASSWORD, "password"); - mkInstance(configuration); - } - -@Test - public void testEmptyTenant() throws Exception { - final Configuration configuration = new Configuration(); - set(configuration, DOT_AUTH_URL, "http://localhost:8080"); - set(configuration, DOT_USERNAME, "username"); - set(configuration, DOT_PASSWORD, "password"); - mkInstance(configuration); - } - - @Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class) - public void testEmptyUsername() throws Exception { - final Configuration configuration = new Configuration(); - set(configuration, DOT_AUTH_URL, "http://localhost:8080"); - set(configuration, DOT_TENANT, "tenant"); - set(configuration, DOT_PASSWORD, "password"); - mkInstance(configuration); - } - - @Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class) - public void testEmptyPassword() throws Exception { - final Configuration configuration = 
new Configuration(); - set(configuration, DOT_AUTH_URL, "http://localhost:8080"); - set(configuration, DOT_TENANT, "tenant"); - set(configuration, DOT_USERNAME, "username"); - mkInstance(configuration); - } - - @Test - public void testGoodRetryCount() throws Exception { - final Configuration configuration = createCoreConfig(); - configuration.set(SWIFT_RETRY_COUNT, "3"); - mkInstance(configuration); - } - - @Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class) - public void testBadRetryCount() throws Exception { - final Configuration configuration = createCoreConfig(); - configuration.set(SWIFT_RETRY_COUNT, "three"); - mkInstance(configuration); - } - - @Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class) - public void testBadConnectTimeout() throws Exception { - final Configuration configuration = createCoreConfig(); - configuration.set(SWIFT_CONNECTION_TIMEOUT, "three"); - mkInstance(configuration); - } - - @Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class) - public void testZeroBlocksize() throws Exception { - final Configuration configuration = createCoreConfig(); - configuration.set(SWIFT_BLOCKSIZE, "0"); - mkInstance(configuration); - } - - @Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class) - public void testNegativeBlocksize() throws Exception { - final Configuration configuration = createCoreConfig(); - configuration.set(SWIFT_BLOCKSIZE, "-1"); - mkInstance(configuration); - } - - @Test - public void testPositiveBlocksize() throws Exception { - final Configuration configuration = createCoreConfig(); - int size = 127; - configuration.set(SWIFT_BLOCKSIZE, Integer.toString(size)); - SwiftRestClient restClient = mkInstance(configuration); - assertEquals(size, restClient.getBlocksizeKB()); - } - - @Test - public void testLocationAwareTruePropagates() throws Exception { - final Configuration configuration = createCoreConfig(); - set(configuration, DOT_LOCATION_AWARE, "true"); - SwiftRestClient restClient = mkInstance(configuration); - assertTrue(restClient.isLocationAware()); - } - - @Test - public void testLocationAwareFalsePropagates() throws Exception { - final Configuration configuration = createCoreConfig(); - set(configuration, DOT_LOCATION_AWARE, "false"); - SwiftRestClient restClient = mkInstance(configuration); - assertFalse(restClient.isLocationAware()); - } - - @Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class) - public void testNegativePartsize() throws Exception { - final Configuration configuration = createCoreConfig(); - configuration.set(SWIFT_PARTITION_SIZE, "-1"); - SwiftRestClient restClient = mkInstance(configuration); - } - - @Test - public void testPositivePartsize() throws Exception { - final Configuration configuration = createCoreConfig(); - int size = 127; - configuration.set(SWIFT_PARTITION_SIZE, Integer.toString(size)); - SwiftRestClient restClient = mkInstance(configuration); - assertEquals(size, restClient.getPartSizeKB()); - } - - @Test - public void testProxyData() throws Exception { - final Configuration configuration = createCoreConfig(); - String proxy="web-proxy"; - int port = 8088; - configuration.set(SWIFT_PROXY_HOST_PROPERTY, proxy); - configuration.set(SWIFT_PROXY_PORT_PROPERTY, Integer.toString(port)); - SwiftRestClient restClient = mkInstance(configuration); - assertEquals(proxy, restClient.getProxyHost()); - assertEquals(port, restClient.getProxyPort()); - } 
- - private Configuration createCoreConfig() { - final Configuration configuration = new Configuration(); - set(configuration, DOT_AUTH_URL, "http://localhost:8080"); - set(configuration, DOT_TENANT, "tenant"); - set(configuration, DOT_USERNAME, "username"); - set(configuration, DOT_PASSWORD, "password"); - return configuration; - } - - private void set(Configuration configuration, String field, String value) { - configuration.set(SWIFT_SERVICE_PREFIX + SERVICE + field, value); - } - - private SwiftRestClient mkInstance(Configuration configuration) throws - IOException, - URISyntaxException { - URI uri = new URI("swift://container.openstack/"); - return SwiftRestClient.getInstance(uri, configuration); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBasicOps.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBasicOps.java deleted file mode 100644 index 516dc99fab0..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBasicOps.java +++ /dev/null @@ -1,296 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.junit.Assert; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.ParentNotDirectoryException; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.exceptions.SwiftBadRequestException; -import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Test; - -import java.io.FileNotFoundException; -import java.io.IOException; - -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.assertFileHasLength; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.assertIsDirectory; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.readBytesToString; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.writeTextFile; - - -/** - * Test basic filesystem operations. - * Many of these are similar to those in {@link TestSwiftFileSystemContract} - * -this is a JUnit4 test suite used to initially test the Swift - * component. Once written, there's no reason not to retain these tests. 
- */ -public class TestSwiftFileSystemBasicOps extends SwiftFileSystemBaseTest { - - private static final Logger LOG = - LoggerFactory.getLogger(TestSwiftFileSystemBasicOps.class); - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLsRoot() throws Throwable { - Path path = new Path("/"); - FileStatus[] statuses = fs.listStatus(path); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testMkDir() throws Throwable { - Path path = new Path("/test/MkDir"); - fs.mkdirs(path); - //success then -so try a recursive operation - fs.delete(path, true); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDeleteNonexistentFile() throws Throwable { - Path path = new Path("/test/DeleteNonexistentFile"); - assertFalse("delete returned true", fs.delete(path, false)); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testPutFile() throws Throwable { - Path path = new Path("/test/PutFile"); - Exception caught = null; - writeTextFile(fs, path, "Testing a put to a file", false); - assertDeleted(path, false); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testPutGetFile() throws Throwable { - Path path = new Path("/test/PutGetFile"); - try { - String text = "Testing a put and get to a file " - + System.currentTimeMillis(); - writeTextFile(fs, path, text, false); - - String result = readBytesToString(fs, path, text.length()); - assertEquals(text, result); - } finally { - delete(fs, path); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testPutDeleteFileInSubdir() throws Throwable { - Path path = - new Path("/test/PutDeleteFileInSubdir/testPutDeleteFileInSubdir"); - String text = "Testing a put and get to a file in a subdir " - + System.currentTimeMillis(); - writeTextFile(fs, path, text, false); - assertDeleted(path, false); - //now delete the parent that should have no children - assertDeleted(new Path("/test/PutDeleteFileInSubdir"), false); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRecursiveDelete() throws Throwable { - Path childpath = - new Path("/test/testRecursiveDelete"); - String text = "Testing a put and get to a file in a subdir " - + System.currentTimeMillis(); - writeTextFile(fs, childpath, text, false); - //now delete the parent that should have no children - assertDeleted(new Path("/test"), true); - assertFalse("child entry still present " + childpath, fs.exists(childpath)); - } - - private void delete(SwiftNativeFileSystem fs, Path path) { - try { - if (!fs.delete(path, false)) { - LOG.warn("Failed to delete " + path); - } - } catch (IOException e) { - LOG.warn("deleting " + path, e); - } - } - - private void deleteR(SwiftNativeFileSystem fs, Path path) { - try { - if (!fs.delete(path, true)) { - LOG.warn("Failed to delete " + path); - } - } catch (IOException e) { - LOG.warn("deleting " + path, e); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testOverwrite() throws Throwable { - Path path = new Path("/test/Overwrite"); - try { - String text = "Testing a put to a file " - + System.currentTimeMillis(); - writeTextFile(fs, path, text, false); - assertFileHasLength(fs, path, text.length()); - String text2 = "Overwriting a file " - + System.currentTimeMillis(); - writeTextFile(fs, path, text2, true); - assertFileHasLength(fs, path, text2.length()); - String result = readBytesToString(fs, path, text2.length()); - assertEquals(text2, result); - } finally { - delete(fs, path); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testOverwriteDirectory() throws Throwable { - Path path = new 
Path("/test/testOverwriteDirectory"); - try { - fs.mkdirs(path.getParent()); - String text = "Testing a put to a file " - + System.currentTimeMillis(); - writeTextFile(fs, path, text, false); - assertFileHasLength(fs, path, text.length()); - } finally { - delete(fs, path); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testFileStatus() throws Throwable { - Path path = new Path("/test/FileStatus"); - try { - String text = "Testing File Status " - + System.currentTimeMillis(); - writeTextFile(fs, path, text, false); - SwiftTestUtils.assertIsFile(fs, path); - } finally { - delete(fs, path); - } - } - - /** - * Assert that a newly created directory is a directory - * - * @throws Throwable if not, or if something else failed - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDirStatus() throws Throwable { - Path path = new Path("/test/DirStatus"); - try { - fs.mkdirs(path); - assertIsDirectory(fs, path); - } finally { - delete(fs, path); - } - } - - /** - * Assert that if a directory that has children is deleted, it is still - * a directory - * - * @throws Throwable if not, or if something else failed - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDirStaysADir() throws Throwable { - Path path = new Path("/test/dirStaysADir"); - Path child = new Path(path, "child"); - try { - //create the dir - fs.mkdirs(path); - //assert the parent has the directory nature - assertIsDirectory(fs, path); - //create the child dir - writeTextFile(fs, child, "child file", true); - //assert the parent has the directory nature - assertIsDirectory(fs, path); - //now rm the child - delete(fs, child); - } finally { - deleteR(fs, path); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testCreateMultilevelDir() throws Throwable { - Path base = new Path("/test/CreateMultilevelDir"); - Path path = new Path(base, "1/2/3"); - fs.mkdirs(path); - assertExists("deep multilevel dir not created", path); - fs.delete(base, true); - assertPathDoesNotExist("Multilevel delete failed", path); - assertPathDoesNotExist("Multilevel delete failed", base); - - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testCreateDirWithFileParent() throws Throwable { - Path path = new Path("/test/CreateDirWithFileParent"); - Path child = new Path(path, "subdir/child"); - fs.mkdirs(path.getParent()); - try { - //create the child dir - writeTextFile(fs, path, "parent", true); - try { - fs.mkdirs(child); - } catch (ParentNotDirectoryException expected) { - LOG.debug("Expected Exception", expected); - } - } finally { - fs.delete(path, true); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLongObjectNamesForbidden() throws Throwable { - StringBuilder buffer = new StringBuilder(1200); - buffer.append("/"); - for (int i = 0; i < (1200 / 4); i++) { - buffer.append(String.format("%04x", i)); - } - String pathString = buffer.toString(); - Path path = new Path(pathString); - try { - writeTextFile(fs, path, pathString, true); - //if we get here, problems. 
- fs.delete(path, false); - fail("Managed to create an object with a name of length " - + pathString.length()); - } catch (SwiftBadRequestException e) { - //expected - //LOG.debug("Caught exception " + e, e); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLsNonExistentFile() throws Exception { - try { - Path path = new Path("/test/hadoop/file"); - FileStatus[] statuses = fs.listStatus(path); - fail("Should throw FileNotFoundException on " + path - + " but got list of length " + statuses.length); - } catch (FileNotFoundException fnfe) { - // expected - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testGetCanonicalServiceName() { - Assert.assertNull(fs.getCanonicalServiceName()); - } - - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBlockLocation.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBlockLocation.java deleted file mode 100644 index 1ad28a6cc45..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBlockLocation.java +++ /dev/null @@ -1,167 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.apache.hadoop.fs.BlockLocation; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Test; - -import java.io.IOException; - -/** - * Test block location logic. - * The endpoint may or may not be location-aware - */ -public class TestSwiftFileSystemBlockLocation extends SwiftFileSystemBaseTest { - - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLocateSingleFileBlocks() throws Throwable { - describe("verify that a file returns 1+ blocks"); - FileStatus fileStatus = createFileAndGetStatus(); - BlockLocation[] locations = - getFs().getFileBlockLocations(fileStatus, 0, 1); - assertNotEqual("No block locations supplied for " + fileStatus, 0, - locations.length); - for (BlockLocation location : locations) { - assertLocationValid(location); - } - } - - private void assertLocationValid(BlockLocation location) throws - IOException { - LOG.info("{}", location); - String[] hosts = location.getHosts(); - String[] names = location.getNames(); - assertNotEqual("No hosts supplied for " + location, 0, hosts.length); - //for every host, there's a name. 
- assertEquals("Unequal names and hosts in " + location, - hosts.length, names.length); - assertEquals(SwiftProtocolConstants.BLOCK_LOCATION, - location.getNames()[0]); - assertEquals(SwiftProtocolConstants.TOPOLOGY_PATH, - location.getTopologyPaths()[0]); - } - - private FileStatus createFileAndGetStatus() throws IOException { - Path path = path("/test/locatedFile"); - createFile(path); - return fs.getFileStatus(path); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLocateNullStatus() throws Throwable { - describe("verify that a null filestatus maps to a null location array"); - BlockLocation[] locations = - getFs().getFileBlockLocations((FileStatus) null, 0, 1); - assertNull(locations); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLocateNegativeSeek() throws Throwable { - describe("verify that a negative offset is illegal"); - try { - BlockLocation[] locations = - getFs().getFileBlockLocations(createFileAndGetStatus(), - -1, - 1); - fail("Expected an exception, got " + locations.length + " locations"); - } catch (IllegalArgumentException e) { - //expected - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLocateNegativeLen() throws Throwable { - describe("verify that a negative length is illegal"); - try { - BlockLocation[] locations = - getFs().getFileBlockLocations(createFileAndGetStatus(), - 0, - -1); - fail("Expected an exception, got " + locations.length + " locations"); - } catch (IllegalArgumentException e) { - //expected - } - } - - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLocateOutOfRangeLen() throws Throwable { - describe("overshooting the length is legal, as long as the" + - " origin location is valid"); - - BlockLocation[] locations = - getFs().getFileBlockLocations(createFileAndGetStatus(), - 0, - data.length + 100); - assertNotNull(locations); - assertTrue(locations.length > 0); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLocateOutOfRangeSrc() throws Throwable { - describe("Seeking out of the file length returns an empty array"); - - BlockLocation[] locations = - getFs().getFileBlockLocations(createFileAndGetStatus(), - data.length + 100, - 1); - assertEmptyBlockLocations(locations); - } - - private void assertEmptyBlockLocations(BlockLocation[] locations) { - assertNotNull(locations); - if (locations.length!=0) { - fail("non empty locations[] with first entry of " + locations[0]); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLocateDirectory() throws Throwable { - describe("verify that locating a directory is an error"); - createFile(path("/test/filename")); - FileStatus status = fs.getFileStatus(path("/test")); - LOG.info("Filesystem is " + fs + "; target is " + status); - SwiftTestUtils.assertIsDirectory(status); - BlockLocation[] locations; - locations = getFs().getFileBlockLocations(status, - 0, - 1); - assertEmptyBlockLocations(locations); - } - - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLocateRootDirectory() throws Throwable { - describe("verify that locating the root directory is an error"); - FileStatus status = fs.getFileStatus(path("/")); - SwiftTestUtils.assertIsDirectory(status); - BlockLocation[] locations; - locations = getFs().getFileBlockLocations(status, - 0, - 1); - assertEmptyBlockLocations(locations); - } - - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBlocksize.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBlocksize.java deleted 
file mode 100644 index 02111632486..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBlocksize.java +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Test; - -/** - * Tests that blocksize is never zero for a file, either in the FS default - * or the FileStatus value of a queried file - */ -public class TestSwiftFileSystemBlocksize extends SwiftFileSystemBaseTest { - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDefaultBlocksizeNonZero() throws Throwable { - assertTrue("Zero default blocksize", 0L != getFs().getDefaultBlockSize()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDefaultBlocksizeRootPathNonZero() throws Throwable { - assertTrue("Zero default blocksize", - 0L != getFs().getDefaultBlockSize(new Path("/"))); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDefaultBlocksizeOtherPathNonZero() throws Throwable { - assertTrue("Zero default blocksize", - 0L != getFs().getDefaultBlockSize(new Path("/test"))); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testBlocksizeNonZeroForFile() throws Throwable { - Path smallfile = new Path("/test/smallfile"); - SwiftTestUtils.writeTextFile(fs, smallfile, "blocksize", true); - createFile(smallfile); - FileStatus status = getFs().getFileStatus(smallfile); - assertTrue("Zero blocksize in " + status, - status.getBlockSize() != 0L); - assertTrue("Zero replication in " + status, - status.getReplication() != 0L); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemConcurrency.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemConcurrency.java deleted file mode 100644 index c447919efa4..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemConcurrency.java +++ /dev/null @@ -1,105 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.fs.FSDataOutputStream; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Test; - -import java.io.FileNotFoundException; -import java.io.IOException; -import java.util.concurrent.ExecutorService; -import java.util.concurrent.Executors; -import java.util.concurrent.TimeUnit; - -/** - * Test Swift FS concurrency logic. This isn't a very accurate test, - * because it is hard to consistently generate race conditions. - * Consider it "best effort" - */ -public class TestSwiftFileSystemConcurrency extends SwiftFileSystemBaseTest { - protected static final Logger LOG = - LoggerFactory.getLogger(TestSwiftFileSystemConcurrency.class); - private Exception thread1Ex, thread2Ex; - public static final String TEST_RACE_CONDITION_ON_DELETE_DIR = - "/test/testraceconditionondirdeletetest"; - - /** - * test on concurrent file system changes - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRaceConditionOnDirDeleteTest() throws Exception { - SwiftTestUtils.skip("Skipping unreliable test"); - - final String message = "message"; - final Path fileToRead = new Path( - TEST_RACE_CONDITION_ON_DELETE_DIR +"/files/many-files/file"); - final ExecutorService executorService = Executors.newFixedThreadPool(2); - fs.create(new Path(TEST_RACE_CONDITION_ON_DELETE_DIR +"/file/test/file1")); - fs.create(new Path(TEST_RACE_CONDITION_ON_DELETE_DIR + "/documents/doc1")); - fs.create(new Path( - TEST_RACE_CONDITION_ON_DELETE_DIR + "/pictures/picture")); - - - executorService.execute(new Runnable() { - @Override - public void run() { - try { - assertDeleted(new Path(TEST_RACE_CONDITION_ON_DELETE_DIR), true); - } catch (IOException e) { - LOG.warn("deletion thread:" + e, e); - thread1Ex = e; - throw new RuntimeException(e); - } - } - }); - executorService.execute(new Runnable() { - @Override - public void run() { - try { - final FSDataOutputStream outputStream = fs.create(fileToRead); - outputStream.write(message.getBytes()); - outputStream.close(); - } catch (IOException e) { - LOG.warn("writer thread:" + e, e); - thread2Ex = e; - throw new RuntimeException(e); - } - } - }); - - executorService.awaitTermination(1, TimeUnit.MINUTES); - if (thread1Ex != null) { - throw thread1Ex; - } - if (thread2Ex != null) { - throw thread2Ex; - } - try { - fs.open(fileToRead); - LOG.info("concurrency test failed to trigger a failure"); - } catch (FileNotFoundException expected) { - - } - - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemContract.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemContract.java deleted file mode 100644 index 1655b95231c..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemContract.java +++ /dev/null @@ -1,138 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. 
See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FileSystemContractBaseTest; -import org.apache.hadoop.fs.ParentNotDirectoryException; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Before; -import org.junit.Test; - -import static org.junit.Assert.*; - -import java.io.IOException; -import java.net.URI; -import java.net.URISyntaxException; - -/** - * This is the full filesystem contract test -which requires the - * Default config set up to point to a filesystem. - * - * Some of the tests override the base class tests -these - * are where SwiftFS does not implement those features, or - * when the behavior of SwiftFS does not match the normal - * contract -which normally means that directories and equal files - * are being treated as equal. - */ -public class TestSwiftFileSystemContract - extends FileSystemContractBaseTest { - private static final Logger LOG = - LoggerFactory.getLogger(TestSwiftFileSystemContract.class); - - /** - * Override this if the filesystem is not case sensitive - * @return true if the case detection/preservation tests should run - */ - protected boolean filesystemIsCaseSensitive() { - return false; - } - - @Before - public void setUp() throws Exception { - final URI uri = getFilesystemURI(); - final Configuration conf = new Configuration(); - fs = createSwiftFS(); - try { - fs.initialize(uri, conf); - } catch (IOException e) { - //FS init failed, set it to null so that teardown doesn't - //attempt to use it - fs = null; - throw e; - } - } - - protected URI getFilesystemURI() throws URISyntaxException, IOException { - return SwiftTestUtils.getServiceURI(new Configuration()); - } - - protected SwiftNativeFileSystem createSwiftFS() throws IOException { - SwiftNativeFileSystem swiftNativeFileSystem = - new SwiftNativeFileSystem(); - return swiftNativeFileSystem; - } - - @Test - public void testMkdirsFailsForSubdirectoryOfExistingFile() throws Exception { - Path testDir = path("/test/hadoop"); - assertFalse(fs.exists(testDir)); - assertTrue(fs.mkdirs(testDir)); - assertTrue(fs.exists(testDir)); - - Path filepath = path("/test/hadoop/file"); - SwiftTestUtils.writeTextFile(fs, filepath, "hello, world", false); - - Path testSubDir = new Path(filepath, "subdir"); - SwiftTestUtils.assertPathDoesNotExist(fs, "subdir before mkdir", testSubDir); - - try { - fs.mkdirs(testSubDir); - fail("Should throw IOException."); - } catch (ParentNotDirectoryException e) { - // expected - } - //now verify that the subdir path does not exist - SwiftTestUtils.assertPathDoesNotExist(fs, "subdir after mkdir", testSubDir); - - Path testDeepSubDir = 
path("/test/hadoop/file/deep/sub/dir"); - try { - fs.mkdirs(testDeepSubDir); - fail("Should throw IOException."); - } catch (ParentNotDirectoryException e) { - // expected - } - SwiftTestUtils.assertPathDoesNotExist(fs, "testDeepSubDir after mkdir", - testDeepSubDir); - - } - - @Test - public void testWriteReadAndDeleteEmptyFile() throws Exception { - try { - super.testWriteReadAndDeleteEmptyFile(); - } catch (AssertionError e) { - SwiftTestUtils.downgrade("empty files get mistaken for directories", e); - } - } - - @Test - public void testMkdirsWithUmask() throws Exception { - //unsupported - } - - @Test - public void testZeroByteFilesAreFiles() throws Exception { -// SwiftTestUtils.unsupported("testZeroByteFilesAreFiles"); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemDelete.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemDelete.java deleted file mode 100644 index 81af49c2a34..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemDelete.java +++ /dev/null @@ -1,90 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift; - -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Test; - -import java.io.IOException; -/** - * Test deletion operations - */ -public class TestSwiftFileSystemDelete extends SwiftFileSystemBaseTest { - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDeleteEmptyFile() throws IOException { - final Path file = new Path("/test/testDeleteEmptyFile"); - createEmptyFile(file); - SwiftTestUtils.noteAction("about to delete"); - assertDeleted(file, true); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDeleteEmptyFileTwice() throws IOException { - final Path file = new Path("/test/testDeleteEmptyFileTwice"); - createEmptyFile(file); - assertDeleted(file, true); - SwiftTestUtils.noteAction("multiple creates, and deletes"); - assertFalse("Delete returned true", fs.delete(file, false)); - createEmptyFile(file); - assertDeleted(file, true); - assertFalse("Delete returned true", fs.delete(file, false)); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDeleteNonEmptyFile() throws IOException { - final Path file = new Path("/test/testDeleteNonEmptyFile"); - createFile(file); - assertDeleted(file, true); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDeleteNonEmptyFileTwice() throws IOException { - final Path file = new Path("/test/testDeleteNonEmptyFileTwice"); - createFile(file); - assertDeleted(file, true); - assertFalse("Delete returned true", fs.delete(file, false)); - createFile(file); - assertDeleted(file, true); - assertFalse("Delete returned true", fs.delete(file, false)); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDeleteTestDir() throws IOException { - final Path file = new Path("/test/"); - fs.delete(file, true); - assertPathDoesNotExist("Test dir found", file); - } - - /** - * Test recursive root directory deletion fails if there is an entry underneath - * @throws Throwable - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRmRootDirRecursiveIsForbidden() throws Throwable { - Path root = path("/"); - Path testFile = path("/test"); - createFile(testFile); - assertTrue("rm(/) returned false", fs.delete(root, true)); - assertExists("Root dir is missing", root); - assertPathDoesNotExist("test file not deleted", testFile); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemDirectories.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemDirectories.java deleted file mode 100644 index 9b4ba5e8c6f..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemDirectories.java +++ /dev/null @@ -1,141 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.snative.SwiftFileStatus; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Test; - -import java.io.FileNotFoundException; - -/** - * Test swift-specific directory logic. - * This class is HDFS-1 compatible; its designed to be subclasses by something - * with HDFS2 extensions - */ -public class TestSwiftFileSystemDirectories extends SwiftFileSystemBaseTest { - - /** - * Asserts that a zero byte file has a status of file and not - * directory or symlink - * - * @throws Exception on failures - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testZeroByteFilesAreDirectories() throws Exception { - Path src = path("/test/testZeroByteFilesAreFiles"); - //create a zero byte file - SwiftTestUtils.touch(fs, src); - SwiftTestUtils.assertIsDirectory(fs, src); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testNoStatusForMissingDirectories() throws Throwable { - Path missing = path("/test/testNoStatusForMissingDirectories"); - assertPathDoesNotExist("leftover?", missing); - try { - FileStatus[] statuses = fs.listStatus(missing); - //not expected - fail("Expected a FileNotFoundException, got the status " + statuses); - } catch (FileNotFoundException expected) { - //expected - } - } - - /** - * test that a dir off root has a listStatus() call that - * works as expected. and that when a child is added. it changes - * - * @throws Exception on failures - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDirectoriesOffRootHaveMatchingFileStatus() throws Exception { - Path test = path("/test"); - fs.delete(test, true); - mkdirs(test); - assertExists("created test directory", test); - FileStatus[] statuses = fs.listStatus(test); - String statusString = statusToString(test.toString(), statuses); - assertEquals("Wrong number of elements in file status " + statusString, 0, - statuses.length); - - Path src = path("/test/file"); - - //create a zero byte file - SwiftTestUtils.touch(fs, src); - //stat it - statuses = fs.listStatus(test); - statusString = statusToString(test.toString(), statuses); - assertEquals("Wrong number of elements in file status " + statusString, 1, - statuses.length); - SwiftFileStatus stat = (SwiftFileStatus) statuses[0]; - assertTrue("isDir(): Not a directory: " + stat, stat.isDirectory()); - extraStatusAssertions(stat); - } - - /** - * test that a dir two levels down has a listStatus() call that - * works as expected. 
- * - * @throws Exception on failures - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDirectoriesLowerDownHaveMatchingFileStatus() throws Exception { - Path test = path("/test/testDirectoriesLowerDownHaveMatchingFileStatus"); - fs.delete(test, true); - mkdirs(test); - assertExists("created test sub directory", test); - FileStatus[] statuses = fs.listStatus(test); - String statusString = statusToString(test.toString(), statuses); - assertEquals("Wrong number of elements in file status " + statusString,0, - statuses.length); - } - - private String statusToString(String pathname, - FileStatus[] statuses) { - assertNotNull(statuses); - return SwiftTestUtils.dumpStats(pathname,statuses); - } - - /** - * method for subclasses to add extra assertions - * @param stat status to look at - */ - protected void extraStatusAssertions(SwiftFileStatus stat) { - - } - - /** - * Asserts that a zero byte file has a status of file and not - * directory or symlink - * - * @throws Exception on failures - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testMultiByteFilesAreFiles() throws Exception { - Path src = path("/test/testMultiByteFilesAreFiles"); - SwiftTestUtils.writeTextFile(fs, src, "testMultiByteFilesAreFiles", false); - assertIsFile(src); - FileStatus status = fs.getFileStatus(src); - assertFalse(status.isDirectory()); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemExtendedContract.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemExtendedContract.java deleted file mode 100644 index 844463db6d9..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemExtendedContract.java +++ /dev/null @@ -1,143 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FSDataInputStream; -import org.apache.hadoop.fs.FSDataOutputStream; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.http.RestClientBindings; -import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.apache.hadoop.io.IOUtils; -import org.apache.hadoop.util.StringUtils; -import org.junit.Test; - -import java.io.FileNotFoundException; -import java.io.IOException; -import java.net.URI; - -public class TestSwiftFileSystemExtendedContract extends SwiftFileSystemBaseTest { - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testOpenNonExistingFile() throws IOException { - final Path p = new Path("/test/testOpenNonExistingFile"); - //open it as a file, should get FileNotFoundException - try { - final FSDataInputStream in = fs.open(p); - in.close(); - fail("didn't expect to get here"); - } catch (FileNotFoundException fnfe) { - LOG.debug("Expected: " + fnfe, fnfe); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testFilesystemHasURI() throws Throwable { - assertNotNull(fs.getUri()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testCreateFile() throws Exception { - final Path f = new Path("/test/testCreateFile"); - final FSDataOutputStream fsDataOutputStream = fs.create(f); - fsDataOutputStream.close(); - assertExists("created file", f); - } - - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testWriteReadFile() throws Exception { - final Path f = new Path("/test/test"); - final FSDataOutputStream fsDataOutputStream = fs.create(f); - final String message = "Test string"; - fsDataOutputStream.write(message.getBytes()); - fsDataOutputStream.close(); - assertExists("created file", f); - FSDataInputStream open = null; - try { - open = fs.open(f); - final byte[] bytes = new byte[512]; - final int read = open.read(bytes); - final byte[] buffer = new byte[read]; - System.arraycopy(bytes, 0, buffer, 0, read); - assertEquals(message, new String(buffer)); - } finally { - fs.delete(f, false); - IOUtils.closeStream(open); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testConfDefinesFilesystem() throws Throwable { - Configuration conf = new Configuration(); - SwiftTestUtils.getServiceURI(conf); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testConfIsValid() throws Throwable { - Configuration conf = new Configuration(); - URI fsURI = SwiftTestUtils.getServiceURI(conf); - RestClientBindings.bind(fsURI, conf); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testGetSchemeImplemented() throws Throwable { - String scheme = fs.getScheme(); - assertEquals(SwiftNativeFileSystem.SWIFT,scheme); - } - - /** - * Assert that a filesystem is case sensitive. - * This is done by creating a mixed-case filename and asserting that - * its lower case version is not there. 
- * - * @throws Exception failures - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testFilesystemIsCaseSensitive() throws Exception { - String mixedCaseFilename = "/test/UPPER.TXT"; - Path upper = path(mixedCaseFilename); - Path lower = path(StringUtils.toLowerCase(mixedCaseFilename)); - assertFalse("File exists" + upper, fs.exists(upper)); - assertFalse("File exists" + lower, fs.exists(lower)); - FSDataOutputStream out = fs.create(upper); - out.writeUTF("UPPER"); - out.close(); - FileStatus upperStatus = fs.getFileStatus(upper); - assertExists("Original upper case file" + upper, upper); - //verify the lower-case version of the filename doesn't exist - assertPathDoesNotExist("lower case file", lower); - //now overwrite the lower case version of the filename with a - //new version. - out = fs.create(lower); - out.writeUTF("l"); - out.close(); - assertExists("lower case file", lower); - //verify the length of the upper file hasn't changed - assertExists("Original upper case file " + upper, upper); - FileStatus newStatus = fs.getFileStatus(upper); - assertEquals("Expected status:" + upperStatus - + " actual status " + newStatus, - upperStatus.getLen(), - newStatus.getLen()); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemLsOperations.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemLsOperations.java deleted file mode 100644 index 5e2b1b72319..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemLsOperations.java +++ /dev/null @@ -1,169 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift; - -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.junit.Test; - -import java.io.IOException; - -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.assertListStatusFinds; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.cleanup; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.dumpStats; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.touch; - -/** - * Test the FileSystem#listStatus() operations - */ -public class TestSwiftFileSystemLsOperations extends SwiftFileSystemBaseTest { - - private Path[] testDirs; - - /** - * Setup creates dirs under test/hadoop - * - * @throws Exception - */ - @Override - public void setUp() throws Exception { - super.setUp(); - //delete the test directory - Path test = path("/test"); - fs.delete(test, true); - mkdirs(test); - } - - /** - * Create subdirectories and files under test/ for those tests - * that want them. Doing so adds overhead to setup and teardown, - * so should only be done for those tests that need them. 
- * @throws IOException on an IO problem - */ - private void createTestSubdirs() throws IOException { - testDirs = new Path[]{ - path("/test/hadoop/a"), - path("/test/hadoop/b"), - path("/test/hadoop/c/1"), - }; - - assertPathDoesNotExist("test directory setup", testDirs[0]); - for (Path path : testDirs) { - mkdirs(path); - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListLevelTest() throws Exception { - createTestSubdirs(); - FileStatus[] paths = fs.listStatus(path("/test")); - assertEquals(dumpStats("/test", paths), 1, paths.length); - assertEquals(path("/test/hadoop"), paths[0].getPath()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListLevelTestHadoop() throws Exception { - createTestSubdirs(); - FileStatus[] paths; - paths = fs.listStatus(path("/test/hadoop")); - String stats = dumpStats("/test/hadoop", paths); - assertEquals("Paths.length wrong in " + stats, 3, paths.length); - assertEquals("Path element[0] wrong: " + stats, path("/test/hadoop/a"), - paths[0].getPath()); - assertEquals("Path element[1] wrong: " + stats, path("/test/hadoop/b"), - paths[1].getPath()); - assertEquals("Path element[2] wrong: " + stats, path("/test/hadoop/c"), - paths[2].getPath()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListStatusEmptyDirectory() throws Exception { - createTestSubdirs(); - FileStatus[] paths; - paths = fs.listStatus(path("/test/hadoop/a")); - assertEquals(dumpStats("/test/hadoop/a", paths), 0, - paths.length); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListStatusFile() throws Exception { - describe("Create a single file under /test;" + - " assert that listStatus(/test) finds it"); - Path file = path("/test/filename"); - createFile(file); - FileStatus[] pathStats = fs.listStatus(file); - assertEquals(dumpStats("/test/", pathStats), - 1, - pathStats.length); - //and assert that the len of that ls'd path is the same as the original - FileStatus lsStat = pathStats[0]; - assertEquals("Wrong file len in listing of " + lsStat, - data.length, lsStat.getLen()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListEmptyRoot() throws Throwable { - describe("Empty the root dir and verify that an LS / returns {}"); - cleanup("testListEmptyRoot", fs, "/test"); - cleanup("testListEmptyRoot", fs, "/user"); - FileStatus[] fileStatuses = fs.listStatus(path("/")); - assertEquals("Non-empty root" + dumpStats("/", fileStatuses), - 0, - fileStatuses.length); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListNonEmptyRoot() throws Throwable { - Path test = path("/test"); - touch(fs, test); - FileStatus[] fileStatuses = fs.listStatus(path("/")); - String stats = dumpStats("/", fileStatuses); - assertEquals("Wrong #of root children" + stats, 1, fileStatuses.length); - FileStatus status = fileStatuses[0]; - assertEquals("Wrong path value" + stats,test, status.getPath()); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListStatusRootDir() throws Throwable { - Path dir = path("/"); - Path child = path("/test"); - touch(fs, child); - assertListStatusFinds(fs, dir, child); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListStatusFiltered() throws Throwable { - Path dir = path("/"); - Path child = path("/test"); - touch(fs, child); - FileStatus[] stats = fs.listStatus(dir, new AcceptAllFilter()); - boolean found = false; - StringBuilder builder = new StringBuilder(); - for (FileStatus stat : stats) { - builder.append(stat.toString()).append('\n'); - if (stat.getPath().equals(child)) { 
- found = true; - } - } - assertTrue("Path " + child - + " not found in directory " + dir + ":" + builder, - found); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemPartitionedUploads.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemPartitionedUploads.java deleted file mode 100644 index 419d0303a04..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemPartitionedUploads.java +++ /dev/null @@ -1,442 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.fs.swift; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.BlockLocation; -import org.apache.hadoop.fs.FSDataOutputStream; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants; -import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.apache.hadoop.fs.swift.util.SwiftUtils; -import org.apache.hadoop.io.IOUtils; -import org.apache.http.Header; -import org.junit.Test; -import org.junit.internal.AssumptionViolatedException; - -import java.io.IOException; -import java.net.URI; -import java.net.URISyntaxException; - -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.assertPathExists; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.readDataset; - -/** - * Test partitioned uploads. - * This is done by forcing a very small partition size and verifying that it - * is picked up. 
- */ -public class TestSwiftFileSystemPartitionedUploads extends - SwiftFileSystemBaseTest { - - public static final String WRONG_PARTITION_COUNT = - "wrong number of partitions written into "; - public static final int PART_SIZE = 1; - public static final int PART_SIZE_BYTES = PART_SIZE * 1024; - public static final int BLOCK_SIZE = 1024; - private URI uri; - - @Override - protected Configuration createConfiguration() { - Configuration conf = super.createConfiguration(); - //set the partition size to 1 KB - conf.setInt(SwiftProtocolConstants.SWIFT_PARTITION_SIZE, PART_SIZE); - return conf; - } - - @Test(timeout = SWIFT_BULK_IO_TEST_TIMEOUT) - public void testPartitionPropertyPropagatesToConf() throws Throwable { - assertEquals(1, - getConf().getInt(SwiftProtocolConstants.SWIFT_PARTITION_SIZE, - 0)); - } - - @Test(timeout = SWIFT_BULK_IO_TEST_TIMEOUT) - public void testPartionPropertyPropagatesToStore() throws Throwable { - assertEquals(1, fs.getStore().getPartsizeKB()); - } - - /** - * tests functionality for big files ( > 5Gb) upload - */ - @Test(timeout = SWIFT_BULK_IO_TEST_TIMEOUT) - public void testFilePartUpload() throws Throwable { - - final Path path = new Path("/test/testFilePartUpload"); - - int len = 8192; - final byte[] src = SwiftTestUtils.dataset(len, 32, 144); - FSDataOutputStream out = fs.create(path, - false, - getBufferSize(), - (short) 1, - BLOCK_SIZE); - - try { - int totalPartitionsToWrite = len / PART_SIZE_BYTES; - assertPartitionsWritten("Startup", out, 0); - //write 2048 - int firstWriteLen = 2048; - out.write(src, 0, firstWriteLen); - //assert - long expected = getExpectedPartitionsWritten(firstWriteLen, - PART_SIZE_BYTES, - false); - SwiftUtils.debug(LOG, "First write: predict %d partitions written", - expected); - assertPartitionsWritten("First write completed", out, expected); - //write the rest - int remainder = len - firstWriteLen; - SwiftUtils.debug(LOG, "remainder: writing: %d bytes", remainder); - - out.write(src, firstWriteLen, remainder); - expected = - getExpectedPartitionsWritten(len, PART_SIZE_BYTES, false); - assertPartitionsWritten("Remaining data", out, expected); - out.close(); - expected = - getExpectedPartitionsWritten(len, PART_SIZE_BYTES, true); - assertPartitionsWritten("Stream closed", out, expected); - - Header[] headers = fs.getStore().getObjectHeaders(path, true); - for (Header header : headers) { - LOG.info(header.toString()); - } - - byte[] dest = readDataset(fs, path, len); - LOG.info("Read dataset from " + path + ": data length =" + len); - //compare data - SwiftTestUtils.compareByteArrays(src, dest, len); - FileStatus status; - - final Path qualifiedPath = fs.makeQualified(path); - status = fs.getFileStatus(qualifiedPath); - //now see what block location info comes back. 
- //This will vary depending on the Swift version, so the results - //aren't checked -merely that the test actually worked - BlockLocation[] locations = fs.getFileBlockLocations(status, 0, len); - assertNotNull("Null getFileBlockLocations()", locations); - assertTrue("empty array returned for getFileBlockLocations()", - locations.length > 0); - - //last bit of test -which seems to play up on partitions, which we download - //to a skip - try { - validatePathLen(path, len); - } catch (AssertionError e) { - //downgrade to a skip - throw new AssumptionViolatedException(e, null); - } - - } finally { - IOUtils.closeStream(out); - } - } - /** - * tests functionality for big files ( > 5Gb) upload - */ - @Test(timeout = SWIFT_BULK_IO_TEST_TIMEOUT) - public void testFilePartUploadNoLengthCheck() throws IOException, URISyntaxException { - - final Path path = new Path("/test/testFilePartUploadLengthCheck"); - - int len = 8192; - final byte[] src = SwiftTestUtils.dataset(len, 32, 144); - FSDataOutputStream out = fs.create(path, - false, - getBufferSize(), - (short) 1, - BLOCK_SIZE); - - try { - int totalPartitionsToWrite = len / PART_SIZE_BYTES; - assertPartitionsWritten("Startup", out, 0); - //write 2048 - int firstWriteLen = 2048; - out.write(src, 0, firstWriteLen); - //assert - long expected = getExpectedPartitionsWritten(firstWriteLen, - PART_SIZE_BYTES, - false); - SwiftUtils.debug(LOG, "First write: predict %d partitions written", - expected); - assertPartitionsWritten("First write completed", out, expected); - //write the rest - int remainder = len - firstWriteLen; - SwiftUtils.debug(LOG, "remainder: writing: %d bytes", remainder); - - out.write(src, firstWriteLen, remainder); - expected = - getExpectedPartitionsWritten(len, PART_SIZE_BYTES, false); - assertPartitionsWritten("Remaining data", out, expected); - out.close(); - expected = - getExpectedPartitionsWritten(len, PART_SIZE_BYTES, true); - assertPartitionsWritten("Stream closed", out, expected); - - Header[] headers = fs.getStore().getObjectHeaders(path, true); - for (Header header : headers) { - LOG.info(header.toString()); - } - - byte[] dest = readDataset(fs, path, len); - LOG.info("Read dataset from " + path + ": data length =" + len); - //compare data - SwiftTestUtils.compareByteArrays(src, dest, len); - FileStatus status = fs.getFileStatus(path); - - //now see what block location info comes back. 
- //This will vary depending on the Swift version, so the results - //aren't checked -merely that the test actually worked - BlockLocation[] locations = fs.getFileBlockLocations(status, 0, len); - assertNotNull("Null getFileBlockLocations()", locations); - assertTrue("empty array returned for getFileBlockLocations()", - locations.length > 0); - } finally { - IOUtils.closeStream(out); - } - } - - private FileStatus validatePathLen(Path path, int len) throws IOException { - //verify that the length is what was written in a direct status check - final Path qualifiedPath = fs.makeQualified(path); - FileStatus[] parentDirListing = fs.listStatus(qualifiedPath.getParent()); - StringBuilder listing = lsToString(parentDirListing); - String parentDirLS = listing.toString(); - FileStatus status = fs.getFileStatus(qualifiedPath); - assertEquals("Length of written file " + qualifiedPath - + " from status check " + status - + " in dir " + listing, - len, - status.getLen()); - String fileInfo = qualifiedPath + " " + status; - assertFalse("File claims to be a directory " + fileInfo, - status.isDirectory()); - - FileStatus listedFileStat = resolveChild(parentDirListing, qualifiedPath); - assertNotNull("Did not find " + path + " in " + parentDirLS, - listedFileStat); - //file is in the parent dir. Now validate it's stats - assertEquals("Wrong len for " + path + " in listing " + parentDirLS, - len, - listedFileStat.getLen()); - listedFileStat.toString(); - return status; - } - - private FileStatus resolveChild(FileStatus[] parentDirListing, - Path childPath) { - FileStatus listedFileStat = null; - for (FileStatus stat : parentDirListing) { - if (stat.getPath().equals(childPath)) { - listedFileStat = stat; - } - } - return listedFileStat; - } - - private StringBuilder lsToString(FileStatus[] parentDirListing) { - StringBuilder listing = new StringBuilder(); - for (FileStatus stat : parentDirListing) { - listing.append(stat).append("\n"); - } - return listing; - } - - /** - * Calculate the #of partitions expected from the upload - * @param uploaded number of bytes uploaded - * @param partSizeBytes the partition size - * @param closed whether or not the stream has closed - * @return the expected number of partitions, for use in assertions. - */ - private int getExpectedPartitionsWritten(long uploaded, - int partSizeBytes, - boolean closed) { - //#of partitions in total - int partitions = (int) (uploaded / partSizeBytes); - //#of bytes past the last partition - int remainder = (int) (uploaded % partSizeBytes); - if (closed) { - //all data is written, so if there was any remainder, it went up - //too - return partitions + ((remainder > 0) ? 1 : 0); - } else { - //not closed. All the remainder is buffered, - return partitions; - } - } - - private int getBufferSize() { - return fs.getConf().getInt("io.file.buffer.size", 4096); - } - - /** - * Test sticks up a very large partitioned file and verifies that - * it comes back unchanged. 
- * @throws Throwable - */ - @Test(timeout = SWIFT_BULK_IO_TEST_TIMEOUT) - public void testManyPartitionedFile() throws Throwable { - final Path path = new Path("/test/testManyPartitionedFile"); - - int len = PART_SIZE_BYTES * 15; - final byte[] src = SwiftTestUtils.dataset(len, 32, 144); - FSDataOutputStream out = fs.create(path, - false, - getBufferSize(), - (short) 1, - BLOCK_SIZE); - - out.write(src, 0, src.length); - int expected = - getExpectedPartitionsWritten(len, PART_SIZE_BYTES, true); - out.close(); - assertPartitionsWritten("write completed", out, expected); - assertEquals("too few bytes written", len, - SwiftNativeFileSystem.getBytesWritten(out)); - assertEquals("too few bytes uploaded", len, - SwiftNativeFileSystem.getBytesUploaded(out)); - //now we verify that the data comes back. If it - //doesn't, it means that the ordering of the partitions - //isn't right - byte[] dest = readDataset(fs, path, len); - //compare data - SwiftTestUtils.compareByteArrays(src, dest, len); - //finally, check the data - FileStatus[] stats = fs.listStatus(path); - assertEquals("wrong entry count in " - + SwiftTestUtils.dumpStats(path.toString(), stats), - expected, stats.length); - } - - /** - * Test that when a partitioned file is overwritten by a smaller one, - * all the old partitioned files go away - * @throws Throwable - */ - @Test(timeout = SWIFT_BULK_IO_TEST_TIMEOUT) - public void testOverwritePartitionedFile() throws Throwable { - final Path path = new Path("/test/testOverwritePartitionedFile"); - - final int len1 = 8192; - final byte[] src1 = SwiftTestUtils.dataset(len1, 'A', 'Z'); - FSDataOutputStream out = fs.create(path, - false, - getBufferSize(), - (short) 1, - 1024); - out.write(src1, 0, len1); - out.close(); - long expected = getExpectedPartitionsWritten(len1, - PART_SIZE_BYTES, - false); - assertPartitionsWritten("initial upload", out, expected); - assertExists("Exists", path); - FileStatus status = fs.getFileStatus(path); - assertEquals("Length", len1, status.getLen()); - //now write a shorter file with a different dataset - final int len2 = 4095; - final byte[] src2 = SwiftTestUtils.dataset(len2, 'a', 'z'); - out = fs.create(path, - true, - getBufferSize(), - (short) 1, - 1024); - out.write(src2, 0, len2); - out.close(); - status = fs.getFileStatus(path); - assertEquals("Length", len2, status.getLen()); - byte[] dest = readDataset(fs, path, len2); - //compare data - SwiftTestUtils.compareByteArrays(src2, dest, len2); - } - - @Test(timeout = SWIFT_BULK_IO_TEST_TIMEOUT) - public void testDeleteSmallPartitionedFile() throws Throwable { - final Path path = new Path("/test/testDeleteSmallPartitionedFile"); - - final int len1 = 1024; - final byte[] src1 = SwiftTestUtils.dataset(len1, 'A', 'Z'); - SwiftTestUtils.writeDataset(fs, path, src1, len1, 1024, false); - assertExists("Exists", path); - - Path part_0001 = new Path(path, SwiftUtils.partitionFilenameFromNumber(1)); - Path part_0002 = new Path(path, SwiftUtils.partitionFilenameFromNumber(2)); - String ls = SwiftTestUtils.ls(fs, path); - assertExists("Partition 0001 Exists in " + ls, part_0001); - assertPathDoesNotExist("partition 0002 found under " + ls, part_0002); - assertExists("Partition 0002 Exists in " + ls, part_0001); - fs.delete(path, false); - assertPathDoesNotExist("deleted file still there", path); - ls = SwiftTestUtils.ls(fs, path); - assertPathDoesNotExist("partition 0001 file still under " + ls, part_0001); - } - - @Test(timeout = SWIFT_BULK_IO_TEST_TIMEOUT) - public void testDeletePartitionedFile() throws Throwable 
{ - final Path path = new Path("/test/testDeletePartitionedFile"); - - SwiftTestUtils.writeDataset(fs, path, data, data.length, 1024, false); - assertExists("Exists", path); - - Path part_0001 = new Path(path, SwiftUtils.partitionFilenameFromNumber(1)); - Path part_0002 = new Path(path, SwiftUtils.partitionFilenameFromNumber(2)); - String ls = SwiftTestUtils.ls(fs, path); - assertExists("Partition 0001 Exists in " + ls, part_0001); - assertExists("Partition 0002 Exists in " + ls, part_0001); - fs.delete(path, false); - assertPathDoesNotExist("deleted file still there", path); - ls = SwiftTestUtils.ls(fs, path); - assertPathDoesNotExist("partition 0001 file still under " + ls, part_0001); - assertPathDoesNotExist("partition 0002 file still under " + ls, part_0002); - } - - - @Test(timeout = SWIFT_BULK_IO_TEST_TIMEOUT) - public void testRenamePartitionedFile() throws Throwable { - Path src = new Path("/test/testRenamePartitionedFileSrc"); - - int len = data.length; - SwiftTestUtils.writeDataset(fs, src, data, len, 1024, false); - assertExists("Exists", src); - - String partOneName = SwiftUtils.partitionFilenameFromNumber(1); - Path srcPart = new Path(src, partOneName); - Path dest = new Path("/test/testRenamePartitionedFileDest"); - Path destPart = new Path(src, partOneName); - assertExists("Partition Exists", srcPart); - fs.rename(src, dest); - assertPathExists(fs, "dest file missing", dest); - FileStatus status = fs.getFileStatus(dest); - assertEquals("Length of renamed file is wrong", len, status.getLen()); - byte[] destData = readDataset(fs, dest, len); - //compare data - SwiftTestUtils.compareByteArrays(data, destData, len); - String srcLs = SwiftTestUtils.ls(fs, src); - String destLs = SwiftTestUtils.ls(fs, dest); - - assertPathDoesNotExist("deleted file still found in " + srcLs, src); - - assertPathDoesNotExist("partition file still found in " + srcLs, srcPart); - } - - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemRead.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemRead.java deleted file mode 100644 index 84794cb7250..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemRead.java +++ /dev/null @@ -1,94 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift; - -import org.apache.hadoop.fs.BlockLocation; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.junit.Test; - -import java.io.EOFException; -import java.io.IOException; - -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.readBytesToString; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.writeTextFile; - -/** - * Test filesystem read operations - */ -public class TestSwiftFileSystemRead extends SwiftFileSystemBaseTest { - - - /** - * Read past the end of a file: expect the operation to fail - * @throws IOException - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testOverRead() throws IOException { - final String message = "message"; - final Path filePath = new Path("/test/file.txt"); - - writeTextFile(fs, filePath, message, false); - - try { - readBytesToString(fs, filePath, 20); - fail("expected an exception"); - } catch (EOFException e) { - //expected - } - } - - /** - * Read and write some JSON - * @throws IOException - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRWJson() throws IOException { - final String message = "{" + - " 'json': { 'i':43, 'b':true}," + - " 's':'string'" + - "}"; - final Path filePath = new Path("/test/file.json"); - - writeTextFile(fs, filePath, message, false); - String readJson = readBytesToString(fs, filePath, message.length()); - assertEquals(message,readJson); - //now find out where it is - FileStatus status = fs.getFileStatus(filePath); - BlockLocation[] locations = fs.getFileBlockLocations(status, 0, 10); - } - - /** - * Read and write some XML - * @throws IOException - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRWXML() throws IOException { - final String message = "" + - " " + - " string" + - ""; - final Path filePath = new Path("/test/file.xml"); - - writeTextFile(fs, filePath, message, false); - String read = readBytesToString(fs, filePath, message.length()); - assertEquals(message,read); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemRename.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemRename.java deleted file mode 100644 index f5ad155ffe3..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemRename.java +++ /dev/null @@ -1,275 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift; - -import org.apache.hadoop.fs.FSDataInputStream; -import org.apache.hadoop.fs.FSDataOutputStream; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.exceptions.SwiftOperationFailedException; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Test; - -import java.io.IOException; - -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.compareByteArrays; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.dataset; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.readBytesToString; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.readDataset; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.writeDataset; - -public class TestSwiftFileSystemRename extends SwiftFileSystemBaseTest { - - /** - * Rename a file into a directory - * - * @throws Exception - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameFileIntoExistingDirectory() throws Exception { - assumeRenameSupported(); - - Path src = path("/test/olddir/file"); - createFile(src); - Path dst = path("/test/new/newdir"); - fs.mkdirs(dst); - rename(src, dst, true, false, true); - Path newFile = path("/test/new/newdir/file"); - if (!fs.exists(newFile)) { - String ls = ls(dst); - LOG.info(ls(path("/test/new"))); - LOG.info(ls(path("/test/hadoop"))); - fail("did not find " + newFile + " - directory: " + ls); - } - assertTrue("Destination changed", - fs.exists(path("/test/new/newdir/file"))); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameFile() throws Exception { - assumeRenameSupported(); - - final Path old = new Path("/test/alice/file"); - final Path newPath = new Path("/test/bob/file"); - fs.mkdirs(newPath.getParent()); - final FSDataOutputStream fsDataOutputStream = fs.create(old); - final byte[] message = "Some data".getBytes(); - fsDataOutputStream.write(message); - fsDataOutputStream.close(); - - assertTrue(fs.exists(old)); - rename(old, newPath, true, false, true); - - final FSDataInputStream bobStream = fs.open(newPath); - final byte[] bytes = new byte[512]; - final int read = bobStream.read(bytes); - bobStream.close(); - final byte[] buffer = new byte[read]; - System.arraycopy(bytes, 0, buffer, 0, read); - assertEquals(new String(message), new String(buffer)); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameDirectory() throws Exception { - assumeRenameSupported(); - - final Path old = new Path("/test/data/logs"); - final Path newPath = new Path("/test/var/logs"); - fs.mkdirs(old); - fs.mkdirs(newPath.getParent()); - assertTrue(fs.exists(old)); - rename(old, newPath, true, false, true); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameTheSameDirectory() throws Exception { - assumeRenameSupported(); - - final Path old = new Path("/test/usr/data"); - fs.mkdirs(old); - rename(old, old, false, true, true); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameDirectoryIntoExistingDirectory() throws Exception { - assumeRenameSupported(); - - Path src = path("/test/olddir/dir"); - fs.mkdirs(src); - createFile(path("/test/olddir/dir/file1")); - createFile(path("/test/olddir/dir/subdir/file2")); - - Path dst = path("/test/new/newdir"); - fs.mkdirs(dst); - //this renames into a child - rename(src, dst, true, false, true); - assertExists("new dir", path("/test/new/newdir/dir")); - assertExists("Renamed nested file1", path("/test/new/newdir/dir/file1")); - assertPathDoesNotExist("Nested file1 should have been deleted", - 
path("/test/olddir/dir/file1")); - assertExists("Renamed nested subdir", - path("/test/new/newdir/dir/subdir/")); - assertExists("file under subdir", - path("/test/new/newdir/dir/subdir/file2")); - - assertPathDoesNotExist("Nested /test/hadoop/dir/subdir/file2 still exists", - path("/test/olddir/dir/subdir/file2")); - } - - /** - * trying to rename a directory onto itself should fail, - * preserving everything underneath. - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameDirToSelf() throws Throwable { - assumeRenameSupported(); - Path parentdir = path("/test/parentdir"); - fs.mkdirs(parentdir); - Path child = new Path(parentdir, "child"); - createFile(child); - - rename(parentdir, parentdir, false, true, true); - //verify the child is still there - assertIsFile(child); - } - - /** - * Assert that root directory renames are not allowed - * - * @throws Exception on failures - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameRootDirForbidden() throws Exception { - assumeRenameSupported(); - rename(path("/"), - path("/test/newRootDir"), - false, true, false); - } - - /** - * Assert that renaming a parent directory to be a child - * of itself is forbidden - * - * @throws Exception on failures - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameChildDirForbidden() throws Exception { - assumeRenameSupported(); - - Path parentdir = path("/test/parentdir"); - fs.mkdirs(parentdir); - Path childFile = new Path(parentdir, "childfile"); - createFile(childFile); - //verify one level down - Path childdir = new Path(parentdir, "childdir"); - rename(parentdir, childdir, false, true, false); - //now another level - fs.mkdirs(childdir); - Path childchilddir = new Path(childdir, "childdir"); - rename(parentdir, childchilddir, false, true, false); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameFileAndVerifyContents() throws IOException { - assumeRenameSupported(); - - final Path filePath = new Path("/test/home/user/documents/file.txt"); - final Path newFilePath = new Path("/test/home/user/files/file.txt"); - mkdirs(newFilePath.getParent()); - int len = 1024; - byte[] dataset = dataset(len, 'A', 26); - writeDataset(fs, filePath, dataset, len, len, false); - rename(filePath, newFilePath, true, false, true); - byte[] dest = readDataset(fs, newFilePath, len); - compareByteArrays(dataset, dest, len); - String reread = readBytesToString(fs, newFilePath, 20); - } - - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testMoveFileUnderParent() throws Throwable { - if (!renameSupported()) return; - Path filepath = path("test/file"); - createFile(filepath); - //HDFS expects rename src, src -> true - rename(filepath, filepath, true, true, true); - //verify the file is still there - assertIsFile(filepath); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testMoveDirUnderParent() throws Throwable { - if (!renameSupported()) { - return; - } - Path testdir = path("test/dir"); - fs.mkdirs(testdir); - Path parent = testdir.getParent(); - //the outcome here is ambiguous, so is not checked - try { - fs.rename(testdir, parent); - } catch (SwiftOperationFailedException e) { - // allowed - } - assertExists("Source directory has been deleted ", testdir); - } - - /** - * trying to rename a file onto itself should succeed (it's a no-op) - */ - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameFileToSelf() throws Throwable { - if (!renameSupported()) return; - Path filepath = path("test/file"); - createFile(filepath); - //HDFS expects rename src, 
src -> true - rename(filepath, filepath, true, true, true); - //verify the file is still there - assertIsFile(filepath); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenamedConsistence() throws IOException { - assumeRenameSupported(); - describe("verify that overwriting a file with new data doesn't impact" + - " the existing content"); - - final Path filePath = new Path("/test/home/user/documents/file.txt"); - final Path newFilePath = new Path("/test/home/user/files/file.txt"); - mkdirs(newFilePath.getParent()); - int len = 1024; - byte[] dataset = dataset(len, 'A', 26); - byte[] dataset2 = dataset(len, 'a', 26); - writeDataset(fs, filePath, dataset, len, len, false); - rename(filePath, newFilePath, true, false, true); - SwiftTestUtils.writeAndRead(fs, filePath, dataset2, len, len, false, true); - byte[] dest = readDataset(fs, newFilePath, len); - compareByteArrays(dataset, dest, len); - String reread = readBytesToString(fs, newFilePath, 20); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRenameMissingFile() throws Throwable { - assumeRenameSupported(); - Path path = path("/test/RenameMissingFile"); - Path path2 = path("/test/RenameMissingFileDest"); - mkdirs(path("test")); - rename(path, path2, false, false, false); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftObjectPath.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftObjectPath.java deleted file mode 100644 index 5692b48f116..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftObjectPath.java +++ /dev/null @@ -1,171 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.fs.swift; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.http.RestClientBindings; -import org.apache.hadoop.fs.swift.http.SwiftRestClient; -import org.apache.hadoop.fs.swift.util.SwiftObjectPath; -import org.apache.hadoop.fs.swift.util.SwiftUtils; -import org.junit.Test; - -import java.net.URI; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; - -/** - * Unit tests for SwiftObjectPath class. - */ -public class TestSwiftObjectPath implements SwiftTestConstants { - private static final Logger LOG = - LoggerFactory.getLogger(TestSwiftObjectPath.class); - - /** - * What an endpoint looks like. 
This is derived from a (valid) - * rackspace endpoint address - */ - private static final String ENDPOINT = - "https://storage101.region1.example.org/v1/MossoCloudFS_9fb40cc0-1234-5678-9abc-def000c9a66"; - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testParsePath() throws Exception { - final String pathString = "/home/user/files/file1"; - final Path path = new Path(pathString); - final URI uri = new URI("http://container.localhost"); - final SwiftObjectPath expected = SwiftObjectPath.fromPath(uri, path); - final SwiftObjectPath actual = new SwiftObjectPath( - RestClientBindings.extractContainerName(uri), - pathString); - - assertEquals(expected, actual); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testParseUrlPath() throws Exception { - final String pathString = "swift://container.service1/home/user/files/file1"; - final URI uri = new URI(pathString); - final Path path = new Path(pathString); - final SwiftObjectPath expected = SwiftObjectPath.fromPath(uri, path); - final SwiftObjectPath actual = new SwiftObjectPath( - RestClientBindings.extractContainerName(uri), - "/home/user/files/file1"); - - assertEquals(expected, actual); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testHandleUrlAsPath() throws Exception { - final String hostPart = "swift://container.service1"; - final String pathPart = "/home/user/files/file1"; - final String uriString = hostPart + pathPart; - - final SwiftObjectPath expected = new SwiftObjectPath(uriString, pathPart); - final SwiftObjectPath actual = new SwiftObjectPath(uriString, uriString); - - assertEquals(expected, actual); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testParseAuthenticatedUrl() throws Exception { - final String pathString = "swift://container.service1/v2/AUTH_00345h34l93459y4/home/tom/documents/finance.docx"; - final URI uri = new URI(pathString); - final Path path = new Path(pathString); - final SwiftObjectPath expected = SwiftObjectPath.fromPath(uri, path); - final SwiftObjectPath actual = new SwiftObjectPath( - RestClientBindings.extractContainerName(uri), - "/home/tom/documents/finance.docx"); - - assertEquals(expected, actual); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testConvertToPath() throws Throwable { - String initialpath = "/dir/file1"; - Path ipath = new Path(initialpath); - SwiftObjectPath objectPath = SwiftObjectPath.fromPath(new URI(initialpath), - ipath); - URI endpoint = new URI(ENDPOINT); - URI uri = SwiftRestClient.pathToURI(objectPath, endpoint); - LOG.info("Inital Hadoop Path =" + initialpath); - LOG.info("Merged URI=" + uri); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRootDirProbeEmptyPath() throws Throwable { - SwiftObjectPath object=new SwiftObjectPath("container",""); - assertTrue(SwiftUtils.isRootDir(object)); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testRootDirProbeRootPath() throws Throwable { - SwiftObjectPath object=new SwiftObjectPath("container","/"); - assertTrue(SwiftUtils.isRootDir(object)); - } - - private void assertParentOf(SwiftObjectPath p1, SwiftObjectPath p2) { - assertTrue(p1.toString() + " is not a parent of " + p2 ,p1.isEqualToOrParentOf( - p2)); - } - - private void assertNotParentOf(SwiftObjectPath p1, SwiftObjectPath p2) { - assertFalse(p1.toString() + " is a parent of " + p2, p1.isEqualToOrParentOf( - p2)); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testChildOfProbe() throws Throwable { - SwiftObjectPath parent = new SwiftObjectPath("container", - "/parent"); - SwiftObjectPath 
parent2 = new SwiftObjectPath("container", - "/parent2"); - SwiftObjectPath child = new SwiftObjectPath("container", - "/parent/child"); - SwiftObjectPath sibling = new SwiftObjectPath("container", - "/parent/sibling"); - SwiftObjectPath grandchild = new SwiftObjectPath("container", - "/parent/child/grandchild"); - assertParentOf(parent, child); - assertParentOf(parent, grandchild); - assertParentOf(child, grandchild); - assertParentOf(parent, parent); - assertNotParentOf(child, parent); - assertParentOf(child, child); - assertNotParentOf(parent, parent2); - assertNotParentOf(grandchild, parent); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testChildOfRoot() throws Throwable { - SwiftObjectPath root = new SwiftObjectPath("container", "/"); - SwiftObjectPath child = new SwiftObjectPath("container", "child"); - SwiftObjectPath grandchild = new SwiftObjectPath("container", - "/child/grandchild"); - assertParentOf(root, child); - assertParentOf(root, grandchild); - assertParentOf(child, grandchild); - assertParentOf(root, root); - assertNotParentOf(child, root); - assertParentOf(child, child); - assertNotParentOf(grandchild, root); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/SwiftContract.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/SwiftContract.java deleted file mode 100644 index 99f72b7be98..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/SwiftContract.java +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift.contract; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.contract.AbstractBondedFSContract; -import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem; - -/** - * The contract of OpenStack Swift: only enabled if the test binding data is provided - */ -public class SwiftContract extends AbstractBondedFSContract { - - public static final String CONTRACT_XML = "contract/swift.xml"; - - public SwiftContract(Configuration conf) { - super(conf); - //insert the base features - addConfResource(CONTRACT_XML); - } - - - @Override - public String getScheme() { - return SwiftNativeFileSystem.SWIFT; - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractCreate.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractCreate.java deleted file mode 100644 index df15a0a84c3..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractCreate.java +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.contract; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.contract.AbstractContractCreateTest; -import org.apache.hadoop.fs.contract.AbstractFSContract; -import org.apache.hadoop.fs.contract.ContractTestUtils; - -public class TestSwiftContractCreate extends AbstractContractCreateTest { - - @Override - protected AbstractFSContract createContract(Configuration conf) { - return new SwiftContract(conf); - } - - @Override - public void testOverwriteEmptyDirectory() throws Throwable { - ContractTestUtils.skip("blobstores can't distinguish empty directories from files"); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractDelete.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractDelete.java deleted file mode 100644 index 65d031cd398..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractDelete.java +++ /dev/null @@ -1,31 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.contract; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.contract.AbstractContractDeleteTest; -import org.apache.hadoop.fs.contract.AbstractFSContract; - -public class TestSwiftContractDelete extends AbstractContractDeleteTest { - - @Override - protected AbstractFSContract createContract(Configuration conf) { - return new SwiftContract(conf); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractMkdir.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractMkdir.java deleted file mode 100644 index b82ba776386..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractMkdir.java +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.contract; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.contract.AbstractContractMkdirTest; -import org.apache.hadoop.fs.contract.AbstractFSContract; - -/** - * Test dir operations on S3 - */ -public class TestSwiftContractMkdir extends AbstractContractMkdirTest { - - @Override - protected AbstractFSContract createContract(Configuration conf) { - return new SwiftContract(conf); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractOpen.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractOpen.java deleted file mode 100644 index 0f91b6f823e..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractOpen.java +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.contract; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.contract.AbstractContractOpenTest; -import org.apache.hadoop.fs.contract.AbstractFSContract; -import org.apache.hadoop.fs.contract.ContractTestUtils; - -public class TestSwiftContractOpen extends AbstractContractOpenTest { - - @Override - protected AbstractFSContract createContract(Configuration conf) { - return new SwiftContract(conf); - } - - @Override - public void testOpenReadDir() throws Throwable { - ContractTestUtils.skip("Skipping object-store quirk"); - } - - @Override - public void testOpenReadDirWithChild() throws Throwable { - ContractTestUtils.skip("Skipping object-store quirk"); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractRename.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractRename.java deleted file mode 100644 index 8f1edb9b6a3..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractRename.java +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.contract; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.contract.AbstractContractRenameTest; -import org.apache.hadoop.fs.contract.AbstractFSContract; - -public class TestSwiftContractRename extends AbstractContractRenameTest { - - @Override - protected AbstractFSContract createContract(Configuration conf) { - return new SwiftContract(conf); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractRootDir.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractRootDir.java deleted file mode 100644 index c7b766edd49..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractRootDir.java +++ /dev/null @@ -1,35 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.contract; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest; -import org.apache.hadoop.fs.contract.AbstractFSContract; - -/** - * root dir operations against an S3 bucket - */ -public class TestSwiftContractRootDir extends - AbstractContractRootDirectoryTest { - - @Override - protected AbstractFSContract createContract(Configuration conf) { - return new SwiftContract(conf); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractSeek.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractSeek.java deleted file mode 100644 index d045980e698..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/contract/TestSwiftContractSeek.java +++ /dev/null @@ -1,31 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.contract; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.contract.AbstractContractSeekTest; -import org.apache.hadoop.fs.contract.AbstractFSContract; - -public class TestSwiftContractSeek extends AbstractContractSeekTest { - - @Override - protected AbstractFSContract createContract(Configuration conf) { - return new SwiftContract(conf); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/hdfs2/TestSwiftFileSystemDirectoriesHdfs2.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/hdfs2/TestSwiftFileSystemDirectoriesHdfs2.java deleted file mode 100644 index cb64bef6c5c..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/hdfs2/TestSwiftFileSystemDirectoriesHdfs2.java +++ /dev/null @@ -1,43 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.hdfs2; - -import org.apache.hadoop.fs.swift.TestSwiftFileSystemDirectories; -import org.apache.hadoop.fs.swift.snative.SwiftFileStatus; - -/** - * Add some HDFS-2 only assertions to {@link TestSwiftFileSystemDirectories} - */ -public class TestSwiftFileSystemDirectoriesHdfs2 extends - TestSwiftFileSystemDirectories { - - - /** - * make assertions about fields that only appear in - * FileStatus in HDFS2 - * @param stat status to look at - */ - protected void extraStatusAssertions(SwiftFileStatus stat) { - //HDFS2 - assertTrue("isDirectory(): Not a directory: " + stat, stat.isDirectory()); - assertFalse("isFile(): declares itself a file: " + stat, stat.isFile()); - assertFalse("isFile(): declares itself a file: " + stat, stat.isSymlink()); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/hdfs2/TestV2LsOperations.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/hdfs2/TestV2LsOperations.java deleted file mode 100644 index 833b91d57f2..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/hdfs2/TestV2LsOperations.java +++ /dev/null @@ -1,129 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.hdfs2; - -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.LocatedFileStatus; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.RemoteIterator; -import org.apache.hadoop.fs.swift.SwiftFileSystemBaseTest; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Test; - -import java.io.IOException; - -public class TestV2LsOperations extends SwiftFileSystemBaseTest { - - private Path[] testDirs; - - /** - * Setup creates dirs under test/hadoop - * @throws Exception - */ - @Override - public void setUp() throws Exception { - super.setUp(); - //delete the test directory - Path test = path("/test"); - fs.delete(test, true); - mkdirs(test); - } - - /** - * Create subdirectories and files under test/ for those tests - * that want them. Doing so adds overhead to setup and teardown, - * so should only be done for those tests that need them. 
- * @throws IOException on an IO problem - */ - private void createTestSubdirs() throws IOException { - testDirs = new Path[]{ - path("/test/hadoop/a"), - path("/test/hadoop/b"), - path("/test/hadoop/c/1"), - }; - assertPathDoesNotExist("test directory setup", testDirs[0]); - for (Path path : testDirs) { - mkdirs(path); - } - } - - /** - * To get this project to compile under Hadoop 1, this code needs to be - * commented out - * - * - * @param fs filesystem - * @param dir dir - * @param subdir subdir - * @param recursive recurse? - * @throws IOException IO problems - */ - public static void assertListFilesFinds(FileSystem fs, - Path dir, - Path subdir, - boolean recursive) throws IOException { - RemoteIterator iterator = - fs.listFiles(dir, recursive); - boolean found = false; - int entries = 0; - StringBuilder builder = new StringBuilder(); - while (iterator.hasNext()) { - LocatedFileStatus next = iterator.next(); - entries++; - builder.append(next.toString()).append('\n'); - if (next.getPath().equals(subdir)) { - found = true; - } - } - assertTrue("Path " + subdir - + " not found in directory " + dir + " : " - + " entries=" + entries - + " content" - + builder.toString(), - found); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListFilesRootDir() throws Throwable { - Path dir = path("/"); - Path child = new Path(dir, "test"); - fs.delete(child, true); - SwiftTestUtils.writeTextFile(fs, child, "text", false); - assertListFilesFinds(fs, dir, child, false); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListFilesSubDir() throws Throwable { - createTestSubdirs(); - Path dir = path("/test/subdir"); - Path child = new Path(dir, "text.txt"); - SwiftTestUtils.writeTextFile(fs, child, "text", false); - assertListFilesFinds(fs, dir, child, false); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testListFilesRecursive() throws Throwable { - createTestSubdirs(); - Path dir = path("/test/recursive"); - Path child = new Path(dir, "hadoop/a/a.txt"); - SwiftTestUtils.writeTextFile(fs, child, "text", false); - assertListFilesFinds(fs, dir, child, true); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/http/TestRestClientBindings.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/http/TestRestClientBindings.java deleted file mode 100644 index 8075e08404a..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/http/TestRestClientBindings.java +++ /dev/null @@ -1,198 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift.http; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.swift.SwiftTestConstants; -import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; - -import java.net.URI; -import java.net.URISyntaxException; -import java.util.Properties; - -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_AUTH_URL; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_PASSWORD; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_USERNAME; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_AUTH_PROPERTY; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_CONTAINER_PROPERTY; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_HTTPS_PORT_PROPERTY; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_HTTP_PORT_PROPERTY; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_PASSWORD_PROPERTY; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_REGION_PROPERTY; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_SERVICE_PROPERTY; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_TENANT_PROPERTY; -import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_USERNAME_PROPERTY; -import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.assertPropertyEquals; - -public class TestRestClientBindings extends Assert - implements SwiftTestConstants { - - private static final String SERVICE = "sname"; - private static final String CONTAINER = "cname"; - private static final String FS_URI = "swift://" - + CONTAINER + "." + SERVICE + "/"; - private static final String AUTH_URL = "http://localhost:8080/auth"; - private static final String USER = "user"; - private static final String PASS = "pass"; - private static final String TENANT = "tenant"; - private URI filesysURI; - private Configuration conf; - - @Before - public void setup() throws URISyntaxException { - filesysURI = new URI(FS_URI); - conf = new Configuration(true); - setInstanceVal(conf, SERVICE, DOT_AUTH_URL, AUTH_URL); - setInstanceVal(conf, SERVICE, DOT_USERNAME, USER); - setInstanceVal(conf, SERVICE, DOT_PASSWORD, PASS); - } - - private void setInstanceVal(Configuration conf, - String host, - String key, - String val) { - String instance = RestClientBindings.buildSwiftInstancePrefix(host); - String confkey = instance - + key; - conf.set(confkey, val); - } - - public void testPrefixBuilder() throws Throwable { - String built = RestClientBindings.buildSwiftInstancePrefix(SERVICE); - assertEquals("fs.swift.service." 
+ SERVICE, built); - } - - public void testBindAgainstConf() throws Exception { - Properties props = RestClientBindings.bind(filesysURI, conf); - assertPropertyEquals(props, SWIFT_CONTAINER_PROPERTY, CONTAINER); - assertPropertyEquals(props, SWIFT_SERVICE_PROPERTY, SERVICE); - assertPropertyEquals(props, SWIFT_AUTH_PROPERTY, AUTH_URL); - assertPropertyEquals(props, SWIFT_AUTH_PROPERTY, AUTH_URL); - assertPropertyEquals(props, SWIFT_USERNAME_PROPERTY, USER); - assertPropertyEquals(props, SWIFT_PASSWORD_PROPERTY, PASS); - - assertPropertyEquals(props, SWIFT_TENANT_PROPERTY, null); - assertPropertyEquals(props, SWIFT_REGION_PROPERTY, null); - assertPropertyEquals(props, SWIFT_HTTP_PORT_PROPERTY, null); - assertPropertyEquals(props, SWIFT_HTTPS_PORT_PROPERTY, null); - } - - public void expectBindingFailure(URI fsURI, Configuration config) { - try { - Properties binding = RestClientBindings.bind(fsURI, config); - //if we get here, binding didn't fail- there is something else. - //list the properties but not the values. - StringBuilder details = new StringBuilder() ; - for (Object key: binding.keySet()) { - details.append(key.toString()).append(" "); - } - fail("Expected a failure, got the binding [ "+ details+"]"); - } catch (SwiftConfigurationException expected) { - - } - } - - public void testBindAgainstConfMissingInstance() throws Exception { - Configuration badConf = new Configuration(); - expectBindingFailure(filesysURI, badConf); - } - - -/* Hadoop 2.x+ only, as conf.unset() isn't a v1 feature - public void testBindAgainstConfIncompleteInstance() throws Exception { - String instance = RestClientBindings.buildSwiftInstancePrefix(SERVICE); - conf.unset(instance + DOT_PASSWORD); - expectBindingFailure(filesysURI, conf); - } -*/ - - @Test(expected = SwiftConfigurationException.class) - public void testDottedServiceURL() throws Exception { - RestClientBindings.bind(new URI("swift://hadoop.apache.org/"), conf); - } - - @Test(expected = SwiftConfigurationException.class) - public void testMissingServiceURL() throws Exception { - RestClientBindings.bind(new URI("swift:///"), conf); - } - - /** - * inner test method that expects container extraction to fail - * -if not prints a meaningful error message. - * - * @param hostname hostname to parse - */ - private static void expectExtractContainerFail(String hostname) { - try { - String container = RestClientBindings.extractContainerName(hostname); - fail("Expected an error -got a container of '" + container - + "' from " + hostname); - } catch (SwiftConfigurationException expected) { - //expected - } - } - - /** - * inner test method that expects service extraction to fail - * -if not prints a meaningful error message. 
- * - * @param hostname hostname to parse - */ - public static void expectExtractServiceFail(String hostname) { - try { - String service = RestClientBindings.extractServiceName(hostname); - fail("Expected an error -got a service of '" + service - + "' from " + hostname); - } catch (SwiftConfigurationException expected) { - //expected - } - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testEmptyHostname() throws Throwable { - expectExtractContainerFail(""); - expectExtractServiceFail(""); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testDot() throws Throwable { - expectExtractContainerFail("."); - expectExtractServiceFail("."); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testSimple() throws Throwable { - expectExtractContainerFail("simple"); - expectExtractServiceFail("simple"); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testTrailingDot() throws Throwable { - expectExtractServiceFail("simple."); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testLeadingDot() throws Throwable { - expectExtractServiceFail(".leading"); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/http/TestSwiftRestClient.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/http/TestSwiftRestClient.java deleted file mode 100644 index 7568c11c562..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/http/TestSwiftRestClient.java +++ /dev/null @@ -1,117 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift.http; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.SwiftTestConstants; -import org.apache.hadoop.fs.swift.util.Duration; -import org.apache.hadoop.fs.swift.util.DurationStats; -import org.apache.hadoop.fs.swift.util.SwiftObjectPath; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.apache.http.Header; -import org.junit.Assert; -import org.junit.Assume; -import org.junit.Before; -import org.junit.Test; - -import java.io.ByteArrayInputStream; -import java.io.FileNotFoundException; -import java.io.IOException; -import java.net.URI; - -public class TestSwiftRestClient implements SwiftTestConstants { - private static final Logger LOG = - LoggerFactory.getLogger(TestSwiftRestClient.class); - - private Configuration conf; - private boolean runTests; - private URI serviceURI; - - @Before - public void setup() throws IOException { - conf = new Configuration(); - runTests = SwiftTestUtils.hasServiceURI(conf); - if (runTests) { - serviceURI = SwiftTestUtils.getServiceURI(conf); - } - } - - protected void assumeEnabled() { - Assume.assumeTrue(runTests); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testCreate() throws Throwable { - assumeEnabled(); - SwiftRestClient client = createClient(); - } - - private SwiftRestClient createClient() throws IOException { - return SwiftRestClient.getInstance(serviceURI, conf); - } - - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testAuthenticate() throws Throwable { - assumeEnabled(); - SwiftRestClient client = createClient(); - client.authenticate(); - } - - @Test(timeout = SWIFT_TEST_TIMEOUT) - public void testPutAndDelete() throws Throwable { - assumeEnabled(); - SwiftRestClient client = createClient(); - client.authenticate(); - Path path = new Path("restTestPutAndDelete"); - SwiftObjectPath sobject = SwiftObjectPath.fromPath(serviceURI, path); - byte[] stuff = new byte[1]; - stuff[0] = 'a'; - client.upload(sobject, new ByteArrayInputStream(stuff), stuff.length); - //check file exists - Duration head = new Duration(); - Header[] responseHeaders = client.headRequest("expect success", - sobject, - SwiftRestClient.NEWEST); - head.finished(); - LOG.info("head request duration " + head); - for (Header header: responseHeaders) { - LOG.info(header.toString()); - } - //delete the file - client.delete(sobject); - //check file is gone - try { - Header[] headers = client.headRequest("expect fail", - sobject, - SwiftRestClient.NEWEST); - Assert.fail("Expected deleted file, but object is still present: " - + sobject); - } catch (FileNotFoundException e) { - //expected - } - for (DurationStats stats: client.getOperationStatistics()) { - LOG.info(stats.toString()); - } - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/scale/SwiftScaleTestBase.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/scale/SwiftScaleTestBase.java deleted file mode 100644 index 314e7a1dfb8..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/scale/SwiftScaleTestBase.java +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.swift.scale; - -import org.apache.hadoop.fs.swift.SwiftFileSystemBaseTest; - -/** - * Base class for scale tests; here is where the common scale configuration - * keys are defined - */ - -public class SwiftScaleTestBase extends SwiftFileSystemBaseTest { - - public static final String SCALE_TEST = "scale.test."; - public static final String KEY_OPERATION_COUNT = SCALE_TEST + "operation.count"; - public static final long DEFAULT_OPERATION_COUNT = 10; - - protected long getOperationCount() { - return getConf().getLong(KEY_OPERATION_COUNT, DEFAULT_OPERATION_COUNT); - } -} diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/scale/TestWriteManySmallFiles.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/scale/TestWriteManySmallFiles.java deleted file mode 100644 index 1d6cfa2e866..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/scale/TestWriteManySmallFiles.java +++ /dev/null @@ -1,97 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.swift.scale; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.swift.util.Duration; -import org.apache.hadoop.fs.swift.util.DurationStats; -import org.apache.hadoop.fs.swift.util.SwiftTestUtils; -import org.junit.Test; - -public class TestWriteManySmallFiles extends SwiftScaleTestBase { - - public static final Logger LOG = - LoggerFactory.getLogger(TestWriteManySmallFiles.class); - - @Test(timeout = SWIFT_BULK_IO_TEST_TIMEOUT) - public void testScaledWriteThenRead() throws Throwable { - Path dir = new Path("/test/manysmallfiles"); - Duration rm1 = new Duration(); - fs.delete(dir, true); - rm1.finished(); - fs.mkdirs(dir); - Duration ls1 = new Duration(); - fs.listStatus(dir); - ls1.finished(); - long count = getOperationCount(); - SwiftTestUtils.noteAction("Beginning Write of "+ count + " files "); - DurationStats writeStats = new DurationStats("write"); - DurationStats readStats = new DurationStats("read"); - String format = "%08d"; - for (long l = 0; l < count; l++) { - String name = String.format(format, l); - Path p = new Path(dir, "part-" + name); - Duration d = new Duration(); - SwiftTestUtils.writeTextFile(fs, p, name, false); - d.finished(); - writeStats.add(d); - Thread.sleep(1000); - } - //at this point, the directory is full. - SwiftTestUtils.noteAction("Beginning ls"); - - Duration ls2 = new Duration(); - FileStatus[] status2 = (FileStatus[]) fs.listStatus(dir); - ls2.finished(); - assertEquals("Not enough entries in the directory", count, status2.length); - - SwiftTestUtils.noteAction("Beginning read"); - - for (long l = 0; l < count; l++) { - String name = String.format(format, l); - Path p = new Path(dir, "part-" + name); - Duration d = new Duration(); - String result = SwiftTestUtils.readBytesToString(fs, p, name.length()); - assertEquals(name, result); - d.finished(); - readStats.add(d); - } - //do a recursive delete - SwiftTestUtils.noteAction("Beginning delete"); - Duration rm2 = new Duration(); - fs.delete(dir, true); - rm2.finished(); - //print the stats - LOG.info(String.format("'filesystem','%s'",fs.getUri())); - LOG.info(writeStats.toString()); - LOG.info(readStats.toString()); - LOG.info(String.format( - "'rm1',%d,'ls1',%d", - rm1.value(), - ls1.value())); - LOG.info(String.format( - "'rm2',%d,'ls2',%d", - rm2.value(), - ls2.value())); - } - -} diff --git a/hadoop-tools/hadoop-openstack/src/test/resources/contract/swift.xml b/hadoop-tools/hadoop-openstack/src/test/resources/contract/swift.xml deleted file mode 100644 index fbf3a177c91..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/resources/contract/swift.xml +++ /dev/null @@ -1,105 +0,0 @@ - - - - - - - fs.contract.test.root-tests-enabled - true - - - - fs.contract.test.random-seek-count - 10 - - - - fs.contract.is-blobstore - true - - - - fs.contract.create-overwrites-directory - true - - - - fs.contract.create-visibility-delayed - true - - - - fs.contract.is-case-sensitive - true - - - - fs.contract.supports-append - false - - - - fs.contract.supports-atomic-directory-delete - false - - - - fs.contract.supports-atomic-rename - false - - - - fs.contract.supports-block-locality - false - - - - fs.contract.supports-concat - false - - - - fs.contract.supports-seek - true - - - - fs.contract.rejects-seek-past-eof - true - - - - fs.contract.supports-strict-exceptions - true - - - - fs.contract.supports-unix-permissions - false - - - - 
fs.contract.rename-returns-false-if-source-missing - true - - - diff --git a/hadoop-tools/hadoop-openstack/src/test/resources/core-site.xml b/hadoop-tools/hadoop-openstack/src/test/resources/core-site.xml deleted file mode 100644 index 9252e885871..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/resources/core-site.xml +++ /dev/null @@ -1,51 +0,0 @@ - - - - - - - - - - - - - hadoop.tmp.dir - target/build/test - A base for other temporary directories. - true - - - - - hadoop.security.authentication - simple - - - - - - diff --git a/hadoop-tools/hadoop-openstack/src/test/resources/log4j.properties b/hadoop-tools/hadoop-openstack/src/test/resources/log4j.properties deleted file mode 100644 index a3bb8204f0d..00000000000 --- a/hadoop-tools/hadoop-openstack/src/test/resources/log4j.properties +++ /dev/null @@ -1,39 +0,0 @@ -# -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# log4j configuration used during build and unit tests - -log4j.rootLogger=INFO,stdout -log4j.threshold=ALL -log4j.appender.stdout=org.apache.log4j.ConsoleAppender -log4j.appender.stdout.target=System.out -log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} (%F:%M(%L)) - %m%n -#log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c %x - %m%n" -#log4j.logger.org.apache.hadoop.fs.swift=DEBUG diff --git a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobConfigurationParser.java b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobConfigurationParser.java index 7e79179721f..9cd2f4778fc 100644 --- a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobConfigurationParser.java +++ b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobConfigurationParser.java @@ -25,6 +25,8 @@ import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.parsers.ParserConfigurationException; +import org.apache.hadoop.util.XMLUtils; + import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList; @@ -55,7 +57,7 @@ public class JobConfigurationParser { Properties result = new Properties(); try { - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); diff --git a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/ParsedConfigFile.java b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/ParsedConfigFile.java index 1d85872c08d..a6c8bdad87d 100644 --- a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/ParsedConfigFile.java +++ b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/ParsedConfigFile.java @@ -17,28 +17,27 @@ */ package org.apache.hadoop.tools.rumen; +import java.io.IOException; +import java.io.StringReader; import java.util.Properties; import java.util.regex.Pattern; import java.util.regex.Matcher; -import java.io.InputStream; -import java.io.ByteArrayInputStream; -import java.io.IOException; - -import java.nio.charset.Charset; - import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.ParserConfigurationException; import org.apache.hadoop.mapreduce.MRConfig; import org.apache.hadoop.mapreduce.MRJobConfig; +import org.apache.hadoop.util.XMLUtils; + import org.w3c.dom.Document; import org.w3c.dom.NodeList; import org.w3c.dom.Node; import org.w3c.dom.Element; import org.w3c.dom.Text; +import org.xml.sax.InputSource; import org.xml.sax.SAXException; class ParsedConfigFile { @@ -46,7 +45,6 @@ class ParsedConfigFile { Pattern.compile("_(job_[0-9]+_[0-9]+)_"); private static final Pattern heapPattern = Pattern.compile("-Xmx([0-9]+)([mMgG])"); - private static final Charset UTF_8 = Charset.forName("UTF-8"); final int heapMegabytes; @@ -103,13 +101,11 @@ class ParsedConfigFile { } try { - InputStream is = new ByteArrayInputStream(xmlString.getBytes(UTF_8)); - - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); - Document doc = db.parse(is); + Document doc = db.parse(new InputSource(new StringReader(xmlString))); Element root = doc.getDocumentElement(); diff 
--git a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java index 18e12cca05f..6cc4cf7597e 100644 --- a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java +++ b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java @@ -36,6 +36,8 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.Capacity import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.ResourceCommitRequest; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent; import org.apache.hadoop.yarn.sls.SLSRunner; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; @Private @Unstable @@ -45,6 +47,7 @@ public class SLSCapacityScheduler extends CapacityScheduler implements private final SLSSchedulerCommons schedulerCommons; private Configuration conf; private SLSRunner runner; + private static final Logger LOG = LoggerFactory.getLogger(SLSCapacityScheduler.class); public SLSCapacityScheduler() { schedulerCommons = new SLSSchedulerCommons(this); @@ -105,7 +108,12 @@ public class SLSCapacityScheduler extends CapacityScheduler implements @Override public void handle(SchedulerEvent schedulerEvent) { - schedulerCommons.handle(schedulerEvent); + try { + schedulerCommons.handle(schedulerEvent); + } catch(Exception e) { + LOG.error("Caught exception while handling scheduler event", e); + throw e; + } } @Override diff --git a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSFairScheduler.java b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSFairScheduler.java index 1b4d5ced69b..beb411025f8 100644 --- a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSFairScheduler.java +++ b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSFairScheduler.java @@ -31,6 +31,8 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ContainerUpdates; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler; import org.apache.hadoop.yarn.sls.SLSRunner; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; import java.util.List; @@ -40,6 +42,7 @@ public class SLSFairScheduler extends FairScheduler implements SchedulerWrapper, Configurable { private final SLSSchedulerCommons schedulerCommons; private SLSRunner runner; + private static final Logger LOG = LoggerFactory.getLogger(SLSFairScheduler.class); public SLSFairScheduler() { schedulerCommons = new SLSSchedulerCommons(this); @@ -63,7 +66,12 @@ public class SLSFairScheduler extends FairScheduler @Override public void handle(SchedulerEvent schedulerEvent) { - schedulerCommons.handle(schedulerEvent); + try { + schedulerCommons.handle(schedulerEvent); + } catch (Exception e){ + LOG.error("Caught exception while handling scheduler event", e); + throw e; + } } @Override diff --git a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SchedulerMetrics.java b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SchedulerMetrics.java index a1e530a6f77..f66cf4384d9 100644 --- a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SchedulerMetrics.java +++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SchedulerMetrics.java @@ -178,7 +178,7 @@ public abstract class SchedulerMetrics { pool.scheduleAtFixedRate(new HistogramsRunnable(), 0, 1000, TimeUnit.MILLISECONDS); - // a thread to output metrics for real-tiem tracking + // a thread to output metrics for real-time tracking pool.scheduleAtFixedRate(new MetricsLogRunnable(), 0, 1000, TimeUnit.MILLISECONDS); @@ -467,6 +467,9 @@ public abstract class SchedulerMetrics { schedulerHistogramList.add(histogram); histogramTimerMap.put(histogram, schedulerHandleTimerMap.get(e)); } + } catch (Exception e) { + LOG.error("Caught exception while registering scheduler metrics", e); + throw e; } finally { samplerLock.unlock(); } @@ -510,6 +513,9 @@ public abstract class SchedulerMetrics { } ); } + } catch (Exception e) { + LOG.error("Caught exception while registering nodes usage metrics", e); + throw e; } finally { samplerLock.unlock(); } diff --git a/hadoop-tools/hadoop-tools-dist/pom.xml b/hadoop-tools/hadoop-tools-dist/pom.xml index 73b6ae075d6..8a3e93c1037 100644 --- a/hadoop-tools/hadoop-tools-dist/pom.xml +++ b/hadoop-tools/hadoop-tools-dist/pom.xml @@ -92,12 +92,6 @@ pom ${project.version} - - org.apache.hadoop - hadoop-openstack - compile - ${project.version} - org.apache.hadoop hadoop-aws diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionClosedException.java b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/FederationStateStoreDatabase.sql similarity index 64% rename from hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionClosedException.java rename to hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/FederationStateStoreDatabase.sql index eeaf8a5606f..ae0a1c5b77f 100644 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionClosedException.java +++ b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/FederationStateStoreDatabase.sql @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,22 +15,12 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -package org.apache.hadoop.fs.swift.exceptions; -/** - * Exception raised when an attempt is made to use a closed stream - */ -public class SwiftConnectionClosedException extends SwiftException { +-- Script to create a new Database in SQLServer for the Federation StateStore - public static final String MESSAGE = - "Connection to Swift service has been closed"; +IF DB_ID ( '[FederationStateStore]') IS NOT NULL + DROP DATABASE [FederationStateStore]; +GO - public SwiftConnectionClosedException() { - super(MESSAGE); - } - - public SwiftConnectionClosedException(String reason) { - super(MESSAGE + ": " + reason); - } - -} +CREATE database FederationStateStore; +GO diff --git a/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/FederationStateStoreStoredProcs.sql b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/FederationStateStoreStoredProcs.sql index 17f9e96909c..cc8a79d6273 100644 --- a/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/FederationStateStoreStoredProcs.sql +++ b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/FederationStateStoreStoredProcs.sql @@ -24,10 +24,10 @@ IF OBJECT_ID ( '[sp_addApplicationHomeSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_addApplicationHomeSubCluster] - @applicationId VARCHAR(64), - @homeSubCluster VARCHAR(256), - @storedHomeSubCluster VARCHAR(256) OUTPUT, - @rowCount int OUTPUT + @applicationId_IN VARCHAR(64), + @homeSubCluster_IN VARCHAR(256), + @storedHomeSubCluster_OUT VARCHAR(256) OUTPUT, + @rowCount_OUT int OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -37,21 +37,21 @@ AS BEGIN -- Otherwise don't change the current mapping. IF NOT EXISTS (SELECT TOP 1 * FROM [dbo].[applicationsHomeSubCluster] - WHERE [applicationId] = @applicationId) + WHERE [applicationId] = @applicationId_IN) INSERT INTO [dbo].[applicationsHomeSubCluster] ( [applicationId], [homeSubCluster]) VALUES ( - @applicationId, - @homeSubCluster); + @applicationId_IN, + @homeSubCluster_IN); -- End of the IF block - SELECT @rowCount = @@ROWCOUNT; + SELECT @rowCount_OUT = @@ROWCOUNT; - SELECT @storedHomeSubCluster = [homeSubCluster] + SELECT @storedHomeSubCluster_OUT = [homeSubCluster] FROM [dbo].[applicationsHomeSubCluster] - WHERE [applicationId] = @applicationId; + WHERE [applicationId] = @applicationId_IN; COMMIT TRAN END TRY @@ -75,9 +75,9 @@ IF OBJECT_ID ( '[sp_updateApplicationHomeSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_updateApplicationHomeSubCluster] - @applicationId VARCHAR(64), - @homeSubCluster VARCHAR(256), - @rowCount int OUTPUT + @applicationId_IN VARCHAR(64), + @homeSubCluster_IN VARCHAR(256), + @rowCount_OUT int OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -85,9 +85,9 @@ AS BEGIN BEGIN TRAN UPDATE [dbo].[applicationsHomeSubCluster] - SET [homeSubCluster] = @homeSubCluster - WHERE [applicationId] = @applicationid; - SELECT @rowCount = @@ROWCOUNT; + SET [homeSubCluster] = @homeSubCluster_IN + WHERE [applicationId] = @applicationId_IN; + SELECT @rowCount_OUT = @@ROWCOUNT; COMMIT TRAN END TRY @@ -111,8 +111,8 @@ IF OBJECT_ID ( '[sp_getApplicationsHomeSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_getApplicationsHomeSubCluster] - @limit int, - @homeSubCluster VARCHAR(256) + @limit_IN int, + @homeSubCluster_IN VARCHAR(256) AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -128,8 +128,8 @@ AS BEGIN [createTime], row_number() over(order by [createTime] desc) AS app_rank FROM [dbo].[applicationsHomeSubCluster] - WHERE [homeSubCluster] = 
@homeSubCluster OR @homeSubCluster = '') AS applicationsHomeSubCluster - WHERE app_rank <= @limit; + WHERE [homeSubCluster] = @homeSubCluster_IN OR @homeSubCluster = '') AS applicationsHomeSubCluster + WHERE app_rank <= @limit_IN; END TRY @@ -150,16 +150,16 @@ IF OBJECT_ID ( '[sp_getApplicationHomeSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_getApplicationHomeSubCluster] - @applicationId VARCHAR(64), - @homeSubCluster VARCHAR(256) OUTPUT + @applicationId_IN VARCHAR(64), + @homeSubCluster_OUT VARCHAR(256) OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) BEGIN TRY - SELECT @homeSubCluster = [homeSubCluster] + SELECT @homeSubCluster_OUT = [homeSubCluster] FROM [dbo].[applicationsHomeSubCluster] - WHERE [applicationId] = @applicationid; + WHERE [applicationId] = @applicationId_IN; END TRY @@ -181,8 +181,8 @@ IF OBJECT_ID ( '[sp_deleteApplicationHomeSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_deleteApplicationHomeSubCluster] - @applicationId VARCHAR(64), - @rowCount int OUTPUT + @applicationId_IN VARCHAR(64), + @rowCount_OUT int OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -190,8 +190,8 @@ AS BEGIN BEGIN TRAN DELETE FROM [dbo].[applicationsHomeSubCluster] - WHERE [applicationId] = @applicationId; - SELECT @rowCount = @@ROWCOUNT; + WHERE [applicationId] = @applicationId_IN; + SELECT @rowCount_OUT = @@ROWCOUNT; COMMIT TRAN END TRY @@ -215,15 +215,15 @@ IF OBJECT_ID ( '[sp_registerSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_registerSubCluster] - @subClusterId VARCHAR(256), - @amRMServiceAddress VARCHAR(256), - @clientRMServiceAddress VARCHAR(256), - @rmAdminServiceAddress VARCHAR(256), - @rmWebServiceAddress VARCHAR(256), - @state VARCHAR(32), - @lastStartTime BIGINT, - @capability VARCHAR(6000), - @rowCount int OUTPUT + @subClusterId_IN VARCHAR(256), + @amRMServiceAddress_IN VARCHAR(256), + @clientRMServiceAddress_IN VARCHAR(256), + @rmAdminServiceAddress_IN VARCHAR(256), + @rmWebServiceAddress_IN VARCHAR(256), + @state_IN VARCHAR(32), + @lastStartTime_IN BIGINT, + @capability_IN VARCHAR(6000), + @rowCount_OUT int OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -231,7 +231,7 @@ AS BEGIN BEGIN TRAN DELETE FROM [dbo].[membership] - WHERE [subClusterId] = @subClusterId; + WHERE [subClusterId] = @subClusterId_IN; INSERT INTO [dbo].[membership] ( [subClusterId], [amRMServiceAddress], @@ -243,16 +243,16 @@ AS BEGIN [lastStartTime], [capability] ) VALUES ( - @subClusterId, - @amRMServiceAddress, - @clientRMServiceAddress, - @rmAdminServiceAddress, - @rmWebServiceAddress, + @subClusterId_IN, + @amRMServiceAddress_IN, + @clientRMServiceAddress_IN, + @rmAdminServiceAddress_IN, + @rmWebServiceAddress_IN, GETUTCDATE(), - @state, - @lastStartTime, - @capability); - SELECT @rowCount = @@ROWCOUNT; + @state_IN, + @lastStartTime_IN, + @capability_IN); + SELECT @rowCount_OUT = @@ROWCOUNT; COMMIT TRAN END TRY @@ -303,32 +303,32 @@ IF OBJECT_ID ( '[sp_getSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_getSubCluster] - @subClusterId VARCHAR(256), - @amRMServiceAddress VARCHAR(256) OUTPUT, - @clientRMServiceAddress VARCHAR(256) OUTPUT, - @rmAdminServiceAddress VARCHAR(256) OUTPUT, - @rmWebServiceAddress VARCHAR(256) OUTPUT, - @lastHeartbeat DATETIME2 OUTPUT, - @state VARCHAR(256) OUTPUT, - @lastStartTime BIGINT OUTPUT, - @capability VARCHAR(6000) OUTPUT + @subClusterId_IN VARCHAR(256), + @amRMServiceAddress_OUT VARCHAR(256) OUTPUT, + @clientRMServiceAddress_OUT VARCHAR(256) OUTPUT, + @rmAdminServiceAddress_OUT VARCHAR(256) 
OUTPUT, + @rmWebServiceAddress_OUT VARCHAR(256) OUTPUT, + @lastHeartBeat_OUT DATETIME2 OUTPUT, + @state_OUT VARCHAR(256) OUTPUT, + @lastStartTime_OUT BIGINT OUTPUT, + @capability_OUT VARCHAR(6000) OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) BEGIN TRY BEGIN TRAN - SELECT @subClusterId = [subClusterId], - @amRMServiceAddress = [amRMServiceAddress], - @clientRMServiceAddress = [clientRMServiceAddress], - @rmAdminServiceAddress = [rmAdminServiceAddress], - @rmWebServiceAddress = [rmWebServiceAddress], - @lastHeartBeat = [lastHeartBeat], - @state = [state], - @lastStartTime = [lastStartTime], - @capability = [capability] + SELECT @subClusterId_IN = [subClusterId], + @amRMServiceAddress_OUT = [amRMServiceAddress], + @clientRMServiceAddress_OUT = [clientRMServiceAddress], + @rmAdminServiceAddress_OUT = [rmAdminServiceAddress], + @rmWebServiceAddress_OUT = [rmWebServiceAddress], + @lastHeartBeat_OUT = [lastHeartBeat], + @state_OUT = [state], + @lastStartTime_OUT = [lastStartTime], + @capability_OUT = [capability] FROM [dbo].[membership] - WHERE [subClusterId] = @subClusterId + WHERE [subClusterId] = @subClusterId_IN COMMIT TRAN END TRY @@ -353,10 +353,10 @@ IF OBJECT_ID ( '[sp_subClusterHeartbeat]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_subClusterHeartbeat] - @subClusterId VARCHAR(256), - @state VARCHAR(256), - @capability VARCHAR(6000), - @rowCount int OUTPUT + @subClusterId_IN VARCHAR(256), + @state_IN VARCHAR(256), + @capability_IN VARCHAR(6000), + @rowCount_OUT int OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -364,11 +364,11 @@ AS BEGIN BEGIN TRAN UPDATE [dbo].[membership] - SET [state] = @state, + SET [state] = @state_IN, [lastHeartbeat] = GETUTCDATE(), - [capability] = @capability - WHERE [subClusterId] = @subClusterId; - SELECT @rowCount = @@ROWCOUNT; + [capability] = @capability_IN + WHERE [subClusterId] = @subClusterId_IN; + SELECT @rowCount_OUT = @@ROWCOUNT; COMMIT TRAN END TRY @@ -392,9 +392,9 @@ IF OBJECT_ID ( '[sp_deregisterSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_deregisterSubCluster] - @subClusterId VARCHAR(256), - @state VARCHAR(256), - @rowCount int OUTPUT + @subClusterId_IN VARCHAR(256), + @state_IN VARCHAR(256), + @rowCount_OUT int OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -402,9 +402,9 @@ AS BEGIN BEGIN TRAN UPDATE [dbo].[membership] - SET [state] = @state - WHERE [subClusterId] = @subClusterId; - SELECT @rowCount = @@ROWCOUNT; + SET [state] = @state_IN + WHERE [subClusterId] = @subClusterId_IN; + SELECT @rowCount_OUT = @@ROWCOUNT; COMMIT TRAN END TRY @@ -428,10 +428,10 @@ IF OBJECT_ID ( '[sp_setPolicyConfiguration]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_setPolicyConfiguration] - @queue VARCHAR(256), - @policyType VARCHAR(256), - @params VARBINARY(512), - @rowCount int OUTPUT + @queue_IN VARCHAR(256), + @policyType_IN VARCHAR(256), + @params_IN VARBINARY(512), + @rowCount_OUT int OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -439,16 +439,16 @@ AS BEGIN BEGIN TRAN DELETE FROM [dbo].[policies] - WHERE [queue] = @queue; + WHERE [queue] = @queue_IN; INSERT INTO [dbo].[policies] ( [queue], [policyType], [params]) VALUES ( - @queue, - @policyType, - @params); - SELECT @rowCount = @@ROWCOUNT; + @queue_IN, + @policyType_IN, + @params_IN); + SELECT @rowCount_OUT = @@ROWCOUNT; COMMIT TRAN END TRY @@ -472,18 +472,18 @@ IF OBJECT_ID ( '[sp_getPolicyConfiguration]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_getPolicyConfiguration] - @queue VARCHAR(256), - @policyType VARCHAR(256) OUTPUT, - @params 
VARBINARY(6000) OUTPUT + @queue_IN VARCHAR(256), + @policyType_OUT VARCHAR(256) OUTPUT, + @params_OUT VARBINARY(6000) OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) BEGIN TRY - SELECT @policyType = [policyType], - @params = [params] + SELECT @policyType_OUT = [policyType], + @params_OUT = [params] FROM [dbo].[policies] - WHERE [queue] = @queue + WHERE [queue] = @queue_IN END TRY @@ -524,15 +524,15 @@ AS BEGIN END; GO -IF OBJECT_ID ( '[sp_addApplicationHomeSubCluster]', 'P' ) IS NOT NULL - DROP PROCEDURE [sp_addApplicationHomeSubCluster]; +IF OBJECT_ID ( '[sp_addReservationHomeSubCluster]', 'P' ) IS NOT NULL + DROP PROCEDURE [sp_addReservationHomeSubCluster]; GO CREATE PROCEDURE [dbo].[sp_addReservationHomeSubCluster] - @reservationId VARCHAR(128), - @homeSubCluster VARCHAR(256), - @storedHomeSubCluster VARCHAR(256) OUTPUT, - @rowCount int OUTPUT + @reservationId_IN VARCHAR(128), + @homeSubCluster_IN VARCHAR(256), + @storedHomeSubCluster_OUT VARCHAR(256) OUTPUT, + @rowCount_OUT int OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -542,21 +542,21 @@ AS BEGIN -- Otherwise don't change the current mapping. IF NOT EXISTS (SELECT TOP 1 * FROM [dbo].[reservationsHomeSubCluster] - WHERE [reservationId] = @reservationId) + WHERE [reservationId] = @reservationId_IN) INSERT INTO [dbo].[reservationsHomeSubCluster] ( [reservationId], [homeSubCluster]) VALUES ( - @reservationId, - @homeSubCluster); + @reservationId_IN, + @homeSubCluster_IN); -- End of the IF block - SELECT @rowCount = @@ROWCOUNT; + SELECT @rowCount_OUT = @@ROWCOUNT; - SELECT @storedHomeSubCluster = [homeSubCluster] + SELECT @storedHomeSubCluster_OUT = [homeSubCluster] FROM [dbo].[reservationsHomeSubCluster] - WHERE [reservationId] = @reservationId; + WHERE [reservationId] = @reservationId_IN; COMMIT TRAN END TRY @@ -580,9 +580,9 @@ IF OBJECT_ID ( '[sp_updateReservationHomeSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_updateReservationHomeSubCluster] - @reservationId VARCHAR(128), - @homeSubCluster VARCHAR(256), - @rowCount int OUTPUT + @reservationId_IN VARCHAR(128), + @homeSubCluster_IN VARCHAR(256), + @rowCount_OUT int OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -590,9 +590,9 @@ AS BEGIN BEGIN TRAN UPDATE [dbo].[reservationsHomeSubCluster] - SET [homeSubCluster] = @homeSubCluster - WHERE [reservationId] = @reservationId; - SELECT @rowCount = @@ROWCOUNT; + SET [homeSubCluster] = @homeSubCluster_IN + WHERE [reservationId] = @reservationId_IN; + SELECT @rowCount_OUT = @@ROWCOUNT; COMMIT TRAN END TRY @@ -641,16 +641,16 @@ IF OBJECT_ID ( '[sp_getReservationHomeSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_getReservationHomeSubCluster] - @reservationId VARCHAR(128), - @homeSubCluster VARCHAR(256) OUTPUT + @reservationId_IN VARCHAR(128), + @homeSubCluster_OUT VARCHAR(256) OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) BEGIN TRY - SELECT @homeSubCluster = [homeSubCluster] + SELECT @homeSubCluster_OUT = [homeSubCluster] FROM [dbo].[reservationsHomeSubCluster] - WHERE [reservationId] = @reservationId; + WHERE [reservationId] = @reservationId_IN; END TRY @@ -672,8 +672,8 @@ IF OBJECT_ID ( '[sp_deleteReservationHomeSubCluster]', 'P' ) IS NOT NULL GO CREATE PROCEDURE [dbo].[sp_deleteReservationHomeSubCluster] - @reservationId VARCHAR(128), - @rowCount int OUTPUT + @reservationId_IN VARCHAR(128), + @rowCount_OUT int OUTPUT AS BEGIN DECLARE @errorMessage nvarchar(4000) @@ -681,8 +681,8 @@ AS BEGIN BEGIN TRAN DELETE FROM [dbo].[reservationsHomeSubCluster] - WHERE [reservationId] = 
@reservationId; - SELECT @rowCount = @@ROWCOUNT; + WHERE [reservationId] = @reservationId_IN; + SELECT @rowCount_OUT = @@ROWCOUNT; COMMIT TRAN END TRY diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timeline/TimelineVersion.java b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/FederationStateStoreUser.sql similarity index 66% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timeline/TimelineVersion.java rename to hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/FederationStateStoreUser.sql index 57439de078f..3f9553fbe32 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timeline/TimelineVersion.java +++ b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/FederationStateStoreUser.sql @@ -16,16 +16,16 @@ * limitations under the License. */ -package org.apache.hadoop.yarn.server.timeline; +-- Script to create a new User in SQLServer for the Federation StateStore -import java.lang.annotation.ElementType; -import java.lang.annotation.Retention; -import java.lang.annotation.RetentionPolicy; -import java.lang.annotation.Target; +USE [FederationStateStore] +GO -@Retention(value = RetentionPolicy.RUNTIME) -@Target(value = {ElementType.METHOD}) -public @interface TimelineVersion { - float value() default TimelineVersionWatcher.DEFAULT_TIMELINE_VERSION; -} +CREATE LOGIN 'FederationUser' with password = 'FederationPassword', default_database=[FederationStateStore] ; +GO +CREATE USER 'FederationUser' FOR LOGIN 'FederationUser' WITH default_schema=dbo; +GO + +EXEC sp_addrolemember 'db_owner', 'FederationUser'; +GO diff --git a/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropDatabase.sql b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropDatabase.sql new file mode 100644 index 00000000000..8d434ab4d7b --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropDatabase.sql @@ -0,0 +1,23 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +-- Script to drop the Federation StateStore in SQLServer + +IF DB_ID ( '[FederationStateStore]') IS NOT NULL + DROP DATABASE [FederationStateStore]; +GO diff --git a/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropStoreProcedures.sql b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropStoreProcedures.sql new file mode 100644 index 00000000000..6204df2f418 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropStoreProcedures.sql @@ -0,0 +1,76 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +-- Script to drop all the stored procedures for the Federation StateStore in SQLServer + +USE [FederationStateStore] +GO + +DROP PROCEDURE IF EXISTS [sp_addApplicationHomeSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_updateApplicationHomeSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_getApplicationsHomeSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_getApplicationHomeSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_deleteApplicationHomeSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_registerSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_getSubClusters]; +GO + +DROP PROCEDURE IF EXISTS [sp_getSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_subClusterHeartbeat]; +GO + +DROP PROCEDURE IF EXISTS [sp_deregisterSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_setPolicyConfiguration]; +GO + +DROP PROCEDURE IF EXISTS [sp_getPolicyConfiguration]; +GO + +DROP PROCEDURE IF EXISTS [sp_getPoliciesConfigurations]; +GO + +DROP PROCEDURE IF EXISTS [sp_addApplicationHomeSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_updateReservationHomeSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_getReservationsHomeSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_getReservationHomeSubCluster]; +GO + +DROP PROCEDURE IF EXISTS [sp_deleteReservationHomeSubCluster]; +GO diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionException.java b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropTables.sql similarity index 66% rename from hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionException.java rename to hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropTables.sql index 74607b8915a..9bcacb7f885 100644 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftConnectionException.java +++ b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropTables.sql @@ -16,20 +16,19 @@ * limitations under the License. 
*/ -package org.apache.hadoop.fs.swift.exceptions; +-- Script to drop all the tables from the Federation StateStore in SQLServer -/** - * Thrown to indicate that connection is lost or failed to be made - */ -public class SwiftConnectionException extends SwiftException { - public SwiftConnectionException() { - } +USE [FederationStateStore] +GO - public SwiftConnectionException(String message) { - super(message); - } +DROP TABLE [applicationsHomeSubCluster]; +GO - public SwiftConnectionException(String message, Throwable cause) { - super(message, cause); - } -} +DROP TABLE [membership]; +GO + +DROP TABLE [policies]; +GO + +DROP TABLE [reservationsHomeSubCluster]; +GO diff --git a/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropUser.sql b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropUser.sql new file mode 100644 index 00000000000..6d5203a52e5 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/SQLServer/dropUser.sql @@ -0,0 +1,22 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +-- Script to drop the user from Federation StateStore in MySQL + +DROP USER 'FederationUser'; +GO \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd b/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd index a4340d08adb..ed0294c2edf 100644 --- a/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd +++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd @@ -268,7 +268,7 @@ goto :eof :nodemanager set CLASSPATH=%CLASSPATH%;%YARN_CONF_DIR%\nm-config\log4j.properties set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\%YARN_DIR%\timelineservice\* - set CLASSPATH=HADOOP_YARN_HOME%\%YARN_DIR%\timelineservice\lib\*;%CLASSPATH% + set CLASSPATH=%HADOOP_YARN_HOME%\%YARN_DIR%\timelineservice\lib\*;%CLASSPATH% set CLASS=org.apache.hadoop.yarn.server.nodemanager.NodeManager set YARN_OPTS=%YARN_OPTS% -server %HADOOP_NODEMANAGER_OPTS% if defined YARN_NODEMANAGER_HEAPSIZE ( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationBaseProtocol.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationBaseProtocol.java index 8234c2fb80a..3e8c0d69648 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationBaseProtocol.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationBaseProtocol.java @@ -94,8 +94,8 @@ public interface ApplicationBaseProtocol { * @param request * request for an application report * @return application report - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. 
*/ @Public @Stable @@ -126,8 +126,8 @@ public interface ApplicationBaseProtocol { * request for report on applications * @return report on applications matching the given application types defined * in the request - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. * @see GetApplicationsRequest */ @Public @@ -166,8 +166,8 @@ public interface ApplicationBaseProtocol { * @param request * request for an application attempt report * @return application attempt report - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable @@ -199,8 +199,8 @@ public interface ApplicationBaseProtocol { * @param request * request for reports on all application attempts of an application * @return reports on all application attempts of an application - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable @@ -234,8 +234,8 @@ public interface ApplicationBaseProtocol { * @param request * request for a container report * @return container report - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable @@ -271,8 +271,8 @@ public interface ApplicationBaseProtocol { * @param request * request for a list of container reports of an application attempt. * @return reports on all containers of an application attempt - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable @@ -293,8 +293,8 @@ public interface ApplicationBaseProtocol { * @param request * request to get a delegation token for the client. * @return delegation token that can be used to talk to this service - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Stable @@ -308,8 +308,8 @@ public interface ApplicationBaseProtocol { * @param request * the delegation token to be renewed. * @return the new expiry time for the delegation token. - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Private @Unstable @@ -323,8 +323,8 @@ public interface ApplicationBaseProtocol { * @param request * the delegation token to be cancelled. * @return an empty response. - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. 
*/ @Private @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java index 941a688134f..fa742915adf 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java @@ -112,8 +112,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * @param request request to get a new ApplicationId * @return response containing the new ApplicationId to be used * to submit an application - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. * @see #submitApplication(SubmitApplicationRequest) */ @Public @@ -157,8 +157,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * * @param request request to submit a new application * @return (empty) response on accepting the submission - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. * @see #getNewApplication(GetNewApplicationRequest) */ @Public @@ -184,8 +184,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * @param request request to fail an attempt * @return ResourceManager returns an empty response * on success and throws an exception on rejecting the request - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. * @see #getQueueUserAcls(GetQueueUserAclsInfoRequest) */ @Public @@ -210,8 +210,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * @param request request to abort a submitted application * @return ResourceManager returns an empty response * on success and throws an exception on rejecting the request - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. * @see #getQueueUserAcls(GetQueueUserAclsInfoRequest) */ @Public @@ -232,8 +232,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * * @param request request for cluster metrics * @return cluster metrics - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Stable @@ -252,8 +252,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * * @param request request for report on all nodes * @return report on all nodes - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Stable @@ -274,8 +274,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * * @param request request to get queue information * @return queue information - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. 
*/ @Public @Stable @@ -294,8 +294,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * * @param request request to get queue acls for current user * @return queue acls for current user - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Stable @@ -309,8 +309,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * * @param request the application ID and the target queue * @return an empty response - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable @@ -383,7 +383,7 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * @return response the {@link ReservationId} on accepting the submission * @throws YarnException if the request is invalid or reservation cannot be * created successfully - * @throws IOException + * @throws IOException io error occur. * */ @Public @@ -417,7 +417,7 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * @return response empty on successfully updating the existing reservation * @throws YarnException if the request is invalid or reservation cannot be * updated successfully - * @throws IOException + * @throws IOException io error occur. * */ @Public @@ -439,7 +439,7 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * @return response empty on successfully deleting the existing reservation * @throws YarnException if the request is invalid or reservation cannot be * deleted successfully - * @throws IOException + * @throws IOException io error occur. * */ @Public @@ -494,13 +494,13 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { /** *

- * The interface used by client to get node to labels mappings in existing cluster + * The interface used by client to get node to labels mappings in existing cluster. *
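As an aside to the javadoc touched above: a minimal, illustrative Java sketch (not part of this change) of how a client typically reaches these node-label mappings through the stock YarnClient wrapper around ApplicationClientProtocol. The class name is made up for the example, and the catch blocks mirror the YarnException/IOException contract the updated javadoc spells out.

import java.io.IOException;
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.NodeId;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class NodeLabelMappingsSketch {
  public static void main(String[] args) {
    YarnClient client = YarnClient.createYarnClient();
    client.init(new Configuration());   // expects yarn-site.xml on the classpath
    client.start();
    try {
      // node -> labels mapping, the subject of getNodeToLabels above
      Map<NodeId, Set<String>> nodeToLabels = client.getNodeToLabels();
      // labels -> nodes mapping, the inverse view returned by getLabelsToNodes
      Map<String, Set<NodeId>> labelsToNodes = client.getLabelsToNodes();
      System.out.println(nodeToLabels.size() + " labelled nodes, "
          + labelsToNodes.size() + " labels in use");
    } catch (YarnException e) {
      // errors raised by the YARN servers, per the "@throws YarnException" javadoc
      System.err.println("ResourceManager rejected the request: " + e);
    } catch (IOException e) {
      // RPC or connectivity failures, per the "@throws IOException" javadoc
      System.err.println("I/O error talking to the ResourceManager: " + e);
    } finally {
      client.stop();
    }
  }
}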

* - * @param request + * @param request get node to labels request. * @return node to labels mappings - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable @@ -510,13 +510,13 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { /** *

* The interface used by client to get labels to nodes mappings - * in existing cluster + * in existing cluster. *

* - * @param request + * @param request get label to nodes request. * @return labels to nodes mappings - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable @@ -530,8 +530,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * * @param request to get node labels collection of this cluster * @return node labels collection of this cluster - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable @@ -544,8 +544,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { *

* @param request to set priority of an application * @return an empty response - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable @@ -573,8 +573,8 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * @param request request to signal a container * @return ResourceManager returns an empty response * on success and throws an exception on rejecting the request - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable @@ -691,7 +691,7 @@ public interface ApplicationClientProtocol extends ApplicationBaseProtocol { * @param request request to get nodes to attributes mapping. * @return nodes to attributes mappings. * @throws YarnException if any error happens inside YARN. - * @throws IOException + * @throws IOException io error occur. */ @Public @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocol.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocol.java index eb40fc7f3e0..cc92239e5c5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocol.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocol.java @@ -76,9 +76,9 @@ public interface ApplicationMasterProtocol { *

* * @param request registration request - * @return registration respose - * @throws YarnException - * @throws IOException + * @return registration response + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. * @throws InvalidApplicationMasterRequestException The exception is thrown * when an ApplicationMaster tries to register more then once. * @see RegisterApplicationMasterRequest @@ -104,8 +104,8 @@ public interface ApplicationMasterProtocol { * * @param request completion request * @return completion response - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. * @see FinishApplicationMasterRequest * @see FinishApplicationMasterResponse */ @@ -154,8 +154,8 @@ public interface ApplicationMasterProtocol { * @param request * allocation request * @return allocation response - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. * @throws InvalidApplicationMasterRequestException * This exception is thrown when an ApplicationMaster calls allocate * without registering first. diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ClientSCMProtocol.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ClientSCMProtocol.java index d63fa117f91..4fa524e932f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ClientSCMProtocol.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ClientSCMProtocol.java @@ -55,8 +55,8 @@ public interface ClientSCMProtocol { * * @param request request to claim a resource in the shared cache * @return response indicating if the resource is already in the cache - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ public UseSharedCacheResourceResponse use( UseSharedCacheResourceRequest request) throws YarnException, IOException; @@ -81,8 +81,8 @@ public interface ClientSCMProtocol { * * @param request request to release a resource in the shared cache * @return (empty) response on releasing the resource - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ public ReleaseSharedCacheResourceResponse release( ReleaseSharedCacheResourceRequest request) throws YarnException, IOException; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocol.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocol.java index 0444440ebad..9991248e666 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocol.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ContainerManagementProtocol.java @@ -102,8 +102,8 @@ public interface ContainerManagementProtocol { * @return response including conatinerIds of all successfully launched * containers, a containerId-to-exception map for failed requests and * a allServicesMetaData map. - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. 
+ * @throws IOException io error occur. */ @Public @Stable @@ -138,8 +138,8 @@ public interface ContainerManagementProtocol { * @return response which includes a list of containerIds of successfully * stopped containers, a containerId-to-exception map for failed * requests. - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Stable @@ -174,8 +174,8 @@ public interface ContainerManagementProtocol { * successfully queried containers and a containerId-to-exception map * for failed requests. * - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Stable @@ -195,8 +195,8 @@ public interface ContainerManagementProtocol { * whose resource has been successfully increased and a * containerId-to-exception map for failed requests. * - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/CsiAdaptorPlugin.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/CsiAdaptorPlugin.java index 26b45f7c31f..6974dda318b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/CsiAdaptorPlugin.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/CsiAdaptorPlugin.java @@ -36,7 +36,7 @@ public interface CsiAdaptorPlugin extends CsiAdaptorProtocol { * customized configuration from yarn-site.xml. * @param driverName the name of the csi-driver. * @param conf configuration. - * @throws YarnException + * @throws YarnException exceptions from yarn servers. */ void init(String driverName, Configuration conf) throws YarnException; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/CsiAdaptorProtocol.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/CsiAdaptorProtocol.java index 4e064eb2196..f78345a5d41 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/CsiAdaptorProtocol.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/CsiAdaptorProtocol.java @@ -39,8 +39,8 @@ public interface CsiAdaptorProtocol { * the name of the driver and its version. * @param request get plugin info request. * @return response that contains driver name and its version. - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ GetPluginInfoResponse getPluginInfo(GetPluginInfoRequest request) throws YarnException, IOException; @@ -51,8 +51,8 @@ public interface CsiAdaptorProtocol { * or not, with a detailed message. * @param request validate volume capability request. * @return validation response. - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ ValidateVolumeCapabilitiesResponse validateVolumeCapacity( ValidateVolumeCapabilitiesRequest request) throws YarnException, @@ -63,8 +63,8 @@ public interface CsiAdaptorProtocol { * to the local file system and become visible for clients. * @param request publish volume request. 
* @return publish volume response. - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ NodePublishVolumeResponse nodePublishVolume( NodePublishVolumeRequest request) throws YarnException, IOException; @@ -75,8 +75,8 @@ public interface CsiAdaptorProtocol { * volume from given node. * @param request un-publish volume request. * @return un-publish volume response. - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ NodeUnpublishVolumeResponse nodeUnpublishVolume( NodeUnpublishVolumeRequest request) throws YarnException, IOException; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/package-info.java index d1f4ea75b7d..9fe3696e52e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.api; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationsRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationsRequest.java index 5a00fa93f0c..37c7d8787cd 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationsRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationsRequest.java @@ -137,6 +137,7 @@ public abstract class GetApplicationsRequest { * * * @see ApplicationClientProtocol#getApplications(GetApplicationsRequest) + * @param applicationTypes application types. * @return a report of Applications in {@link GetApplicationsRequest} */ @Public @@ -158,6 +159,7 @@ public abstract class GetApplicationsRequest { * * * @see ApplicationClientProtocol#getApplications(GetApplicationsRequest) + * @param applicationStates application states. * @return a report of Applications in {@link GetApplicationsRequest} */ @Public @@ -173,12 +175,14 @@ public abstract class GetApplicationsRequest { /** *

* The request from clients to get a report of Applications matching the - * giving and application types and application types in the cluster from the + * giving and application types and application states in the cluster from the * ResourceManager. *

* * * @see ApplicationClientProtocol#getApplications(GetApplicationsRequest) + * @param applicationStates application states. + * @param applicationTypes application types. * @return a report of Applications in GetApplicationsRequest */ @Public @@ -309,20 +313,20 @@ public abstract class GetApplicationsRequest { public abstract Range getStartRange(); /** - * Set the range of start times to filter applications on + * Set the range of start times to filter applications. * - * @param range + * @param range range of start times. */ @Private @Unstable public abstract void setStartRange(Range range); /** - * Set the range of start times to filter applications on + * Set the range of start times to filter applications. * * @param begin beginning of the range * @param end end of the range - * @throws IllegalArgumentException + * @throws IllegalArgumentException if an argument is invalid. */ @Private @Unstable @@ -330,7 +334,7 @@ public abstract class GetApplicationsRequest { throws IllegalArgumentException; /** - * Get the range of finish times to filter applications on + * Get the range of finish times to filter applications. * * @return {@link Range} of finish times to filter applications on */ @@ -339,38 +343,38 @@ public abstract class GetApplicationsRequest { public abstract Range getFinishRange(); /** - * Set the range of finish times to filter applications on + * Set the range of finish times to filter applications. * - * @param range + * @param range range of finish times. */ @Private @Unstable public abstract void setFinishRange(Range range); /** - * Set the range of finish times to filter applications on + * Set the range of finish times to filter applications. * * @param begin beginning of the range * @param end end of the range - * @throws IllegalArgumentException + * @throws IllegalArgumentException if an argument is invalid. */ @Private @Unstable public abstract void setFinishRange(long begin, long end); /** - * Get the tags to filter applications on + * Get the tags to filter applications. * - * @return list of tags to filter on + * @return list of tags to filter. */ @Private @Unstable public abstract Set getApplicationTags(); /** - * Set the list of tags to filter applications on + * Set the list of tags to filter applications. * - * @param tags list of tags to filter on + * @param tags list of tags to filter. */ @Private @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeAttributesResponse.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeAttributesResponse.java index b0ccd906a32..5a3f5c9ec34 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeAttributesResponse.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeAttributesResponse.java @@ -42,7 +42,7 @@ public abstract class GetClusterNodeAttributesResponse { /** * Create instance of GetClusterNodeAttributesResponse. * - * @param attributes + * @param attributes Map of Node attributeKey to Type. * @return GetClusterNodeAttributesResponse. 
*/ public static GetClusterNodeAttributesResponse newInstance( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusesResponse.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusesResponse.java index 75baf6e9ff5..34d9fe83fd6 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusesResponse.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusesResponse.java @@ -65,6 +65,7 @@ public abstract class GetContainerStatusesResponse { /** * Set the ContainerStatuses of the requested containers. + * @param statuses ContainerStatuses of the requested containers. */ @Private @Unstable @@ -72,7 +73,7 @@ public abstract class GetContainerStatusesResponse { /** * Get the containerId-to-exception map in which the exception indicates error - * from per container for failed requests + * from per container for failed requests. * @return map of containerId-to-exception */ @Public @@ -81,7 +82,9 @@ public abstract class GetContainerStatusesResponse { /** * Set the containerId-to-exception map in which the exception indicates error - * from per container for failed requests + * from per container for failed requests. + * + * @param failedContainers containerId-to-exception map. */ @Private @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetLocalizationStatusesResponse.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetLocalizationStatusesResponse.java index 89fca9fbbdd..3f29cbfa86b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetLocalizationStatusesResponse.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetLocalizationStatusesResponse.java @@ -80,6 +80,8 @@ public abstract class GetLocalizationStatusesResponse { /** * Set the containerId-to-exception map in which the exception indicates error * from per container for failed request. + * + * @param failedContainers containerId-to-exception map. */ @InterfaceAudience.Private public abstract void setFailedRequests( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetNodesToAttributesRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetNodesToAttributesRequest.java index 4fcd8da6936..0da4bd8cb9a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetNodesToAttributesRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetNodesToAttributesRequest.java @@ -48,7 +48,7 @@ public abstract class GetNodesToAttributesRequest { /** * Set hostnames for which mapping is required. * - * @param hostnames + * @param hostnames Set of hostnames. 
*/ @InterfaceAudience.Public @InterfaceStability.Evolving diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/IncreaseContainersResourceResponse.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/IncreaseContainersResourceResponse.java index 1d821203a91..ea2b64207e4 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/IncreaseContainersResourceResponse.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/IncreaseContainersResourceResponse.java @@ -68,6 +68,9 @@ public abstract class IncreaseContainersResourceResponse { /** * Set the list of containerIds of containers whose resource have * been successfully increased. + * + * @param succeedIncreasedContainers list of containerIds of containers whose resource have + * been successfully increased. */ @Private @Unstable @@ -86,6 +89,8 @@ public abstract class IncreaseContainersResourceResponse { /** * Set the containerId-to-exception map in which the exception indicates * error from each container for failed requests. + * + * @param failedRequests map of containerId-to-exception. */ @Private @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationResponse.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationResponse.java index 83495760be0..15c63fb11cb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationResponse.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationResponse.java @@ -66,6 +66,8 @@ public abstract class KillApplicationResponse { /** * Set the flag which indicates that the process of killing application is completed or not. + * @param isKillCompleted true if the process of killing application has completed, + * false otherwise. */ @Private @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java index f2d537ae072..6a6d781b910 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterRequest.java @@ -53,6 +53,9 @@ public abstract class RegisterApplicationMasterRequest { *
 *  trackingUrl: null
  • * * The port is allowed to be any integer larger than or equal to -1. + * @param host host on which the ApplicationMaster is running. + * @param port the RPC port on which the ApplicationMaster is responding. + * @param trackingUrl tracking URL for the ApplicationMaster. * @return the new instance of RegisterApplicationMasterRequest */ @Public diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterResponse.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterResponse.java index 1f8a151a4c5..894e8c46d69 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterResponse.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterResponse.java @@ -94,7 +94,7 @@ public abstract class RegisterApplicationMasterResponse { /** * Set the ApplicationACLs for the application. - * @param acls + * @param acls ApplicationACLs for the application. */ @Private @Unstable @@ -113,6 +113,8 @@ public abstract class RegisterApplicationMasterResponse { /** * Set ClientToAMToken master key. + * + * @param key ClientToAMToken master key. */ @Public @Stable @@ -127,7 +129,8 @@ public abstract class RegisterApplicationMasterResponse { public abstract String getQueue(); /** - *

Set the queue that the application was placed in.
+ * Set the queue that the application was placed in.
    + * @param queue queue. */ @Public @Stable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/SignalContainerRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/SignalContainerRequest.java index 2a3861a7730..28cc8ea5b4c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/SignalContainerRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/SignalContainerRequest.java @@ -56,6 +56,7 @@ public abstract class SignalContainerRequest { /** * Set the ContainerId of the container to signal. + * @param containerId containerId. */ @Public @Unstable @@ -71,6 +72,7 @@ public abstract class SignalContainerRequest { /** * Set the SignalContainerCommand of the signal request. + * @param command signal container command. */ @Public @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StartContainersResponse.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StartContainersResponse.java index 8905ae2ae4d..c40250a36d7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StartContainersResponse.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StartContainersResponse.java @@ -86,7 +86,8 @@ public abstract class StartContainersResponse { /** * Set the containerId-to-exception map in which the exception indicates error - * from per container for failed requests + * from per container for failed requests. + * @param failedContainers container for failed requests. */ @Private @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StopContainersResponse.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StopContainersResponse.java index 2763bc94b4c..3fe9eaa12f1 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StopContainersResponse.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StopContainersResponse.java @@ -65,6 +65,7 @@ public abstract class StopContainersResponse { /** * Set the list of containerIds of successfully stopped containers. + * @param succeededRequests the list of containerIds of successfully stopped containers. */ @Private @Unstable @@ -73,8 +74,8 @@ public abstract class StopContainersResponse { /** * Get the containerId-to-exception map in which the exception indicates error - * from per container for failed requests - * @return map of containerId-to-exception + * from per container for failed requests. + * @return map of containerId-to-exception. */ @Public @Stable @@ -82,7 +83,8 @@ public abstract class StopContainersResponse { /** * Set the containerId-to-exception map in which the exception indicates error - * from per container for failed requests + * from per container for failed requests. + * @param failedRequests map of containerId-to-exception. 
*/ @Private @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/package-info.java index 503c4f0ed25..c20779b1274 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.api.protocolrecords; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java index 1d4ffc9b5b9..71f28a165ab 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java @@ -516,7 +516,7 @@ public abstract class ApplicationSubmissionContext { List requests); /** - * Get the attemptFailuresValidityInterval in milliseconds for the application + * Get the attemptFailuresValidityInterval in milliseconds for the application. * * @return the attemptFailuresValidityInterval */ @@ -525,8 +525,8 @@ public abstract class ApplicationSubmissionContext { public abstract long getAttemptFailuresValidityInterval(); /** - * Set the attemptFailuresValidityInterval in milliseconds for the application - * @param attemptFailuresValidityInterval + * Set the attemptFailuresValidityInterval in milliseconds for the application. + * @param attemptFailuresValidityInterval attempt failures validity interval. */ @Public @Stable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalizationStatus.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalizationStatus.java index bca95b7fef1..ca161173e25 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalizationStatus.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalizationStatus.java @@ -59,7 +59,7 @@ public abstract class LocalizationStatus { /** * Sets the resource key. - * @param resourceKey + * @param resourceKey resource key. 
*/ @InterfaceAudience.Private public abstract void setResourceKey(String resourceKey); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java index e58012d68b8..0e45b37e3b9 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java @@ -157,7 +157,7 @@ public abstract class LogAggregationContext { * Set include pattern. This includePattern only takes affect * on logs that exist at the time of application finish. * - * @param includePattern + * @param includePattern include pattern. */ @Public @Unstable @@ -177,7 +177,7 @@ public abstract class LogAggregationContext { * Set exclude pattern. This excludePattern only takes affect * on logs that exist at the time of application finish. * - * @param excludePattern + * @param excludePattern exclude pattern. */ @Public @Unstable @@ -195,7 +195,9 @@ public abstract class LogAggregationContext { /** * Set include pattern in a rolling fashion. * - * @param rolledLogsIncludePattern + * @param rolledLogsIncludePattern It uses Java Regex to filter the log files + * which match the defined include pattern and those log files + * will be aggregated in a rolling fashion. */ @Public @Unstable @@ -214,7 +216,7 @@ public abstract class LogAggregationContext { /** * Set exclude pattern for in a rolling fashion. * - * @param rolledLogsExcludePattern + * @param rolledLogsExcludePattern rolled logs exclude pattern. */ @Public @Unstable @@ -233,7 +235,7 @@ public abstract class LogAggregationContext { /** * Set the log aggregation policy class. * - * @param className + * @param className log aggregation policy class name. */ @Public @Unstable @@ -255,7 +257,7 @@ public abstract class LogAggregationContext { * It is up to the log aggregation policy class to decide how to parse * the parameters string. * - * @param parameters + * @param parameters log aggregation policy parameters. */ @Public @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeReport.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeReport.java index 625ad234081..1c6ab43d33a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeReport.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeReport.java @@ -243,6 +243,7 @@ public abstract class NodeReport { /** * Set the decommissioning timeout in seconds (null indicates absent timeout). + * @param decommissioningTimeout decommissioning time out. * */ public void setDecommissioningTimeout(Integer decommissioningTimeout) {} @@ -256,7 +257,8 @@ public abstract class NodeReport { /** * Set the node update type (null indicates absent node update type). - * */ + * @param nodeUpdateType node update type. 
+ */ public void setNodeUpdateType(NodeUpdateType nodeUpdateType) {} /** diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/QueueInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/QueueInfo.java index 707554cd1c4..c4d78f0c7ae 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/QueueInfo.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/QueueInfo.java @@ -251,6 +251,7 @@ public abstract class QueueInfo { /** * Set the accessible node labels of the queue. + * @param labels node label expression of the queue. */ @Private @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java index e9c7dd4a6d3..0c10e017685 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java @@ -467,6 +467,10 @@ public abstract class Resource implements Comparable { return getFormattedString(String.valueOf(getMemorySize())); } + public String toFormattedString() { + return getFormattedString(); + } + private String getFormattedString(String memory) { StringBuilder sb = new StringBuilder(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java index a8639108615..5ffc05e3c7c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java @@ -437,17 +437,17 @@ public abstract class ResourceRequest implements Comparable { /** *

For a request at a network hierarchy level, set whether locality can be relaxed
- * to that level and beyond.
+ * to that level and beyond.
 *
 * If the flag is off on a rack-level ResourceRequest,
 * containers at that request's priority will not be assigned to nodes on that
 * request's rack unless requests specifically for those nodes have also been
- * submitted.
+ * submitted.
 *
 * If the flag is off on an {@link ResourceRequest#ANY}-level
 * ResourceRequest, containers at that request's priority will
 * only be assigned on racks for which specific requests have also been
- * submitted.
+ * submitted.
 *
    For example, to request a container strictly on a specific node, the * corresponding rack-level and any-level requests should have locality diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SerializedException.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SerializedException.java index 9355a234ebb..fc76269d1b7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SerializedException.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/SerializedException.java @@ -38,6 +38,8 @@ public abstract class SerializedException { /** * Constructs a new SerializedException with the specified detail * message and cause. + * @param message exception detail message. + * @param cause cause of the exception. */ @Private @Unstable @@ -46,6 +48,7 @@ public abstract class SerializedException { /** * Constructs a new SerializedException with the specified detail * message. + * @param message exception detail message. */ @Private @Unstable @@ -53,6 +56,7 @@ public abstract class SerializedException { /** * Constructs a new SerializedException with the specified cause. + * @param cause cause of the exception. */ @Private @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/YarnClusterMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/YarnClusterMetrics.java index fc3edf7fb74..f460e60f483 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/YarnClusterMetrics.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/YarnClusterMetrics.java @@ -53,6 +53,20 @@ public abstract class YarnClusterMetrics { @Unstable public abstract void setNumNodeManagers(int numNodeManagers); + /** + * Get the number of DecommissioningNodeManagers in the cluster. + * + * @return number of DecommissioningNodeManagers in the cluster + */ + @Public + @Unstable + public abstract int getNumDecommissioningNodeManagers(); + + @Private + @Unstable + public abstract void setNumDecommissioningNodeManagers( + int numDecommissioningNodeManagers); + /** * Get the number of DecommissionedNodeManagers in the cluster. * @@ -119,4 +133,16 @@ public abstract class YarnClusterMetrics { @Unstable public abstract void setNumRebootedNodeManagers(int numRebootedNodeManagers); + /** + * Get the number of ShutdownNodeManagers in the cluster. 
+ * + * @return number of ShutdownNodeManagers in the cluster + */ + @Public + @Unstable + public abstract int getNumShutdownNodeManagers(); + + @Private + @Unstable + public abstract void setNumShutdownNodeManagers(int numShutdownNodeManagers); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/package-info.java index b6db5403f0d..94fed33e2c1 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.api.records; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/ConfigurationProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/ConfigurationProvider.java index ae85e82bf37..d888e3c50ed 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/ConfigurationProvider.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/ConfigurationProvider.java @@ -44,12 +44,12 @@ public abstract class ConfigurationProvider { } /** - * Opens an InputStream at the indicated file - * @param bootstrapConf Configuration - * @param name The configuration file name + * Opens an InputStream at the indicated file. + * @param bootstrapConf Configuration. + * @param name The configuration file name. * @return configuration - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ public abstract InputStream getConfigurationInputStream( Configuration bootstrapConf, String name) throws YarnException, @@ -57,12 +57,15 @@ public abstract class ConfigurationProvider { /** * Derived classes initialize themselves using this method. + * @param bootstrapConf bootstrap configuration. + * @throws Exception exception occur. */ public abstract void initInternal(Configuration bootstrapConf) throws Exception; /** * Derived classes close themselves using this method. + * @throws Exception exception occur. 
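The YarnClusterMetrics getters added above expose the number of decommissioning and shut-down
NodeManagers next to the existing counters. A minimal sketch of reading them through YarnClient
follows; the ClusterNodeStates class is illustrative only and not part of this patch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.YarnClusterMetrics;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class ClusterNodeStates {
      public static void main(String[] args) throws Exception {
        Configuration conf = new YarnConfiguration();
        try (YarnClient client = YarnClient.createYarnClient()) {
          client.init(conf);
          client.start();
          YarnClusterMetrics metrics = client.getYarnClusterMetrics();
          // Counters introduced by this patch.
          System.out.println("Decommissioning NMs: "
              + metrics.getNumDecommissioningNodeManagers());
          System.out.println("Shutdown NMs: "
              + metrics.getNumShutdownNodeManagers());
          // Pre-existing counter, shown for comparison.
          System.out.println("Running NMs: " + metrics.getNumNodeManagers());
        }
      }
    }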
*/ public abstract void closeInternal() throws Exception; } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/ConfigurationProviderFactory.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/ConfigurationProviderFactory.java index 3562f173acb..e936f0d845c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/ConfigurationProviderFactory.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/ConfigurationProviderFactory.java @@ -33,8 +33,8 @@ public class ConfigurationProviderFactory { /** * Creates an instance of {@link ConfigurationProvider} using given * configuration. - * @param bootstrapConf - * @return configurationProvider + * @param bootstrapConf bootstrap configuration. + * @return configurationProvider configuration provider. */ @SuppressWarnings("unchecked") public static ConfigurationProvider diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java index 1360359a9a6..c85e211bd31 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java @@ -102,7 +102,7 @@ public class HAUtil { /** * Verify configuration for Resource Manager HA. * @param conf Configuration - * @throws YarnRuntimeException + * @throws YarnRuntimeException thrown by a remote service. */ public static void verifyAndSetConfiguration(Configuration conf) throws YarnRuntimeException { @@ -320,7 +320,10 @@ public class HAUtil { } /** - * Add non empty and non null suffix to a key. + * Add non-empty and non-null suffix to a key. + * + * @param key key. + * @param suffix suffix. 
* @return the suffixed key */ public static String addSuffix(String key, String suffix) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java index d5e120695e7..316a6421889 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java @@ -4061,6 +4061,17 @@ public class YarnConfiguration extends Configuration { public static final int DEFAULT_FEDERATION_STATESTORE_MAX_APPLICATIONS = 1000; + public static final String FEDERATION_STATESTORE_CLEANUP_RETRY_COUNT = + FEDERATION_PREFIX + "state-store.clean-up-retry-count"; + + public static final int DEFAULT_FEDERATION_STATESTORE_CLEANUP_RETRY_COUNT = 1; + + public static final String FEDERATION_STATESTORE_CLEANUP_RETRY_SLEEP_TIME = + FEDERATION_PREFIX + "state-store.clean-up-retry-sleep-time"; + + public static final long DEFAULT_FEDERATION_STATESTORE_CLEANUP_RETRY_SLEEP_TIME = + TimeUnit.SECONDS.toMillis(1); + public static final String ROUTER_PREFIX = YARN_PREFIX + "router."; public static final String ROUTER_BIND_HOST = ROUTER_PREFIX + "bind-host"; @@ -4106,6 +4117,14 @@ public class YarnConfiguration extends Configuration { ROUTER_PREFIX + "submit.retry"; public static final int DEFAULT_ROUTER_CLIENTRM_SUBMIT_RETRY = 3; + /** + * GetNewApplication and SubmitApplication request retry interval time. + */ + public static final String ROUTER_CLIENTRM_SUBMIT_INTERVAL_TIME = + ROUTER_PREFIX + "submit.interval.time"; + public static final long DEFAULT_CLIENTRM_SUBMIT_INTERVAL_TIME = + TimeUnit.MILLISECONDS.toMillis(10); + /** * The interceptor class used in FederationClientInterceptor should return * partial ApplicationReports. @@ -4117,11 +4136,87 @@ public class YarnConfiguration extends Configuration { public static final String ROUTER_WEBAPP_PREFIX = ROUTER_PREFIX + "webapp."; + /** + * This configurable that controls the thread pool size of the threadpool of the interceptor. + * The corePoolSize(minimumPoolSize) and maximumPoolSize of the thread pool + * are controlled by this configurable. + * In order to control the thread pool more accurately, this parameter is deprecated. + * + * corePoolSize(minimumPoolSize) use + * {@link YarnConfiguration#ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE} + * + * maximumPoolSize use + * {@link YarnConfiguration#ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE} + * + * This configurable will be deprecated. + */ public static final String ROUTER_USER_CLIENT_THREADS_SIZE = ROUTER_PREFIX + "interceptor.user.threadpool-size"; + /** + * The default value is 5. + * which means that the corePoolSize(minimumPoolSize) and maximumPoolSize + * of the thread pool are both 5s. + * + * corePoolSize(minimumPoolSize) default value use + * {@link YarnConfiguration#DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE} + * + * maximumPoolSize default value use + * {@link YarnConfiguration#DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE} + */ public static final int DEFAULT_ROUTER_USER_CLIENT_THREADS_SIZE = 5; + /** + * This configurable is used to set the corePoolSize(minimumPoolSize) + * of the thread pool of the interceptor. + * + * corePoolSize the number of threads to keep in the pool, even if they are idle. 
+ */ + public static final String ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE = + ROUTER_PREFIX + "interceptor.user-thread-pool.minimum-pool-size"; + + /** + * This configuration is used to set the default value of corePoolSize (minimumPoolSize) + * of the thread pool of the interceptor. + * + * Default is 5. + */ + public static final int DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE = 5; + + /** + * This configurable is used to set the maximumPoolSize of the thread pool of the interceptor. + * + * maximumPoolSize the maximum number of threads to allow in the pool. + */ + public static final String ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE = + ROUTER_PREFIX + "interceptor.user-thread-pool.maximum-pool-size"; + + /** + * This configuration is used to set the default value of maximumPoolSize + * of the thread pool of the interceptor. + * + * Default is 5. + */ + public static final int DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE = 5; + + /** + * This configurable is used to set the keepAliveTime of the thread pool of the interceptor. + * + * keepAliveTime when the number of threads is greater than the core, + * this is the maximum time that excess idle threads will wait for new tasks before terminating. + */ + public static final String ROUTER_USER_CLIENT_THREAD_POOL_KEEP_ALIVE_TIME = + ROUTER_PREFIX + "interceptor.user-thread-pool.keep-alive-time"; + + /** + * This configurable is used to set the default time of keepAliveTime + * of the thread pool of the interceptor. + * + * the default value is 0s. + */ + public static final long DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_KEEP_ALIVE_TIME = + TimeUnit.SECONDS.toMillis(0); // 0s + /** The address of the Router web application. */ public static final String ROUTER_WEBAPP_ADDRESS = ROUTER_WEBAPP_PREFIX + "address"; @@ -4194,6 +4289,16 @@ public class YarnConfiguration extends Configuration { ROUTER_WEBAPP_PREFIX + "appsinfo-cached-count"; public static final int DEFAULT_ROUTER_APPSINFO_CACHED_COUNT = 100; + /** Enable cross origin (CORS) support. **/ + public static final String ROUTER_WEBAPP_ENABLE_CORS_FILTER = + ROUTER_PREFIX + "webapp.cross-origin.enabled"; + public static final boolean DEFAULT_ROUTER_WEBAPP_ENABLE_CORS_FILTER = false; + + /** Router Interceptor Allow Partial Result Enable. **/ + public static final String ROUTER_INTERCEPTOR_ALLOW_PARTIAL_RESULT_ENABLED = + ROUTER_PREFIX + "interceptor.allow-partial-result.enable"; + public static final boolean DEFAULT_ROUTER_INTERCEPTOR_ALLOW_PARTIAL_RESULT_ENABLED = false; + //////////////////////////////// // CSI Volume configs //////////////////////////////// diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/package-info.java index ca0aa31426b..8acd6a2a93f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. 
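The yarn.router.interceptor.user-thread-pool.* keys added above replace the single
threadpool-size setting with an explicit core pool size, maximum pool size and keep-alive time.
As a minimal sketch (not part of this patch; the factory class name is illustrative), an
interceptor could build its executor from these keys roughly as follows:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public final class RouterUserThreadPoolFactory {
      private RouterUserThreadPoolFactory() {
      }

      // Reads the new interceptor.user-thread-pool.* settings and builds the pool.
      public static ThreadPoolExecutor create(Configuration conf) {
        int corePoolSize = conf.getInt(
            YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE,
            YarnConfiguration.DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE);
        int maximumPoolSize = conf.getInt(
            YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE,
            YarnConfiguration.DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE);
        long keepAliveTimeMs = conf.getTimeDuration(
            YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_KEEP_ALIVE_TIME,
            YarnConfiguration.DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_KEEP_ALIVE_TIME,
            TimeUnit.MILLISECONDS);
        return new ThreadPoolExecutor(corePoolSize, maximumPoolSize,
            keepAliveTimeMs, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
      }
    }

With the defaults declared above, this yields a fixed pool of five threads and a keep-alive of
0 ms, which matches the defaults documented for the deprecated threadpool-size setting.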
*/ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.conf; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/package-info.java index 89b73f0578d..5ed6c2c9a7e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,6 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.exceptions; -import org.apache.hadoop.classification.InterfaceAudience; \ No newline at end of file +import org.apache.hadoop.classification.InterfaceAudience.Public; \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/factories/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/factories/package-info.java index 66bf20f3c8a..90720e91d06 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/factories/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/factories/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,6 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.factories; -import org.apache.hadoop.classification.InterfaceAudience; \ No newline at end of file +import org.apache.hadoop.classification.InterfaceAudience.Private; \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/factory/providers/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/factory/providers/package-info.java index 4f397f3987a..f25bd41bac2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/factory/providers/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/factory/providers/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.factory.providers; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ResourceManagerAdministrationProtocol.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ResourceManagerAdministrationProtocol.java index 4777cf8b62a..6bd1a39eea4 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ResourceManagerAdministrationProtocol.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ResourceManagerAdministrationProtocol.java @@ -106,8 +106,8 @@ public interface ResourceManagerAdministrationProtocol extends GetUserMappingsPr * * @param request request to update resource for a node in cluster. * @return (empty) response on accepting update. - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Private @Idempotent diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/SCMAdminProtocol.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/SCMAdminProtocol.java index 5c791fab4e3..5e37259e067 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/SCMAdminProtocol.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/SCMAdminProtocol.java @@ -42,8 +42,8 @@ public interface SCMAdminProtocol { * @param request request SharedCacheManager to run a cleaner task * @return SharedCacheManager returns an empty response * on success and throws an exception on rejecting the request - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException io error occur. */ @Public @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshNodesRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshNodesRequest.java index 732d98ebe44..1675e3ace42 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshNodesRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshNodesRequest.java @@ -53,17 +53,28 @@ public abstract class RefreshNodesRequest { return request; } + @Private + @Unstable + public static RefreshNodesRequest newInstance( + DecommissionType decommissionType, Integer timeout, String subClusterId) { + RefreshNodesRequest request = Records.newRecord(RefreshNodesRequest.class); + request.setDecommissionType(decommissionType); + request.setDecommissionTimeout(timeout); + request.setSubClusterId(subClusterId); + return request; + } + /** - * Set the DecommissionType + * Set the DecommissionType. * - * @param decommissionType + * @param decommissionType decommission type. */ public abstract void setDecommissionType(DecommissionType decommissionType); /** - * Get the DecommissionType + * Get the DecommissionType. 
* - * @return decommissionType + * @return decommissionType decommission type. */ public abstract DecommissionType getDecommissionType(); @@ -80,4 +91,18 @@ public abstract class RefreshNodesRequest { * @return decommissionTimeout */ public abstract Integer getDecommissionTimeout(); + + /** + * Get the subClusterId. + * + * @return subClusterId. + */ + public abstract String getSubClusterId(); + + /** + * Set the subClusterId. + * + * @param subClusterId subCluster Id. + */ + public abstract void setSubClusterId(String subClusterId); } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshQueuesRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshQueuesRequest.java index eff4b7f4d28..ba332ad40cd 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshQueuesRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshQueuesRequest.java @@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.api.protocolrecords; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.classification.InterfaceAudience.Public; +import org.apache.hadoop.classification.InterfaceStability.Unstable; import org.apache.hadoop.classification.InterfaceStability.Stable; import org.apache.hadoop.yarn.util.Records; @@ -33,4 +34,20 @@ public abstract class RefreshQueuesRequest { Records.newRecord(RefreshQueuesRequest.class); return request; } + + @Public + @Stable + public static RefreshQueuesRequest newInstance(String subClusterId) { + RefreshQueuesRequest request = Records.newRecord(RefreshQueuesRequest.class); + request.setSubClusterId(subClusterId); + return request; + } + + @Public + @Unstable + public abstract String getSubClusterId(); + + @Private + @Unstable + public abstract void setSubClusterId(String subClusterId); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshSuperUserGroupsConfigurationRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshSuperUserGroupsConfigurationRequest.java index abe142ce01c..846e7fc2c39 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshSuperUserGroupsConfigurationRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshSuperUserGroupsConfigurationRequest.java @@ -33,4 +33,27 @@ public abstract class RefreshSuperUserGroupsConfigurationRequest { Records.newRecord(RefreshSuperUserGroupsConfigurationRequest.class); return request; } + + @Public + @Stable + public static RefreshSuperUserGroupsConfigurationRequest newInstance(String subClusterId) { + RefreshSuperUserGroupsConfigurationRequest request = + Records.newRecord(RefreshSuperUserGroupsConfigurationRequest.class); + request.setSubClusterId(subClusterId); + return request; + } + + /** + * Get the subClusterId. + * + * @return subClusterId. + */ + public abstract String getSubClusterId(); + + /** + * Set the subClusterId. + * + * @param subClusterId subCluster Id. 
+ */ + public abstract void setSubClusterId(String subClusterId); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshUserToGroupsMappingsRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshUserToGroupsMappingsRequest.java index 953574a898e..08966d47c2b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshUserToGroupsMappingsRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RefreshUserToGroupsMappingsRequest.java @@ -33,4 +33,27 @@ public abstract class RefreshUserToGroupsMappingsRequest { Records.newRecord(RefreshUserToGroupsMappingsRequest.class); return request; } + + @Public + @Stable + public static RefreshUserToGroupsMappingsRequest newInstance(String subClusterId) { + RefreshUserToGroupsMappingsRequest request = + Records.newRecord(RefreshUserToGroupsMappingsRequest.class); + request.setSubClusterId(subClusterId); + return request; + } + + /** + * Get the subClusterId. + * + * @return subClusterId. + */ + public abstract String getSubClusterId(); + + /** + * Set the subClusterId. + * + * @param subClusterId subCluster Id. + */ + public abstract void setSubClusterId(String subClusterId); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/package-info.java index a1a78c802a2..56aaa15816f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.server.api.protocolrecords; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/constraint/PlacementConstraintParser.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/constraint/PlacementConstraintParser.java index 85352b3f5d1..d0bf888ea36 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/constraint/PlacementConstraintParser.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/constraint/PlacementConstraintParser.java @@ -165,7 +165,8 @@ public final class PlacementConstraintParser { /** * Validate the schema before actual parsing the expression. - * @throws PlacementConstraintParseException + * @throws PlacementConstraintParseException when the placement constraint parser + * fails to parse an expression. 
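The subClusterId carried by the refresh requests above is presumably meant to let a federated
deployment direct an admin refresh at a single sub-cluster rather than every member. A minimal
sketch of building such requests with the new factory methods (the sub-cluster id value is made
up; the existing no-argument newInstance() factories remain available):

    import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshQueuesRequest;
    import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshSuperUserGroupsConfigurationRequest;
    import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshUserToGroupsMappingsRequest;

    public class SubClusterRefreshRequests {
      public static void main(String[] args) {
        String subClusterId = "SC-1"; // illustrative sub-cluster id
        RefreshQueuesRequest refreshQueues =
            RefreshQueuesRequest.newInstance(subClusterId);
        RefreshSuperUserGroupsConfigurationRequest refreshSuperUserGroups =
            RefreshSuperUserGroupsConfigurationRequest.newInstance(subClusterId);
        RefreshUserToGroupsMappingsRequest refreshUserToGroups =
            RefreshUserToGroupsMappingsRequest.newInstance(subClusterId);
        // These records would normally be passed to the
        // ResourceManagerAdministrationProtocol refresh* methods.
        System.out.println(refreshQueues.getSubClusterId() + " "
            + refreshSuperUserGroups.getSubClusterId() + " "
            + refreshUserToGroups.getSubClusterId());
      }
    }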
*/ default void validate() throws PlacementConstraintParseException { // do nothing @@ -633,9 +634,10 @@ public final class PlacementConstraintParser { /** * Parses source tags from expression "sourceTags(numOfAllocations)". - * @param expr + * @param expr expression string. * @return source tags, see {@link SourceTags} - * @throws PlacementConstraintParseException + * @throws PlacementConstraintParseException when the placement constraint parser + * fails to parse an expression. */ public static SourceTags parseFrom(String expr) throws PlacementConstraintParseException { @@ -718,7 +720,8 @@ public final class PlacementConstraintParser { * * @param expression expression string. * @return a map of source tags to placement constraint mapping. - * @throws PlacementConstraintParseException + * @throws PlacementConstraintParseException when the placement constraint parser + * fails to parse an expression. */ public static Map parsePlacementSpec( String expression) throws PlacementConstraintParseException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/csi/CsiConfigUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/csi/CsiConfigUtils.java index 428fedbc1f5..6987741be51 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/csi/CsiConfigUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/csi/CsiConfigUtils.java @@ -53,10 +53,10 @@ public final class CsiConfigUtils { * Resolve the CSI adaptor address for a CSI driver from configuration. * Expected configuration property name is * yarn.nodemanager.csi-driver-adaptor.${driverName}.address. - * @param driverName - * @param conf + * @param driverName driver name. + * @param conf configuration. * @return adaptor service address - * @throws YarnException + * @throws YarnException exceptions from yarn servers. 
*/ public static InetSocketAddress getCsiAdaptorAddressForDriver( String driverName, Configuration conf) throws YarnException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java index 1433c24e9e1..cde38219fcc 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java @@ -818,6 +818,28 @@ public class ResourceUtils { return res; } + public static Resource multiplyFloor(Resource resource, double multiplier) { + Resource newResource = Resource.newInstance(0, 0); + + for (ResourceInformation resourceInformation : resource.getResources()) { + newResource.setResourceValue(resourceInformation.getName(), + (long) Math.floor(resourceInformation.getValue() * multiplier)); + } + + return newResource; + } + + public static Resource multiplyRound(Resource resource, double multiplier) { + Resource newResource = Resource.newInstance(0, 0); + + for (ResourceInformation resourceInformation : resource.getResources()) { + newResource.setResourceValue(resourceInformation.getName(), + Math.round(resourceInformation.getValue() * multiplier)); + } + + return newResource; + } + @InterfaceAudience.Private @InterfaceStability.Unstable public static Resource createResourceFromString( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto index 3f9913b9896..97e29f954cd 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto @@ -32,6 +32,7 @@ package hadoop.yarn; import "yarn_protos.proto"; message RefreshQueuesRequestProto { + optional string sub_cluster_id = 1; } message RefreshQueuesResponseProto { } @@ -39,16 +40,19 @@ message RefreshQueuesResponseProto { message RefreshNodesRequestProto { optional DecommissionTypeProto decommissionType = 1 [default = NORMAL]; optional int32 decommissionTimeout = 2; + optional string sub_cluster_id = 3; } message RefreshNodesResponseProto { } message RefreshSuperUserGroupsConfigurationRequestProto { + optional string sub_cluster_id = 1; } message RefreshSuperUserGroupsConfigurationResponseProto { } message RefreshUserToGroupsMappingsRequestProto { + optional string sub_cluster_id = 1; } message RefreshUserToGroupsMappingsResponseProto { } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto index ab3c9d4da0d..300bee24cbe 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto @@ -572,6 +572,8 @@ message YarnClusterMetricsProto { optional int32 num_lost_nms = 4; optional int32 num_unhealthy_nms = 5; optional int32 num_rebooted_nms = 6; + optional int32 num_decommissioning_nms = 7; + optional int32 num_shutdown_nms = 8; } enum QueueStateProto { diff --git 
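A brief sketch of how the two new ResourceUtils helpers above differ (floor vs. round); the input values and multiplier are example numbers only:

    // 10 units of memory and 3 vcores scaled by 0.75.
    Resource r = Resource.newInstance(10, 3);
    Resource floored = ResourceUtils.multiplyFloor(r, 0.75); // memory 7, vcores 2
    Resource rounded = ResourceUtils.multiplyRound(r, 0.75); // memory 8, vcores 2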
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java index 84e4b561e5d..f7747c6216a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java @@ -223,6 +223,20 @@ public class TestYarnConfigurationFields extends TestConfigurationFieldsBase { "yarn.log-aggregation.file-controller.TFile.class"); // Add the filters used for checking for collision of default values. initDefaultValueCollisionCheck(); + + configurationPropsToSkipCompare.add(YarnConfiguration.LOG_AGGREGATION_REMOTE_APP_LOG_DIR_FMT); + configurationPropsToSkipCompare.add( + YarnConfiguration.LOG_AGGREGATION_REMOTE_APP_LOG_DIR_SUFFIX_FMT); + configurationPropsToSkipCompare.add(YarnConfiguration.LOG_AGGREGATION_FILE_CONTROLLER_FMT); + configurationPropsToSkipCompare.add(YarnConfiguration.NM_AUX_SERVICE_FMT); + configurationPropsToSkipCompare.add( + YarnConfiguration.NM_HEALTH_CHECK_SCRIPT_TIMEOUT_MS_TEMPLATE); + configurationPropsToSkipCompare.add(YarnConfiguration.NM_HEALTH_CHECK_SCRIPT_OPTS_TEMPLATE); + configurationPropsToSkipCompare.add(YarnConfiguration.NM_HEALTH_CHECK_SCRIPT_PATH_TEMPLATE); + configurationPropsToSkipCompare.add( + YarnConfiguration.NM_HEALTH_CHECK_SCRIPT_INTERVAL_MS_TEMPLATE); + configurationPropsToSkipCompare.add(YarnConfiguration.NM_AUX_SERVICE_REMOTE_CLASSPATH); + configurationPropsToSkipCompare.add(YarnConfiguration.LINUX_CONTAINER_RUNTIME_CLASS_FMT); } /** diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml index 21b3172a59f..3acd9ce0ea8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml @@ -39,59 +39,75 @@ *Spec - - - junit - junit - test - - - io.swagger - swagger-annotations - - - org.mockito - mockito-all - 1.9.5 - test - - - org.powermock - powermock-module-junit4 - ${powermock.version} - test - - - org.powermock - powermock-api-mockito - ${powermock.version} - - - org.mockito - mockito-all - - - test - + + + io.swagger + swagger-annotations + + + org.mockito + mockito-all + 1.9.5 + test + + + org.powermock + powermock-module-junit4 + ${powermock.version} + test + + + junit + junit + + + + + org.powermock + powermock-api-mockito + ${powermock.version} + + + org.mockito + mockito-all + + + test + - - org.glassfish.jersey.media - jersey-media-json-jackson - 2.12 - test - - - org.javassist - javassist - - - com.fasterxml.jackson.jaxrs - jackson-jaxrs-base - - - + + org.glassfish.jersey.media + jersey-media-json-jackson + 2.12 + test + + + org.javassist + javassist + + + com.fasterxml.jackson.jaxrs + jackson-jaxrs-base + + + + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.platform + junit-platform-launcher + test + - + com.github.pjfanning jersey-json diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/application/TestAppCatalogSolrClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/application/TestAppCatalogSolrClient.java index 37a382021ea..72e89151302 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/application/TestAppCatalogSolrClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/application/TestAppCatalogSolrClient.java @@ -22,14 +22,15 @@ import org.apache.hadoop.yarn.appcatalog.model.AppEntry; import org.apache.hadoop.yarn.appcatalog.model.AppStoreEntry; import org.apache.hadoop.yarn.appcatalog.model.Application; import org.apache.solr.client.solrj.SolrClient; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.powermock.api.mockito.PowerMockito; import static org.powermock.api.mockito.PowerMockito.when; import static org.powermock.api.support.membermodification.MemberMatcher.method; -import static org.junit.Assert.*; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; import java.util.List; @@ -42,7 +43,7 @@ public class TestAppCatalogSolrClient { private static SolrClient solrClient; private static AppCatalogSolrClient spy; - @Before + @BeforeEach public void setup() throws Exception { String targetLocation = EmbeddedSolrServerFactory.class .getProtectionDomain().getCodeSource().getLocation().getFile() + "/.."; @@ -55,7 +56,7 @@ public class TestAppCatalogSolrClient { .withNoArguments().thenReturn(solrClient); } - @After + @AfterEach public void teardown() throws Exception { try { solrClient.close(); @@ -64,7 +65,7 @@ public class TestAppCatalogSolrClient { } @Test - public void testRegister() throws Exception { + void testRegister() throws Exception { Application example = new Application(); example.setOrganization("jenkins-ci.org"); example.setName("jenkins"); @@ -76,7 +77,7 @@ public class TestAppCatalogSolrClient { } @Test - public void testSearch() throws Exception { + void testSearch() throws Exception { Application example = new Application(); example.setOrganization("jenkins-ci.org"); example.setName("jenkins"); @@ -90,7 +91,7 @@ public class TestAppCatalogSolrClient { } @Test - public void testNotFoundSearch() throws Exception { + void testNotFoundSearch() throws Exception { Application example = new Application(); example.setOrganization("jenkins-ci.org"); example.setName("jenkins"); @@ -104,7 +105,7 @@ public class TestAppCatalogSolrClient { } @Test - public void testGetRecommendedApps() throws Exception { + void testGetRecommendedApps() throws Exception { AppStoreEntry example = new AppStoreEntry(); example.setOrg("jenkins-ci.org"); example.setName("jenkins"); @@ -121,15 +122,15 @@ public class TestAppCatalogSolrClient { spy.register(example2); List actual = spy.getRecommendedApps(); long previous = 1000L; - for (AppStoreEntry app: actual) { - 
assertTrue("Recommend app is not sort by download count.", - previous > app.getDownload()); + for (AppStoreEntry app : actual) { + assertTrue(previous > app.getDownload(), + "Recommend app is not sort by download count."); previous = app.getDownload(); } } @Test - public void testUpgradeApp() throws Exception { + void testUpgradeApp() throws Exception { Application example = new Application(); String expected = "2.0"; String actual = ""; @@ -143,7 +144,7 @@ public class TestAppCatalogSolrClient { example.setVersion("2.0"); spy.upgradeApp(example); List appEntries = spy.listAppEntries(); - actual = appEntries.get(appEntries.size() -1).getYarnfile().getVersion(); + actual = appEntries.get(appEntries.size() - 1).getYarnfile().getVersion(); assertEquals(expected, actual); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppDetailsControllerTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppDetailsControllerTest.java index 437d50b737b..1ceab0a69a9 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppDetailsControllerTest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppDetailsControllerTest.java @@ -22,8 +22,8 @@ import org.apache.hadoop.yarn.service.api.records.Service; import org.apache.hadoop.yarn.appcatalog.model.AppEntry; import org.apache.hadoop.yarn.service.api.records.Component; import org.apache.hadoop.yarn.service.api.records.Container; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.mockito.Mockito; import javax.ws.rs.Path; @@ -31,8 +31,9 @@ import javax.ws.rs.core.Response; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.core.Is.is; -import static org.junit.Assert.*; -import static org.mockito.Mockito.*; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.mockito.Mockito.when; import java.util.ArrayList; import java.util.List; @@ -44,14 +45,14 @@ public class AppDetailsControllerTest { private AppDetailsController controller; - @Before + @BeforeEach public void setUp() throws Exception { this.controller = new AppDetailsController(); } @Test - public void testGetDetails() throws Exception { + void testGetDetails() throws Exception { String id = "application 1"; AppDetailsController ac = Mockito.mock(AppDetailsController.class); @@ -63,7 +64,7 @@ public class AppDetailsControllerTest { } @Test - public void testGetStatus() throws Exception { + void testGetStatus() throws Exception { String id = "application 1"; AppDetailsController ac = Mockito.mock(AppDetailsController.class); @@ -84,7 +85,7 @@ public class AppDetailsControllerTest { } @Test - public void testStopApp() throws Exception { + void testStopApp() throws Exception { String id = "application 1"; AppDetailsController ac = Mockito.mock(AppDetailsController.class); @@ -103,7 +104,7 @@ public class 
AppDetailsControllerTest { } @Test - public void testRestartApp() throws Exception { + void testRestartApp() throws Exception { String id = "application 1"; AppDetailsController ac = Mockito.mock(AppDetailsController.class); @@ -122,12 +123,12 @@ public class AppDetailsControllerTest { } @Test - public void testPathAnnotation() throws Exception { + void testPathAnnotation() throws Exception { assertNotNull(this.controller.getClass() .getAnnotations()); assertThat("The controller has the annotation Path", this.controller.getClass() - .isAnnotationPresent(Path.class)); + .isAnnotationPresent(Path.class)); final Path path = this.controller.getClass() .getAnnotation(Path.class); @@ -136,7 +137,7 @@ public class AppDetailsControllerTest { } @Test - public void testUpgradeApp() throws Exception { + void testUpgradeApp() throws Exception { String id = "application1"; AppDetailsController ac = Mockito.mock(AppDetailsController.class); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppListControllerTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppListControllerTest.java index 97f288e2476..d788de618a6 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppListControllerTest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppListControllerTest.java @@ -20,8 +20,8 @@ package org.apache.hadoop.yarn.appcatalog.controller; import org.apache.hadoop.yarn.appcatalog.model.AppEntry; import org.apache.hadoop.yarn.service.api.records.Service; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.mockito.Mockito; import javax.ws.rs.Path; @@ -29,8 +29,9 @@ import javax.ws.rs.core.Response; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.core.Is.is; -import static org.junit.Assert.*; -import static org.mockito.Mockito.*; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.mockito.Mockito.when; import java.util.ArrayList; import java.util.List; @@ -42,14 +43,14 @@ public class AppListControllerTest { private AppListController controller; - @Before + @BeforeEach public void setUp() throws Exception { this.controller = new AppListController(); } @Test - public void testGetList() throws Exception { + void testGetList() throws Exception { AppListController ac = Mockito.mock(AppListController.class); List actual = new ArrayList(); @@ -59,7 +60,7 @@ public class AppListControllerTest { } @Test - public void testDelete() throws Exception { + void testDelete() throws Exception { String id = "application 1"; AppListController ac = Mockito.mock(AppListController.class); @@ -70,7 +71,7 @@ public class AppListControllerTest { } @Test - public void testDeploy() throws Exception { + void testDeploy() throws Exception { String id = "application 1"; AppListController ac = Mockito.mock(AppListController.class); Service 
service = new Service(); @@ -81,7 +82,7 @@ public class AppListControllerTest { } @Test - public void testPathAnnotation() throws Exception { + void testPathAnnotation() throws Exception { assertNotNull(this.controller.getClass() .getAnnotations()); assertThat("The controller has the annotation Path", diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppStoreControllerTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppStoreControllerTest.java index d09952b1395..df0c6802b5a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppStoreControllerTest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/src/test/java/org/apache/hadoop/yarn/appcatalog/controller/AppStoreControllerTest.java @@ -20,8 +20,8 @@ package org.apache.hadoop.yarn.appcatalog.controller; import org.apache.hadoop.yarn.appcatalog.model.AppStoreEntry; import org.apache.hadoop.yarn.appcatalog.model.Application; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.mockito.Mockito; import javax.ws.rs.Path; @@ -29,8 +29,9 @@ import javax.ws.rs.core.Response; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.core.Is.is; -import static org.junit.Assert.*; -import static org.mockito.Mockito.*; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.mockito.Mockito.when; import java.util.ArrayList; import java.util.List; @@ -42,14 +43,14 @@ public class AppStoreControllerTest { private AppStoreController controller; - @Before + @BeforeEach public void setUp() throws Exception { this.controller = new AppStoreController(); } @Test - public void testGetRecommended() throws Exception { + void testGetRecommended() throws Exception { AppStoreController ac = Mockito.mock(AppStoreController.class); List actual = new ArrayList(); when(ac.get()).thenReturn(actual); @@ -58,7 +59,7 @@ public class AppStoreControllerTest { } @Test - public void testSearch() throws Exception { + void testSearch() throws Exception { String keyword = "jenkins"; AppStoreController ac = Mockito.mock(AppStoreController.class); List expected = new ArrayList(); @@ -68,7 +69,7 @@ public class AppStoreControllerTest { } @Test - public void testRegister() throws Exception { + void testRegister() throws Exception { AppStoreController ac = Mockito.mock(AppStoreController.class); Application app = new Application(); app.setName("jenkins"); @@ -82,12 +83,12 @@ public class AppStoreControllerTest { } @Test - public void testPathAnnotation() throws Exception { + void testPathAnnotation() throws Exception { assertNotNull(this.controller.getClass() .getAnnotations()); assertThat("The controller has the annotation Path", this.controller.getClass() - .isAnnotationPresent(Path.class)); + .isAnnotationPresent(Path.class)); final Path path = this.controller.getClass() .getAnnotation(Path.class); diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/pom.xml index 2809c75ffd8..770fceaaa36 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/pom.xml +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/pom.xml @@ -31,23 +31,33 @@ - - junit - junit - test - org.apache.hadoop hadoop-common - - org.apache.hadoop - hadoop-common - test-jar - test - + + org.apache.hadoop + hadoop-common + test-jar + test + + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.platform + junit-platform-launcher + test + com.google.inject diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/src/test/java/org/apache/hadoop/applications/mawo/server/common/TestMaWoConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/src/test/java/org/apache/hadoop/applications/mawo/server/common/TestMaWoConfiguration.java index e189bcb8f43..edfed8db1d7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/src/test/java/org/apache/hadoop/applications/mawo/server/common/TestMaWoConfiguration.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core/src/test/java/org/apache/hadoop/applications/mawo/server/common/TestMaWoConfiguration.java @@ -19,8 +19,10 @@ package org.apache.hadoop.applications.mawo.server.common; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; /** * Test MaWo configuration. @@ -31,29 +33,29 @@ public class TestMaWoConfiguration { * Validate default MaWo Configurations. 
*/ @Test - public void testMaWoConfiguration() { + void testMaWoConfiguration() { MawoConfiguration mawoConf = new MawoConfiguration(); // validate Rpc server port - Assert.assertEquals(mawoConf.getRpcServerPort(), 5120); + assertEquals(5120, mawoConf.getRpcServerPort()); // validate Rpc hostname - Assert.assertTrue("localhost".equals(mawoConf.getRpcHostName())); + assertEquals("localhost", mawoConf.getRpcHostName()); // validate job queue storage conf boolean jobQueueStorage = mawoConf.getJobQueueStorageEnabled(); - Assert.assertTrue(jobQueueStorage); + assertTrue(jobQueueStorage); // validate default teardownWorkerValidity Interval - Assert.assertEquals(mawoConf.getTeardownWorkerValidityInterval(), 120000); + assertEquals(120000, mawoConf.getTeardownWorkerValidityInterval()); // validate Zk related configs - Assert.assertTrue("/tmp/mawoRoot".equals(mawoConf.getZKParentPath())); - Assert.assertTrue("localhost:2181".equals(mawoConf.getZKAddress())); - Assert.assertEquals(1000, mawoConf.getZKRetryIntervalMS()); - Assert.assertEquals(10000, mawoConf.getZKSessionTimeoutMS()); - Assert.assertEquals(1000, mawoConf.getZKRetriesNum()); + assertEquals("/tmp/mawoRoot", mawoConf.getZKParentPath()); + assertEquals("localhost:2181", mawoConf.getZKAddress()); + assertEquals(1000, mawoConf.getZKRetryIntervalMS()); + assertEquals(10000, mawoConf.getZKSessionTimeoutMS()); + assertEquals(1000, mawoConf.getZKRetriesNum()); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml index 405e44188dc..67be3758a5b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml @@ -72,6 +72,21 @@ test-jar test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.platform + junit-platform-launcher + test + diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/test/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/TestUnmanagedAMLauncher.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/test/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/TestUnmanagedAMLauncher.java index 134cba80ddc..ed903f7f9b8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/test/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/TestUnmanagedAMLauncher.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/test/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/TestUnmanagedAMLauncher.java @@ -18,8 +18,10 @@ package org.apache.hadoop.yarn.applications.unmanagedamlauncher; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import java.io.ByteArrayOutputStream; import java.io.File; @@ -41,10 +43,10 @@ import org.apache.hadoop.yarn.client.ClientRMProxy; 
import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.server.MiniYARNCluster; -import org.junit.AfterClass; -import org.junit.Assert; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -55,7 +57,7 @@ public class TestUnmanagedAMLauncher { protected static MiniYARNCluster yarnCluster = null; protected static Configuration conf = new YarnConfiguration(); - @BeforeClass + @BeforeAll public static void setup() throws InterruptedException, IOException { LOG.info("Starting up YARN cluster"); conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 128); @@ -71,9 +73,9 @@ public class TestUnmanagedAMLauncher { LOG.info("MiniYARN ResourceManager published web address: " + yarnClusterConfig.get(YarnConfiguration.RM_WEBAPP_ADDRESS)); String webapp = yarnClusterConfig.get(YarnConfiguration.RM_WEBAPP_ADDRESS); - assertTrue("Web app address still unbound to a host at " + webapp, - !webapp.startsWith("0.0.0.0")); - LOG.info("Yarn webapp is at "+ webapp); + assertFalse(webapp.startsWith("0.0.0.0"), + "Web app address still unbound to a host at " + webapp); + LOG.info("Yarn webapp is at " + webapp); URL url = Thread.currentThread().getContextClassLoader() .getResource("yarn-site.xml"); if (url == null) { @@ -97,7 +99,7 @@ public class TestUnmanagedAMLauncher { } } - @AfterClass + @AfterAll public static void tearDown() throws IOException { if (yarnCluster != null) { try { @@ -123,8 +125,9 @@ public class TestUnmanagedAMLauncher { return envClassPath; } - @Test(timeout=30000) - public void testUMALauncher() throws Exception { + @Test + @Timeout(30000) + void testUMALauncher() throws Exception { String classpath = getTestRuntimeClasspath(); String javaHome = System.getenv("JAVA_HOME"); if (javaHome == null) { @@ -140,7 +143,7 @@ public class TestUnmanagedAMLauncher { javaHome + "/bin/java -Xmx512m " + TestUnmanagedAMLauncher.class.getCanonicalName() - + " success" }; + + " success"}; LOG.info("Initializing Launcher"); UnmanagedAMLauncher launcher = @@ -149,24 +152,24 @@ public class TestUnmanagedAMLauncher { throws IOException, YarnException { YarnApplicationAttemptState attemptState = rmClient.getApplicationAttemptReport(attemptId) - .getYarnApplicationAttemptState(); - Assert.assertTrue(attemptState - .equals(YarnApplicationAttemptState.LAUNCHED)); + .getYarnApplicationAttemptState(); + assertEquals(YarnApplicationAttemptState.LAUNCHED, attemptState); super.launchAM(attemptId); } }; boolean initSuccess = launcher.init(args); - Assert.assertTrue(initSuccess); + assertTrue(initSuccess); LOG.info("Running Launcher"); boolean result = launcher.run(); LOG.info("Launcher run completed. 
Result=" + result); - Assert.assertTrue(result); + assertTrue(result); } - @Test(timeout=30000) - public void testUMALauncherError() throws Exception { + @Test + @Timeout(30000) + void testUMALauncherError() throws Exception { String classpath = getTestRuntimeClasspath(); String javaHome = System.getenv("JAVA_HOME"); if (javaHome == null) { @@ -182,13 +185,13 @@ public class TestUnmanagedAMLauncher { javaHome + "/bin/java -Xmx512m " + TestUnmanagedAMLauncher.class.getCanonicalName() - + " failure" }; + + " failure"}; LOG.info("Initializing Launcher"); UnmanagedAMLauncher launcher = new UnmanagedAMLauncher(new Configuration( yarnCluster.getConfig())); boolean initSuccess = launcher.init(args); - Assert.assertTrue(initSuccess); + assertTrue(initSuccess); LOG.info("Running Launcher"); try { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml index 2da2cdd42a8..dbe0c69d550 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml @@ -197,6 +197,16 @@ test-jar test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + org.apache.hadoop hadoop-minicluster @@ -211,7 +221,17 @@ org.apache.hadoop hadoop-minikdc test + + + junit + junit + + + + + org.junit.platform + junit-platform-launcher + test - diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestApiServer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestApiServer.java index 7d895d1cdd6..db2cffc9345 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestApiServer.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestApiServer.java @@ -17,7 +17,7 @@ package org.apache.hadoop.yarn.service; -import static org.junit.Assert.*; +import static org.junit.jupiter.api.Assertions.*; import java.io.BufferedWriter; import java.io.File; @@ -47,9 +47,9 @@ import org.apache.hadoop.yarn.service.api.records.ServiceState; import org.apache.hadoop.yarn.service.api.records.ServiceStatus; import org.apache.hadoop.yarn.service.conf.RestApiConstants; import org.apache.hadoop.yarn.service.webapp.ApiServer; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.mockito.Mockito; /** @@ -61,7 +61,7 @@ public class TestApiServer { private HttpServletRequest request; private ServiceClientTest mockServerClient; - @Before + @BeforeEach public void setup() throws Exception { request = Mockito.mock(HttpServletRequest.class); Mockito.when(request.getRemoteUser()) @@ -74,40 +74,41 @@ public class TestApiServer { apiServer.setServiceClient(mockServerClient); } - @After + @AfterEach public void teardown() { mockServerClient.forceStop(); } @Test - public void testPathAnnotation() { + void 
testPathAnnotation() { assertNotNull(this.apiServer.getClass().getAnnotation(Path.class)); - assertTrue("The controller has the annotation Path", - this.apiServer.getClass().isAnnotationPresent(Path.class)); + assertTrue(this.apiServer.getClass().isAnnotationPresent(Path.class), + "The controller has the annotation Path"); final Path path = this.apiServer.getClass() .getAnnotation(Path.class); - assertEquals("The path has /v1 annotation", "/v1", path.value()); + assertEquals("/v1", path.value(), "The path has /v1 annotation"); } @Test - public void testGetVersion() { + void testGetVersion() { final Response actual = apiServer.getVersion(); - assertEquals("Version number is", Response.ok().build().getStatus(), - actual.getStatus()); + assertEquals(Response.ok().build().getStatus(), + actual.getStatus(), + "Version number is"); } @Test - public void testBadCreateService() { + void testBadCreateService() { Service service = new Service(); // Test for invalid argument final Response actual = apiServer.createService(request, service); - assertEquals("Create service is ", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "Create service is "); } @Test - public void testGoodCreateService() throws Exception { + void testGoodCreateService() throws Exception { String json = "{\"auths\": " + "{\"https://index.docker.io/v1/\": " + "{\"auth\": \"foobarbaz\"}," @@ -122,13 +123,13 @@ public class TestApiServer { bw.close(); Service service = ServiceClientTest.buildGoodService(); final Response actual = apiServer.createService(request, service); - assertEquals("Create service is ", - Response.status(Status.ACCEPTED).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.ACCEPTED).build().getStatus(), + actual.getStatus(), + "Create service is "); } @Test - public void testInternalServerErrorDockerClientConfigMissingCreateService() { + void testInternalServerErrorDockerClientConfigMissingCreateService() { Service service = new Service(); service.setName("jenkins"); service.setVersion("v1"); @@ -149,97 +150,94 @@ public class TestApiServer { components.add(c); service.setComponents(components); final Response actual = apiServer.createService(request, service); - assertEquals("Create service is ", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "Create service is "); } @Test - public void testBadGetService() { + void testBadGetService() { final String serviceName = "nonexistent-jenkins"; final Response actual = apiServer.getService(request, serviceName); - assertEquals("Get service is ", - Response.status(Status.NOT_FOUND).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.NOT_FOUND).build().getStatus(), + actual.getStatus(), + "Get service is "); ServiceStatus serviceStatus = (ServiceStatus) actual.getEntity(); - assertEquals("Response code don't match", - RestApiConstants.ERROR_CODE_APP_NAME_INVALID, serviceStatus.getCode()); - assertEquals("Response diagnostics don't match", - "Service " + serviceName + " not found", - serviceStatus.getDiagnostics()); + assertEquals(RestApiConstants.ERROR_CODE_APP_NAME_INVALID, serviceStatus.getCode(), + "Response code don't match"); + assertEquals("Service " + serviceName + " not found", serviceStatus.getDiagnostics(), + "Response diagnostics don't match"); } 
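The test changes in this and the surrounding files follow the usual JUnit 4 to JUnit 5 migration pattern; a condensed, illustrative summary (not an exhaustive mapping):

    // JUnit 4: message first                  ->  JUnit 5: message last
    // assertEquals("msg", expected, actual)   ->  assertEquals(expected, actual, "msg")
    // assertTrue("msg", condition)            ->  assertTrue(condition, "msg")
    // Lifecycle: @Before/@After               ->  @BeforeEach/@AfterEach
    //            @BeforeClass/@AfterClass     ->  @BeforeAll/@AfterAll
    // Timeouts:  @Test(timeout = 30000)       ->  @Test plus @Timeout(30000)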
@Test - public void testBadGetService2() { + void testBadGetService2() { final Response actual = apiServer.getService(request, null); - assertEquals("Get service is ", - Response.status(Status.NOT_FOUND).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.NOT_FOUND).build().getStatus(), actual.getStatus(), + "Get service is "); ServiceStatus serviceStatus = (ServiceStatus) actual.getEntity(); - assertEquals("Response code don't match", - RestApiConstants.ERROR_CODE_APP_NAME_INVALID, serviceStatus.getCode()); - assertEquals("Response diagnostics don't match", - "Service name cannot be null.", serviceStatus.getDiagnostics()); + assertEquals(RestApiConstants.ERROR_CODE_APP_NAME_INVALID, serviceStatus.getCode(), + "Response code don't match"); + assertEquals("Service name cannot be null.", serviceStatus.getDiagnostics(), + "Response diagnostics don't match"); } @Test - public void testGoodGetService() { + void testGoodGetService() { final Response actual = apiServer.getService(request, "jenkins"); - assertEquals("Get service is ", - Response.status(Status.OK).build().getStatus(), actual.getStatus()); + assertEquals(Response.status(Status.OK).build().getStatus(), actual.getStatus(), + "Get service is "); } @Test - public void testBadDeleteService() { + void testBadDeleteService() { final Response actual = apiServer.deleteService(request, "no-jenkins"); - assertEquals("Delete service is ", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "Delete service is "); } @Test - public void testBadDeleteService2() { + void testBadDeleteService2() { final Response actual = apiServer.deleteService(request, null); - assertEquals("Delete service is ", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "Delete service is "); } @Test - public void testBadDeleteService3() { + void testBadDeleteService3() { final Response actual = apiServer.deleteService(request, "jenkins-doesn't-exist"); - assertEquals("Delete service is ", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "Delete service is "); } @Test - public void testBadDeleteService4() { + void testBadDeleteService4() { final Response actual = apiServer.deleteService(request, "jenkins-error-cleaning-registry"); - assertEquals("Delete service is ", - Response.status(Status.INTERNAL_SERVER_ERROR).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.INTERNAL_SERVER_ERROR).build().getStatus(), + actual.getStatus(), + "Delete service is "); } @Test - public void testGoodDeleteService() { + void testGoodDeleteService() { final Response actual = apiServer.deleteService(request, "jenkins"); - assertEquals("Delete service is ", - Response.status(Status.OK).build().getStatus(), actual.getStatus()); + assertEquals(Response.status(Status.OK).build().getStatus(), actual.getStatus(), + "Delete service is "); } @Test - public void testDeleteStoppedService() { - final Response actual = apiServer.deleteService(request, - "jenkins-already-stopped"); - assertEquals("Delete service is ", - Response.status(Status.OK).build().getStatus(), actual.getStatus()); + void testDeleteStoppedService() { + final Response actual = 
apiServer.deleteService(request, "jenkins-already-stopped"); + assertEquals(Response.status(Status.OK).build().getStatus(), actual.getStatus(), + "Delete service is "); } @Test - public void testDecreaseContainerAndStop() { + void testDecreaseContainerAndStop() { Service service = new Service(); service.setState(ServiceState.STOPPED); service.setName("jenkins"); @@ -260,12 +258,12 @@ public class TestApiServer { service.setComponents(components); final Response actual = apiServer.updateService(request, "jenkins", service); - assertEquals("update service is ", - Response.status(Status.OK).build().getStatus(), actual.getStatus()); + assertEquals(Response.status(Status.OK).build().getStatus(), actual.getStatus(), + "update service is "); } @Test - public void testBadDecreaseContainerAndStop() { + void testBadDecreaseContainerAndStop() { Service service = new Service(); service.setState(ServiceState.STOPPED); service.setName("no-jenkins"); @@ -287,13 +285,13 @@ public class TestApiServer { System.out.println("before stop"); final Response actual = apiServer.updateService(request, "no-jenkins", service); - assertEquals("flex service is ", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "flex service is "); } @Test - public void testIncreaseContainersAndStart() { + void testIncreaseContainersAndStart() { Service service = new Service(); service.setState(ServiceState.STARTED); service.setName("jenkins"); @@ -314,12 +312,12 @@ public class TestApiServer { service.setComponents(components); final Response actual = apiServer.updateService(request, "jenkins", service); - assertEquals("flex service is ", - Response.status(Status.OK).build().getStatus(), actual.getStatus()); + assertEquals(Response.status(Status.OK).build().getStatus(), actual.getStatus(), + "flex service is "); } @Test - public void testBadStartServices() { + void testBadStartServices() { Service service = new Service(); service.setState(ServiceState.STARTED); service.setName("no-jenkins"); @@ -340,13 +338,13 @@ public class TestApiServer { service.setComponents(components); final Response actual = apiServer.updateService(request, "no-jenkins", service); - assertEquals("start service is ", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "start service is "); } @Test - public void testGoodStartServices() { + void testGoodStartServices() { Service service = new Service(); service.setState(ServiceState.STARTED); service.setName("jenkins"); @@ -367,12 +365,12 @@ public class TestApiServer { service.setComponents(components); final Response actual = apiServer.updateService(request, "jenkins", service); - assertEquals("start service is ", - Response.status(Status.OK).build().getStatus(), actual.getStatus()); + assertEquals(Response.status(Status.OK).build().getStatus(), actual.getStatus(), + "start service is "); } @Test - public void testBadStopServices() { + void testBadStopServices() { Service service = new Service(); service.setState(ServiceState.STOPPED); service.setName("no-jenkins"); @@ -394,25 +392,25 @@ public class TestApiServer { System.out.println("before stop"); final Response actual = apiServer.updateService(request, "no-jenkins", service); - assertEquals("stop service is ", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); + 
assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "stop service is "); } @Test - public void testGoodStopServices() { + void testGoodStopServices() { Service service = new Service(); service.setState(ServiceState.STOPPED); service.setName("jenkins"); System.out.println("before stop"); final Response actual = apiServer.updateService(request, "jenkins", service); - assertEquals("stop service is ", - Response.status(Status.OK).build().getStatus(), actual.getStatus()); + assertEquals(Response.status(Status.OK).build().getStatus(), actual.getStatus(), + "stop service is "); } @Test - public void testBadSecondStopServices() throws Exception { + void testBadSecondStopServices() throws Exception { Service service = new Service(); service.setState(ServiceState.STOPPED); service.setName("jenkins-second-stop"); @@ -420,17 +418,17 @@ public class TestApiServer { System.out.println("before second stop"); final Response actual = apiServer.updateService(request, "jenkins-second-stop", service); - assertEquals("stop service should have thrown 400 Bad Request: ", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "stop service should have thrown 400 Bad Request: "); ServiceStatus serviceStatus = (ServiceStatus) actual.getEntity(); - assertEquals("Stop service should have failed with service already stopped", - "Service jenkins-second-stop is already stopped", - serviceStatus.getDiagnostics()); + assertEquals("Service jenkins-second-stop is already stopped", + serviceStatus.getDiagnostics(), + "Stop service should have failed with service already stopped"); } @Test - public void testUpdateService() { + void testUpdateService() { Service service = new Service(); service.setState(ServiceState.STARTED); service.setName("no-jenkins"); @@ -452,72 +450,71 @@ public class TestApiServer { System.out.println("before stop"); final Response actual = apiServer.updateService(request, "no-jenkins", service); - assertEquals("update service is ", - Response.status(Status.BAD_REQUEST) - .build().getStatus(), actual.getStatus()); + assertEquals(Response.status(Status.BAD_REQUEST) + .build().getStatus(), actual.getStatus(), "update service is "); } @Test - public void testUpdateComponent() { + void testUpdateComponent() { Response actual = apiServer.updateComponent(request, "jenkins", "jenkins-master", null); ServiceStatus serviceStatus = (ServiceStatus) actual.getEntity(); - assertEquals("Update component should have failed with 400 bad request", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); - assertEquals("Update component should have failed with no data error", - "No component data provided", serviceStatus.getDiagnostics()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "Update component should have failed with 400 bad request"); + assertEquals("No component data provided", serviceStatus.getDiagnostics(), + "Update component should have failed with no data error"); Component comp = new Component(); actual = apiServer.updateComponent(request, "jenkins", "jenkins-master", comp); serviceStatus = (ServiceStatus) actual.getEntity(); - assertEquals("Update component should have failed with 400 bad request", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); - assertEquals("Update component should have failed with no count error", - "No container 
count provided", serviceStatus.getDiagnostics()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "Update component should have failed with 400 bad request"); + assertEquals("No container count provided", serviceStatus.getDiagnostics(), + "Update component should have failed with no count error"); comp.setNumberOfContainers(-1L); actual = apiServer.updateComponent(request, "jenkins", "jenkins-master", comp); serviceStatus = (ServiceStatus) actual.getEntity(); - assertEquals("Update component should have failed with 400 bad request", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); - assertEquals("Update component should have failed with no count error", - "Invalid number of containers specified -1", serviceStatus.getDiagnostics()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "Update component should have failed with 400 bad request"); + assertEquals("Invalid number of containers specified -1", serviceStatus.getDiagnostics(), + "Update component should have failed with no count error"); comp.setName("jenkins-slave"); comp.setNumberOfContainers(1L); actual = apiServer.updateComponent(request, "jenkins", "jenkins-master", comp); serviceStatus = (ServiceStatus) actual.getEntity(); - assertEquals("Update component should have failed with 400 bad request", - Response.status(Status.BAD_REQUEST).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.BAD_REQUEST).build().getStatus(), + actual.getStatus(), + "Update component should have failed with 400 bad request"); assertEquals( - "Update component should have failed with component name mismatch " - + "error", "Component name in the request object (jenkins-slave) does not match " + "that in the URI path (jenkins-master)", - serviceStatus.getDiagnostics()); + serviceStatus.getDiagnostics(), + "Update component should have failed with component name mismatch " + + "error"); } @Test - public void testInitiateUpgrade() { + void testInitiateUpgrade() { Service goodService = ServiceClientTest.buildLiveGoodService(); goodService.setVersion("v2"); goodService.setState(ServiceState.UPGRADING); final Response actual = apiServer.updateService(request, goodService.getName(), goodService); - assertEquals("Initiate upgrade is ", - Response.status(Status.ACCEPTED).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.ACCEPTED).build().getStatus(), + actual.getStatus(), + "Initiate upgrade is "); } @Test - public void testUpgradeSingleInstance() { + void testUpgradeSingleInstance() { Service goodService = ServiceClientTest.buildLiveGoodService(); Component comp = goodService.getComponents().iterator().next(); Container container = comp.getContainers().iterator().next(); @@ -536,13 +533,13 @@ public class TestApiServer { final Response actual = apiServer.updateComponentInstance(request, goodService.getName(), comp.getName(), container.getComponentInstanceName(), container); - assertEquals("Instance upgrade is ", - Response.status(Status.ACCEPTED).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.ACCEPTED).build().getStatus(), + actual.getStatus(), + "Instance upgrade is "); } @Test - public void testUpgradeMultipleInstances() { + void testUpgradeMultipleInstances() { Service goodService = ServiceClientTest.buildLiveGoodService(); Component comp = goodService.getComponents().iterator().next(); comp.getContainers().forEach(container -> @@ -563,13 
+560,13 @@ public class TestApiServer { final Response actual = apiServer.updateComponentInstances(request, goodService.getName(), comp.getContainers()); - assertEquals("Instance upgrade is ", - Response.status(Status.ACCEPTED).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.ACCEPTED).build().getStatus(), + actual.getStatus(), + "Instance upgrade is "); } @Test - public void testUpgradeComponent() { + void testUpgradeComponent() { Service goodService = ServiceClientTest.buildLiveGoodService(); Component comp = goodService.getComponents().iterator().next(); comp.setState(ComponentState.UPGRADING); @@ -589,13 +586,13 @@ public class TestApiServer { final Response actual = apiServer.updateComponent(request, goodService.getName(), comp.getName(), comp); - assertEquals("Component upgrade is ", - Response.status(Status.ACCEPTED).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.ACCEPTED).build().getStatus(), + actual.getStatus(), + "Component upgrade is "); } @Test - public void testUpgradeMultipleComps() { + void testUpgradeMultipleComps() { Service goodService = ServiceClientTest.buildLiveGoodService(); goodService.getComponents().forEach(comp -> comp.setState(ComponentState.UPGRADING)); @@ -616,8 +613,8 @@ public class TestApiServer { final Response actual = apiServer.updateComponents(request, goodService.getName(), goodService.getComponents()); - assertEquals("Component upgrade is ", - Response.status(Status.ACCEPTED).build().getStatus(), - actual.getStatus()); + assertEquals(Response.status(Status.ACCEPTED).build().getStatus(), + actual.getStatus(), + "Component upgrade is "); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestCleanupAfterKill.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestCleanupAfterKill.java index 51e834a34d9..c2f3c689e23 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestCleanupAfterKill.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestCleanupAfterKill.java @@ -27,16 +27,19 @@ import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.service.api.records.Service; import org.apache.hadoop.yarn.service.client.ServiceClient; import org.apache.hadoop.yarn.service.conf.YarnServiceConstants; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.TemporaryFolder; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.io.File; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; + import java.io.IOException; /** @@ -47,22 +50,20 @@ public class TestCleanupAfterKill extends ServiceTestUtils { private static final Logger LOG = LoggerFactory.getLogger(TestCleanupAfterKill.class); - @Rule - public TemporaryFolder tmpFolder = new TemporaryFolder(); - - 
@Before + @BeforeEach public void setup() throws Exception { File tmpYarnDir = new File("target", "tmp"); FileUtils.deleteQuietly(tmpYarnDir); } - @After + @AfterEach public void tearDown() throws IOException { shutdown(); } - @Test(timeout = 200000) - public void testRegistryCleanedOnLifetimeExceeded() throws Exception { + @Test + @Timeout(200000) + void testRegistryCleanedOnLifetimeExceeded() throws Exception { setupInternal(NUM_NMS); ServiceClient client = createClient(getConf()); Service exampleApp = createExampleApplication(); @@ -71,8 +72,8 @@ public class TestCleanupAfterKill extends ServiceTestUtils { waitForServiceToBeStable(client, exampleApp); String serviceZKPath = RegistryUtils.servicePath(RegistryUtils .currentUser(), YarnServiceConstants.APP_TYPE, exampleApp.getName()); - Assert.assertTrue("Registry ZK service path doesn't exist", - getCuratorService().zkPathExists(serviceZKPath)); + assertTrue(getCuratorService().zkPathExists(serviceZKPath), + "Registry ZK service path doesn't exist"); // wait for app to be killed by RM ApplicationId exampleAppId = ApplicationId.fromString(exampleApp.getId()); @@ -85,10 +86,10 @@ public class TestCleanupAfterKill extends ServiceTestUtils { throw new RuntimeException("while waiting", e); } }, 2000, 200000); - Assert.assertFalse("Registry ZK service path still exists after killed", - getCuratorService().zkPathExists(serviceZKPath)); + assertFalse(getCuratorService().zkPathExists(serviceZKPath), + "Registry ZK service path still exists after killed"); LOG.info("Destroy the service"); - Assert.assertEquals(0, client.actionDestroy(exampleApp.getName())); + assertEquals(0, client.actionDestroy(exampleApp.getName())); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestApiServiceClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestApiServiceClient.java index 1d08b82fff1..fe9c081ed64 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestApiServiceClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestApiServiceClient.java @@ -16,7 +16,7 @@ */ package org.apache.hadoop.yarn.service.client; -import static org.junit.Assert.*; +import static org.junit.jupiter.api.Assertions.*; import java.io.IOException; import java.util.HashMap; @@ -35,9 +35,10 @@ import org.eclipse.jetty.server.ServerConnector; import org.eclipse.jetty.servlet.ServletContextHandler; import org.eclipse.jetty.servlet.ServletHolder; import org.eclipse.jetty.util.thread.QueuedThreadPool; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; + import static org.apache.hadoop.yarn.service.exceptions.LauncherExitCodes.*; /** @@ -88,7 +89,7 @@ public class TestApiServiceClient { } - @BeforeClass + @BeforeAll public static void setup() throws Exception { server = new Server(8088); ((QueuedThreadPool)server.getThreadPool()).setMaxThreads(20); @@ -112,13 +113,13 @@ public class TestApiServiceClient { badAsc.serviceInit(conf2); } - @AfterClass + @AfterAll public static void 
tearDown() throws Exception { server.stop(); } @Test - public void testGetRMWebAddress() throws Exception { + void testGetRMWebAddress() throws Exception { Configuration conf = new Configuration(); conf.setBoolean(YarnConfiguration.RM_HA_ENABLED, true); conf.set(YarnConfiguration.RM_HA_IDS, "rm1"); @@ -129,17 +130,17 @@ public class TestApiServiceClient { String diagnosticsMsg = null; try { String rmWebAddress = asc1.getRMWebAddress(); - } catch (IOException e){ + } catch (IOException e) { exceptionCaught = true; diagnosticsMsg = e.getMessage(); } - assertTrue("ApiServiceClient failed to throw exception", exceptionCaught); - assertTrue("Exception Message does not match", - diagnosticsMsg.contains("Error connecting to localhost:0")); + assertTrue(exceptionCaught, "ApiServiceClient failed to throw exception"); + assertTrue(diagnosticsMsg.contains("Error connecting to localhost:0"), + "Exception Message does not match"); } @Test - public void testLaunch() { + void testLaunch() { String fileName = "target/test-classes/example-app.json"; String appName = "example-app"; long lifetime = 3600L; @@ -153,7 +154,7 @@ public class TestApiServiceClient { } @Test - public void testBadLaunch() { + void testBadLaunch() { String fileName = "unknown_file"; String appName = "unknown_app"; long lifetime = 3600L; @@ -167,19 +168,18 @@ public class TestApiServiceClient { } @Test - public void testStatus() { + void testStatus() { String appName = "nonexistent-app"; try { String result = asc.getStatusString(appName); - assertEquals("Status reponse don't match", - " Service " + appName + " not found", result); + assertEquals(" Service " + appName + " not found", result, "Status reponse don't match"); } catch (IOException | YarnException e) { fail(); } } @Test - public void testStop() { + void testStop() { String appName = "example-app"; try { int result = asc.actionStop(appName); @@ -190,7 +190,7 @@ public class TestApiServiceClient { } @Test - public void testBadStop() { + void testBadStop() { String appName = "unknown_app"; try { int result = badAsc.actionStop(appName); @@ -201,7 +201,7 @@ public class TestApiServiceClient { } @Test - public void testStart() { + void testStart() { String appName = "example-app"; try { int result = asc.actionStart(appName); @@ -212,7 +212,7 @@ public class TestApiServiceClient { } @Test - public void testBadStart() { + void testBadStart() { String appName = "unknown_app"; try { int result = badAsc.actionStart(appName); @@ -223,7 +223,7 @@ public class TestApiServiceClient { } @Test - public void testSave() { + void testSave() { String fileName = "target/test-classes/example-app.json"; String appName = "example-app"; long lifetime = 3600L; @@ -237,7 +237,7 @@ public class TestApiServiceClient { } @Test - public void testBadSave() { + void testBadSave() { String fileName = "unknown_file"; String appName = "unknown_app"; long lifetime = 3600L; @@ -251,7 +251,7 @@ public class TestApiServiceClient { } @Test - public void testFlex() { + void testFlex() { String appName = "example-app"; HashMap componentCounts = new HashMap(); try { @@ -263,7 +263,7 @@ public class TestApiServiceClient { } @Test - public void testBadFlex() { + void testBadFlex() { String appName = "unknown_app"; HashMap componentCounts = new HashMap(); try { @@ -275,7 +275,7 @@ public class TestApiServiceClient { } @Test - public void testDestroy() { + void testDestroy() { String appName = "example-app"; try { int result = asc.actionDestroy(appName); @@ -286,7 +286,7 @@ public class TestApiServiceClient { } @Test - 
public void testBadDestroy() { + void testBadDestroy() { String appName = "unknown_app"; try { int result = badAsc.actionDestroy(appName); @@ -297,7 +297,7 @@ public class TestApiServiceClient { } @Test - public void testInitiateServiceUpgrade() { + void testInitiateServiceUpgrade() { String appName = "example-app"; String upgradeFileName = "target/test-classes/example-app.json"; try { @@ -309,7 +309,7 @@ public class TestApiServiceClient { } @Test - public void testInstancesUpgrade() { + void testInstancesUpgrade() { String appName = "example-app"; try { int result = asc.actionUpgradeInstances(appName, Lists.newArrayList( @@ -321,7 +321,7 @@ public class TestApiServiceClient { } @Test - public void testComponentsUpgrade() { + void testComponentsUpgrade() { String appName = "example-app"; try { int result = asc.actionUpgradeComponents(appName, Lists.newArrayList( @@ -333,12 +333,12 @@ public class TestApiServiceClient { } @Test - public void testNoneSecureApiClient() throws IOException { + void testNoneSecureApiClient() throws IOException { String url = asc.getServicePath("/foobar"); - assertTrue("User.name flag is missing in service path.", - url.contains("user.name")); - assertTrue("User.name flag is not matching JVM user.", - url.contains(System.getProperty("user.name"))); + assertTrue(url.contains("user.name"), + "User.name flag is missing in service path."); + assertTrue(url.contains(System.getProperty("user.name")), + "User.name flag is not matching JVM user."); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestSecureApiServiceClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestSecureApiServiceClient.java index e2d613a3059..60c06e9aa75 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestSecureApiServiceClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestSecureApiServiceClient.java @@ -18,7 +18,7 @@ package org.apache.hadoop.yarn.service.client; -import static org.junit.Assert.*; +import static org.junit.jupiter.api.Assertions.*; import java.io.File; import java.io.IOException; @@ -49,9 +49,9 @@ import org.eclipse.jetty.server.ServerConnector; import org.eclipse.jetty.servlet.ServletContextHandler; import org.eclipse.jetty.servlet.ServletHolder; import org.eclipse.jetty.util.thread.QueuedThreadPool; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; /** * Test Spnego Client Login. 
@@ -129,8 +129,9 @@ public class TestSecureApiServiceClient extends KerberosSecurityTestcase { } } - @Before + @BeforeEach public void setUp() throws Exception { + startMiniKdc(); keytabFile = new File(getWorkDir(), "keytab"); getKdc().createPrincipal(keytabFile, clientPrincipal, server1Principal, server2Principal); @@ -163,13 +164,14 @@ public class TestSecureApiServiceClient extends KerberosSecurityTestcase { asc.serviceInit(testConf); } - @After + @AfterEach public void tearDown() throws Exception { server.stop(); + stopMiniKdc(); } @Test - public void testHttpSpnegoChallenge() throws Exception { + void testHttpSpnegoChallenge() throws Exception { UserGroupInformation.loginUserFromKeytab(clientPrincipal, keytabFile .getCanonicalPath()); String challenge = YarnClientUtils.generateToken("localhost"); @@ -177,7 +179,7 @@ public class TestSecureApiServiceClient extends KerberosSecurityTestcase { } @Test - public void testAuthorizationHeader() throws Exception { + void testAuthorizationHeader() throws Exception { UserGroupInformation.loginUserFromKeytab(clientPrincipal, keytabFile .getCanonicalPath()); String rmAddress = asc.getRMWebAddress(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestSystemServiceManagerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestSystemServiceManagerImpl.java index 4954b478a52..a2f698fd31f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestSystemServiceManagerImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestSystemServiceManagerImpl.java @@ -29,14 +29,16 @@ import org.apache.hadoop.yarn.service.api.records.Service; import org.apache.hadoop.yarn.service.conf.SliderExitCodes; import org.apache.hadoop.yarn.service.conf.YarnServiceConf; import org.apache.hadoop.yarn.service.exceptions.SliderException; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.io.File; + +import static org.junit.jupiter.api.Assertions.*; + import java.io.IOException; import java.util.HashMap; import java.util.HashSet; @@ -44,8 +46,6 @@ import java.util.Iterator; import java.util.Map; import java.util.Set; -import static org.junit.Assert.fail; - /** * Test class for system service manager. 
*/ @@ -62,7 +62,7 @@ public class TestSystemServiceManagerImpl { private static Map> savedServices = new HashMap<>(); private static Map> submittedServices = new HashMap<>(); - @Before + @BeforeEach public void setup() { File file = new File( getClass().getClassLoader().getResource(resourcePath).getFile()); @@ -80,30 +80,30 @@ public class TestSystemServiceManagerImpl { constructUserService(users[1], "example-app1", "example-app2"); } - @After + @AfterEach public void tearDown() { systemService.stop(); } @Test - public void testSystemServiceSubmission() throws Exception { + void testSystemServiceSubmission() throws Exception { systemService.start(); /* verify for ignored sevices count */ Map ignoredUserServices = systemService.getIgnoredUserServices(); - Assert.assertEquals(1, ignoredUserServices.size()); - Assert.assertTrue("User user1 doesn't exist.", - ignoredUserServices.containsKey(users[0])); + assertEquals(1, ignoredUserServices.size()); + assertTrue(ignoredUserServices.containsKey(users[0]), + "User user1 doesn't exist."); int count = ignoredUserServices.get(users[0]); - Assert.assertEquals(1, count); - Assert.assertEquals(1, + assertEquals(1, count); + assertEquals(1, systemService.getBadFileNameExtensionSkipCounter()); - Assert.assertEquals(1, systemService.getBadDirSkipCounter()); + assertEquals(1, systemService.getBadDirSkipCounter()); Map> userServices = systemService.getSyncUserServices(); - Assert.assertEquals(loadedServices.size(), userServices.size()); + assertEquals(loadedServices.size(), userServices.size()); verifyForScannedUserServices(userServices); verifyForLaunchedUserServices(); @@ -123,13 +123,12 @@ public class TestSystemServiceManagerImpl { for (String user : users) { Set services = userServices.get(user); Set serviceNames = loadedServices.get(user); - Assert.assertEquals(serviceNames.size(), services.size()); + assertEquals(serviceNames.size(), services.size()); Iterator iterator = services.iterator(); while (iterator.hasNext()) { Service next = iterator.next(); - Assert.assertTrue( - "Service name doesn't exist in expected userService " - + serviceNames, serviceNames.contains(next.getName())); + assertTrue(serviceNames.contains(next.getName()), + "Service name doesn't exist in expected userService " + serviceNames); } } } @@ -203,19 +202,19 @@ public class TestSystemServiceManagerImpl { } private void verifyForLaunchedUserServices() { - Assert.assertEquals(loadedServices.size(), submittedServices.size()); + assertEquals(loadedServices.size(), submittedServices.size()); for (Map.Entry> entry : submittedServices.entrySet()) { String user = entry.getKey(); Set serviceSet = entry.getValue(); - Assert.assertTrue(loadedServices.containsKey(user)); + assertTrue(loadedServices.containsKey(user)); Set services = loadedServices.get(user); - Assert.assertEquals(services.size(), serviceSet.size()); - Assert.assertTrue(services.containsAll(serviceSet)); + assertEquals(services.size(), serviceSet.size()); + assertTrue(services.containsAll(serviceSet)); } } @Test - public void testFileSystemCloseWhenCleanUpService() throws Exception { + void testFileSystemCloseWhenCleanUpService() throws Exception { FileSystem fs = null; Path path = new Path("/tmp/servicedir"); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/ContainerShellWebSocket.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/ContainerShellWebSocket.java index efcc2ea0aed..66a901fc36a 100644 --- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/ContainerShellWebSocket.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/ContainerShellWebSocket.java @@ -20,7 +20,7 @@ package org.apache.hadoop.yarn.client.api; import java.io.IOException; import java.io.OutputStream; -import java.nio.charset.Charset; +import java.nio.charset.StandardCharsets; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; @@ -62,7 +62,7 @@ public class ContainerShellWebSocket { session.getRemote().flush(); sttySet = true; } - terminal.output().write(message.getBytes(Charset.forName("UTF-8"))); + terminal.output().write(message.getBytes(StandardCharsets.UTF_8)); terminal.output().flush(); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java index 3215aa299a4..eb5b9b227fb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java @@ -18,7 +18,6 @@ package org.apache.hadoop.yarn.client.api.async.impl; -import java.io.IOException; import java.nio.ByteBuffer; import java.util.EnumSet; import java.util.HashSet; @@ -51,7 +50,6 @@ import org.apache.hadoop.yarn.client.api.impl.NMClientImpl; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.event.AbstractEvent; import org.apache.hadoop.yarn.event.EventHandler; -import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.ipc.RPCUtil; import org.apache.hadoop.yarn.state.InvalidStateTransitionException; import org.apache.hadoop.yarn.state.MultipleArcTransition; @@ -636,12 +634,8 @@ public class NMClientAsyncImpl extends NMClientAsync { + "Container " + containerId, thr); } return ContainerState.RUNNING; - } catch (YarnException e) { + } catch (Throwable e) { return onExceptionRaised(container, event, e); - } catch (IOException e) { - return onExceptionRaised(container, event, e); - } catch (Throwable t) { - return onExceptionRaised(container, event, t); } } @@ -854,12 +848,8 @@ public class NMClientAsyncImpl extends NMClientAsync { + "Container " + event.getContainerId(), thr); } return ContainerState.DONE; - } catch (YarnException e) { + } catch (Throwable e) { return onExceptionRaised(container, event, e); - } catch (IOException e) { - return onExceptionRaised(container, event, e); - } catch (Throwable t) { - return onExceptionRaised(container, event, t); } } @@ -966,12 +956,8 @@ public class NMClientAsyncImpl extends NMClientAsync { "Unchecked exception is thrown from onContainerStatusReceived" + " for Container " + event.getContainerId(), thr); } - } catch (YarnException e) { + } catch (Throwable e) { onExceptionRaised(containerId, e); - } catch (IOException e) { - onExceptionRaised(containerId, e); - } catch (Throwable t) { - onExceptionRaised(containerId, t); } } else { StatefulContainer container = containers.get(containerId); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/SharedCacheClientImpl.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/SharedCacheClientImpl.java index cce6ae8df61..103080a8502 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/SharedCacheClientImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/SharedCacheClientImpl.java @@ -158,14 +158,8 @@ public class SharedCacheClientImpl extends SharedCacheClient { public String getFileChecksum(Path sourceFile) throws IOException { FileSystem fs = sourceFile.getFileSystem(this.conf); - FSDataInputStream in = null; - try { - in = fs.open(sourceFile); + try (FSDataInputStream in = fs.open(sourceFile)) { return this.checksum.computeChecksum(in); - } finally { - if (in != null) { - in.close(); - } } } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/SchedConfCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/SchedConfCLI.java index a9f1c542ab2..9823a1afb68 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/SchedConfCLI.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/SchedConfCLI.java @@ -37,6 +37,7 @@ import org.apache.hadoop.security.authentication.client.AuthenticatedURL; import org.apache.hadoop.security.ssl.SSLFactory; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.util.Tool; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.webapp.dao.ConfInfo; import org.apache.hadoop.yarn.webapp.dao.QueueConfigInfo; @@ -190,7 +191,7 @@ public class SchedConfCLI extends Configured implements Tool { Source xmlInput = new StreamSource(new StringReader(input)); StringWriter sw = new StringWriter(); StreamResult xmlOutput = new StreamResult(sw); - TransformerFactory transformerFactory = TransformerFactory.newInstance(); + TransformerFactory transformerFactory = XMLUtils.newSecureTransformerFactory(); transformerFactory.setAttribute("indent-number", indent); Transformer transformer = transformerFactory.newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java index b1ec48f0a47..c16fe03b82a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java @@ -339,9 +339,11 @@ public class TopCLI extends YarnCLI { int totalNodes; int runningNodes; int unhealthyNodes; + int decommissioningNodes; int decommissionedNodes; int lostNodes; int rebootedNodes; + int shutdownNodes; } private static class QueueMetrics { @@ -696,6 +698,8 @@ public class TopCLI extends YarnCLI { return nodeInfo; } + nodeInfo.decommissioningNodes = + yarnClusterMetrics.getNumDecommissioningNodeManagers(); nodeInfo.decommissionedNodes = yarnClusterMetrics.getNumDecommissionedNodeManagers(); nodeInfo.totalNodes = yarnClusterMetrics.getNumNodeManagers(); @@ -703,6 +707,7 @@ public class TopCLI extends YarnCLI { 
nodeInfo.lostNodes = yarnClusterMetrics.getNumLostNodeManagers(); nodeInfo.unhealthyNodes = yarnClusterMetrics.getNumUnhealthyNodeManagers(); nodeInfo.rebootedNodes = yarnClusterMetrics.getNumRebootedNodeManagers(); + nodeInfo.shutdownNodes = yarnClusterMetrics.getNumShutdownNodeManagers(); return nodeInfo; } @@ -880,11 +885,11 @@ public class TopCLI extends YarnCLI { ret.append(CLEAR_LINE) .append(limitLineLength(String.format( "NodeManager(s)" - + ": %d total, %d active, %d unhealthy, %d decommissioned," - + " %d lost, %d rebooted%n", + + ": %d total, %d active, %d unhealthy, %d decommissioning," + + " %d decommissioned, %d lost, %d rebooted, %d shutdown%n", nodes.totalNodes, nodes.runningNodes, nodes.unhealthyNodes, - nodes.decommissionedNodes, nodes.lostNodes, - nodes.rebootedNodes), terminalWidth, true)); + nodes.decommissioningNodes, nodes.decommissionedNodes, nodes.lostNodes, + nodes.rebootedNodes, nodes.shutdownNodes), terminalWidth, true)); ret.append(CLEAR_LINE) .append(limitLineLength(String.format( @@ -1039,7 +1044,8 @@ public class TopCLI extends YarnCLI { } } - protected void showTopScreen() { + @VisibleForTesting + void showTopScreen() { List appsInfo = new ArrayList<>(); List apps; try { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/util/YarnClientUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/util/YarnClientUtils.java index 041152d7df8..049dbd7962c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/util/YarnClientUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/util/YarnClientUtils.java @@ -145,7 +145,7 @@ public abstract class YarnClientUtils { // Now we only support one property, which is exclusive, so check if // key = exclusive and value = {true/false} - if (key.equals("exclusive") + if ("exclusive".equals(key) && ImmutableSet.of("true", "false").contains(value)) { exclusive = Boolean.parseBoolean(value); } else { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestTopCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestTopCLI.java index 706400f80d7..63ebffaca44 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestTopCLI.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestTopCLI.java @@ -18,7 +18,12 @@ package org.apache.hadoop.yarn.client.cli; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +import java.io.ByteArrayOutputStream; import java.io.IOException; +import java.io.PrintStream; import java.net.URL; import java.util.Arrays; import java.util.HashMap; @@ -27,9 +32,13 @@ import java.util.Map; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.net.NetUtils; +import org.apache.hadoop.yarn.api.records.YarnClusterMetrics; +import org.apache.hadoop.yarn.client.api.YarnClient; import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.junit.After; import org.junit.AfterClass; import org.junit.Assert; +import org.junit.Before; import org.junit.BeforeClass; import org.junit.Test; @@ -47,6 +56,9 @@ public class TestTopCLI { private static Map savedStaticResolution = new HashMap<>(); + private PrintStream stdout; + private 
PrintStream stderr; + @BeforeClass public static void initializeDummyHostnameResolution() throws Exception { String previousIpAddress; @@ -68,6 +80,18 @@ public class TestTopCLI { } } + @Before + public void before() { + this.stdout = System.out; + this.stderr = System.err; + } + + @After + public void after() { + System.setOut(this.stdout); + System.setErr(this.stderr); + } + @Test public void testHAClusterInfoURL() throws IOException, InterruptedException { TopCLI topcli = new TopCLI(); @@ -103,4 +127,44 @@ public class TestTopCLI { Assert.assertEquals("https", clusterUrl.getProtocol()); Assert.assertEquals(rm1Address, clusterUrl.getAuthority()); } -} \ No newline at end of file + + @Test + public void testHeaderNodeManagers() throws Exception { + YarnClusterMetrics ymetrics = mock(YarnClusterMetrics.class); + when(ymetrics.getNumNodeManagers()).thenReturn(0); + when(ymetrics.getNumDecommissioningNodeManagers()).thenReturn(1); + when(ymetrics.getNumDecommissionedNodeManagers()).thenReturn(2); + when(ymetrics.getNumActiveNodeManagers()).thenReturn(3); + when(ymetrics.getNumLostNodeManagers()).thenReturn(4); + when(ymetrics.getNumUnhealthyNodeManagers()).thenReturn(5); + when(ymetrics.getNumRebootedNodeManagers()).thenReturn(6); + when(ymetrics.getNumShutdownNodeManagers()).thenReturn(7); + + YarnClient client = mock(YarnClient.class); + when(client.getYarnClusterMetrics()).thenReturn(ymetrics); + + TopCLI topcli = new TopCLI() { + @Override protected void createAndStartYarnClient() { + } + }; + topcli.setClient(client); + topcli.terminalWidth = 200; + + String actual; + try (ByteArrayOutputStream outStream = new ByteArrayOutputStream(); + PrintStream out = new PrintStream(outStream)) { + System.setOut(out); + System.setErr(out); + topcli.showTopScreen(); + out.flush(); + actual = outStream.toString("UTF-8"); + } + + String expected = "NodeManager(s)" + + ": 0 total, 3 active, 5 unhealthy, 1 decommissioning," + + " 2 decommissioned, 4 lost, 6 rebooted, 7 shutdown"; + Assert.assertTrue( + String.format("Expected output to contain [%s], actual output was [%s].", expected, actual), + actual.contains(expected)); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml index 86249f0a88c..f1731751c47 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml @@ -138,6 +138,26 @@ bcprov-jdk15on test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.jupiter + junit-jupiter-params + test + + + org.junit.platform + junit-platform-launcher + test + com.sun.jersey.jersey-test-framework jersey-test-framework-grizzly2 @@ -247,12 +267,12 @@ src/main/resources/webapps/test/.keep src/main/resources/webapps/proxy/.keep src/main/resources/webapps/node/.keep - src/main/resources/webapps/static/dt-1.10.18/css/jquery.dataTables.css - src/main/resources/webapps/static/dt-1.10.18/css/custom_datatable.css - src/main/resources/webapps/static/dt-1.10.18/css/jui-dt.css - src/main/resources/webapps/static/dt-1.10.18/css/demo_table.css - src/main/resources/webapps/static/dt-1.10.18/images/Sorting icons.psd - src/main/resources/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js + src/main/resources/webapps/static/dt-1.11.5/css/jquery.dataTables.css + src/main/resources/webapps/static/dt-1.11.5/css/custom_datatable.css + src/main/resources/webapps/static/dt-1.11.5/css/jui-dt.css + 
src/main/resources/webapps/static/dt-1.11.5/css/demo_table.css + src/main/resources/webapps/static/dt-1.11.5/images/Sorting icons.psd + src/main/resources/webapps/static/dt-1.11.5/js/jquery.dataTables.min.js src/main/resources/webapps/static/jt/jquery.jstree.js src/main/resources/webapps/static/jquery/jquery-ui-1.13.2.custom.min.js src/main/resources/webapps/static/jquery/jquery-3.6.0.min.js diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ContainerLogAppender.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ContainerLogAppender.java index 751d9af093f..09efe41e0c2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ContainerLogAppender.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ContainerLogAppender.java @@ -91,6 +91,8 @@ public class ContainerLogAppender extends FileAppender /** * Getter/Setter methods for log4j. + * + * @return containerLogDir. */ public String getContainerLogDir() { @@ -118,6 +120,8 @@ public class ContainerLogAppender extends FileAppender /** * Setter so that log4j can configure it from the * configuration(log4j.properties). + * + * @param logSize log size. */ public void setTotalLogFileSize(long logSize) { maxEvents = (int)(logSize / EVENT_SIZE); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ContainerRollingLogAppender.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ContainerRollingLogAppender.java index 7dd712e156b..f0e00fc1940 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ContainerRollingLogAppender.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ContainerRollingLogAppender.java @@ -54,6 +54,8 @@ public class ContainerRollingLogAppender extends RollingFileAppender /** * Getter/Setter methods for log4j. + * + * @return containerLogDir. */ public String getContainerLogDir() { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/package-info.java index a4349b22957..6ded96f0e7e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,6 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.api.impl.pb.client; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/service/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/service/package-info.java index 1d3d435385f..730ac9c628b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/service/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/service/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,6 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.api.impl.pb.service; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/pb/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/pb/package-info.java index 18da80f4869..167e0786433 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/pb/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/pb/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -18,6 +18,6 @@ /** * API related to protobuf objects that are not backed by PBImpl classes. */ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.api.pb; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/package-info.java index 4b29e4f740e..e783448d2b4 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.api.protocolrecords.impl.pb; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/YarnClusterMetricsPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/YarnClusterMetricsPBImpl.java index 14f8bff5819..608d3c28af4 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/YarnClusterMetricsPBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/YarnClusterMetricsPBImpl.java @@ -89,6 +89,22 @@ public class YarnClusterMetricsPBImpl extends YarnClusterMetrics { builder.setNumNodeManagers((numNodeManagers)); } + @Override + public int getNumDecommissioningNodeManagers() { + YarnClusterMetricsProtoOrBuilder p = viaProto ? proto : builder; + if (p.hasNumDecommissioningNms()) { + return (p.getNumDecommissioningNms()); + } + return 0; + } + + @Override + public void + setNumDecommissioningNodeManagers(int numDecommissioningNodeManagers) { + maybeInitBuilder(); + builder.setNumDecommissioningNms(numDecommissioningNodeManagers); + } + @Override public int getNumDecommissionedNodeManagers() { YarnClusterMetricsProtoOrBuilder p = viaProto ? proto : builder; @@ -165,4 +181,19 @@ public class YarnClusterMetricsPBImpl extends YarnClusterMetrics { maybeInitBuilder(); builder.setNumRebootedNms((numRebootedNodeManagers)); } -} \ No newline at end of file + + @Override + public int getNumShutdownNodeManagers() { + YarnClusterMetricsProtoOrBuilder p = viaProto ? proto : builder; + if (p.hasNumShutdownNms()) { + return (p.getNumShutdownNms()); + } + return 0; + } + + @Override + public void setNumShutdownNodeManagers(int numShutdownNodeManagers) { + maybeInitBuilder(); + builder.setNumShutdownNms(numShutdownNodeManagers); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/package-info.java index 2571db8e8dc..81334c0591d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.api.records.impl.pb; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/resource/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/resource/package-info.java index 660dc02d45c..e1bd6dc1737 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/resource/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/resource/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -18,6 +18,6 @@ /** * API related to resources. */ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.api.resource; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshNoHARMFailoverProxyProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshNoHARMFailoverProxyProvider.java index f81fe1649b0..cc20214169d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshNoHARMFailoverProxyProvider.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshNoHARMFailoverProxyProvider.java @@ -73,7 +73,7 @@ public class AutoRefreshNoHARMFailoverProxyProvider /** * Stop the current proxy when performFailover. - * @param currentProxy + * @param currentProxy currentProxy. */ @Override public synchronized void performFailover(T currentProxy) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/ClientRMProxy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/ClientRMProxy.java index c56f9b30c8e..35b1906698b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/ClientRMProxy.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/ClientRMProxy.java @@ -65,7 +65,7 @@ public class ClientRMProxy extends RMProxy { * @param protocol Client protocol for which proxy is being requested. * @param Type of proxy. * @return Proxy to the ResourceManager for the specified client protocol. - * @throws IOException + * @throws IOException io error occur. 
*/ public static T createRMProxy(final Configuration configuration, final Class protocol) throws IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/DefaultNoHARMFailoverProxyProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/DefaultNoHARMFailoverProxyProvider.java index e5197cfd1a4..d64d1a37913 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/DefaultNoHARMFailoverProxyProvider.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/DefaultNoHARMFailoverProxyProvider.java @@ -80,7 +80,7 @@ public class DefaultNoHARMFailoverProxyProvider /** * PerformFailover does nothing in this class. - * @param currentProxy + * @param currentProxy currentProxy. */ @Override public void performFailover(T currentProxy) { @@ -89,7 +89,7 @@ public class DefaultNoHARMFailoverProxyProvider /** * Close the current proxy. - * @throws IOException + * @throws IOException io error occur. */ @Override public void close() throws IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java index 95c8f2a01ce..a2ea992aeea 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java @@ -70,6 +70,8 @@ public class RMProxy { /** * Verify the passed protocol is supported. + * + * @param protocol protocol. */ @Private public void checkAllowedProtocols(Class protocol) {} @@ -77,6 +79,11 @@ public class RMProxy { /** * Get the ResourceManager address from the provided Configuration for the * given protocol. + * + * @param conf configuration. + * @param protocol protocol. + * @return inet socket address. + * @throws IOException io error occur. */ @Private public InetSocketAddress getRMAddress( @@ -91,6 +98,13 @@ public class RMProxy { * this is a direct connection to the ResourceManager address. When HA is * enabled, the proxy handles the failover between the ResourceManagers as * well. + * + * @param configuration configuration. + * @param protocol protocol. + * @param instance RMProxy instance. + * @param Generic T. + * @return RMProxy. + * @throws IOException io error occur. */ @Private protected static T createRMProxy(final Configuration configuration, @@ -108,6 +122,15 @@ public class RMProxy { * this is a direct connection to the ResourceManager address. When HA is * enabled, the proxy handles the failover between the ResourceManagers as * well. + * + * @param configuration configuration. + * @param protocol protocol. + * @param instance RMProxy instance. + * @param retryTime retry Time. + * @param retryInterval retry Interval. + * @param Generic T. + * @return RMProxy. + * @throws IOException io error occur. */ @Private protected static T createRMProxy(final Configuration configuration, @@ -136,6 +159,13 @@ public class RMProxy { /** * Get a proxy to the RM at the specified address. To be used to create a * RetryProxy. + * + * @param conf configuration. + * @param protocol protocol. + * @param rmAddress rmAddress. + * @param Generic T. + * @return RM proxy. + * @throws IOException io error occur. 
*/ @Private public T getProxy(final Configuration conf, @@ -195,7 +225,11 @@ public class RMProxy { } /** - * Fetch retry policy from Configuration + * Fetch retry policy from Configuration. + * + * @param conf configuration. + * @param isHAEnabled is HA enabled. + * @return RetryPolicy. */ @Private @VisibleForTesting @@ -218,6 +252,12 @@ public class RMProxy { /** * Fetch retry policy from Configuration and create the * retry policy with specified retryTime and retry interval. + * + * @param conf configuration. + * @param retryTime retry time. + * @param retryInterval retry interval. + * @param isHAEnabled is HA enabled. + * @return RetryPolicy. */ protected static RetryPolicy createRetryPolicy(Configuration conf, long retryTime, long retryInterval, boolean isHAEnabled) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/AppAdminClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/AppAdminClient.java index b9f72484a5d..38c31599b16 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/AppAdminClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/AppAdminClient.java @@ -255,6 +255,9 @@ public abstract class AppAdminClient extends CompositeService { * * @param appName the name of the application. * @param componentInstances the name of the component instances. + * @return exit code. + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ @Public @Unstable @@ -267,6 +270,9 @@ public abstract class AppAdminClient extends CompositeService { * * @param appName the name of the application. * @param components the name of the components. + * @return exit code. + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ @Public @Unstable @@ -279,6 +285,9 @@ public abstract class AppAdminClient extends CompositeService { * @param appName the name of the application. * @param userName the name of the user. * @return exit code + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. + * @throws InterruptedException if interrupted. */ @Public @Unstable @@ -297,6 +306,8 @@ public abstract class AppAdminClient extends CompositeService { * @param appName the name of the application * @param fileName specification of application upgrade to save. * @return exit code + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ @Public @Unstable @@ -308,8 +319,8 @@ public abstract class AppAdminClient extends CompositeService { * * @param appName the name of the application * @return exit code - * @throws IOException - * @throws YarnException + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ @Public @Unstable @@ -321,6 +332,9 @@ public abstract class AppAdminClient extends CompositeService { * * @param appName the name of the application. * @param componentInstances the name of the component instances. + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. + * @return exit code. 
*/ @Public @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineClient.java index 4835239a920..68e44ffc815 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineClient.java @@ -114,8 +114,8 @@ public abstract class TimelineClient extends CompositeService implements * * @param domain * an {@link TimelineDomain} object - * @throws IOException - * @throws YarnException + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ @Public public abstract void putDomain( @@ -133,8 +133,8 @@ public abstract class TimelineClient extends CompositeService implements * @param domain * an {@link TimelineDomain} object * @param appAttemptId {@link ApplicationAttemptId} - * @throws IOException - * @throws YarnException + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ @Public public abstract void putDomain(ApplicationAttemptId appAttemptId, @@ -151,8 +151,8 @@ public abstract class TimelineClient extends CompositeService implements * securely talking to the timeline server * @return a delegation token ({@link Token}) that can be used to talk to the * timeline server - * @throws IOException - * @throws YarnException + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ @Public public abstract Token getDelegationToken( @@ -166,8 +166,8 @@ public abstract class TimelineClient extends CompositeService implements * @param timelineDT * the delegation token to renew * @return the new expiration time - * @throws IOException - * @throws YarnException + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ @Public public abstract long renewDelegationToken( @@ -181,8 +181,8 @@ public abstract class TimelineClient extends CompositeService implements * * @param timelineDT * the delegation token to cancel - * @throws IOException - * @throws YarnException + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ @Public public abstract void cancelDelegationToken( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineReaderClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineReaderClient.java index f73c2d37330..3c450f46e6b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineReaderClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineReaderClient.java @@ -42,6 +42,8 @@ public abstract class TimelineReaderClient extends CompositeService { /** * Create a new instance of Timeline Reader Client. + * + * @return instance of Timeline Reader Client. */ @InterfaceAudience.Public public static TimelineReaderClient createTimelineReaderClient() { @@ -59,7 +61,7 @@ public abstract class TimelineReaderClient extends CompositeService { * @param fields Fields to be fetched. Defaults to INFO. * @param filters Filters to be applied while fetching entities. 
* @return entity of the application - * @throws IOException + * @throws IOException io error occur. */ public abstract TimelineEntity getApplicationEntity( ApplicationId appId, String fields, Map filters) @@ -71,7 +73,7 @@ public abstract class TimelineReaderClient extends CompositeService { * @param fields Fields to be fetched. Defaults to INFO. * @param filters Filters to be applied while fetching entities. * @return entity associated with application attempt - * @throws IOException + * @throws IOException io error occur. */ public abstract TimelineEntity getApplicationAttemptEntity( ApplicationAttemptId appAttemptId, String fields, @@ -85,7 +87,7 @@ public abstract class TimelineReaderClient extends CompositeService { * @param limit Number of entities to return. * @param fromId Retrieve next set of generic ids from given fromId * @return list of application attempt entities - * @throws IOException + * @throws IOException io error occur. */ public abstract List getApplicationAttemptEntities( ApplicationId appId, String fields, Map filters, @@ -97,7 +99,7 @@ public abstract class TimelineReaderClient extends CompositeService { * @param fields Fields to be fetched. Defaults to INFO. * @param filters Filters to be applied while fetching entities. * @return timeline entity for container - * @throws IOException + * @throws IOException io error occur. */ public abstract TimelineEntity getContainerEntity( ContainerId containerId, String fields, Map filters) @@ -111,7 +113,7 @@ public abstract class TimelineReaderClient extends CompositeService { * @param limit Number of entities to return. * @param fromId Retrieve next set of generic ids from given fromId * @return list of entities - * @throws IOException + * @throws IOException io error occur. */ public abstract List getContainerEntities( ApplicationId appId, String fields, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/package-info.java index c410160f841..a9129dc1456 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.event; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/factories/impl/pb/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/factories/impl/pb/package-info.java index aae0b4896fc..ea47367e88a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/factories/impl/pb/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/factories/impl/pb/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.factories.impl.pb; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/RPCUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/RPCUtil.java index 088c2fc95bd..b9f77510ab8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/RPCUtil.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/RPCUtil.java @@ -22,24 +22,28 @@ import java.io.IOException; import java.lang.reflect.Constructor; import java.lang.reflect.InvocationTargetException; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate; import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.thirdparty.protobuf.ServiceException; -@InterfaceAudience.LimitedPrivate({ "MapReduce", "YARN" }) +@LimitedPrivate({ "MapReduce", "YARN" }) public class RPCUtil { /** - * Returns an instance of {@link YarnException} + * Returns an instance of {@link YarnException}. + * @param t instance of Throwable. + * @return instance of YarnException. */ public static YarnException getRemoteException(Throwable t) { return new YarnException(t); } /** - * Returns an instance of {@link YarnException} + * Returns an instance of {@link YarnException}. + * @param message yarn exception message. + * @return instance of YarnException. */ public static YarnException getRemoteException(String message) { return new YarnException(message); @@ -92,6 +96,8 @@ public class RPCUtil { * ServiceException * @return An instance of the actual exception, which will be a subclass of * {@link YarnException} or {@link IOException} + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. 
*/ public static Void unwrapAndThrowException(ServiceException se) throws IOException, YarnException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/package-info.java index eec93feb538..62d5e02797c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/ipc/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.LimitedPrivate({ "MapReduce", "YARN" }) +@LimitedPrivate({ "MapReduce", "YARN" }) package org.apache.hadoop.yarn.ipc; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java index 5a49f9ff501..fc0b71e6a42 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java @@ -919,6 +919,7 @@ public class AggregatedLogFormat { * @param logUploadedTime the log uploaded time stamp * @param logType the given log type * @throws IOException if we can not read the container logs + * @return If logType contains fileType, return 1, otherwise return 0. */ public static int readContainerLogsForALogType( DataInputStream valueStream, PrintStream out, long logUploadedTime, @@ -934,7 +935,9 @@ public class AggregatedLogFormat { * @param out the output print stream * @param logUploadedTime the log uploaded time stamp * @param logType the given log type + * @param bytes log bytes. * @throws IOException if we can not read the container logs + * @return If logType contains fileType, return 1, otherwise return 0. */ public static int readContainerLogsForALogType( DataInputStream valueStream, PrintStream out, long logUploadedTime, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/package-info.java index 08ddecef5db..9cbc99baad8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.logaggregation.filecontroller.ifile; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/package-info.java index cad238a9a42..1b53de6d1d6 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.logaggregation.filecontroller; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/tfile/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/tfile/package-info.java index b2e91ab48a9..e014350ec25 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/tfile/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/tfile/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,6 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.logaggregation.filecontroller.tfile; -import org.apache.hadoop.classification.InterfaceAudience; \ No newline at end of file +import org.apache.hadoop.classification.InterfaceAudience.Public; \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/package-info.java index 90dce80e63c..c1f1379ee80 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.logaggregation; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/metrics/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/metrics/package-info.java index 5df20b1bf88..7497e433315 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/metrics/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/metrics/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -18,6 +18,6 @@ /** * Provides common metrics (available, allocated) for custom resources. */ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.metrics; -import org.apache.hadoop.classification.InterfaceAudience; \ No newline at end of file +import org.apache.hadoop.classification.InterfaceAudience.Private; \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/AttributeValue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/AttributeValue.java index d1d75cf1e92..9e95a2e6426 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/AttributeValue.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/AttributeValue.java @@ -34,8 +34,8 @@ public interface AttributeValue { * validate the value based on the type and initialize for further compare * operations. * - * @param value - * @throws IOException + * @param value value. + * @throws IOException io error occur. */ void validateAndInitializeValue(String value) throws IOException; @@ -43,8 +43,8 @@ public interface AttributeValue { * compare the value against the other based on the * AttributeExpressionOperation. * - * @param other - * @param op + * @param other attribute value. + * @param op attribute expression operation. * @return true if value other matches the current value for the * operation op. */ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/CommonNodeLabelsManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/CommonNodeLabelsManager.java index 7417b9a1c5e..a3dfcafba6d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/CommonNodeLabelsManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/CommonNodeLabelsManager.java @@ -337,10 +337,11 @@ public class CommonNodeLabelsManager extends AbstractService { } /** - * Add multiple node labels to repository + * Add multiple node labels to repository. * * @param labels * new node labels added + * @throws IOException io error occur. 
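As a hedged illustration of the node-to-labels map that the CommonNodeLabelsManager methods above operate on (the host name and label are invented; java.util and org.apache.hadoop.yarn.api.records.NodeId imports are assumed):

  // Labels must exist in the cluster before they can be attached to nodes.
  nodeLabelsManager.addToCluserNodeLabelsWithDefaultExclusivity(
      new HashSet<>(Arrays.asList("gpu")));
  Map<NodeId, Set<String>> addedLabelsToNode = new HashMap<>();
  addedLabelsToNode.put(NodeId.newInstance("host1", 0),
      new HashSet<>(Arrays.asList("gpu")));
  nodeLabelsManager.addLabelsToNode(addedLabelsToNode);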
*/ @VisibleForTesting public void addToCluserNodeLabelsWithDefaultExclusivity(Set labels) @@ -394,9 +395,10 @@ public class CommonNodeLabelsManager extends AbstractService { } /** - * add more labels to nodes + * add more labels to nodes. * * @param addedLabelsToNode node {@literal ->} labels map + * @throws IOException io error occur. */ public void addLabelsToNode(Map> addedLabelsToNode) throws IOException { @@ -466,7 +468,7 @@ public class CommonNodeLabelsManager extends AbstractService { * * @param labelsToRemove * node labels to remove - * @throws IOException + * @throws IOException io error occur. */ public void removeFromClusterNodeLabels(Collection labelsToRemove) throws IOException { @@ -707,7 +709,7 @@ public class CommonNodeLabelsManager extends AbstractService { if (null != dispatcher && isCentralizedNodeLabelConfiguration) { // In case of DistributedNodeLabelConfiguration or - // DelegatedCentralizedNodeLabelConfiguration, no need to save the the + // DelegatedCentralizedNodeLabelConfiguration, no need to save the // NodeLabels Mapping to the back-end store, as on RM restart/failover // NodeLabels are collected from NM through Register/Heartbeat again // in case of DistributedNodeLabelConfiguration and collected from @@ -727,9 +729,10 @@ public class CommonNodeLabelsManager extends AbstractService { /** * remove labels from nodes, labels being removed most be contained by these - * nodes + * nodes. * * @param removeLabelsFromNode node {@literal ->} labels map + * @throws IOException io error occur. */ public void removeLabelsFromNode(Map> removeLabelsFromNode) @@ -784,6 +787,7 @@ public class CommonNodeLabelsManager extends AbstractService { * replace labels to nodes * * @param replaceLabelsToNode node {@literal ->} labels map + * @throws IOException io error occur. */ public void replaceLabelsOnNode(Map> replaceLabelsToNode) throws IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeAttributeStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeAttributeStore.java index 8e9f9ff9f0f..f55c66abac3 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeAttributeStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeAttributeStore.java @@ -35,7 +35,7 @@ public interface NodeAttributeStore extends Closeable { * Replace labels on node. * * @param nodeToAttribute node to attribute list. - * @throws IOException + * @throws IOException io error occur. */ void replaceNodeAttributes(List nodeToAttribute) throws IOException; @@ -44,7 +44,7 @@ public interface NodeAttributeStore extends Closeable { * Add attribute to node. * * @param nodeToAttribute node to attribute list. - * @throws IOException + * @throws IOException io error occur. */ void addNodeAttributes(List nodeToAttribute) throws IOException; @@ -53,7 +53,7 @@ public interface NodeAttributeStore extends Closeable { * Remove attribute from node. * * @param nodeToAttribute node to attribute list. - * @throws IOException + * @throws IOException io error occur. */ void removeNodeAttributes(List nodeToAttribute) throws IOException; @@ -62,16 +62,16 @@ public interface NodeAttributeStore extends Closeable { * Initialize based on configuration and NodeAttributesManager. * * @param configuration configuration instance. - * @param mgr nodeattributemanager instance. 
- * @throws Exception + * @param mgr node attribute manager instance. + * @throws Exception exception occurs. */ void init(Configuration configuration, NodeAttributesManager mgr) throws Exception; /** - * Recover store on resourcemanager startup. - * @throws IOException - * @throws YarnException + * Recover store on resource manager startup. + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ void recover() throws IOException, YarnException; } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeAttributesManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeAttributesManager.java index a4c90a420a9..23675794ced 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeAttributesManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeAttributesManager.java @@ -58,8 +58,8 @@ public abstract class NodeAttributesManager extends AbstractService { * impacting other existing attribute mapping. Key would be name of the node * and value would be set of Attributes to be mapped. * - * @param nodeAttributeMapping - * @throws IOException + * @param nodeAttributeMapping host name to a set of node attributes mapping. + * @throws IOException io error occur. */ public abstract void addNodeAttributes( Map> nodeAttributeMapping) throws IOException; @@ -69,8 +69,8 @@ public abstract class NodeAttributesManager extends AbstractService { * impacting other existing attribute mapping. Key would be name of the node * and value would be set of Attributes to be removed. * - * @param nodeAttributeMapping - * @throws IOException + * @param nodeAttributeMapping host name to a set of node attributes mapping. + * @throws IOException io error occur. */ public abstract void removeNodeAttributes( Map> nodeAttributeMapping) throws IOException; @@ -93,6 +93,7 @@ public abstract class NodeAttributesManager extends AbstractService { * If the attributeKeys set is null or empty, then mapping for all attributes * are returned. * + * @param attributes attributes set. * @return a Map of attributeKeys to a map of hostnames to its attribute * values. */ @@ -103,6 +104,7 @@ public abstract class NodeAttributesManager extends AbstractService { /** * NodeAttribute to AttributeValue Map. * + * @param hostName host name. * @return Map of NodeAttribute to AttributeValue. */ public abstract Map getAttributesForNode( @@ -111,6 +113,7 @@ public abstract class NodeAttributesManager extends AbstractService { /** * Get All node to Attributes list based on filter. * + * @param prefix filter prefix set. * @return List of NodeToAttributes matching filter. If empty * or null is passed as argument will return all. */ @@ -120,6 +123,7 @@ public abstract class NodeAttributesManager extends AbstractService { /** * Get all node to Attributes mapping. * + * @param hostNames host names. * @return Map of String to Set of nodesToAttributes matching * filter. If empty or null is passed as argument will return all. 
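A hedged sketch of the host-name to attribute-set mapping taken by the NodeAttributesManager methods documented above; the prefix, host and attribute values are illustrative only.

  // assumes java.util.* and org.apache.hadoop.yarn.api.records.*
  Map<String, Set<NodeAttribute>> attributeMapping = new HashMap<>();
  attributeMapping.put("host1", Collections.singleton(
      NodeAttribute.newInstance("nm.yarn.io", "os",
          NodeAttributeType.STRING, "linux")));
  attributesManager.addNodeAttributes(attributeMapping);
  // Attributes currently effective on a node, keyed by NodeAttribute.
  Map<NodeAttribute, AttributeValue> onHost1 =
      attributesManager.getAttributesForNode("host1");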
*/ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeLabelUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeLabelUtil.java index 02f2188bfc8..f33a30e0f2b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeLabelUtil.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeLabelUtil.java @@ -133,8 +133,8 @@ public final class NodeLabelUtil { *

 * <li> Missing prefix: the attribute doesn't have prefix defined </li>
 *
 * <li> Malformed attribute prefix: the prefix is not in valid format </li>
  • * - * @param attributeSet - * @throws IOException + * @param attributeSet node attribute set. + * @throws IOException io error occur. */ public static void validateNodeAttributes(Set attributeSet) throws IOException { @@ -179,6 +179,9 @@ public final class NodeLabelUtil { /** * Are these two input node attributes the same. + * + * @param leftNodeAttributes left node attribute. + * @param rightNodeAttributes right node attribute. * @return true if they are the same */ public static boolean isNodeAttributesEquals( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeLabelsStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeLabelsStore.java index e4efd68f92b..aac3a767d07 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeLabelsStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/NodeLabelsStore.java @@ -37,18 +37,24 @@ public interface NodeLabelsStore extends Closeable { /** * Store node {@literal ->} label. + * @param nodeToLabels node to labels mapping. + * @throws IOException io error occur. */ void updateNodeToLabelsMappings( Map> nodeToLabels) throws IOException; /** * Store new labels. + * @param labels labels. + * @throws IOException io error occur. */ - void storeNewClusterNodeLabels(List label) + void storeNewClusterNodeLabels(List labels) throws IOException; /** * Remove labels. + * @param labels labels. + * @throws IOException io error occur. */ void removeClusterNodeLabels(Collection labels) throws IOException; @@ -60,8 +66,8 @@ public interface NodeLabelsStore extends Closeable { * ignoreNodeToLabelsMappings will be set to true and recover will be invoked * as RM will collect the node labels from NM through registration/HB. * - * @throws IOException - * @throws YarnException + * @throws IOException io error occur. + * @throws YarnException exceptions from yarn servers. */ void recover() throws IOException, YarnException; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/FSStoreOpHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/FSStoreOpHandler.java index 59a1860e315..15d4efc03e6 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/FSStoreOpHandler.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/FSStoreOpHandler.java @@ -100,7 +100,7 @@ public class FSStoreOpHandler { /** * Get mirror operation of store Type. * - * @param storeType + * @param storeType storeType. * @return instance of FSNodeStoreLogOp. */ public static FSNodeStoreLogOp getMirrorOp(StoreType storeType) { @@ -108,9 +108,9 @@ public class FSStoreOpHandler { } /** - * Will return StoreOp instance basead on opCode and StoreType. - * @param opCode - * @param storeType + * Will return StoreOp instance based on opCode and StoreType. + * @param opCode opCode. + * @param storeType storeType. * @return instance of FSNodeStoreLogOp. 
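A short hedged illustration of the two FSStoreOpHandler lookups described above; the store type constant and opCode are assumed values, not taken from this patch.

  // Mirror op is used when rewriting the store; edit-log ops are replayed during recovery.
  FSNodeStoreLogOp mirrorOp = FSStoreOpHandler.getMirrorOp(StoreType.NODE_LABEL_STORE);
  FSNodeStoreLogOp editLogOp = FSStoreOpHandler.get(opCode, StoreType.NODE_LABEL_STORE);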
*/ public static FSNodeStoreLogOp get(int opCode, StoreType storeType) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/StoreOp.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/StoreOp.java index e0b26da82e7..44c22255599 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/StoreOp.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/StoreOp.java @@ -34,7 +34,7 @@ public interface StoreOp { * * @param write write to be done to * @param mgr manager used by store - * @throws IOException + * @throws IOException io error occur. */ void write(W write, M mgr) throws IOException; @@ -43,7 +43,7 @@ public interface StoreOp { * * @param read read to be done from * @param mgr manager used by store - * @throws IOException + * @throws IOException io error occur. */ void recover(R read, M mgr) throws IOException; } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/op/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/op/package-info.java index f6fb3d3ecaa..3175b2c5788 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/op/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/op/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.nodelabels.store.op; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/package-info.java index 0444807071a..d1546cf2b68 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.nodelabels.store; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/package-info.java index 99ab44ce42d..69aeda005df 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java index f425e8dfe40..b29e7e6fe08 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerTokenIdentifier.java @@ -418,7 +418,8 @@ public class ContainerTokenIdentifier extends TokenIdentifier { } } /** - * Get the node-label-expression in the original ResourceRequest + * Get the node-label-expression in the original ResourceRequest. + * @return node label expression. */ public String getNodeLabelExpression() { if (proto.hasNodeLabelExpression()) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java index c6cabdbff74..d69f9abfc5e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java @@ -79,6 +79,7 @@ public abstract class YarnAuthorizationProvider { /** * Initialize the provider. Invoked on daemon startup. DefaultYarnAuthorizer is * initialized based on configurations. + * @param conf configuration. 
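For context, a hedged sketch of how a daemon usually obtains the provider whose init(Configuration) contract is documented here; getInstance is expected to construct the configured implementation and call init internally.

  // assumes org.apache.hadoop.conf.Configuration
  Configuration conf = new Configuration();
  YarnAuthorizationProvider authorizer = YarnAuthorizationProvider.getInstance(conf);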
*/ public abstract void init(Configuration conf); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/admin/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/admin/package-info.java index c66be222aea..99b857ac2ab 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/admin/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/admin/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.security.admin; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/YARNDelegationTokenIdentifier.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/YARNDelegationTokenIdentifier.java index 6d8bc4bc1da..95fe4bbc64e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/YARNDelegationTokenIdentifier.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/YARNDelegationTokenIdentifier.java @@ -24,9 +24,11 @@ import java.io.IOException; import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.classification.InterfaceAudience.Private; +import org.apache.hadoop.classification.InterfaceStability.Unstable; import org.apache.hadoop.io.Text; import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier; import org.apache.hadoop.yarn.proto.YarnSecurityTokenProtos.YARNDelegationTokenIdentifierProto; +import org.apache.hadoop.yarn.util.Records; @Private public abstract class YARNDelegationTokenIdentifier extends @@ -112,4 +114,14 @@ public abstract class YARNDelegationTokenIdentifier extends setBuilderFields(); return builder.build(); } + + @Private + @Unstable + public static YARNDelegationTokenIdentifier newInstance(Text owner, Text renewer, Text realUser) { + YARNDelegationTokenIdentifier policy = Records.newRecord(YARNDelegationTokenIdentifier.class); + policy.setOwner(owner); + policy.setRenewer(renewer); + policy.setRenewer(realUser); + return policy; + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/impl/pb/YARNDelegationTokenIdentifierPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/impl/pb/YARNDelegationTokenIdentifierPBImpl.java new file mode 100644 index 00000000000..f977a3391fe --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/impl/pb/YARNDelegationTokenIdentifierPBImpl.java @@ -0,0 +1,200 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.security.client.impl.pb; + +import org.apache.hadoop.classification.InterfaceAudience.Private; +import org.apache.hadoop.classification.InterfaceStability.Unstable; +import org.apache.hadoop.io.Text; +import org.apache.hadoop.thirdparty.protobuf.TextFormat; +import org.apache.hadoop.yarn.proto.YarnSecurityTokenProtos.YARNDelegationTokenIdentifierProto; +import org.apache.hadoop.yarn.proto.YarnSecurityTokenProtos.YARNDelegationTokenIdentifierProtoOrBuilder; +import org.apache.hadoop.yarn.security.client.YARNDelegationTokenIdentifier; + +@Private +@Unstable +public class YARNDelegationTokenIdentifierPBImpl extends YARNDelegationTokenIdentifier { + + private YARNDelegationTokenIdentifierProto proto = + YARNDelegationTokenIdentifierProto.getDefaultInstance(); + private YARNDelegationTokenIdentifierProto.Builder builder = null; + private boolean viaProto = false; + + public YARNDelegationTokenIdentifierPBImpl() { + builder = YARNDelegationTokenIdentifierProto.newBuilder(); + } + + public YARNDelegationTokenIdentifierPBImpl(YARNDelegationTokenIdentifierProto identifierProto) { + this.proto = identifierProto; + viaProto = true; + } + + public YARNDelegationTokenIdentifierProto getProto() { + mergeLocalToProto(); + proto = viaProto ? proto : builder.build(); + viaProto = true; + return proto; + } + + private void mergeLocalToProto() { + if (viaProto) { + maybeInitBuilder(); + } + // mergeLocalToBuilder(); + proto = builder.build(); + viaProto = true; + } + + @Override + public String toString() { + return TextFormat.shortDebugString(getProto()); + } + + private void maybeInitBuilder() { + if (viaProto || builder == null) { + if (proto == null) { + proto = YARNDelegationTokenIdentifierProto.getDefaultInstance(); + } + builder = YARNDelegationTokenIdentifierProto.newBuilder(proto); + } + viaProto = false; + } + + @Override + public Text getOwner() { + YARNDelegationTokenIdentifierProtoOrBuilder p = viaProto ? proto : builder; + return new Text(p.getOwner()); + } + + @Override + public void setOwner(Text owner) { + super.setOwner(owner); + maybeInitBuilder(); + if (owner == null) { + builder.clearOwner(); + return; + } + builder.setOwner(owner.toString()); + } + + @Override + public Text getRenewer() { + YARNDelegationTokenIdentifierProtoOrBuilder p = viaProto ? proto : builder; + return new Text(p.getRenewer()); + } + + @Override + public void setRenewer(Text renewer) { + super.setRenewer(renewer); + maybeInitBuilder(); + if (renewer == null) { + builder.clearRenewer(); + return; + } + builder.setOwner(renewer.toString()); + } + + @Override + public Text getRealUser() { + YARNDelegationTokenIdentifierProtoOrBuilder p = viaProto ? 
proto : builder; + return new Text(p.getRealUser()); + } + + @Override + public void setRealUser(Text realUser) { + super.setRealUser(realUser); + maybeInitBuilder(); + if (realUser == null) { + builder.clearRealUser(); + return; + } + builder.setRealUser(realUser.toString()); + } + + @Override + public void setIssueDate(long issueDate) { + super.setIssueDate(issueDate); + maybeInitBuilder(); + builder.setIssueDate(issueDate); + } + + @Override + public long getIssueDate() { + YARNDelegationTokenIdentifierProtoOrBuilder p = viaProto ? proto : builder; + return p.getIssueDate(); + } + + @Override + public void setMaxDate(long maxDate) { + super.setMaxDate(maxDate); + maybeInitBuilder(); + builder.setMaxDate(maxDate); + } + + @Override + public long getMaxDate() { + YARNDelegationTokenIdentifierProtoOrBuilder p = viaProto ? proto : builder; + return p.getMaxDate(); + } + + @Override + public void setSequenceNumber(int seqNum) { + super.setSequenceNumber(seqNum); + maybeInitBuilder(); + builder.setSequenceNumber(seqNum); + } + + @Override + public int getSequenceNumber() { + YARNDelegationTokenIdentifierProtoOrBuilder p = viaProto ? proto : builder; + return p.getSequenceNumber(); + } + + @Override + public void setMasterKeyId(int newId) { + super.setMasterKeyId(newId); + maybeInitBuilder(); + builder.setMasterKeyId(newId); + } + + @Override + public int getMasterKeyId() { + YARNDelegationTokenIdentifierProtoOrBuilder p = viaProto ? proto : builder; + return p.getMasterKeyId(); + } + + @Override + public Text getKind() { + return null; + } + + @Override + public boolean equals(Object other) { + if (other == null) { + return false; + } + if (other.getClass().isAssignableFrom(this.getClass())) { + return this.getProto().equals(this.getClass().cast(other).getProto()); + } + return false; + } + + @Override + public int hashCode() { + return getProto().hashCode(); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/impl/pb/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/impl/pb/package-info.java new file mode 100644 index 00000000000..cb6b1be8807 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/impl/pb/package-info.java @@ -0,0 +1,20 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +@Public +package org.apache.hadoop.yarn.security.client.impl.pb; +import org.apache.hadoop.classification.InterfaceAudience.Public; \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/package-info.java index 7aa12fd75fd..e44f8a270c0 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.security.client; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/package-info.java index f09a1f32011..38a174db369 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.security; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshNodesRequestPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshNodesRequestPBImpl.java index 62a82912b59..a14aae74f6b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshNodesRequestPBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshNodesRequestPBImpl.java @@ -31,9 +31,9 @@ import org.apache.hadoop.thirdparty.protobuf.TextFormat; @Private @Unstable public class RefreshNodesRequestPBImpl extends RefreshNodesRequest { - RefreshNodesRequestProto proto = RefreshNodesRequestProto.getDefaultInstance(); - RefreshNodesRequestProto.Builder builder = null; - boolean viaProto = false; + private RefreshNodesRequestProto proto = RefreshNodesRequestProto.getDefaultInstance(); + private RefreshNodesRequestProto.Builder builder = null; + private boolean viaProto = false; private DecommissionType decommissionType; public RefreshNodesRequestPBImpl() { @@ -123,6 +123,22 @@ public class RefreshNodesRequestPBImpl extends RefreshNodesRequest { return p.hasDecommissionTimeout()? p.getDecommissionTimeout() : null; } + @Override + public synchronized String getSubClusterId() { + RefreshNodesRequestProtoOrBuilder p = viaProto ? proto : builder; + return (p.hasSubClusterId()) ? 
p.getSubClusterId() : null; + } + + @Override + public synchronized void setSubClusterId(String subClusterId) { + maybeInitBuilder(); + if (subClusterId == null) { + builder.clearSubClusterId(); + return; + } + builder.setSubClusterId(subClusterId); + } + private DecommissionType convertFromProtoFormat(DecommissionTypeProto p) { return DecommissionType.valueOf(p.name()); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshQueuesRequestPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshQueuesRequestPBImpl.java index c21ec6d362c..2c174ad18fb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshQueuesRequestPBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshQueuesRequestPBImpl.java @@ -21,6 +21,7 @@ package org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.classification.InterfaceStability.Unstable; import org.apache.hadoop.yarn.proto.YarnServerResourceManagerServiceProtos.RefreshQueuesRequestProto; +import org.apache.hadoop.yarn.proto.YarnServerResourceManagerServiceProtos.RefreshQueuesRequestProtoOrBuilder; import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshQueuesRequest; import org.apache.hadoop.thirdparty.protobuf.TextFormat; @@ -29,9 +30,9 @@ import org.apache.hadoop.thirdparty.protobuf.TextFormat; @Unstable public class RefreshQueuesRequestPBImpl extends RefreshQueuesRequest { - RefreshQueuesRequestProto proto = RefreshQueuesRequestProto.getDefaultInstance(); - RefreshQueuesRequestProto.Builder builder = null; - boolean viaProto = false; + private RefreshQueuesRequestProto proto = RefreshQueuesRequestProto.getDefaultInstance(); + private RefreshQueuesRequestProto.Builder builder = null; + private boolean viaProto = false; public RefreshQueuesRequestPBImpl() { builder = RefreshQueuesRequestProto.newBuilder(); @@ -55,8 +56,9 @@ public class RefreshQueuesRequestPBImpl extends RefreshQueuesRequest { @Override public boolean equals(Object other) { - if (other == null) + if (other == null) { return false; + } if (other.getClass().isAssignableFrom(this.getClass())) { return this.getProto().equals(this.getClass().cast(other).getProto()); } @@ -67,4 +69,27 @@ public class RefreshQueuesRequestPBImpl extends RefreshQueuesRequest { public String toString() { return TextFormat.shortDebugString(getProto()); } + + private void maybeInitBuilder() { + if (viaProto || builder == null) { + builder = RefreshQueuesRequestProto.newBuilder(proto); + } + viaProto = false; + } + + @Override + public String getSubClusterId() { + RefreshQueuesRequestProtoOrBuilder p = viaProto ? proto : builder; + return (p.hasSubClusterId()) ? 
p.getSubClusterId() : null; + } + + @Override + public void setSubClusterId(String clusterId) { + maybeInitBuilder(); + if (clusterId == null) { + builder.clearSubClusterId(); + return; + } + builder.setSubClusterId(clusterId); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshSuperUserGroupsConfigurationRequestPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshSuperUserGroupsConfigurationRequestPBImpl.java index dd36bdf0c61..e1047d618aa 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshSuperUserGroupsConfigurationRequestPBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshSuperUserGroupsConfigurationRequestPBImpl.java @@ -18,8 +18,10 @@ package org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb; +import org.apache.commons.lang3.builder.EqualsBuilder; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.classification.InterfaceStability.Unstable; +import org.apache.hadoop.yarn.proto.YarnServerResourceManagerServiceProtos.RefreshSuperUserGroupsConfigurationRequestProtoOrBuilder; import org.apache.hadoop.yarn.proto.YarnServerResourceManagerServiceProtos.RefreshSuperUserGroupsConfigurationRequestProto; import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshSuperUserGroupsConfigurationRequest; @@ -27,18 +29,20 @@ import org.apache.hadoop.thirdparty.protobuf.TextFormat; @Private @Unstable -public class RefreshSuperUserGroupsConfigurationRequestPBImpl -extends RefreshSuperUserGroupsConfigurationRequest { +public class RefreshSuperUserGroupsConfigurationRequestPBImpl + extends RefreshSuperUserGroupsConfigurationRequest { - RefreshSuperUserGroupsConfigurationRequestProto proto = RefreshSuperUserGroupsConfigurationRequestProto.getDefaultInstance(); - RefreshSuperUserGroupsConfigurationRequestProto.Builder builder = null; - boolean viaProto = false; + private RefreshSuperUserGroupsConfigurationRequestProto proto = + RefreshSuperUserGroupsConfigurationRequestProto.getDefaultInstance(); + private RefreshSuperUserGroupsConfigurationRequestProto.Builder builder = null; + private boolean viaProto = false; public RefreshSuperUserGroupsConfigurationRequestPBImpl() { builder = RefreshSuperUserGroupsConfigurationRequestProto.newBuilder(); } - public RefreshSuperUserGroupsConfigurationRequestPBImpl(RefreshSuperUserGroupsConfigurationRequestProto proto) { + public RefreshSuperUserGroupsConfigurationRequestPBImpl( + RefreshSuperUserGroupsConfigurationRequestProto proto) { this.proto = proto; viaProto = true; } @@ -56,16 +60,46 @@ extends RefreshSuperUserGroupsConfigurationRequest { @Override public boolean equals(Object other) { - if (other == null) + + if (!(other instanceof RefreshSuperUserGroupsConfigurationRequest)) { return false; - if (other.getClass().isAssignableFrom(this.getClass())) { - return this.getProto().equals(this.getClass().cast(other).getProto()); } - return false; + + RefreshSuperUserGroupsConfigurationRequestPBImpl otherImpl = this.getClass().cast(other); + return new EqualsBuilder() + .append(this.getProto(), otherImpl.getProto()) + .isEquals(); } @Override public String toString() { return TextFormat.shortDebugString(getProto()); } + + private synchronized void 
maybeInitBuilder() { + if (viaProto || builder == null) { + builder = RefreshSuperUserGroupsConfigurationRequestProto.newBuilder(proto); + } + viaProto = false; + } + + @Override + public String getSubClusterId() { + RefreshSuperUserGroupsConfigurationRequestProtoOrBuilder p = viaProto ? proto : builder; + boolean hasSubClusterId = p.hasSubClusterId(); + if (hasSubClusterId) { + return p.getSubClusterId(); + } + return null; + } + + @Override + public void setSubClusterId(String subClusterId) { + maybeInitBuilder(); + if (subClusterId == null) { + builder.clearSubClusterId(); + return; + } + builder.setSubClusterId(subClusterId); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshUserToGroupsMappingsRequestPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshUserToGroupsMappingsRequestPBImpl.java index 7e4c7a8fb57..931467d4a4e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshUserToGroupsMappingsRequestPBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshUserToGroupsMappingsRequestPBImpl.java @@ -18,8 +18,10 @@ package org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb; +import org.apache.commons.lang3.builder.EqualsBuilder; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.classification.InterfaceStability.Unstable; +import org.apache.hadoop.yarn.proto.YarnServerResourceManagerServiceProtos.RefreshUserToGroupsMappingsRequestProtoOrBuilder; import org.apache.hadoop.yarn.proto.YarnServerResourceManagerServiceProtos.RefreshUserToGroupsMappingsRequestProto; import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshUserToGroupsMappingsRequest; @@ -27,12 +29,12 @@ import org.apache.hadoop.thirdparty.protobuf.TextFormat; @Private @Unstable -public class RefreshUserToGroupsMappingsRequestPBImpl -extends RefreshUserToGroupsMappingsRequest { +public class RefreshUserToGroupsMappingsRequestPBImpl extends RefreshUserToGroupsMappingsRequest { - RefreshUserToGroupsMappingsRequestProto proto = RefreshUserToGroupsMappingsRequestProto.getDefaultInstance(); - RefreshUserToGroupsMappingsRequestProto.Builder builder = null; - boolean viaProto = false; + private RefreshUserToGroupsMappingsRequestProto proto = + RefreshUserToGroupsMappingsRequestProto.getDefaultInstance(); + private RefreshUserToGroupsMappingsRequestProto.Builder builder = null; + private boolean viaProto = false; public RefreshUserToGroupsMappingsRequestPBImpl() { builder = RefreshUserToGroupsMappingsRequestProto.newBuilder(); @@ -56,16 +58,46 @@ extends RefreshUserToGroupsMappingsRequest { @Override public boolean equals(Object other) { - if (other == null) + + if (!(other instanceof RefreshUserToGroupsMappingsRequest)) { return false; - if (other.getClass().isAssignableFrom(this.getClass())) { - return this.getProto().equals(this.getClass().cast(other).getProto()); } - return false; + + RefreshUserToGroupsMappingsRequestPBImpl otherImpl = this.getClass().cast(other); + return new EqualsBuilder() + .append(this.getProto(), otherImpl.getProto()) + .isEquals(); } @Override public String toString() { return TextFormat.shortDebugString(getProto()); } + + private synchronized void maybeInitBuilder() { + if (viaProto || 
builder == null) { + builder = RefreshUserToGroupsMappingsRequestProto.newBuilder(proto); + } + viaProto = false; + } + + @Override + public String getSubClusterId() { + RefreshUserToGroupsMappingsRequestProtoOrBuilder p = viaProto ? proto : builder; + boolean hasSubClusterId = p.hasSubClusterId(); + if (hasSubClusterId) { + return p.getSubClusterId(); + } + return null; + } + + @Override + public void setSubClusterId(String subClusterId) { + maybeInitBuilder(); + if (subClusterId == null) { + builder.clearSubClusterId(); + return; + } + builder.setSubClusterId(subClusterId); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/package-info.java index e9c394bbd78..72e1d777658 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/metrics/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/metrics/package-info.java index b7ef6fb924f..c293580f196 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/metrics/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/metrics/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -16,7 +16,7 @@ * limitations under the License. */ /** Yarn Common Metrics package. **/ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.server.metrics; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/security/ApplicationACLsManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/security/ApplicationACLsManager.java index e7741754899..af7867ca898 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/security/ApplicationACLsManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/security/ApplicationACLsManager.java @@ -88,10 +88,11 @@ public class ApplicationACLsManager { *
 * <li>For all other users/groups application-acls are checked</li>
  • * * - * @param callerUGI - * @param applicationAccessType - * @param applicationOwner - * @param applicationId + * @param callerUGI UserGroupInformation for the user. + * @param applicationAccessType Application Access Type. + * @param applicationOwner Application Owner. + * @param applicationId ApplicationId. + * @return true if the user has permission, false otherwise. */ public boolean checkAccess(UserGroupInformation callerUGI, ApplicationAccessType applicationAccessType, String applicationOwner, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/security/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/security/package-info.java index 8aa55050c4d..4fc7a7df40d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/security/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/security/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.server.security; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/sharedcache/SharedCacheChecksum.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/sharedcache/SharedCacheChecksum.java index 7e6fddaa263..ca442873c44 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/sharedcache/SharedCacheChecksum.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/sharedcache/SharedCacheChecksum.java @@ -37,7 +37,7 @@ public interface SharedCacheChecksum { * * @param in InputStream to be checksumed * @return the message digest of the input stream - * @throws IOException + * @throws IOException io error occur. */ public String computeChecksum(InputStream in) throws IOException; } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/sharedcache/SharedCacheChecksumFactory.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/sharedcache/SharedCacheChecksumFactory.java index cbfd95db5b9..4397634c8f7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/sharedcache/SharedCacheChecksumFactory.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/sharedcache/SharedCacheChecksumFactory.java @@ -58,8 +58,9 @@ public class SharedCacheChecksumFactory { /** * Get a new SharedCacheChecksum object based on the configurable * algorithm implementation - * (see yarn.sharedcache.checksum.algo.impl) + * (see yarn.sharedcache.checksum.algo.impl). * + * @param conf configuration. 
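A brief hedged example pairing the factory with the checksum interface shown above; the configuration and input stream are placeholders.

  // assumes java.io.InputStream and org.apache.hadoop.conf.Configuration
  SharedCacheChecksum checksum = SharedCacheChecksumFactory.getChecksum(conf);
  String digest = checksum.computeChecksum(inputStream); // message digest of the stream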
* @return SharedCacheChecksum object */ public static SharedCacheChecksum getChecksum(Configuration conf) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/StateMachine.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/StateMachine.java index 51515596306..083869ef3d1 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/StateMachine.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/StateMachine.java @@ -27,6 +27,7 @@ public interface StateMachine , EVENTTYPE extends Enum, EVENT> { public STATE getCurrentState(); + public STATE getPreviousState(); public STATE doTransition(EVENTTYPE eventType, EVENT event) throws InvalidStateTransitionException; } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/StateMachineFactory.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/StateMachineFactory.java index 4bb005c0536..3ac04a2b5bb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/StateMachineFactory.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/StateMachineFactory.java @@ -60,7 +60,7 @@ final public class StateMachineFactory * Constructor * * This is the only constructor in the API. - * + * @param defaultInitialState default initial state. */ public StateMachineFactory(STATE defaultInitialState) { this.transitionsListNode = null; @@ -457,6 +457,7 @@ final public class StateMachineFactory implements StateMachine { private final OPERAND operand; private STATE currentState; + private STATE previousState; private final StateTransitionListener listener; InternalStateMachine(OPERAND operand, STATE initialState) { @@ -479,14 +480,19 @@ final public class StateMachineFactory return currentState; } + @Override + public synchronized STATE getPreviousState() { + return previousState; + } + @Override public synchronized STATE doTransition(EVENTTYPE eventType, EVENT event) throws InvalidStateTransitionException { listener.preTransition(operand, currentState, event); - STATE oldState = currentState; + previousState = currentState; currentState = StateMachineFactory.this.doTransition (operand, currentState, eventType, event); - listener.postTransition(operand, oldState, currentState, event); + listener.postTransition(operand, previousState, currentState, event); return currentState; } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/VisualizeStateMachine.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/VisualizeStateMachine.java index 16203976e33..fa3831d341e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/VisualizeStateMachine.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/VisualizeStateMachine.java @@ -27,9 +27,13 @@ import org.apache.hadoop.classification.InterfaceAudience.Private; public class VisualizeStateMachine { /** + * get Graph From Classes. + * + * @param graphName graphName. 
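The new getPreviousState() accessor added above can be exercised roughly as follows; the state machine instance, event type and event are hypothetical.

  stateMachine.doTransition(MyEventType.START, startEvent);
  // After a transition the machine now also remembers where it came from.
  LOG.info("transitioned from {} to {}",
      stateMachine.getPreviousState(), stateMachine.getCurrentState());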
* @param classes list of classes which have static field * stateMachineFactory of type StateMachineFactory * @return graph represent this StateMachine + * @throws Exception exception occurs. */ public static Graph getGraphFromClasses(String graphName, List classes) throws Exception { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/package-info.java index c62c5015e93..4cdee806bbd 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.state; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/Apps.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/Apps.java index a53ae039781..3075ac626ab 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/Apps.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/Apps.java @@ -229,6 +229,8 @@ public class Apps { * This older version of this method is kept around for compatibility * because downstream frameworks like Spark and Tez have been using it. * Downstream frameworks are expected to move off of it. + * @param env the environment to update. + * @param envString String containing env variable definitions. */ @Deprecated public static void setEnvFromInputString(Map env, @@ -255,6 +257,10 @@ public class Apps { * This older version of this method is kept around for compatibility * because downstream frameworks like Spark and Tez have been using it. * Downstream frameworks are expected to move off of it. + * + * @param environment map of environment variable. + * @param variable variable. + * @param value value. */ @Deprecated public static void addToEnvironment( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java index 67bc2b74fd1..d4647a14a27 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java @@ -57,7 +57,8 @@ public class ConverterUtils { * @param url * url to convert * @return path from {@link URL} - * @throws URISyntaxException + * @throws URISyntaxException exception thrown to indicate that a string could not be parsed as a + * URI reference. 
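
The deprecated Apps#setEnvFromInputString and Apps#addToEnvironment variants documented above parse a comma-separated list of NAME=value definitions and append a value to an environment variable, respectively. A minimal sketch, with illustrative variable names and paths:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.yarn.util.Apps;

    final class EnvExample {
      static Map<String, String> buildEnv() {
        Map<String, String> env = new HashMap<>();
        // Parse comma-separated NAME=value pairs into the map.
        Apps.setEnvFromInputString(env,
            "JAVA_HOME=/usr/lib/jvm/java,HADOOP_CONF_DIR=/etc/hadoop");
        // Append a value to an existing (or new) variable.
        Apps.addToEnvironment(env, "CLASSPATH", "/opt/libs/*");
        return env;
      }
    }
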
*/ @Public @Deprecated @@ -171,6 +172,7 @@ public class ConverterUtils { * * @param protoToken the yarn token * @param serviceAddr the connect address for the service + * @param Generic Type T. * @return rpc token */ public static Token convertFromYarn( @@ -191,6 +193,8 @@ public class ConverterUtils { * * @param protoToken the yarn token * @param service the service for the token + * @param Generic Type T. + * @return rpc token */ public static Token convertFromYarn( org.apache.hadoop.yarn.api.records.Token protoToken, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java index 91002e40d6a..b1a186e8a87 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java @@ -131,6 +131,7 @@ public final class DockerClientConfigHandler { * * @param tokens the Tokens from the ContainerLaunchContext. * @return the Credentials object populated from the Tokens. + * @throws IOException io error occur. */ public static Credentials getCredentialsFromTokensByteBuffer( ByteBuffer tokens) throws IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java index 56808c75ff6..fe4a8446192 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java @@ -123,10 +123,13 @@ public class FSDownload implements Callable { * Creates the cache loader for the status loading cache. This should be used * to create an instance of the status cache that is passed into the * FSDownload constructor. + * + * @param conf configuration. + * @return cache loader for the status loading cache. */ - public static CacheLoader> + public static CacheLoader> createStatusCacheLoader(final Configuration conf) { - return new CacheLoader>() { + return new CacheLoader>() { public Future load(Path path) { try { FileSystem fs = path.getFileSystem(conf); @@ -141,14 +144,19 @@ public class FSDownload implements Callable { /** * Returns a boolean to denote whether a cache file is visible to all (public) - * or not + * or not. * + * @param fs fileSystem. + * @param current current path. + * @param sStat file status. + * @param statCache stat cache. * @return true if the path in the current path is visible to all, false * otherwise + * @throws IOException io error occur. */ @Private public static boolean isPublic(FileSystem fs, Path current, FileStatus sStat, - LoadingCache> statCache) throws IOException { + LoadingCache> statCache) throws IOException { current = fs.makeQualified(current); //the leaf level file should be readable by others if (!checkPublicPermsForAll(fs, sStat, FsAction.READ_EXECUTE, FsAction.READ)) { @@ -455,7 +463,7 @@ public class FSDownload implements Callable { * Change to 755 or 700 for dirs, 555 or 500 for files. * @param fs FileSystem * @param path Path to modify perms for - * @throws IOException + * @throws IOException io error occur. 
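
For the getCredentialsFromTokensByteBuffer method documented above, a short hedged sketch that rebuilds a Credentials object from the serialized tokens of a container launch context and lists the token kinds; the helper class name is hypothetical:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import org.apache.hadoop.security.Credentials;
    import org.apache.hadoop.security.token.Token;
    import org.apache.hadoop.security.token.TokenIdentifier;
    import org.apache.hadoop.yarn.util.DockerClientConfigHandler;

    final class DockerTokenDump {
      static void dumpTokenKinds(ByteBuffer tokens) throws IOException {
        // Rebuild Credentials from the ContainerLaunchContext tokens buffer.
        Credentials credentials =
            DockerClientConfigHandler.getCredentialsFromTokensByteBuffer(tokens);
        for (Token<? extends TokenIdentifier> token : credentials.getAllTokens()) {
          System.out.println(token.getKind());
        }
      }
    }
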
* @throws InterruptedException */ private void changePermissions(FileSystem fs, final Path path) diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java index 470594d853b..5a518dff7e5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java @@ -579,6 +579,9 @@ public class ProcfsBasedProcessTree extends ResourceCalculatorProcessTree { /** * Returns boolean indicating whether pid * is in process tree. + * + * @param pid pid. + * @return if true, processTree contains pid, false, processTree does not contain pid. */ public boolean contains(String pid) { return processTree.containsKey(pid); @@ -1000,9 +1003,9 @@ public class ProcfsBasedProcessTree extends ResourceCalculatorProcessTree { } /** - * Test the {@link ProcfsBasedProcessTree} + * Test the {@link ProcfsBasedProcessTree}. * - * @param args + * @param args the pid arg. */ public static void main(String[] args) { if (args.length != 1) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/RackResolver.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/RackResolver.java index 65f0fa62cd8..1ba18d212d4 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/RackResolver.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/RackResolver.java @@ -79,8 +79,8 @@ public final class RackResolver { * Utility method for getting a hostname resolved to a node in the * network topology. This method initializes the class with the * right resolver implementation. - * @param conf - * @param hostName + * @param conf configuration. + * @param hostName hostname. * @return node {@link Node} after resolving the hostname */ public static Node resolve(Configuration conf, String hostName) { @@ -92,8 +92,8 @@ public final class RackResolver { * Utility method for getting a list of hostname resolved to a list of node * in the network topology. This method initializes the class with the * right resolver implementation. - * @param conf - * @param hostNames + * @param conf configuration. + * @param hostNames list of hostName. * @return nodes {@link Node} after resolving the hostnames */ public static List resolve( @@ -106,7 +106,7 @@ public final class RackResolver { * Utility method for getting a hostname resolved to a node in the * network topology. This method doesn't initialize the class. * Call {@link #init(Configuration)} explicitly. - * @param hostName + * @param hostName host name. * @return node {@link Node} after resolving the hostname */ public static Node resolve(String hostName) { @@ -120,7 +120,7 @@ public final class RackResolver { * Utility method for getting a list of hostname resolved to a list of node * in the network topology. This method doesn't initialize the class. * Call {@link #init(Configuration)} explicitly. - * @param hostNames + * @param hostNames list of hostNames. 
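
The RackResolver methods documented above resolve host names to topology nodes, initializing the resolver implementation from the configuration on first use. A small sketch with hypothetical host names:

    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.net.Node;
    import org.apache.hadoop.yarn.util.RackResolver;

    final class RackLookupExample {
      static void printRacks(Configuration conf) {
        Node node = RackResolver.resolve(conf, "worker-01.example.com");
        System.out.println(node.getNetworkLocation()); // e.g. /default-rack
        // Batch variant for several hosts at once.
        List<Node> nodes = RackResolver.resolve(conf,
            Arrays.asList("worker-01.example.com", "worker-02.example.com"));
        nodes.forEach(n -> System.out.println(n.getNetworkLocation()));
      }
    }
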
* @return nodes {@link Node} after resolving the hostnames */ public static List resolve(List hostNames) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/StringHelper.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/StringHelper.java index 54ba886eaed..462ae634912 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/StringHelper.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/StringHelper.java @@ -67,7 +67,7 @@ public final class StringHelper { } /** - * Join on dot + * Join on dot. * @param args to join * @return args joined by dot */ @@ -76,7 +76,7 @@ public final class StringHelper { } /** - * Join on underscore + * Join on underscore. * @param args to join * @return args joined underscore */ @@ -85,7 +85,7 @@ public final class StringHelper { } /** - * Join on slash + * Join on slash. * @param args to join * @return args joined with slash */ @@ -103,8 +103,8 @@ public final class StringHelper { } /** - * Join without separator - * @param args + * Join without separator. + * @param args to join. * @return joined args with no separator */ public static String join(Object... args) { @@ -131,7 +131,7 @@ public final class StringHelper { } /** - * Split on _ and trim results + * Split on _ and trim results. * @param s the string to split * @return an iterable of strings */ @@ -140,7 +140,7 @@ public final class StringHelper { } /** - * Check whether a url is absolute or note + * Check whether a url is absolute or note. * @param url to check * @return true if url starts with scheme:// or // */ @@ -149,7 +149,7 @@ public final class StringHelper { } /** - * Join url components + * Join url components. * @param pathPrefix for relative urls * @param args url components to join * @return an url string diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/TrackingUriPlugin.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/TrackingUriPlugin.java index d29e52fee80..1199a5de9f3 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/TrackingUriPlugin.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/TrackingUriPlugin.java @@ -38,7 +38,8 @@ public abstract class TrackingUriPlugin extends Configured { * Given an application ID, return a tracking URI. * @param id the ID for which a URI is returned * @return the tracking URI - * @throws URISyntaxException + * @throws URISyntaxException exception thrown to indicate that a string could not be parsed as a + * URI reference. 
*/ public abstract URI getTrackingUri(ApplicationId id) throws URISyntaxException; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/YarnVersionInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/YarnVersionInfo.java index 18654920216..c2dccb3dc57 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/YarnVersionInfo.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/YarnVersionInfo.java @@ -80,6 +80,8 @@ public class YarnVersionInfo extends VersionInfo { /** * Get the subversion URL for the root YARN directory. + * + * @return URL for the root YARN directory. */ public static String getUrl() { return YARN_VERSION_INFO._getUrl(); @@ -88,14 +90,18 @@ public class YarnVersionInfo extends VersionInfo { /** * Get the checksum of the source files from which YARN was * built. - **/ + * + * @return srcChecksum. + */ public static String getSrcChecksum() { return YARN_VERSION_INFO._getSrcChecksum(); } /** * Returns the buildVersion which includes version, - * revision, user and date. + * revision, user and date. + * + * @return buildVersion. */ public static String getBuildVersion(){ return YARN_VERSION_INFO._getBuildVersion(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/package-info.java index 8a22bfc1c5b..0e9d95b9ff5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.util; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java index 05850137c74..089d1933978 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java @@ -91,13 +91,13 @@ public abstract class ResourceCalculator { /** * Divides lhs by rhs. - * If both lhs and rhs are having a value of 0, then we return 0. + * + * @param lhs left number. + * @param rhs right number. + * @return If both lhs and rhs are having a value of 0, then we return 0. * This is to avoid division by zero and return NaN as a result. * If lhs is zero but rhs is not, Float.infinity will be returned * as the result. 
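
TrackingUriPlugin, whose Javadoc is touched above, is an abstract hook that maps an ApplicationId to a tracking URI. A hypothetical subclass, purely illustrative; the proxy host is an assumption:

    import java.net.URI;
    import java.net.URISyntaxException;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.util.TrackingUriPlugin;

    // Sends every application to an external (hypothetical) proxy endpoint.
    public class ExternalProxyTrackingUriPlugin extends TrackingUriPlugin {
      @Override
      public URI getTrackingUri(ApplicationId id) throws URISyntaxException {
        return new URI("https://proxy.example.com/apps/" + id.toString());
      }
    }
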
- * @param lhs - * @param rhs - * @return */ public static float divideSafelyAsFloat(long lhs, long rhs) { if (lhs == 0 && rhs == 0) { @@ -263,6 +263,9 @@ public abstract class ResourceCalculator { /** * Check if a smaller resource can be contained by bigger resource. + * @param smaller smaller resource. + * @param bigger bigger resource. + * @return if true, smaller resource can be contained by bigger resource; false otherwise. */ public abstract boolean fitsIn(Resource smaller, Resource bigger); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java index 9b96fd72b9a..c39490909c4 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java @@ -20,7 +20,7 @@ package org.apache.hadoop.yarn.util.resource; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.classification.InterfaceStability.Unstable; import org.apache.hadoop.yarn.api.records.Resource; @@ -31,7 +31,7 @@ import org.apache.hadoop.yarn.exceptions.ResourceNotFoundException; * Resources is a computation class which provides a set of apis to do * mathematical operations on Resource object. */ -@InterfaceAudience.LimitedPrivate({ "YARN", "MapReduce" }) +@LimitedPrivate({ "YARN", "MapReduce" }) @Unstable public class Resources { @@ -316,7 +316,11 @@ public class Resources { /** * Multiply {@code rhs} by {@code by}, and add the result to {@code lhs} - * without creating any new {@link Resource} object + * without creating any new {@link Resource} object. + * @param lhs {@link Resource} to subtract from. + * @param rhs {@link Resource} to subtract. + * @param by multiplier. + * @return instance of Resource. */ public static Resource multiplyAndAddTo( Resource lhs, Resource rhs, double by) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/TimelineUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/TimelineUtils.java index 46b37412716..14b9b0ceb7d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/TimelineUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/TimelineUtils.java @@ -66,9 +66,9 @@ public class TimelineUtils { * @param o * an object to serialize * @return a JSON string - * @throws IOException - * @throws JsonMappingException - * @throws JsonGenerationException + * @throws IOException io error occur. + * @throws JsonMappingException exception used to signal fatal problems with mapping of content. + * @throws JsonGenerationException exception type for exceptions during JSON writing. 
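
divideSafelyAsFloat and multiplyAndAddTo, documented above, cover two common resource-math patterns: ratio computation that avoids NaN on 0/0, and in-place accumulation without allocating new Resource objects. A brief sketch; the memory figures and counts are illustrative:

    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
    import org.apache.hadoop.yarn.util.resource.Resources;

    final class ResourceMathExample {
      static float usedRatio(long usedMemory, long totalMemory) {
        // Returns 0f when both arguments are 0, Infinity when only the divisor is 0.
        return ResourceCalculator.divideSafelyAsFloat(usedMemory, totalMemory);
      }

      static Resource accumulate(Resource perContainer, int numContainers) {
        Resource total = Resources.createResource(0, 0);
        // Adds perContainer * numContainers into total without new Resource objects.
        return Resources.multiplyAndAddTo(total, perContainer, numContainers);
      }
    }
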
*/ public static String dumpTimelineRecordtoJSON(Object o) throws JsonGenerationException, JsonMappingException, IOException { @@ -83,9 +83,9 @@ public class TimelineUtils { * @param pretty * whether in a pretty format or not * @return a JSON string - * @throws IOException - * @throws JsonMappingException - * @throws JsonGenerationException + * @throws IOException io error occur. + * @throws JsonMappingException exception used to signal fatal problems with mapping of content. + * @throws JsonGenerationException exception type for exceptions during JSON writing. */ public static String dumpTimelineRecordtoJSON(Object o, boolean pretty) throws JsonGenerationException, JsonMappingException, IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/package-info.java index 5c18a55518d..98fe0783694 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.Public +@Public package org.apache.hadoop.yarn.util.timeline; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Public; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java index 4898c266777..6ef1c50cc6d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java @@ -28,6 +28,7 @@ import java.util.Map; import org.apache.commons.lang3.StringUtils; import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.http.HttpServer2; import org.apache.hadoop.util.Lists; @@ -299,4 +300,8 @@ public abstract class WebApp extends ServletModule { public abstract void setup(); + @VisibleForTesting + public HttpServer2 getHttpServer() { + return httpServer; + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java index ee9100f8e78..f084adbd9a2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java @@ -35,10 +35,12 @@ public interface YarnWebParams { String APP_STATE = "app.state"; String APP_START_TIME_BEGIN = "app.started-time.begin"; String APP_START_TIME_END = "app.started-time.end"; + String APP_SC = "app.subcluster"; 
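
The dumpTimelineRecordtoJSON overloads documented above serialize a timeline record to JSON; the Jackson exceptions they declare are subtypes of IOException, so a single throws clause suffices. A short sketch with illustrative entity values:

    import java.io.IOException;
    import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
    import org.apache.hadoop.yarn.util.timeline.TimelineUtils;

    final class TimelineJsonExample {
      static String toPrettyJson() throws IOException {
        TimelineEntity entity = new TimelineEntity();
        entity.setEntityType("HYPOTHETICAL_TYPE"); // illustrative values
        entity.setEntityId("entity_0001");
        entity.setStartTime(System.currentTimeMillis());
        // Pretty-printed JSON via the boolean overload documented above.
        return TimelineUtils.dumpTimelineRecordtoJSON(entity, true);
      }
    }
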
String APPS_NUM = "apps.num"; String QUEUE_NAME = "queue.name"; String NODE_STATE = "node.state"; String NODE_LABEL = "node.label"; + String NODE_SC = "node.subcluster"; String WEB_UI_TYPE = "web.ui.type"; String NEXT_REFRESH_INTERVAL = "next.refresh.interval"; String ERROR_MESSAGE = "error.message"; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/dao/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/dao/package-info.java index aec67627729..9e2c2316c77 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/dao/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/dao/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -19,9 +19,9 @@ /** * Data structures for scheduler configuration mutation info. */ -@InterfaceAudience.Private -@InterfaceStability.Unstable +@Private +@Unstable package org.apache.hadoop.yarn.webapp.dao; -import org.apache.hadoop.classification.InterfaceAudience; -import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.classification.InterfaceAudience.Private; +import org.apache.hadoop.classification.InterfaceStability.Unstable; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/example/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/example/package-info.java index 0f0936b7cd3..e164a0219fb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/example/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/example/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.LimitedPrivate({"YARN", "MapReduce"}) +@LimitedPrivate({"YARN", "MapReduce"}) package org.apache.hadoop.yarn.webapp.example; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletGen.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletGen.java index 0a8f016ac32..ff8094c9780 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletGen.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletGen.java @@ -73,7 +73,7 @@ public class HamletGen { * @param implClass a generic hamlet implementation. e.g. {@link HamletImpl} * @param outputName name of the output class. e.g. {@link Hamlet} * @param outputPkg package name of the output class. - * @throws IOException + * @throws IOException io error occur. 
*/ public void generate(Class specClass, Class implClass, String outputName, String outputPkg) throws IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletSpec.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletSpec.java index 8aeba93f098..518a22c154b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletSpec.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletSpec.java @@ -1251,7 +1251,7 @@ public class HamletSpec { /** * Add a TEXTAREA element. - * @param selector + * @param selector the css selector in the form of (#id)*(.class)* * @return a new TEXTAREA element builder */ TEXTAREA textarea(String selector); @@ -2080,13 +2080,13 @@ public class HamletSpec { */ public interface INS extends Attrs, Flow, _Child { /** info on reason for change - * @param uri + * @param uri the URI. * @return the current element builder */ INS $cite(String uri); /** date and time of change - * @param datetime + * @param datetime the time. * @return the current element builder */ INS $datetime(String datetime); @@ -2103,7 +2103,7 @@ public class HamletSpec { DEL $cite(String uri); /** date and time of change - * @param datetime the time + * @param datetime the time. * @return the current element builder */ DEL $datetime(String datetime); @@ -2205,13 +2205,13 @@ public class HamletSpec { public interface FORM extends Attrs, _Child, /* (%block;|SCRIPT)+ -(FORM) */ _Script, _Block, _FieldSet { /** server-side form handler - * @param uri + * @param uri the URI. * @return the current element builder */ FORM $action(String uri); /** HTTP method used to submit the form - * @param method + * @param method method. * @return the current element builder */ FORM $method(Method method); @@ -2220,37 +2220,37 @@ public class HamletSpec { * contentype for "POST" method. * The default is "application/x-www-form-urlencoded". * Use "multipart/form-data" for input type=file - * @param enctype + * @param enctype enctype. * @return the current element builder */ FORM $enctype(String enctype); /** list of MIME types for file upload - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ FORM $accept(String cdata); /** name of form for scripting - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ FORM $name(String cdata); /** the form was submitted - * @param script + * @param script to invoke. * @return the current element builder */ FORM $onsubmit(String script); /** the form was reset - * @param script + * @param script to invoke. * @return the current element builder */ FORM $onreset(String script); /** (space and/or comma separated) list of supported charsets - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ FORM $accept_charset(String cdata); @@ -2262,25 +2262,25 @@ public class HamletSpec { public interface LABEL extends Attrs, _Child, /* (%inline;)* -(LABEL) */ PCData, FontStyle, Phrase, Special, _FormCtrl { /** matches field ID value - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ LABEL $for(String cdata); /** accessibility key character - * @param cdata + * @param cdata the content of the element. 
* @return the current element builder */ LABEL $accesskey(String cdata); /** the element got the focus - * @param script + * @param script to invoke. * @return the current element builder */ LABEL $onfocus(String script); /** the element lost the focus - * @param script + * @param script to invoke. * @return the current element builder */ LABEL $onblur(String script); @@ -2292,19 +2292,19 @@ public class HamletSpec { @Element(endTag=false) public interface INPUT extends Attrs, _Child { /** what kind of widget is needed. default is "text". - * @param inputType + * @param inputType input value. * @return the current element builder */ INPUT $type(InputType inputType); /** submit as part of form - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ INPUT $name(String cdata); /** Specify for radio buttons and checkboxes - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ INPUT $value(String cdata); @@ -2325,25 +2325,25 @@ public class HamletSpec { INPUT $readonly(); /** specific to each type of field - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ INPUT $size(String cdata); /** max chars for text fields - * @param length + * @param length max chars length. * @return the current element builder */ INPUT $maxlength(int length); /** for fields with images - * @param uri + * @param uri the URI. * @return the current element builder */ INPUT $src(String uri); /** short description - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ INPUT $alt(String cdata); @@ -2355,43 +2355,43 @@ public class HamletSpec { INPUT $ismap(); /** position in tabbing order - * @param index + * @param index the index * @return the current element builder */ INPUT $tabindex(int index); /** accessibility key character - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ INPUT $accesskey(String cdata); /** the element got the focus - * @param script + * @param script to invoke. * @return the current element builder */ INPUT $onfocus(String script); /** the element lost the focus - * @param script + * @param script to invoke. * @return the current element builder */ INPUT $onblur(String script); /** some text was selected - * @param script + * @param script to invoke. * @return the current element builder */ INPUT $onselect(String script); /** the element value was changed - * @param script + * @param script to invoke. * @return the current element builder */ INPUT $onchange(String script); /** list of MIME types for file upload (csv) - * @param contentTypes + * @param contentTypes content types. * @return the current element builder */ INPUT $accept(String contentTypes); @@ -2426,13 +2426,13 @@ public class HamletSpec { OPTGROUP optgroup(); /** field name - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ SELECT $name(String cdata); /** rows visible - * @param rows + * @param rows number of rows. * @return the current element builder */ SELECT $size(int rows); @@ -2448,25 +2448,25 @@ public class HamletSpec { SELECT $disabled(); /** position in tabbing order - * @param index + * @param index the index * @return the current element builder */ SELECT $tabindex(int index); /** the element got the focus - * @param script + * @param script to invoke. 
* @return the current element builder */ SELECT $onfocus(String script); /** the element lost the focus - * @param script + * @param script to invoke. * @return the current element builder */ SELECT $onblur(String script); /** the element value was changed - * @param script + * @param script to invoke. * @return the current element builder */ SELECT $onchange(String script); @@ -2482,7 +2482,7 @@ public class HamletSpec { OPTGROUP $disabled(); /** for use in hierarchical menus - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ OPTGROUP $label(String cdata); @@ -2504,13 +2504,13 @@ public class HamletSpec { OPTION $disabled(); /** for use in hierarchical menus - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ OPTION $label(String cdata); /** defaults to element content - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ OPTION $value(String cdata); @@ -2521,19 +2521,19 @@ public class HamletSpec { */ public interface TEXTAREA extends Attrs, PCData, _Child { /** variable name for the text - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ TEXTAREA $name(String cdata); /** visible rows - * @param rows + * @param rows number of rows. * @return the current element builder */ TEXTAREA $rows(int rows); /** visible columns - * @param cols + * @param cols number of cols. * @return the current element builder */ TEXTAREA $cols(int cols); @@ -2549,37 +2549,37 @@ public class HamletSpec { TEXTAREA $readonly(); /** position in tabbing order - * @param index + * @param index the index * @return the current element builder */ TEXTAREA $tabindex(int index); /** accessibility key character - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ TEXTAREA $accesskey(String cdata); /** the element got the focus - * @param script + * @param script to invoke. * @return the current element builder */ TEXTAREA $onfocus(String script); /** the element lost the focus - * @param script + * @param script to invoke. * @return the current element builder */ TEXTAREA $onblur(String script); /** some text was selected - * @param script + * @param script to invoke. * @return the current element builder */ TEXTAREA $onselect(String script); /** the element value was changed - * @param script + * @param script to invoke. * @return the current element builder */ TEXTAREA $onchange(String script); @@ -2597,7 +2597,7 @@ public class HamletSpec { /** * Add a LEGEND element. - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ _Legend legend(String cdata); @@ -2614,7 +2614,7 @@ public class HamletSpec { */ public interface LEGEND extends Attrs, Inline, _Child { /** accessibility key character - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ LEGEND $accesskey(String cdata); @@ -2626,19 +2626,19 @@ public class HamletSpec { public interface BUTTON extends /* (%flow;)* -(A|%formctrl|FORM|FIELDSET) */ _Block, PCData, FontStyle, Phrase, _Special, _ImgObject, _SubSup, Attrs { /** name of the value - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ BUTTON $name(String cdata); /** sent to server when submitted - * @param cdata + * @param cdata the content of the element. 
* @return the current element builder */ BUTTON $value(String cdata); /** for use as form button - * @param type + * @param type button type. * @return the current element builder */ BUTTON $type(ButtonType type); @@ -2649,25 +2649,25 @@ public class HamletSpec { BUTTON $disabled(); /** position in tabbing order - * @param index + * @param index the index * @return the current element builder */ BUTTON $tabindex(int index); /** accessibility key character - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ BUTTON $accesskey(String cdata); /** the element got the focus - * @param script + * @param script to invoke. * @return the current element builder */ BUTTON $onfocus(String script); /** the element lost the focus - * @param script + * @param script to invoke. * @return the current element builder */ BUTTON $onblur(String script); @@ -2721,7 +2721,7 @@ public class HamletSpec { /** * Add a CAPTION element. - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ _Table caption(String cdata); @@ -2813,7 +2813,7 @@ public class HamletSpec { @Element(endTag=false) public interface COLGROUP extends Attrs, _TableCol, _Child { /** default number of columns in group. default: 1 - * @param cols + * @param cols number of cols. * @return the current element builder */ COLGROUP $span(int cols); @@ -2827,7 +2827,7 @@ public class HamletSpec { @Element(endTag=false) public interface COL extends Attrs, _Child { /** COL attributes affect N columns. default: 1 - * @param cols + * @param cols number of cols. * @return the current element builder */ COL $span(int cols); @@ -2896,25 +2896,25 @@ public class HamletSpec { // use $title for elaberation, when appropriate. // $axis omitted. use scope. /** space-separated list of id's for header cells - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ _Cell $headers(String cdata); /** scope covered by header cells - * @param scope + * @param scope scope. * @return the current element builder */ _Cell $scope(Scope scope); /** number of rows spanned by cell. default: 1 - * @param rows + * @param rows number of rows. * @return the current element builder */ _Cell $rowspan(int rows); /** number of cols spanned by cell. default: 1 - * @param cols + * @param cols number of cols. * @return the current element builder */ _Cell $colspan(int cols); @@ -2959,7 +2959,7 @@ public class HamletSpec { /** * Add a complete BASE element. - * @param uri + * @param uri the URI. * @return the current element builder */ _Head base(String uri); @@ -2984,7 +2984,7 @@ public class HamletSpec { @Element(endTag=false) public interface BASE extends _Child { /** URI that acts as base URI - * @param uri + * @param uri the URI. * @return the current element builder */ BASE $href(String uri); @@ -2996,19 +2996,19 @@ public class HamletSpec { @Element(endTag=false) public interface META extends I18nAttrs, _Child { /** HTTP response header name - * @param header + * @param header for the http-equiv attribute * @return the current element builder */ META $http_equiv(String header); /** metainformation name - * @param name + * @param name of the meta element * @return the current element builder */ META $name(String name); /** associated information - * @param cdata + * @param cdata the content of the element. 
* @return the current element builder */ META $content(String cdata); @@ -3021,19 +3021,19 @@ public class HamletSpec { */ public interface STYLE extends I18nAttrs, _Content, _Child { /** content type of style language - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ STYLE $type(String cdata); /** designed for use with these media - * @param media + * @param media set of media. * @return the current element builder */ STYLE $media(EnumSet media); /** advisory title - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ STYLE $title(String cdata); @@ -3044,25 +3044,25 @@ public class HamletSpec { */ public interface SCRIPT extends _Content, _Child { /** char encoding of linked resource - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ SCRIPT $charset(String cdata); /** content type of script language - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ SCRIPT $type(String cdata); /** URI for an external script - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ SCRIPT $src(String cdata); /** UA may defer execution of script - * @param cdata + * @param cdata the content of the element. * @return the current element builder */ SCRIPT $defer(String cdata); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/package-info.java index 64a8447e024..54aa474bf01 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -21,7 +21,7 @@ * The old package is using _ as a one-character identifier, * which is banned from JDK9. */ -@InterfaceAudience.LimitedPrivate({"YARN", "MapReduce"}) +@LimitedPrivate({"YARN", "MapReduce"}) package org.apache.hadoop.yarn.webapp.hamlet2; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/log/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/log/package-info.java index 1d64404cc36..23c4328ad8d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/log/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/log/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -@InterfaceAudience.LimitedPrivate({"YARN", "MapReduce"}) +@LimitedPrivate({"YARN", "MapReduce"}) package org.apache.hadoop.yarn.webapp.log; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/package-info.java index 342f28df283..4d179099eb7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -@InterfaceAudience.LimitedPrivate({"YARN", "MapReduce"}) +@LimitedPrivate({"YARN", "MapReduce"}) package org.apache.hadoop.yarn.webapp; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java index 5b578197100..02fec115059 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java @@ -95,6 +95,13 @@ public class WebAppUtils { * Runs a certain function against the active RM. The function's first * argument is expected to be a string which contains the address of * the RM being tried. + * @param conf configuration. + * @param func throwing bi function. + * @param arg T arg. + * @param Generic T. + * @param Generic R. + * @throws Exception exception occurs. + * @return instance of Generic R. */ public static R execOnActiveRM(Configuration conf, ThrowingBiFunction func, T arg) throws Exception { @@ -389,7 +396,7 @@ public class WebAppUtils { * if url has scheme then it will be returned as it is else it will return * url with scheme. * @param schemePrefix eg. http:// or https:// - * @param url + * @param url url. * @return url with scheme */ public static String getURLWithScheme(String schemePrefix, String url) { @@ -428,7 +435,8 @@ public class WebAppUtils { /** * Choose which scheme (HTTP or HTTPS) to use when generating a URL based on * the configuration. - * + * + * @param conf configuration. * @return the scheme (HTTP / HTTPS) */ public static String getHttpSchemePrefix(Configuration conf) { @@ -438,6 +446,8 @@ public class WebAppUtils { /** * Load the SSL keystore / truststore into the HttpServer builder. 
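
WebAppUtils#execOnActiveRM, whose Javadoc gains the parameter descriptions above, runs a function against each configured RM until the active one responds and, per the existing description, passes the address of the RM being tried as the function's first argument. A hedged sketch using a lambda; the application id and the URL built from it are illustrative only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.webapp.util.WebAppUtils;

    final class ActiveRmExample {
      static String appUrlOnActiveRm(Configuration conf, String appId) throws Exception {
        // First lambda argument: web address of the RM currently being tried.
        return WebAppUtils.execOnActiveRM(conf,
            (rmWebAddress, id) -> rmWebAddress + "/ws/v1/cluster/apps/" + id,
            appId);
      }
    }
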
* @param builder the HttpServer2.Builder to populate with ssl config + * @return HttpServer2.Builder instance (passed in as the first parameter) + * after loading SSL stores */ public static HttpServer2.Builder loadSslConfiguration( HttpServer2.Builder builder) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebServiceClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebServiceClient.java index 3f61645461b..39cc2e361f1 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebServiceClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebServiceClient.java @@ -46,8 +46,8 @@ public class WebServiceClient { * Construct a new WebServiceClient based on the configuration. It will try to * load SSL certificates when it is specified. * - * @param conf - * @throws Exception + * @param conf configuration. + * @throws Exception exception occur. */ public static void initialize(Configuration conf) throws Exception { if (instance == null) { @@ -75,9 +75,9 @@ public class WebServiceClient { /** * Start SSL factory. * - * @param conf - * @return - * @throws Exception + * @param conf configuration. + * @return SSL factory. + * @throws Exception exception occur. */ private static SSLFactory createSSLFactory(Configuration conf) throws Exception { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/HtmlBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/HtmlBlock.java index 6c14b7532f5..c07a76617ee 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/HtmlBlock.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/HtmlBlock.java @@ -97,4 +97,28 @@ public abstract class HtmlBlock extends TextView implements SubView { return callerUGI; } + /** + * Initialize User Help Information Div. + * When the user does not configure the Yarn Federation function, prompt the user. + * + * @param html HTML page. + * @param isEnabled If federation is enabled. + */ + protected void initUserHelpInformationDiv(Block html, boolean isEnabled) { + if (!isEnabled) { + html.style(".alert {padding: 15px; margin-bottom: 20px; " + + " border: 1px solid transparent; border-radius: 4px;}"); + html.style(".alert-dismissable {padding-right: 35px;}"); + html.style(".alert-info {color: #856404;background-color: #fff3cd;border-color: #ffeeba;}"); + + Hamlet.DIV div = html.div("#div_id").$class("alert alert-dismissable alert-info"); + div.p().$style("color:red").__("Federation is not Enabled.").__() + .p().__() + .p().__("We can refer to the following documents to configure Yarn Federation. ").__() + .p().__() + .a("https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/Federation.html", + "Hadoop: YARN Federation"). 
+ __(); + } + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java index 8c8abc5a0bc..56d9f25710e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java @@ -65,12 +65,12 @@ public class JQueryUI extends HtmlBlock { @Override protected void render(Block html) { html.link(root_url("static/jquery/themes-1.9.1/base/jquery-ui.css")) - .link(root_url("static/dt-1.10.18/css/jquery.dataTables.css")) - .link(root_url("static/dt-1.10.18/css/jui-dt.css")) - .link(root_url("static/dt-1.10.18/css/custom_datatable.css")) + .link(root_url("static/dt-1.11.5/css/jquery.dataTables.css")) + .link(root_url("static/dt-1.11.5/css/jui-dt.css")) + .link(root_url("static/dt-1.11.5/css/custom_datatable.css")) .script(root_url("static/jquery/jquery-3.6.0.min.js")) .script(root_url("static/jquery/jquery-ui-1.13.2.custom.min.js")) - .script(root_url("static/dt-1.10.18/js/jquery.dataTables.min.js")) + .script(root_url("static/dt-1.11.5/js/jquery.dataTables.min.js")) .script(root_url("static/yarn.dt.plugins.js")) .script(root_url("static/dt-sorting/natural.js")) .style("#jsnotice { padding: 0.2em; text-align: center; }", diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/package-info.java index c1f92eb7075..a22c1b3601c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/package-info.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,7 @@ * See the License for the specific language governing permissions and * limitations under the License. 
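
The new HtmlBlock#initUserHelpInformationDiv above renders a dismissable help box when federation is disabled. A hypothetical block subclass showing where such a call would sit; only initUserHelpInformationDiv itself comes from this patch, and real blocks are normally constructed through the webapp framework's injection rather than a hand-written constructor:

    import org.apache.hadoop.yarn.webapp.view.HtmlBlock;

    public class FederationOverviewBlock extends HtmlBlock {   // hypothetical block
      private final boolean federationEnabled;  // assumed to come from configuration

      FederationOverviewBlock(boolean federationEnabled) {
        this.federationEnabled = federationEnabled;
      }

      @Override
      protected void render(Block html) {
        // Shows the "Federation is not Enabled" help box when federation is off.
        initUserHelpInformationDiv(html, federationEnabled);
        // ... remaining page content would be rendered here ...
      }
    }
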
*/ -@InterfaceAudience.LimitedPrivate({"YARN", "MapReduce"}) +@LimitedPrivate({"YARN", "MapReduce"}) package org.apache.hadoop.yarn.webapp.view; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_asc.png b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_asc.png deleted file mode 100644 index a56d0e21902..00000000000 Binary files a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_asc.png and /dev/null differ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_asc_disabled.png b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_asc_disabled.png deleted file mode 100644 index b7e621ef1c6..00000000000 Binary files a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_asc_disabled.png and /dev/null differ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_both.png b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_both.png deleted file mode 100644 index 839ac4bb5b0..00000000000 Binary files a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_both.png and /dev/null differ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_desc.png b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_desc.png deleted file mode 100644 index 90b295159df..00000000000 Binary files a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_desc.png and /dev/null differ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_desc_disabled.png b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_desc_disabled.png deleted file mode 100644 index 2409653dc94..00000000000 Binary files a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/sort_desc_disabled.png and /dev/null differ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js deleted file mode 100644 index 3a79ccc169c..00000000000 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js +++ /dev/null @@ -1,184 +0,0 @@ -/** -* Licensed to the Apache Software Foundation (ASF) under one -* or more contributor license agreements. See the NOTICE file -* distributed with this work for additional information -* regarding copyright ownership. The ASF licenses this file -* to you under the Apache License, Version 2.0 (the -* "License"); you may not use this file except in compliance -* with the License. 
You may obtain a copy of the License at -* -* http://www.apache.org/licenses/LICENSE-2.0 -* -* Unless required by applicable law or agreed to in writing, software -* distributed under the License is distributed on an "AS IS" BASIS, -* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -* See the License for the specific language governing permissions and -* limitations under the License. -*/ - -/*! - DataTables 1.10.18 - ©2008-2018 SpryMedia Ltd - datatables.net/license -*/ -(function(h){"function"===typeof define&&define.amd?define(["jquery"],function(E){return h(E,window,document)}):"object"===typeof exports?module.exports=function(E,H){E||(E=window);H||(H="undefined"!==typeof window?require("jquery"):require("jquery")(E));return h(H,E,E.document)}:h(jQuery,window,document)})(function(h,E,H,k){function Z(a){var b,c,d={};h.each(a,function(e){if((b=e.match(/^([^A-Z]+?)([A-Z])/))&&-1!=="a aa ai ao as b fn i m o s ".indexOf(b[1]+" "))c=e.replace(b[0],b[2].toLowerCase()), -d[c]=e,"o"===b[1]&&Z(a[e])});a._hungarianMap=d}function J(a,b,c){a._hungarianMap||Z(a);var d;h.each(b,function(e){d=a._hungarianMap[e];if(d!==k&&(c||b[d]===k))"o"===d.charAt(0)?(b[d]||(b[d]={}),h.extend(!0,b[d],b[e]),J(a[d],b[d],c)):b[d]=b[e]})}function Ca(a){var b=n.defaults.oLanguage,c=b.sDecimal;c&&Da(c);if(a){var d=a.sZeroRecords;!a.sEmptyTable&&(d&&"No data available in table"===b.sEmptyTable)&&F(a,a,"sZeroRecords","sEmptyTable");!a.sLoadingRecords&&(d&&"Loading..."===b.sLoadingRecords)&&F(a, -a,"sZeroRecords","sLoadingRecords");a.sInfoThousands&&(a.sThousands=a.sInfoThousands);(a=a.sDecimal)&&c!==a&&Da(a)}}function eb(a){A(a,"ordering","bSort");A(a,"orderMulti","bSortMulti");A(a,"orderClasses","bSortClasses");A(a,"orderCellsTop","bSortCellsTop");A(a,"order","aaSorting");A(a,"orderFixed","aaSortingFixed");A(a,"paging","bPaginate");A(a,"pagingType","sPaginationType");A(a,"pageLength","iDisplayLength");A(a,"searching","bFilter");"boolean"===typeof a.sScrollX&&(a.sScrollX=a.sScrollX?"100%": -"");"boolean"===typeof a.scrollX&&(a.scrollX=a.scrollX?"100%":"");if(a=a.aoSearchCols)for(var b=0,c=a.length;b").css({position:"fixed",top:0,left:-1*h(E).scrollLeft(),height:1,width:1, -overflow:"hidden"}).append(h("
    ").css({position:"absolute",top:1,left:1,width:100,overflow:"scroll"}).append(h("
    ").css({width:"100%",height:10}))).appendTo("body"),d=c.children(),e=d.children();b.barWidth=d[0].offsetWidth-d[0].clientWidth;b.bScrollOversize=100===e[0].offsetWidth&&100!==d[0].clientWidth;b.bScrollbarLeft=1!==Math.round(e.offset().left);b.bBounding=c[0].getBoundingClientRect().width?!0:!1;c.remove()}h.extend(a.oBrowser,n.__browser);a.oScroll.iBarWidth=n.__browser.barWidth} -function hb(a,b,c,d,e,f){var g,j=!1;c!==k&&(g=c,j=!0);for(;d!==e;)a.hasOwnProperty(d)&&(g=j?b(g,a[d],d,a):a[d],j=!0,d+=f);return g}function Ea(a,b){var c=n.defaults.column,d=a.aoColumns.length,c=h.extend({},n.models.oColumn,c,{nTh:b?b:H.createElement("th"),sTitle:c.sTitle?c.sTitle:b?b.innerHTML:"",aDataSort:c.aDataSort?c.aDataSort:[d],mData:c.mData?c.mData:d,idx:d});a.aoColumns.push(c);c=a.aoPreSearchCols;c[d]=h.extend({},n.models.oSearch,c[d]);ka(a,d,h(b).data())}function ka(a,b,c){var b=a.aoColumns[b], -d=a.oClasses,e=h(b.nTh);if(!b.sWidthOrig){b.sWidthOrig=e.attr("width")||null;var f=(e.attr("style")||"").match(/width:\s*(\d+[pxem%]+)/);f&&(b.sWidthOrig=f[1])}c!==k&&null!==c&&(fb(c),J(n.defaults.column,c),c.mDataProp!==k&&!c.mData&&(c.mData=c.mDataProp),c.sType&&(b._sManualType=c.sType),c.className&&!c.sClass&&(c.sClass=c.className),c.sClass&&e.addClass(c.sClass),h.extend(b,c),F(b,c,"sWidth","sWidthOrig"),c.iDataSort!==k&&(b.aDataSort=[c.iDataSort]),F(b,c,"aDataSort"));var g=b.mData,j=S(g),i=b.mRender? -S(b.mRender):null,c=function(a){return"string"===typeof a&&-1!==a.indexOf("@")};b._bAttrSrc=h.isPlainObject(g)&&(c(g.sort)||c(g.type)||c(g.filter));b._setter=null;b.fnGetData=function(a,b,c){var d=j(a,b,k,c);return i&&b?i(d,b,a,c):d};b.fnSetData=function(a,b,c){return N(g)(a,b,c)};"number"!==typeof g&&(a._rowReadObject=!0);a.oFeatures.bSort||(b.bSortable=!1,e.addClass(d.sSortableNone));a=-1!==h.inArray("asc",b.asSorting);c=-1!==h.inArray("desc",b.asSorting);!b.bSortable||!a&&!c?(b.sSortingClass=d.sSortableNone, -b.sSortingClassJUI=""):a&&!c?(b.sSortingClass=d.sSortableAsc,b.sSortingClassJUI=d.sSortJUIAscAllowed):!a&&c?(b.sSortingClass=d.sSortableDesc,b.sSortingClassJUI=d.sSortJUIDescAllowed):(b.sSortingClass=d.sSortable,b.sSortingClassJUI=d.sSortJUI)}function $(a){if(!1!==a.oFeatures.bAutoWidth){var b=a.aoColumns;Fa(a);for(var c=0,d=b.length;cq[f])d(l.length+q[f],m);else if("string"=== -typeof q[f]){j=0;for(i=l.length;jb&&a[e]--; -1!=d&&c===k&&a.splice(d, -1)}function da(a,b,c,d){var e=a.aoData[b],f,g=function(c,d){for(;c.childNodes.length;)c.removeChild(c.firstChild);c.innerHTML=B(a,b,d,"display")};if("dom"===c||(!c||"auto"===c)&&"dom"===e.src)e._aData=Ia(a,e,d,d===k?k:e._aData).data;else{var j=e.anCells;if(j)if(d!==k)g(j[d],d);else{c=0;for(f=j.length;c").appendTo(g));b=0;for(c=l.length;btr").attr("role","row");h(g).find(">tr>th, >tr>td").addClass(m.sHeaderTH);h(j).find(">tr>th, >tr>td").addClass(m.sFooterTH);if(null!==j){a=a.aoFooter[0];b=0;for(c=a.length;b=a.fnRecordsDisplay()?0:g,a.iInitDisplayStart=-1);var g=a._iDisplayStart,m=a.fnDisplayEnd();if(a.bDeferLoading)a.bDeferLoading=!1,a.iDraw++,C(a,!1);else if(j){if(!a.bDestroying&&!lb(a))return}else a.iDraw++;if(0!==i.length){f=j?a.aoData.length:m;for(j=j?0:g;j",{"class":e?d[0]:""}).append(h("",{valign:"top",colSpan:V(a),"class":a.oClasses.sRowEmpty}).html(c))[0];r(a,"aoHeaderCallback","header",[h(a.nTHead).children("tr")[0],Ka(a),g,m,i]);r(a,"aoFooterCallback","footer",[h(a.nTFoot).children("tr")[0],Ka(a),g,m,i]);d=h(a.nTBody);d.children().detach(); -d.append(h(b));r(a,"aoDrawCallback","draw",[a]);a.bSorted=!1;a.bFiltered=!1;a.bDrawing=!1}}function 
T(a,b){var c=a.oFeatures,d=c.bFilter;c.bSort&&mb(a);d?ga(a,a.oPreviousSearch):a.aiDisplay=a.aiDisplayMaster.slice();!0!==b&&(a._iDisplayStart=0);a._drawHold=b;P(a);a._drawHold=!1}function nb(a){var b=a.oClasses,c=h(a.nTable),c=h("
    ").insertBefore(c),d=a.oFeatures,e=h("
    ",{id:a.sTableId+"_wrapper","class":b.sWrapper+(a.nTFoot?"":" "+b.sNoFooter)});a.nHolding=c[0];a.nTableWrapper=e[0];a.nTableReinsertBefore= -a.nTable.nextSibling;for(var f=a.sDom.split(""),g,j,i,m,l,q,k=0;k")[0];m=f[k+1];if("'"==m||'"'==m){l="";for(q=2;f[k+q]!=m;)l+=f[k+q],q++;"H"==l?l=b.sJUIHeader:"F"==l&&(l=b.sJUIFooter);-1!=l.indexOf(".")?(m=l.split("."),i.id=m[0].substr(1,m[0].length-1),i.className=m[1]):"#"==l.charAt(0)?i.id=l.substr(1,l.length-1):i.className=l;k+=q}e.append(i);e=h(i)}else if(">"==j)e=e.parent();else if("l"==j&&d.bPaginate&&d.bLengthChange)g=ob(a);else if("f"==j&& -d.bFilter)g=pb(a);else if("r"==j&&d.bProcessing)g=qb(a);else if("t"==j)g=rb(a);else if("i"==j&&d.bInfo)g=sb(a);else if("p"==j&&d.bPaginate)g=tb(a);else if(0!==n.ext.feature.length){i=n.ext.feature;q=0;for(m=i.length;q',j=d.sSearch,j=j.match(/_INPUT_/)?j.replace("_INPUT_", -g):j+g,b=h("
    ",{id:!f.f?c+"_filter":null,"class":b.sFilter}).append(h("
    ").addClass(b.sLength);a.aanFeatures.l||(i[0].id=c+"_length");i.children().append(a.oLanguage.sLengthMenu.replace("_MENU_",e[0].outerHTML));h("select",i).val(a._iDisplayLength).on("change.DT",function(){Ra(a,h(this).val());P(a)});h(a.nTable).on("length.dt.DT",function(b,c,d){a=== -c&&h("select",i).val(d)});return i[0]}function tb(a){var b=a.sPaginationType,c=n.ext.pager[b],d="function"===typeof c,e=function(a){P(a)},b=h("
    ").addClass(a.oClasses.sPaging+b)[0],f=a.aanFeatures;d||c.fnInit(a,b,e);f.p||(b.id=a.sTableId+"_paginate",a.aoDrawCallback.push({fn:function(a){if(d){var b=a._iDisplayStart,i=a._iDisplayLength,h=a.fnRecordsDisplay(),l=-1===i,b=l?0:Math.ceil(b/i),i=l?1:Math.ceil(h/i),h=c(b,i),k,l=0;for(k=f.p.length;lf&&(d=0)):"first"==b?d=0:"previous"==b?(d=0<=e?d-e:0,0>d&&(d=0)):"next"==b?d+e",{id:!a.aanFeatures.r?a.sTableId+"_processing":null,"class":a.oClasses.sProcessing}).html(a.oLanguage.sProcessing).insertBefore(a.nTable)[0]} -function C(a,b){a.oFeatures.bProcessing&&h(a.aanFeatures.r).css("display",b?"block":"none");r(a,null,"processing",[a,b])}function rb(a){var b=h(a.nTable);b.attr("role","grid");var c=a.oScroll;if(""===c.sX&&""===c.sY)return a.nTable;var d=c.sX,e=c.sY,f=a.oClasses,g=b.children("caption"),j=g.length?g[0]._captionSide:null,i=h(b[0].cloneNode(!1)),m=h(b[0].cloneNode(!1)),l=b.children("tfoot");l.length||(l=null);i=h("
    ",{"class":f.sScrollWrapper}).append(h("
    ",{"class":f.sScrollHead}).css({overflow:"hidden", -position:"relative",border:0,width:d?!d?null:v(d):"100%"}).append(h("
    ",{"class":f.sScrollHeadInner}).css({"box-sizing":"content-box",width:c.sXInner||"100%"}).append(i.removeAttr("id").css("margin-left",0).append("top"===j?g:null).append(b.children("thead"))))).append(h("
    ",{"class":f.sScrollBody}).css({position:"relative",overflow:"auto",width:!d?null:v(d)}).append(b));l&&i.append(h("
    ",{"class":f.sScrollFoot}).css({overflow:"hidden",border:0,width:d?!d?null:v(d):"100%"}).append(h("
    ", -{"class":f.sScrollFootInner}).append(m.removeAttr("id").css("margin-left",0).append("bottom"===j?g:null).append(b.children("tfoot")))));var b=i.children(),k=b[0],f=b[1],t=l?b[2]:null;if(d)h(f).on("scroll.DT",function(){var a=this.scrollLeft;k.scrollLeft=a;l&&(t.scrollLeft=a)});h(f).css(e&&c.bCollapse?"max-height":"height",e);a.nScrollHead=k;a.nScrollBody=f;a.nScrollFoot=t;a.aoDrawCallback.push({fn:la,sName:"scrolling"});return i[0]}function la(a){var b=a.oScroll,c=b.sX,d=b.sXInner,e=b.sY,b=b.iBarWidth, -f=h(a.nScrollHead),g=f[0].style,j=f.children("div"),i=j[0].style,m=j.children("table"),j=a.nScrollBody,l=h(j),q=j.style,t=h(a.nScrollFoot).children("div"),n=t.children("table"),o=h(a.nTHead),p=h(a.nTable),s=p[0],r=s.style,u=a.nTFoot?h(a.nTFoot):null,x=a.oBrowser,U=x.bScrollOversize,Xb=D(a.aoColumns,"nTh"),Q,L,R,w,Ua=[],y=[],z=[],A=[],B,C=function(a){a=a.style;a.paddingTop="0";a.paddingBottom="0";a.borderTopWidth="0";a.borderBottomWidth="0";a.height=0};L=j.scrollHeight>j.clientHeight;if(a.scrollBarVis!== -L&&a.scrollBarVis!==k)a.scrollBarVis=L,$(a);else{a.scrollBarVis=L;p.children("thead, tfoot").remove();u&&(R=u.clone().prependTo(p),Q=u.find("tr"),R=R.find("tr"));w=o.clone().prependTo(p);o=o.find("tr");L=w.find("tr");w.find("th, td").removeAttr("tabindex");c||(q.width="100%",f[0].style.width="100%");h.each(ra(a,w),function(b,c){B=aa(a,b);c.style.width=a.aoColumns[B].sWidth});u&&I(function(a){a.style.width=""},R);f=p.outerWidth();if(""===c){r.width="100%";if(U&&(p.find("tbody").height()>j.offsetHeight|| -"scroll"==l.css("overflow-y")))r.width=v(p.outerWidth()-b);f=p.outerWidth()}else""!==d&&(r.width=v(d),f=p.outerWidth());I(C,L);I(function(a){z.push(a.innerHTML);Ua.push(v(h(a).css("width")))},L);I(function(a,b){if(h.inArray(a,Xb)!==-1)a.style.width=Ua[b]},o);h(L).height(0);u&&(I(C,R),I(function(a){A.push(a.innerHTML);y.push(v(h(a).css("width")))},R),I(function(a,b){a.style.width=y[b]},Q),h(R).height(0));I(function(a,b){a.innerHTML='
    '+z[b]+"
    ";a.childNodes[0].style.height= -"0";a.childNodes[0].style.overflow="hidden";a.style.width=Ua[b]},L);u&&I(function(a,b){a.innerHTML='
    '+A[b]+"
    ";a.childNodes[0].style.height="0";a.childNodes[0].style.overflow="hidden";a.style.width=y[b]},R);if(p.outerWidth()j.offsetHeight||"scroll"==l.css("overflow-y")?f+b:f;if(U&&(j.scrollHeight>j.offsetHeight||"scroll"==l.css("overflow-y")))r.width=v(Q-b);(""===c||""!==d)&&K(a,1,"Possible column misalignment",6)}else Q="100%";q.width=v(Q); -g.width=v(Q);u&&(a.nScrollFoot.style.width=v(Q));!e&&U&&(q.height=v(s.offsetHeight+b));c=p.outerWidth();m[0].style.width=v(c);i.width=v(c);d=p.height()>j.clientHeight||"scroll"==l.css("overflow-y");e="padding"+(x.bScrollbarLeft?"Left":"Right");i[e]=d?b+"px":"0px";u&&(n[0].style.width=v(c),t[0].style.width=v(c),t[0].style[e]=d?b+"px":"0px");p.children("colgroup").insertBefore(p.children("thead"));l.scroll();if((a.bSorted||a.bFiltered)&&!a._drawHold)j.scrollTop=0}}function I(a,b,c){for(var d=0,e=0, -f=b.length,g,j;e").appendTo(j.find("tbody"));j.find("thead, tfoot").remove();j.append(h(a.nTHead).clone()).append(h(a.nTFoot).clone());j.find("tfoot th, tfoot td").css("width","");m=ra(a,j.find("thead")[0]);for(n=0;n").css({width:o.sWidthOrig,margin:0,padding:0,border:0,height:1}));if(a.aoData.length)for(n=0;n").css(f||e?{position:"absolute",top:0,left:0,height:1,right:0,overflow:"hidden"}:{}).append(j).appendTo(k);f&&g?j.width(g):f?(j.css("width","auto"),j.removeAttr("width"),j.width()").css("width",v(a)).appendTo(b||H.body),d=c[0].offsetWidth;c.remove();return d}function Fb(a, -b){var c=Gb(a,b);if(0>c)return null;var d=a.aoData[c];return!d.nTr?h("").html(B(a,c,b,"display"))[0]:d.anCells[b]}function Gb(a,b){for(var c,d=-1,e=-1,f=0,g=a.aoData.length;fd&&(d=c.length,e=f);return e}function v(a){return null===a?"0px":"number"==typeof a?0>a?"0px":a+"px":a.match(/\d$/)?a+"px":a}function X(a){var b,c,d=[],e=a.aoColumns,f,g,j,i;b=a.aaSortingFixed;c=h.isPlainObject(b);var m=[];f=function(a){a.length&& -!h.isArray(a[0])?m.push(a):h.merge(m,a)};h.isArray(b)&&f(b);c&&b.pre&&f(b.pre);f(a.aaSorting);c&&b.post&&f(b.post);for(a=0;ae?1:0,0!==c)return"asc"===j.dir?c:-c;c=d[a];e=d[b];return ce?1:0}):i.sort(function(a,b){var c,g,j,i,k=h.length,n=f[a]._aSortData,o=f[b]._aSortData;for(j=0;jg?1:0})}a.bSorted=!0}function Ib(a){for(var b,c,d=a.aoColumns,e=X(a),a=a.oLanguage.oAria,f=0,g=d.length;f/g,"");var i=c.nTh;i.removeAttribute("aria-sort");c.bSortable&&(0e?e+1:3));e=0;for(f=d.length;ee?e+1:3))}a.aLastSort=d}function Hb(a,b){var c=a.aoColumns[b],d=n.ext.order[c.sSortDataType],e;d&&(e=d.call(a.oInstance,a,b,ba(a,b)));for(var f,g=n.ext.type.order[c.sType+"-pre"],j=0,i=a.aoData.length;j=f.length?[0,c[1]]:c)}));b.search!==k&&h.extend(a.oPreviousSearch,Bb(b.search));if(b.columns){d=0;for(e=b.columns.length;d=c&&(b=c-d);b-=b%d;if(-1===d||0>b)b=0;a._iDisplayStart=b}function Na(a,b){var c=a.renderer,d=n.ext.renderer[b];return h.isPlainObject(c)&&c[b]?d[c[b]]||d._:"string"=== -typeof c?d[c]||d._:d._}function y(a){return a.oFeatures.bServerSide?"ssp":a.ajax||a.sAjaxSource?"ajax":"dom"}function ia(a,b){var c=[],c=Kb.numbers_length,d=Math.floor(c/2);b<=c?c=Y(0,b):a<=d?(c=Y(0,c-2),c.push("ellipsis"),c.push(b-1)):(a>=b-1-d?c=Y(b-(c-2),b):(c=Y(a-d+2,a+d-1),c.push("ellipsis"),c.push(b-1)),c.splice(0,0,"ellipsis"),c.splice(0,0,0));c.DT_el="span";return c}function Da(a){h.each({num:function(b){return za(b,a)},"num-fmt":function(b){return za(b,a,Ya)},"html-num":function(b){return za(b, -a,Aa)},"html-num-fmt":function(b){return za(b,a,Aa,Ya)}},function(b,c){x.type.order[b+a+"-pre"]=c;b.match(/^html\-/)&&(x.type.search[b+a]=x.type.search.html)})}function Lb(a){return function(){var 
b=[ya(this[n.ext.iApiIndex])].concat(Array.prototype.slice.call(arguments));return n.ext.internal[a].apply(this,b)}}var n=function(a){this.$=function(a,b){return this.api(!0).$(a,b)};this._=function(a,b){return this.api(!0).rows(a,b).data()};this.api=function(a){return a?new s(ya(this[x.iApiIndex])):new s(this)}; -this.fnAddData=function(a,b){var c=this.api(!0),d=h.isArray(a)&&(h.isArray(a[0])||h.isPlainObject(a[0]))?c.rows.add(a):c.row.add(a);(b===k||b)&&c.draw();return d.flatten().toArray()};this.fnAdjustColumnSizing=function(a){var b=this.api(!0).columns.adjust(),c=b.settings()[0],d=c.oScroll;a===k||a?b.draw(!1):(""!==d.sX||""!==d.sY)&&la(c)};this.fnClearTable=function(a){var b=this.api(!0).clear();(a===k||a)&&b.draw()};this.fnClose=function(a){this.api(!0).row(a).child.hide()};this.fnDeleteRow=function(a, -b,c){var d=this.api(!0),a=d.rows(a),e=a.settings()[0],h=e.aoData[a[0][0]];a.remove();b&&b.call(this,e,h);(c===k||c)&&d.draw();return h};this.fnDestroy=function(a){this.api(!0).destroy(a)};this.fnDraw=function(a){this.api(!0).draw(a)};this.fnFilter=function(a,b,c,d,e,h){e=this.api(!0);null===b||b===k?e.search(a,c,d,h):e.column(b).search(a,c,d,h);e.draw()};this.fnGetData=function(a,b){var c=this.api(!0);if(a!==k){var d=a.nodeName?a.nodeName.toLowerCase():"";return b!==k||"td"==d||"th"==d?c.cell(a,b).data(): -c.row(a).data()||null}return c.data().toArray()};this.fnGetNodes=function(a){var b=this.api(!0);return a!==k?b.row(a).node():b.rows().nodes().flatten().toArray()};this.fnGetPosition=function(a){var b=this.api(!0),c=a.nodeName.toUpperCase();return"TR"==c?b.row(a).index():"TD"==c||"TH"==c?(a=b.cell(a).index(),[a.row,a.columnVisible,a.column]):null};this.fnIsOpen=function(a){return this.api(!0).row(a).child.isShown()};this.fnOpen=function(a,b,c){return this.api(!0).row(a).child(b,c).show().child()[0]}; -this.fnPageChange=function(a,b){var c=this.api(!0).page(a);(b===k||b)&&c.draw(!1)};this.fnSetColumnVis=function(a,b,c){a=this.api(!0).column(a).visible(b);(c===k||c)&&a.columns.adjust().draw()};this.fnSettings=function(){return ya(this[x.iApiIndex])};this.fnSort=function(a){this.api(!0).order(a).draw()};this.fnSortListener=function(a,b,c){this.api(!0).order.listener(a,b,c)};this.fnUpdate=function(a,b,c,d,e){var h=this.api(!0);c===k||null===c?h.row(b).data(a):h.cell(b,c).data(a);(e===k||e)&&h.columns.adjust(); -(d===k||d)&&h.draw();return 0};this.fnVersionCheck=x.fnVersionCheck;var b=this,c=a===k,d=this.length;c&&(a={});this.oApi=this.internal=x.internal;for(var e in n.ext.internal)e&&(this[e]=Lb(e));this.each(function(){var e={},g=1").appendTo(q)); -p.nTHead=b[0];b=q.children("tbody");b.length===0&&(b=h("").appendTo(q));p.nTBody=b[0];b=q.children("tfoot");if(b.length===0&&a.length>0&&(p.oScroll.sX!==""||p.oScroll.sY!==""))b=h("").appendTo(q);if(b.length===0||b.children().length===0)q.addClass(u.sNoFooter);else if(b.length>0){p.nTFoot=b[0];ea(p.aoFooter,p.nTFoot)}if(g.aaData)for(j=0;j/g,Zb=/^\d{2,4}[\.\/\-]\d{1,2}[\.\/\-]\d{1,2}([T ]{1}\d{1,2}[:\.]\d{2}([\.:]\d{2})?)?$/,$b=RegExp("(\\/|\\.|\\*|\\+|\\?|\\||\\(|\\)|\\[|\\]|\\{|\\}|\\\\|\\$|\\^|\\-)","g"),Ya=/[',$£€¥%\u2009\u202F\u20BD\u20a9\u20BArfkɃΞ]/gi,M=function(a){return!a||!0===a||"-"===a?!0:!1},Nb=function(a){var b=parseInt(a,10);return!isNaN(b)&& -isFinite(a)?b:null},Ob=function(a,b){Za[b]||(Za[b]=RegExp(Qa(b),"g"));return"string"===typeof a&&"."!==b?a.replace(/\./g,"").replace(Za[b],"."):a},$a=function(a,b,c){var d="string"===typeof 
a;if(M(a))return!0;b&&d&&(a=Ob(a,b));c&&d&&(a=a.replace(Ya,""));return!isNaN(parseFloat(a))&&isFinite(a)},Pb=function(a,b,c){return M(a)?!0:!(M(a)||"string"===typeof a)?null:$a(a.replace(Aa,""),b,c)?!0:null},D=function(a,b,c){var d=[],e=0,f=a.length;if(c!==k)for(;ea.length)){b=a.slice().sort();for(var c=b[0],d=1,e=b.length;d")[0],Wb=va.textContent!==k,Yb= -/<.*?>/g,Oa=n.util.throttle,Rb=[],w=Array.prototype,ac=function(a){var b,c,d=n.settings,e=h.map(d,function(a){return a.nTable});if(a){if(a.nTable&&a.oApi)return[a];if(a.nodeName&&"table"===a.nodeName.toLowerCase())return b=h.inArray(a,e),-1!==b?[d[b]]:null;if(a&&"function"===typeof a.settings)return a.settings().toArray();"string"===typeof a?c=h(a):a instanceof h&&(c=a)}else return[];if(c)return c.map(function(){b=h.inArray(this,e);return-1!==b?d[b]:null}).toArray()};s=function(a,b){if(!(this instanceof -s))return new s(a,b);var c=[],d=function(a){(a=ac(a))&&(c=c.concat(a))};if(h.isArray(a))for(var e=0,f=a.length;ea?new s(b[a],this[a]):null},filter:function(a){var b=[];if(w.filter)b=w.filter.call(this,a,this);else for(var c=0,d=this.length;c").addClass(b),h("td",c).addClass(b).html(a)[0].colSpan=V(d),e.push(c[0]))};f(a,b);c._details&&c._details.detach();c._details=h(e); -c._detailsShow&&c._details.insertAfter(c.nTr)}return this});o(["row().child.show()","row().child().show()"],function(){Tb(this,!0);return this});o(["row().child.hide()","row().child().hide()"],function(){Tb(this,!1);return this});o(["row().child.remove()","row().child().remove()"],function(){db(this);return this});o("row().child.isShown()",function(){var a=this.context;return a.length&&this.length?a[0].aoData[this[0]]._detailsShow||!1:!1});var bc=/^([^:]+):(name|visIdx|visible)$/,Ub=function(a,b, -c,d,e){for(var c=[],d=0,f=e.length;d=0?b:g.length+b];if(typeof a==="function"){var e=Ba(c,f);return h.map(g,function(b,f){return a(f,Ub(c,f,0,0,e),i[f])?f:null})}var k=typeof a==="string"?a.match(bc): -"";if(k)switch(k[2]){case "visIdx":case "visible":b=parseInt(k[1],10);if(b<0){var n=h.map(g,function(a,b){return a.bVisible?b:null});return[n[n.length+b]]}return[aa(c,b)];case "name":return h.map(j,function(a,b){return a===k[1]?b:null});default:return[]}if(a.nodeName&&a._DT_CellIndex)return[a._DT_CellIndex.column];b=h(i).filter(a).map(function(){return h.inArray(this,i)}).toArray();if(b.length||!a.nodeName)return b;b=h(a).closest("*[data-dt-column]");return b.length?[b.data("dt-column")]:[]},c,f)}, -1);c.selector.cols=a;c.selector.opts=b;return c});u("columns().header()","column().header()",function(){return this.iterator("column",function(a,b){return a.aoColumns[b].nTh},1)});u("columns().footer()","column().footer()",function(){return this.iterator("column",function(a,b){return a.aoColumns[b].nTf},1)});u("columns().data()","column().data()",function(){return this.iterator("column-rows",Ub,1)});u("columns().dataSrc()","column().dataSrc()",function(){return this.iterator("column",function(a,b){return a.aoColumns[b].mData}, -1)});u("columns().cache()","column().cache()",function(a){return this.iterator("column-rows",function(b,c,d,e,f){return ja(b.aoData,f,"search"===a?"_aFilterData":"_aSortData",c)},1)});u("columns().nodes()","column().nodes()",function(){return this.iterator("column-rows",function(a,b,c,d,e){return ja(a.aoData,e,"anCells",b)},1)});u("columns().visible()","column().visible()",function(a,b){var c=this.iterator("column",function(b,c){if(a===k)return b.aoColumns[c].bVisible;var f=b.aoColumns,g=f[c],j=b.aoData, -i,m,l;if(a!==k&&g.bVisible!==a){if(a){var 
n=h.inArray(!0,D(f,"bVisible"),c+1);i=0;for(m=j.length;id;return!0};n.isDataTable= -n.fnIsDataTable=function(a){var b=h(a).get(0),c=!1;if(a instanceof n.Api)return!0;h.each(n.settings,function(a,e){var f=e.nScrollHead?h("table",e.nScrollHead)[0]:null,g=e.nScrollFoot?h("table",e.nScrollFoot)[0]:null;if(e.nTable===b||f===b||g===b)c=!0});return c};n.tables=n.fnTables=function(a){var b=!1;h.isPlainObject(a)&&(b=a.api,a=a.visible);var c=h.map(n.settings,function(b){if(!a||a&&h(b.nTable).is(":visible"))return b.nTable});return b?new s(c):c};n.camelToHungarian=J;o("$()",function(a,b){var c= -this.rows(b).nodes(),c=h(c);return h([].concat(c.filter(a).toArray(),c.find(a).toArray()))});h.each(["on","one","off"],function(a,b){o(b+"()",function(){var a=Array.prototype.slice.call(arguments);a[0]=h.map(a[0].split(/\s/),function(a){return!a.match(/\.dt\b/)?a+".dt":a}).join(" ");var d=h(this.tables().nodes());d[b].apply(d,a);return this})});o("clear()",function(){return this.iterator("table",function(a){oa(a)})});o("settings()",function(){return new s(this.context,this.context)});o("init()",function(){var a= -this.context;return a.length?a[0].oInit:null});o("data()",function(){return this.iterator("table",function(a){return D(a.aoData,"_aData")}).flatten()});o("destroy()",function(a){a=a||!1;return this.iterator("table",function(b){var c=b.nTableWrapper.parentNode,d=b.oClasses,e=b.nTable,f=b.nTBody,g=b.nTHead,j=b.nTFoot,i=h(e),f=h(f),k=h(b.nTableWrapper),l=h.map(b.aoData,function(a){return a.nTr}),o;b.bDestroying=!0;r(b,"aoDestroyCallback","destroy",[b]);a||(new s(b)).columns().visible(!0);k.off(".DT").find(":not(tbody *)").off(".DT"); -h(E).off(".DT-"+b.sInstance);e!=g.parentNode&&(i.children("thead").detach(),i.append(g));j&&e!=j.parentNode&&(i.children("tfoot").detach(),i.append(j));b.aaSorting=[];b.aaSortingFixed=[];wa(b);h(l).removeClass(b.asStripeClasses.join(" "));h("th, td",g).removeClass(d.sSortable+" "+d.sSortableAsc+" "+d.sSortableDesc+" "+d.sSortableNone);f.children().detach();f.append(l);g=a?"remove":"detach";i[g]();k[g]();!a&&c&&(c.insertBefore(e,b.nTableReinsertBefore),i.css("width",b.sDestroyWidth).removeClass(d.sTable), -(o=b.asDestroyStripes.length)&&f.children().each(function(a){h(this).addClass(b.asDestroyStripes[a%o])}));c=h.inArray(b,n.settings);-1!==c&&n.settings.splice(c,1)})});h.each(["column","row","cell"],function(a,b){o(b+"s().every()",function(a){var d=this.selector.opts,e=this;return this.iterator(b,function(f,g,h,i,m){a.call(e[b](g,"cell"===b?h:d,"cell"===b?d:k),g,h,i,m)})})});o("i18n()",function(a,b,c){var d=this.context[0],a=S(a)(d.oLanguage);a===k&&(a=b);c!==k&&h.isPlainObject(a)&&(a=a[c]!==k?a[c]: -a._);return a.replace("%d",c)});n.version="1.10.18";n.settings=[];n.models={};n.models.oSearch={bCaseInsensitive:!0,sSearch:"",bRegex:!1,bSmart:!0};n.models.oRow={nTr:null,anCells:null,_aData:[],_aSortData:null,_aFilterData:null,_sFilterRow:null,_sRowStripe:"",src:null,idx:-1};n.models.oColumn={idx:null,aDataSort:null,asSorting:null,bSearchable:null,bSortable:null,bVisible:null,_sManualType:null,_bAttrSrc:!1,fnCreatedCell:null,fnGetData:null,fnSetData:null,mData:null,mRender:null,nTh:null,nTf:null, 
-sClass:null,sContentPadding:null,sDefaultContent:null,sName:null,sSortDataType:"std",sSortingClass:null,sSortingClassJUI:null,sTitle:null,sType:null,sWidth:null,sWidthOrig:null};n.defaults={aaData:null,aaSorting:[[0,"asc"]],aaSortingFixed:[],ajax:null,aLengthMenu:[10,25,50,100],aoColumns:null,aoColumnDefs:null,aoSearchCols:[],asStripeClasses:null,bAutoWidth:!0,bDeferRender:!1,bDestroy:!1,bFilter:!0,bInfo:!0,bLengthChange:!0,bPaginate:!0,bProcessing:!1,bRetrieve:!1,bScrollCollapse:!1,bServerSide:!1, -bSort:!0,bSortMulti:!0,bSortCellsTop:!1,bSortClasses:!0,bStateSave:!1,fnCreatedRow:null,fnDrawCallback:null,fnFooterCallback:null,fnFormatNumber:function(a){return a.toString().replace(/\B(?=(\d{3})+(?!\d))/g,this.oLanguage.sThousands)},fnHeaderCallback:null,fnInfoCallback:null,fnInitComplete:null,fnPreDrawCallback:null,fnRowCallback:null,fnServerData:null,fnServerParams:null,fnStateLoadCallback:function(a){try{return JSON.parse((-1===a.iStateDuration?sessionStorage:localStorage).getItem("DataTables_"+ -a.sInstance+"_"+location.pathname))}catch(b){}},fnStateLoadParams:null,fnStateLoaded:null,fnStateSaveCallback:function(a,b){try{(-1===a.iStateDuration?sessionStorage:localStorage).setItem("DataTables_"+a.sInstance+"_"+location.pathname,JSON.stringify(b))}catch(c){}},fnStateSaveParams:null,iStateDuration:7200,iDeferLoading:null,iDisplayLength:10,iDisplayStart:0,iTabIndex:0,oClasses:{},oLanguage:{oAria:{sSortAscending:": activate to sort column ascending",sSortDescending:": activate to sort column descending"}, -oPaginate:{sFirst:"First",sLast:"Last",sNext:"Next",sPrevious:"Previous"},sEmptyTable:"No data available in table",sInfo:"Showing _START_ to _END_ of _TOTAL_ entries",sInfoEmpty:"Showing 0 to 0 of 0 entries",sInfoFiltered:"(filtered from _MAX_ total entries)",sInfoPostFix:"",sDecimal:"",sThousands:",",sLengthMenu:"Show _MENU_ entries",sLoadingRecords:"Loading...",sProcessing:"Processing...",sSearch:"Search:",sSearchPlaceholder:"",sUrl:"",sZeroRecords:"No matching records found"},oSearch:h.extend({}, -n.models.oSearch),sAjaxDataProp:"data",sAjaxSource:null,sDom:"lfrtip",searchDelay:null,sPaginationType:"simple_numbers",sScrollX:"",sScrollXInner:"",sScrollY:"",sServerMethod:"GET",renderer:null,rowId:"DT_RowId"};Z(n.defaults);n.defaults.column={aDataSort:null,iDataSort:-1,asSorting:["asc","desc"],bSearchable:!0,bSortable:!0,bVisible:!0,fnCreatedCell:null,mData:null,mRender:null,sCellType:"td",sClass:"",sContentPadding:"",sDefaultContent:null,sName:"",sSortDataType:"std",sTitle:null,sType:null,sWidth:null}; -Z(n.defaults.column);n.models.oSettings={oFeatures:{bAutoWidth:null,bDeferRender:null,bFilter:null,bInfo:null,bLengthChange:null,bPaginate:null,bProcessing:null,bServerSide:null,bSort:null,bSortMulti:null,bSortClasses:null,bStateSave:null},oScroll:{bCollapse:null,iBarWidth:0,sX:null,sXInner:null,sY:null},oLanguage:{fnInfoCallback:null},oBrowser:{bScrollOversize:!1,bScrollbarLeft:!1,bBounding:!1,barWidth:0},ajax:null,aanFeatures:[],aoData:[],aiDisplay:[],aiDisplayMaster:[],aIds:{},aoColumns:[],aoHeader:[], 
-aoFooter:[],oPreviousSearch:{},aoPreSearchCols:[],aaSorting:null,aaSortingFixed:[],asStripeClasses:null,asDestroyStripes:[],sDestroyWidth:0,aoRowCallback:[],aoHeaderCallback:[],aoFooterCallback:[],aoDrawCallback:[],aoRowCreatedCallback:[],aoPreDrawCallback:[],aoInitComplete:[],aoStateSaveParams:[],aoStateLoadParams:[],aoStateLoaded:[],sTableId:"",nTable:null,nTHead:null,nTFoot:null,nTBody:null,nTableWrapper:null,bDeferLoading:!1,bInitialised:!1,aoOpenRows:[],sDom:null,searchDelay:null,sPaginationType:"two_button", -iStateDuration:0,aoStateSave:[],aoStateLoad:[],oSavedState:null,oLoadedState:null,sAjaxSource:null,sAjaxDataProp:null,bAjaxDataGet:!0,jqXHR:null,json:k,oAjaxData:k,fnServerData:null,aoServerParams:[],sServerMethod:null,fnFormatNumber:null,aLengthMenu:null,iDraw:0,bDrawing:!1,iDrawError:-1,_iDisplayLength:10,_iDisplayStart:0,_iRecordsTotal:0,_iRecordsDisplay:0,oClasses:{},bFiltered:!1,bSorted:!1,bSortCellsTop:null,oInit:null,aoDestroyCallback:[],fnRecordsTotal:function(){return"ssp"==y(this)?1*this._iRecordsTotal: -this.aiDisplayMaster.length},fnRecordsDisplay:function(){return"ssp"==y(this)?1*this._iRecordsDisplay:this.aiDisplay.length},fnDisplayEnd:function(){var a=this._iDisplayLength,b=this._iDisplayStart,c=b+a,d=this.aiDisplay.length,e=this.oFeatures,f=e.bPaginate;return e.bServerSide?!1===f||-1===a?b+d:Math.min(b+a,this._iRecordsDisplay):!f||c>d||-1===a?d:c},oInstance:null,sInstance:null,iTabIndex:0,nScrollHead:null,nScrollFoot:null,aLastSort:[],oPlugins:{},rowIdFn:null,rowId:null};n.ext=x={buttons:{}, -classes:{},builder:"-source-",errMode:"alert",feature:[],search:[],selector:{cell:[],column:[],row:[]},internal:{},legacy:{ajax:null},pager:{},renderer:{pageButton:{},header:{}},order:{},type:{detect:[],search:{},order:{}},_unique:0,fnVersionCheck:n.fnVersionCheck,iApiIndex:0,oJUIClasses:{},sVersion:n.version};h.extend(x,{afnFiltering:x.search,aTypes:x.type.detect,ofnSearch:x.type.search,oSort:x.type.order,afnSortData:x.order,aoFeatures:x.feature,oApi:x.internal,oStdClasses:x.classes,oPagination:x.pager}); -h.extend(n.ext.classes,{sTable:"dataTable",sNoFooter:"no-footer",sPageButton:"paginate_button",sPageButtonActive:"current",sPageButtonDisabled:"disabled",sStripeOdd:"odd",sStripeEven:"even",sRowEmpty:"dataTables_empty",sWrapper:"dataTables_wrapper",sFilter:"dataTables_filter",sInfo:"dataTables_info",sPaging:"dataTables_paginate paging_",sLength:"dataTables_length",sProcessing:"dataTables_processing",sSortAsc:"sorting_asc",sSortDesc:"sorting_desc",sSortable:"sorting",sSortableAsc:"sorting_asc_disabled", -sSortableDesc:"sorting_desc_disabled",sSortableNone:"sorting_disabled",sSortColumn:"sorting_",sFilterInput:"",sLengthSelect:"",sScrollWrapper:"dataTables_scroll",sScrollHead:"dataTables_scrollHead",sScrollHeadInner:"dataTables_scrollHeadInner",sScrollBody:"dataTables_scrollBody",sScrollFoot:"dataTables_scrollFoot",sScrollFootInner:"dataTables_scrollFootInner",sHeaderTH:"",sFooterTH:"",sSortJUIAsc:"",sSortJUIDesc:"",sSortJUI:"",sSortJUIAscAllowed:"",sSortJUIDescAllowed:"",sSortJUIWrapper:"",sSortIcon:"", -sJUIHeader:"",sJUIFooter:""});var 
Kb=n.ext.pager;h.extend(Kb,{simple:function(){return["previous","next"]},full:function(){return["first","previous","next","last"]},numbers:function(a,b){return[ia(a,b)]},simple_numbers:function(a,b){return["previous",ia(a,b),"next"]},full_numbers:function(a,b){return["first","previous",ia(a,b),"next","last"]},first_last_numbers:function(a,b){return["first",ia(a,b),"last"]},_numbers:ia,numbers_length:7});h.extend(!0,n.ext.renderer,{pageButton:{_:function(a,b,c,d,e, -f){var g=a.oClasses,j=a.oLanguage.oPaginate,i=a.oLanguage.oAria.paginate||{},m,l,n=0,o=function(b,d){var k,s,u,r,v=function(b){Ta(a,b.data.action,true)};k=0;for(s=d.length;k").appendTo(b);o(u,r)}else{m=null;l="";switch(r){case "ellipsis":b.append('');break;case "first":m=j.sFirst;l=r+(e>0?"":" "+g.sPageButtonDisabled);break;case "previous":m=j.sPrevious;l=r+(e>0?"":" "+g.sPageButtonDisabled);break;case "next":m= -j.sNext;l=r+(e",{"class":g.sPageButton+" "+l,"aria-controls":a.sTableId,"aria-label":i[r],"data-dt-idx":n,tabindex:a.iTabIndex,id:c===0&&typeof r==="string"?a.sTableId+"_"+r:null}).html(m).appendTo(b);Wa(u,{action:r},v);n++}}}},s;try{s=h(b).find(H.activeElement).data("dt-idx")}catch(u){}o(h(b).empty(),d);s!==k&&h(b).find("[data-dt-idx="+ -s+"]").focus()}}});h.extend(n.ext.type.detect,[function(a,b){var c=b.oLanguage.sDecimal;return $a(a,c)?"num"+c:null},function(a){if(a&&!(a instanceof Date)&&!Zb.test(a))return null;var b=Date.parse(a);return null!==b&&!isNaN(b)||M(a)?"date":null},function(a,b){var c=b.oLanguage.sDecimal;return $a(a,c,!0)?"num-fmt"+c:null},function(a,b){var c=b.oLanguage.sDecimal;return Pb(a,c)?"html-num"+c:null},function(a,b){var c=b.oLanguage.sDecimal;return Pb(a,c,!0)?"html-num-fmt"+c:null},function(a){return M(a)|| -"string"===typeof a&&-1!==a.indexOf("<")?"html":null}]);h.extend(n.ext.type.search,{html:function(a){return M(a)?a:"string"===typeof a?a.replace(Mb," ").replace(Aa,""):""},string:function(a){return M(a)?a:"string"===typeof a?a.replace(Mb," "):a}});var za=function(a,b,c,d){if(0!==a&&(!a||"-"===a))return-Infinity;b&&(a=Ob(a,b));a.replace&&(c&&(a=a.replace(c,"")),d&&(a=a.replace(d,"")));return 1*a};h.extend(x.type.order,{"date-pre":function(a){a=Date.parse(a);return isNaN(a)?-Infinity:a},"html-pre":function(a){return M(a)? -"":a.replace?a.replace(/<.*?>/g,"").toLowerCase():a+""},"string-pre":function(a){return M(a)?"":"string"===typeof a?a.toLowerCase():!a.toString?"":a.toString()},"string-asc":function(a,b){return ab?1:0},"string-desc":function(a,b){return ab?-1:0}});Da("");h.extend(!0,n.ext.renderer,{header:{_:function(a,b,c,d){h(a.nTable).on("order.dt.DT",function(e,f,g,h){if(a===f){e=c.idx;b.removeClass(c.sSortingClass+" "+d.sSortAsc+" "+d.sSortDesc).addClass(h[e]=="asc"?d.sSortAsc:h[e]=="desc"?d.sSortDesc: -c.sSortingClass)}})},jqueryui:function(a,b,c,d){h("
    ").addClass(d.sSortJUIWrapper).append(b.contents()).append(h("").addClass(d.sSortIcon+" "+c.sSortingClassJUI)).appendTo(b);h(a.nTable).on("order.dt.DT",function(e,f,g,h){if(a===f){e=c.idx;b.removeClass(d.sSortAsc+" "+d.sSortDesc).addClass(h[e]=="asc"?d.sSortAsc:h[e]=="desc"?d.sSortDesc:c.sSortingClass);b.find("span."+d.sSortIcon).removeClass(d.sSortJUIAsc+" "+d.sSortJUIDesc+" "+d.sSortJUI+" "+d.sSortJUIAscAllowed+" "+d.sSortJUIDescAllowed).addClass(h[e]== -"asc"?d.sSortJUIAsc:h[e]=="desc"?d.sSortJUIDesc:c.sSortingClassJUI)}})}}});var Vb=function(a){return"string"===typeof a?a.replace(//g,">").replace(/"/g,"""):a};n.render={number:function(a,b,c,d,e){return{display:function(f){if("number"!==typeof f&&"string"!==typeof f)return f;var g=0>f?"-":"",h=parseFloat(f);if(isNaN(h))return Vb(f);h=h.toFixed(c);f=Math.abs(h);h=parseInt(f,10);f=c?b+(f-h).toFixed(c).substring(2):"";return g+(d||"")+h.toString().replace(/\B(?=(\d{3})+(?!\d))/g, -a)+f+(e||"")}}},text:function(){return{display:Vb}}};h.extend(n.ext.internal,{_fnExternApiFunc:Lb,_fnBuildAjax:sa,_fnAjaxUpdate:lb,_fnAjaxParameters:ub,_fnAjaxUpdateDraw:vb,_fnAjaxDataSrc:ta,_fnAddColumn:Ea,_fnColumnOptions:ka,_fnAdjustColumnSizing:$,_fnVisibleToColumnIndex:aa,_fnColumnIndexToVisible:ba,_fnVisbleColumns:V,_fnGetColumns:ma,_fnColumnTypes:Ga,_fnApplyColumnDefs:ib,_fnHungarianMap:Z,_fnCamelToHungarian:J,_fnLanguageCompat:Ca,_fnBrowserDetect:gb,_fnAddData:O,_fnAddTr:na,_fnNodeToDataIndex:function(a, -b){return b._DT_RowIndex!==k?b._DT_RowIndex:null},_fnNodeToColumnIndex:function(a,b,c){return h.inArray(c,a.aoData[b].anCells)},_fnGetCellData:B,_fnSetCellData:jb,_fnSplitObjNotation:Ja,_fnGetObjectDataFn:S,_fnSetObjectDataFn:N,_fnGetDataMaster:Ka,_fnClearTable:oa,_fnDeleteIndex:pa,_fnInvalidate:da,_fnGetRowElements:Ia,_fnCreateTr:Ha,_fnBuildHead:kb,_fnDrawHead:fa,_fnDraw:P,_fnReDraw:T,_fnAddOptionsHtml:nb,_fnDetectHeader:ea,_fnGetUniqueThs:ra,_fnFeatureHtmlFilter:pb,_fnFilterComplete:ga,_fnFilterCustom:yb, -_fnFilterColumn:xb,_fnFilter:wb,_fnFilterCreateSearch:Pa,_fnEscapeRegex:Qa,_fnFilterData:zb,_fnFeatureHtmlInfo:sb,_fnUpdateInfo:Cb,_fnInfoMacros:Db,_fnInitialise:ha,_fnInitComplete:ua,_fnLengthChange:Ra,_fnFeatureHtmlLength:ob,_fnFeatureHtmlPaginate:tb,_fnPageChange:Ta,_fnFeatureHtmlProcessing:qb,_fnProcessingDisplay:C,_fnFeatureHtmlTable:rb,_fnScrollDraw:la,_fnApplyToChildren:I,_fnCalculateColumnWidths:Fa,_fnThrottle:Oa,_fnConvertToWidth:Eb,_fnGetWidestNode:Fb,_fnGetMaxLenString:Gb,_fnStringToCss:v, -_fnSortFlatten:X,_fnSort:mb,_fnSortAria:Ib,_fnSortListener:Va,_fnSortAttachListener:Ma,_fnSortingClasses:wa,_fnSortData:Hb,_fnSaveState:xa,_fnLoadState:Jb,_fnSettingsFromNode:ya,_fnLog:K,_fnMap:F,_fnBindAction:Wa,_fnCallbackReg:z,_fnCallbackFire:r,_fnLengthOverflow:Sa,_fnRenderer:Na,_fnDataSource:y,_fnRowAttributes:La,_fnExtend:Xa,_fnCalculateEnd:function(){}});h.fn.dataTable=n;n.$=h;h.fn.dataTableSettings=n.settings;h.fn.dataTableExt=n.ext;h.fn.DataTable=function(a){return h(this).dataTable(a).api()}; -h.each(n,function(a,b){h.fn.DataTable[a]=b});return h.fn.dataTable}); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/custom_datatable.css b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/custom_datatable.css similarity index 100% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/custom_datatable.css rename to 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/custom_datatable.css diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/demo_page.css b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/demo_page.css similarity index 100% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/demo_page.css rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/demo_page.css diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/demo_table.css b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/demo_table.css similarity index 100% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/demo_table.css rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/demo_table.css diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/jquery.dataTables.css b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/jquery.dataTables.css similarity index 85% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/jquery.dataTables.css rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/jquery.dataTables.css index 88bf2f14f70..0420d964e86 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/jquery.dataTables.css +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/jquery.dataTables.css @@ -39,7 +39,7 @@ table.dataTable tfoot th { table.dataTable thead th, table.dataTable thead td { padding: 10px 18px; - border-bottom: 1px solid #111; + border-bottom: 1px solid #111111; } table.dataTable thead th:active, table.dataTable thead td:active { @@ -48,15 +48,19 @@ table.dataTable thead td:active { table.dataTable tfoot th, table.dataTable tfoot td { padding: 10px 18px 6px 18px; - border-top: 1px solid #111; + border-top: 1px solid #111111; +} +table.dataTable thead .sorting, +table.dataTable thead .sorting_asc, +table.dataTable thead .sorting_desc { + cursor: pointer; + *cursor: hand; } table.dataTable thead .sorting, table.dataTable thead .sorting_asc, table.dataTable thead .sorting_desc, table.dataTable thead .sorting_asc_disabled, table.dataTable thead .sorting_desc_disabled { - cursor: pointer; - *cursor: hand; background-repeat: no-repeat; background-position: center right; } @@ -76,17 +80,17 @@ table.dataTable thead .sorting_desc_disabled { background-image: url("../images/sort_desc_disabled.png"); } table.dataTable tbody tr { - background-color: #ffffff; + background-color: white; } table.dataTable tbody tr.selected { - background-color: #B0BED9; + background-color: #b0bed9; } table.dataTable tbody th, table.dataTable tbody td { padding: 8px 10px; } table.dataTable.row-border tbody th, table.dataTable.row-border tbody td, table.dataTable.display tbody th, table.dataTable.display tbody td { - border-top: 1px solid #ddd; + border-top: 1px solid #dddddd; } table.dataTable.row-border tbody tr:first-child th, 
table.dataTable.row-border tbody tr:first-child td, table.dataTable.display tbody tr:first-child th, @@ -94,12 +98,12 @@ table.dataTable.display tbody tr:first-child td { border-top: none; } table.dataTable.cell-border tbody th, table.dataTable.cell-border tbody td { - border-top: 1px solid #ddd; - border-right: 1px solid #ddd; + border-top: 1px solid #dddddd; + border-right: 1px solid #dddddd; } table.dataTable.cell-border tbody tr th:first-child, table.dataTable.cell-border tbody tr td:first-child { - border-left: 1px solid #ddd; + border-left: 1px solid #dddddd; } table.dataTable.cell-border tbody tr:first-child th, table.dataTable.cell-border tbody tr:first-child td { @@ -109,27 +113,27 @@ table.dataTable.stripe tbody tr.odd, table.dataTable.display tbody tr.odd { background-color: #f9f9f9; } table.dataTable.stripe tbody tr.odd.selected, table.dataTable.display tbody tr.odd.selected { - background-color: #acbad4; + background-color: #abb9d3; } table.dataTable.hover tbody tr:hover, table.dataTable.display tbody tr:hover { - background-color: #f6f6f6; + background-color: whitesmoke; } table.dataTable.hover tbody tr:hover.selected, table.dataTable.display tbody tr:hover.selected { - background-color: #aab7d1; + background-color: #a9b7d1; } table.dataTable.order-column tbody tr > .sorting_1, table.dataTable.order-column tbody tr > .sorting_2, table.dataTable.order-column tbody tr > .sorting_3, table.dataTable.display tbody tr > .sorting_1, table.dataTable.display tbody tr > .sorting_2, table.dataTable.display tbody tr > .sorting_3 { - background-color: #fafafa; + background-color: #f9f9f9; } table.dataTable.order-column tbody tr.selected > .sorting_1, table.dataTable.order-column tbody tr.selected > .sorting_2, table.dataTable.order-column tbody tr.selected > .sorting_3, table.dataTable.display tbody tr.selected > .sorting_1, table.dataTable.display tbody tr.selected > .sorting_2, table.dataTable.display tbody tr.selected > .sorting_3 { - background-color: #acbad5; + background-color: #acbad4; } table.dataTable.display tbody tr.odd > .sorting_1, table.dataTable.order-column.stripe tbody tr.odd > .sorting_1 { background-color: #f1f1f1; @@ -141,28 +145,28 @@ table.dataTable.display tbody tr.odd > .sorting_3, table.dataTable.order-column. 
background-color: whitesmoke; } table.dataTable.display tbody tr.odd.selected > .sorting_1, table.dataTable.order-column.stripe tbody tr.odd.selected > .sorting_1 { - background-color: #a6b4cd; + background-color: #a6b3cd; } table.dataTable.display tbody tr.odd.selected > .sorting_2, table.dataTable.order-column.stripe tbody tr.odd.selected > .sorting_2 { - background-color: #a8b5cf; + background-color: #a7b5ce; } table.dataTable.display tbody tr.odd.selected > .sorting_3, table.dataTable.order-column.stripe tbody tr.odd.selected > .sorting_3 { - background-color: #a9b7d1; + background-color: #a9b6d0; } table.dataTable.display tbody tr.even > .sorting_1, table.dataTable.order-column.stripe tbody tr.even > .sorting_1 { - background-color: #fafafa; + background-color: #f9f9f9; } table.dataTable.display tbody tr.even > .sorting_2, table.dataTable.order-column.stripe tbody tr.even > .sorting_2 { - background-color: #fcfcfc; + background-color: #fbfbfb; } table.dataTable.display tbody tr.even > .sorting_3, table.dataTable.order-column.stripe tbody tr.even > .sorting_3 { - background-color: #fefefe; + background-color: #fdfdfd; } table.dataTable.display tbody tr.even.selected > .sorting_1, table.dataTable.order-column.stripe tbody tr.even.selected > .sorting_1 { - background-color: #acbad5; + background-color: #acbad4; } table.dataTable.display tbody tr.even.selected > .sorting_2, table.dataTable.order-column.stripe tbody tr.even.selected > .sorting_2 { - background-color: #aebcd6; + background-color: #adbbd6; } table.dataTable.display tbody tr.even.selected > .sorting_3, table.dataTable.order-column.stripe tbody tr.even.selected > .sorting_3 { background-color: #afbdd8; @@ -171,22 +175,22 @@ table.dataTable.display tbody tr:hover > .sorting_1, table.dataTable.order-colum background-color: #eaeaea; } table.dataTable.display tbody tr:hover > .sorting_2, table.dataTable.order-column.hover tbody tr:hover > .sorting_2 { - background-color: #ececec; + background-color: #ebebeb; } table.dataTable.display tbody tr:hover > .sorting_3, table.dataTable.order-column.hover tbody tr:hover > .sorting_3 { - background-color: #efefef; + background-color: #eeeeee; } table.dataTable.display tbody tr:hover.selected > .sorting_1, table.dataTable.order-column.hover tbody tr:hover.selected > .sorting_1 { - background-color: #a2aec7; + background-color: #a1aec7; } table.dataTable.display tbody tr:hover.selected > .sorting_2, table.dataTable.order-column.hover tbody tr:hover.selected > .sorting_2 { - background-color: #a3b0c9; + background-color: #a2afc8; } table.dataTable.display tbody tr:hover.selected > .sorting_3, table.dataTable.order-column.hover tbody tr:hover.selected > .sorting_3 { - background-color: #a5b2cb; + background-color: #a4b2cb; } table.dataTable.no-footer { - border-bottom: 1px solid #111; + border-bottom: 1px solid #111111; } table.dataTable.nowrap th, table.dataTable.nowrap td { white-space: nowrap; @@ -278,6 +282,7 @@ table.dataTable tbody td.dt-body-nowrap { table.dataTable, table.dataTable th, table.dataTable td { + -webkit-box-sizing: content-box; box-sizing: content-box; } @@ -320,25 +325,25 @@ table.dataTable td { text-decoration: none !important; cursor: pointer; *cursor: hand; - color: #333 !important; + color: #333333 !important; border: 1px solid transparent; border-radius: 2px; } .dataTables_wrapper .dataTables_paginate .paginate_button.current, .dataTables_wrapper .dataTables_paginate .paginate_button.current:hover { - color: #333 !important; + color: #333333 !important; border: 1px 
solid #979797; background-color: white; - background: -webkit-gradient(linear, left top, left bottom, color-stop(0%, white), color-stop(100%, #dcdcdc)); + background: -webkit-gradient(linear, left top, left bottom, color-stop(0%, white), color-stop(100%, gainsboro)); /* Chrome,Safari4+ */ - background: -webkit-linear-gradient(top, white 0%, #dcdcdc 100%); + background: -webkit-linear-gradient(top, white 0%, gainsboro 100%); /* Chrome10+,Safari5.1+ */ - background: -moz-linear-gradient(top, white 0%, #dcdcdc 100%); + background: -moz-linear-gradient(top, white 0%, gainsboro 100%); /* FF3.6+ */ - background: -ms-linear-gradient(top, white 0%, #dcdcdc 100%); + background: -ms-linear-gradient(top, white 0%, gainsboro 100%); /* IE10+ */ - background: -o-linear-gradient(top, white 0%, #dcdcdc 100%); + background: -o-linear-gradient(top, white 0%, gainsboro 100%); /* Opera 11.10+ */ - background: linear-gradient(to bottom, white 0%, #dcdcdc 100%); + background: linear-gradient(to bottom, white 0%, gainsboro 100%); /* W3C */ } .dataTables_wrapper .dataTables_paginate .paginate_button.disabled, .dataTables_wrapper .dataTables_paginate .paginate_button.disabled:hover, .dataTables_wrapper .dataTables_paginate .paginate_button.disabled:active { @@ -350,19 +355,19 @@ table.dataTable td { } .dataTables_wrapper .dataTables_paginate .paginate_button:hover { color: white !important; - border: 1px solid #111; + border: 1px solid #111111; background-color: #585858; - background: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #585858), color-stop(100%, #111)); + background: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #585858), color-stop(100%, #111111)); /* Chrome,Safari4+ */ - background: -webkit-linear-gradient(top, #585858 0%, #111 100%); + background: -webkit-linear-gradient(top, #585858 0%, #111111 100%); /* Chrome10+,Safari5.1+ */ - background: -moz-linear-gradient(top, #585858 0%, #111 100%); + background: -moz-linear-gradient(top, #585858 0%, #111111 100%); /* FF3.6+ */ - background: -ms-linear-gradient(top, #585858 0%, #111 100%); + background: -ms-linear-gradient(top, #585858 0%, #111111 100%); /* IE10+ */ - background: -o-linear-gradient(top, #585858 0%, #111 100%); + background: -o-linear-gradient(top, #585858 0%, #111111 100%); /* Opera 11.10+ */ - background: linear-gradient(to bottom, #585858 0%, #111 100%); + background: linear-gradient(to bottom, #585858 0%, #111111 100%); /* W3C */ } .dataTables_wrapper .dataTables_paginate .paginate_button:active { @@ -409,7 +414,7 @@ table.dataTable td { .dataTables_wrapper .dataTables_info, .dataTables_wrapper .dataTables_processing, .dataTables_wrapper .dataTables_paginate { - color: #333; + color: #333333; } .dataTables_wrapper .dataTables_scroll { clear: both; @@ -418,22 +423,21 @@ table.dataTable td { *margin-top: -1px; -webkit-overflow-scrolling: touch; } -.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody > table > thead > tr > th, .dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody > table > thead > tr > td, .dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody > table > tbody > tr > th, .dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody > table > tbody > tr > td { +.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody th, .dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody td { vertical-align: middle; } -.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody > table > thead > tr > th > div.dataTables_sizing, -.dataTables_wrapper 
.dataTables_scroll div.dataTables_scrollBody > table > thead > tr > td > div.dataTables_sizing, .dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody > table > tbody > tr > th > div.dataTables_sizing, -.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody > table > tbody > tr > td > div.dataTables_sizing { +.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody th > div.dataTables_sizing, +.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody td > div.dataTables_sizing { height: 0; overflow: hidden; margin: 0 !important; padding: 0 !important; } .dataTables_wrapper.no-footer .dataTables_scrollBody { - border-bottom: 1px solid #111; + border-bottom: 1px solid #111111; } -.dataTables_wrapper.no-footer div.dataTables_scrollHead table.dataTable, -.dataTables_wrapper.no-footer div.dataTables_scrollBody > table { +.dataTables_wrapper.no-footer div.dataTables_scrollHead table, +.dataTables_wrapper.no-footer div.dataTables_scrollBody table { border-bottom: none; } .dataTables_wrapper:after { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/jui-dt.css b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/jui-dt.css similarity index 100% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/css/jui-dt.css rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/jui-dt.css diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/Sorting icons.psd b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/Sorting icons.psd similarity index 100% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/Sorting icons.psd rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/Sorting icons.psd diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/back_disabled.jpg b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/back_disabled.jpg similarity index 100% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/back_disabled.jpg rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/back_disabled.jpg diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/back_enabled.jpg b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/back_enabled.jpg similarity index 100% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/back_enabled.jpg rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/back_enabled.jpg diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/favicon.ico b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/favicon.ico similarity index 100% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/favicon.ico 
rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/favicon.ico diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/forward_disabled.jpg b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/forward_disabled.jpg similarity index 100% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/forward_disabled.jpg rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/forward_disabled.jpg diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/forward_enabled.jpg b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/forward_enabled.jpg similarity index 100% rename from hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/images/forward_enabled.jpg rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/forward_enabled.jpg diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_asc.png b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_asc.png new file mode 100644 index 00000000000..e1ba61a8055 Binary files /dev/null and b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_asc.png differ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_asc_disabled.png b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_asc_disabled.png new file mode 100644 index 00000000000..fb11dfe24a6 Binary files /dev/null and b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_asc_disabled.png differ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_both.png b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_both.png new file mode 100644 index 00000000000..af5bc7c5a10 Binary files /dev/null and b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_both.png differ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_desc.png b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_desc.png new file mode 100644 index 00000000000..0e156deb5f6 Binary files /dev/null and b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_desc.png differ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_desc_disabled.png b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_desc_disabled.png new file mode 100644 index 00000000000..c9fdd8a1502 Binary files /dev/null and b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/images/sort_desc_disabled.png differ diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/js/jquery.dataTables.min.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/js/jquery.dataTables.min.js new file mode 100644 index 00000000000..d61c9c99af0 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/js/jquery.dataTables.min.js @@ -0,0 +1,187 @@ +/*! + Copyright 2008-2021 SpryMedia Ltd. + + This source file is free software, available under the following license: + MIT license - http://datatables.net/license + + This source file is distributed in the hope that it will be useful, but + WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY + or FITNESS FOR A PARTICULAR PURPOSE. See the license files for details. + + For details please refer to: http://www.datatables.net + DataTables 1.11.5 + ©2008-2021 SpryMedia Ltd - datatables.net/license +*/ +var $jscomp=$jscomp||{};$jscomp.scope={};$jscomp.findInternal=function(l,z,A){l instanceof String&&(l=String(l));for(var q=l.length,E=0;E").css({position:"fixed",top:0,left:-1*l(z).scrollLeft(),height:1, +width:1,overflow:"hidden"}).append(l("
    ").css({position:"absolute",top:1,left:1,width:100,overflow:"scroll"}).append(l("
    ").css({width:"100%",height:10}))).appendTo("body"),d=c.children(),e=d.children();b.barWidth=d[0].offsetWidth-d[0].clientWidth;b.bScrollOversize=100===e[0].offsetWidth&&100!==d[0].clientWidth;b.bScrollbarLeft=1!==Math.round(e.offset().left);b.bBounding=c[0].getBoundingClientRect().width?!0:!1;c.remove()}l.extend(a.oBrowser,u.__browser);a.oScroll.iBarWidth=u.__browser.barWidth} +function Cb(a,b,c,d,e,h){var f=!1;if(c!==q){var g=c;f=!0}for(;d!==e;)a.hasOwnProperty(d)&&(g=f?b(g,a[d],d,a):a[d],f=!0,d+=h);return g}function Ya(a,b){var c=u.defaults.column,d=a.aoColumns.length;c=l.extend({},u.models.oColumn,c,{nTh:b?b:A.createElement("th"),sTitle:c.sTitle?c.sTitle:b?b.innerHTML:"",aDataSort:c.aDataSort?c.aDataSort:[d],mData:c.mData?c.mData:d,idx:d});a.aoColumns.push(c);c=a.aoPreSearchCols;c[d]=l.extend({},u.models.oSearch,c[d]);Ga(a,d,l(b).data())}function Ga(a,b,c){b=a.aoColumns[b]; +var d=a.oClasses,e=l(b.nTh);if(!b.sWidthOrig){b.sWidthOrig=e.attr("width")||null;var h=(e.attr("style")||"").match(/width:\s*(\d+[pxem%]+)/);h&&(b.sWidthOrig=h[1])}c!==q&&null!==c&&(Ab(c),P(u.defaults.column,c,!0),c.mDataProp===q||c.mData||(c.mData=c.mDataProp),c.sType&&(b._sManualType=c.sType),c.className&&!c.sClass&&(c.sClass=c.className),c.sClass&&e.addClass(c.sClass),l.extend(b,c),X(b,c,"sWidth","sWidthOrig"),c.iDataSort!==q&&(b.aDataSort=[c.iDataSort]),X(b,c,"aDataSort"));var f=b.mData,g=na(f), +k=b.mRender?na(b.mRender):null;c=function(m){return"string"===typeof m&&-1!==m.indexOf("@")};b._bAttrSrc=l.isPlainObject(f)&&(c(f.sort)||c(f.type)||c(f.filter));b._setter=null;b.fnGetData=function(m,n,p){var t=g(m,n,q,p);return k&&n?k(t,n,m,p):t};b.fnSetData=function(m,n,p){return ha(f)(m,n,p)};"number"!==typeof f&&(a._rowReadObject=!0);a.oFeatures.bSort||(b.bSortable=!1,e.addClass(d.sSortableNone));a=-1!==l.inArray("asc",b.asSorting);c=-1!==l.inArray("desc",b.asSorting);b.bSortable&&(a||c)?a&&!c? 
+(b.sSortingClass=d.sSortableAsc,b.sSortingClassJUI=d.sSortJUIAscAllowed):!a&&c?(b.sSortingClass=d.sSortableDesc,b.sSortingClassJUI=d.sSortJUIDescAllowed):(b.sSortingClass=d.sSortable,b.sSortingClassJUI=d.sSortJUI):(b.sSortingClass=d.sSortableNone,b.sSortingClassJUI="")}function sa(a){if(!1!==a.oFeatures.bAutoWidth){var b=a.aoColumns;Za(a);for(var c=0,d=b.length;cm[n])d(g.length+m[n],k);else if("string"===typeof m[n]){var p=0;for(f=g.length;pb&&a[e]--; -1!=d&&c===q&&a.splice(d,1)}function va(a,b,c,d){var e=a.aoData[b],h,f=function(k,m){for(;k.childNodes.length;)k.removeChild(k.firstChild);k.innerHTML=T(a,b,m,"display")};if("dom"!==c&&(c&&"auto"!==c||"dom"!==e.src)){var g=e.anCells;if(g)if(d!==q)f(g[d],d);else for(c=0,h=g.length;c").appendTo(d));var k=0;for(b=g.length;k=a.fnRecordsDisplay()?0:d,a.iInitDisplayStart=-1);c=F(a,"aoPreDrawCallback","preDraw",[a]);if(-1!==l.inArray(!1,c))V(a,!1);else{c=[];var e=0;d=a.asStripeClasses;var h=d.length,f=a.oLanguage,g="ssp"==Q(a),k=a.aiDisplay,m=a._iDisplayStart,n=a.fnDisplayEnd();a.bDrawing=!0;if(a.bDeferLoading)a.bDeferLoading=!1,a.iDraw++,V(a,!1);else if(!g)a.iDraw++;else if(!a.bDestroying&&!b){Gb(a);return}if(0!==k.length)for(b=g?a.aoData.length:n,f=g?0:m;f",{"class":h?d[0]:""}).append(l("",{valign:"top",colSpan:oa(a),"class":a.oClasses.sRowEmpty}).html(e))[0];F(a,"aoHeaderCallback","header",[l(a.nTHead).children("tr")[0],db(a),m,n,k]);F(a,"aoFooterCallback", +"footer",[l(a.nTFoot).children("tr")[0],db(a),m,n,k]);d=l(a.nTBody);d.children().detach();d.append(l(c));F(a,"aoDrawCallback","draw",[a]);a.bSorted=!1;a.bFiltered=!1;a.bDrawing=!1}}function ka(a,b){var c=a.oFeatures,d=c.bFilter;c.bSort&&Hb(a);d?ya(a,a.oPreviousSearch):a.aiDisplay=a.aiDisplayMaster.slice();!0!==b&&(a._iDisplayStart=0);a._drawHold=b;ja(a);a._drawHold=!1}function Ib(a){var b=a.oClasses,c=l(a.nTable);c=l("
    ").insertBefore(c);var d=a.oFeatures,e=l("
    ",{id:a.sTableId+"_wrapper", +"class":b.sWrapper+(a.nTFoot?"":" "+b.sNoFooter)});a.nHolding=c[0];a.nTableWrapper=e[0];a.nTableReinsertBefore=a.nTable.nextSibling;for(var h=a.sDom.split(""),f,g,k,m,n,p,t=0;t")[0];m=h[t+1];if("'"==m||'"'==m){n="";for(p=2;h[t+p]!=m;)n+=h[t+p],p++;"H"==n?n=b.sJUIHeader:"F"==n&&(n=b.sJUIFooter);-1!=n.indexOf(".")?(m=n.split("."),k.id=m[0].substr(1,m[0].length-1),k.className=m[1]):"#"==n.charAt(0)?k.id=n.substr(1,n.length-1):k.className=n;t+=p}e.append(k); +e=l(k)}else if(">"==g)e=e.parent();else if("l"==g&&d.bPaginate&&d.bLengthChange)f=Jb(a);else if("f"==g&&d.bFilter)f=Kb(a);else if("r"==g&&d.bProcessing)f=Lb(a);else if("t"==g)f=Mb(a);else if("i"==g&&d.bInfo)f=Nb(a);else if("p"==g&&d.bPaginate)f=Ob(a);else if(0!==u.ext.feature.length)for(k=u.ext.feature,p=0,m=k.length;p',g=d.sSearch;g=g.match(/_INPUT_/)?g.replace("_INPUT_",f):g+f;b=l("
    ",{id:h.f?null:c+"_filter","class":b.sFilter}).append(l("
    ").addClass(b.sLength);a.aanFeatures.l||(k[0].id=c+"_length");k.children().append(a.oLanguage.sLengthMenu.replace("_MENU_",e[0].outerHTML));l("select",k).val(a._iDisplayLength).on("change.DT",function(m){kb(a,l(this).val());ja(a)});l(a.nTable).on("length.dt.DT",function(m,n,p){a===n&&l("select",k).val(p)});return k[0]}function Ob(a){var b=a.sPaginationType,c=u.ext.pager[b],d="function"===typeof c,e=function(f){ja(f)};b=l("
    ").addClass(a.oClasses.sPaging+b)[0]; +var h=a.aanFeatures;d||c.fnInit(a,b,e);h.p||(b.id=a.sTableId+"_paginate",a.aoDrawCallback.push({fn:function(f){if(d){var g=f._iDisplayStart,k=f._iDisplayLength,m=f.fnRecordsDisplay(),n=-1===k;g=n?0:Math.ceil(g/k);k=n?1:Math.ceil(m/k);m=c(g,k);var p;n=0;for(p=h.p.length;nh&& +(d=0)):"first"==b?d=0:"previous"==b?(d=0<=e?d-e:0,0>d&&(d=0)):"next"==b?d+e",{id:a.aanFeatures.r?null:a.sTableId+"_processing","class":a.oClasses.sProcessing}).html(a.oLanguage.sProcessing).insertBefore(a.nTable)[0]}function V(a,b){a.oFeatures.bProcessing&&l(a.aanFeatures.r).css("display",b?"block":"none"); +F(a,null,"processing",[a,b])}function Mb(a){var b=l(a.nTable),c=a.oScroll;if(""===c.sX&&""===c.sY)return a.nTable;var d=c.sX,e=c.sY,h=a.oClasses,f=b.children("caption"),g=f.length?f[0]._captionSide:null,k=l(b[0].cloneNode(!1)),m=l(b[0].cloneNode(!1)),n=b.children("tfoot");n.length||(n=null);k=l("
    ",{"class":h.sScrollWrapper}).append(l("
    ",{"class":h.sScrollHead}).css({overflow:"hidden",position:"relative",border:0,width:d?d?K(d):null:"100%"}).append(l("
    ",{"class":h.sScrollHeadInner}).css({"box-sizing":"content-box", +width:c.sXInner||"100%"}).append(k.removeAttr("id").css("margin-left",0).append("top"===g?f:null).append(b.children("thead"))))).append(l("
    ",{"class":h.sScrollBody}).css({position:"relative",overflow:"auto",width:d?K(d):null}).append(b));n&&k.append(l("
    ",{"class":h.sScrollFoot}).css({overflow:"hidden",border:0,width:d?d?K(d):null:"100%"}).append(l("
    ",{"class":h.sScrollFootInner}).append(m.removeAttr("id").css("margin-left",0).append("bottom"===g?f:null).append(b.children("tfoot"))))); +b=k.children();var p=b[0];h=b[1];var t=n?b[2]:null;if(d)l(h).on("scroll.DT",function(v){v=this.scrollLeft;p.scrollLeft=v;n&&(t.scrollLeft=v)});l(h).css("max-height",e);c.bCollapse||l(h).css("height",e);a.nScrollHead=p;a.nScrollBody=h;a.nScrollFoot=t;a.aoDrawCallback.push({fn:Ha,sName:"scrolling"});return k[0]}function Ha(a){var b=a.oScroll,c=b.sX,d=b.sXInner,e=b.sY;b=b.iBarWidth;var h=l(a.nScrollHead),f=h[0].style,g=h.children("div"),k=g[0].style,m=g.children("table");g=a.nScrollBody;var n=l(g),p= +g.style,t=l(a.nScrollFoot).children("div"),v=t.children("table"),x=l(a.nTHead),w=l(a.nTable),r=w[0],C=r.style,G=a.nTFoot?l(a.nTFoot):null,aa=a.oBrowser,L=aa.bScrollOversize;U(a.aoColumns,"nTh");var O=[],I=[],H=[],ea=[],Y,Ba=function(D){D=D.style;D.paddingTop="0";D.paddingBottom="0";D.borderTopWidth="0";D.borderBottomWidth="0";D.height=0};var fa=g.scrollHeight>g.clientHeight;if(a.scrollBarVis!==fa&&a.scrollBarVis!==q)a.scrollBarVis=fa,sa(a);else{a.scrollBarVis=fa;w.children("thead, tfoot").remove(); +if(G){var ba=G.clone().prependTo(w);var la=G.find("tr");ba=ba.find("tr")}var mb=x.clone().prependTo(w);x=x.find("tr");fa=mb.find("tr");mb.find("th, td").removeAttr("tabindex");c||(p.width="100%",h[0].style.width="100%");l.each(Na(a,mb),function(D,W){Y=ta(a,D);W.style.width=a.aoColumns[Y].sWidth});G&&ca(function(D){D.style.width=""},ba);h=w.outerWidth();""===c?(C.width="100%",L&&(w.find("tbody").height()>g.offsetHeight||"scroll"==n.css("overflow-y"))&&(C.width=K(w.outerWidth()-b)),h=w.outerWidth()): +""!==d&&(C.width=K(d),h=w.outerWidth());ca(Ba,fa);ca(function(D){var W=z.getComputedStyle?z.getComputedStyle(D).width:K(l(D).width());H.push(D.innerHTML);O.push(W)},fa);ca(function(D,W){D.style.width=O[W]},x);l(fa).css("height",0);G&&(ca(Ba,ba),ca(function(D){ea.push(D.innerHTML);I.push(K(l(D).css("width")))},ba),ca(function(D,W){D.style.width=I[W]},la),l(ba).height(0));ca(function(D,W){D.innerHTML='
    '+H[W]+"
    ";D.childNodes[0].style.height="0";D.childNodes[0].style.overflow= +"hidden";D.style.width=O[W]},fa);G&&ca(function(D,W){D.innerHTML='
    '+ea[W]+"
    ";D.childNodes[0].style.height="0";D.childNodes[0].style.overflow="hidden";D.style.width=I[W]},ba);Math.round(w.outerWidth())g.offsetHeight||"scroll"==n.css("overflow-y")?h+b:h,L&&(g.scrollHeight>g.offsetHeight||"scroll"==n.css("overflow-y"))&&(C.width=K(la-b)),""!==c&&""===d||da(a,1,"Possible column misalignment",6)):la="100%";p.width=K(la);f.width=K(la); +G&&(a.nScrollFoot.style.width=K(la));!e&&L&&(p.height=K(r.offsetHeight+b));c=w.outerWidth();m[0].style.width=K(c);k.width=K(c);d=w.height()>g.clientHeight||"scroll"==n.css("overflow-y");e="padding"+(aa.bScrollbarLeft?"Left":"Right");k[e]=d?b+"px":"0px";G&&(v[0].style.width=K(c),t[0].style.width=K(c),t[0].style[e]=d?b+"px":"0px");w.children("colgroup").insertBefore(w.children("thead"));n.trigger("scroll");!a.bSorted&&!a.bFiltered||a._drawHold||(g.scrollTop=0)}}function ca(a,b,c){for(var d=0,e=0,h= +b.length,f,g;e").appendTo(g.find("tbody"));g.find("thead, tfoot").remove();g.append(l(a.nTHead).clone()).append(l(a.nTFoot).clone());g.find("tfoot th, tfoot td").css("width","");m=Na(a,g.find("thead")[0]);for(v=0;v").css({width:w.sWidthOrig,margin:0,padding:0,border:0,height:1}));if(a.aoData.length)for(v=0;v").css(h||e?{position:"absolute",top:0,left:0,height:1,right:0,overflow:"hidden"}:{}).append(g).appendTo(p);h&&f?g.width(f):h?(g.css("width","auto"),g.removeAttr("width"),g.width()").css("width",K(a)).appendTo(b||A.body);b=a[0].offsetWidth;a.remove();return b}function $b(a,b){var c= +ac(a,b);if(0>c)return null;var d=a.aoData[c];return d.nTr?d.anCells[b]:l("").html(T(a,c,b,"display"))[0]}function ac(a,b){for(var c,d=-1,e=-1,h=0,f=a.aoData.length;hd&&(d=c.length,e=h);return e}function K(a){return null===a?"0px":"number"==typeof a?0>a?"0px":a+"px":a.match(/\d$/)?a+"px":a}function pa(a){var b=[],c=a.aoColumns;var d=a.aaSortingFixed;var e=l.isPlainObject(d);var h=[];var f=function(n){n.length&& +!Array.isArray(n[0])?h.push(n):l.merge(h,n)};Array.isArray(d)&&f(d);e&&d.pre&&f(d.pre);f(a.aaSorting);e&&d.post&&f(d.post);for(a=0;aG?1:0;if(0!==C)return"asc"===r.dir?C:-C}C=c[n];G=c[p];return CG?1:0}):f.sort(function(n,p){var t,v=g.length,x=e[n]._aSortData,w=e[p]._aSortData;for(t=0;tG?1:0})}a.bSorted=!0}function cc(a){var b=a.aoColumns,c=pa(a);a=a.oLanguage.oAria;for(var d=0,e=b.length;d/g,"");var k=h.nTh;k.removeAttribute("aria-sort");h.bSortable&&(0e?e+1:3))}e= +0;for(h=d.length;ee?e+1:3))}a.aLastSort=d}function bc(a,b){var c=a.aoColumns[b],d=u.ext.order[c.sSortDataType],e;d&&(e=d.call(a.oInstance,a,b,ua(a,b)));for(var h,f=u.ext.type.order[c.sType+"-pre"],g=0,k=a.aoData.length;g=e.length?[0,m[1]]:m)}));b.search!==q&&l.extend(a.oPreviousSearch,Wb(b.search));if(b.columns){f=0;for(d=b.columns.length;f=c&&(b=c-d);b-=b%d;if(-1===d||0>b)b=0;a._iDisplayStart=b}function gb(a,b){a=a.renderer;var c=u.ext.renderer[b];return l.isPlainObject(a)&&a[b]?c[a[b]]||c._:"string"===typeof a?c[a]||c._:c._}function Q(a){return a.oFeatures.bServerSide? 
+"ssp":a.ajax||a.sAjaxSource?"ajax":"dom"}function Da(a,b){var c=ec.numbers_length,d=Math.floor(c/2);b<=c?a=qa(0,b):a<=d?(a=qa(0,c-2),a.push("ellipsis"),a.push(b-1)):(a>=b-1-d?a=qa(b-(c-2),b):(a=qa(a-d+2,a+d-1),a.push("ellipsis"),a.push(b-1)),a.splice(0,0,"ellipsis"),a.splice(0,0,0));a.DT_el="span";return a}function Xa(a){l.each({num:function(b){return Ua(b,a)},"num-fmt":function(b){return Ua(b,a,rb)},"html-num":function(b){return Ua(b,a,Va)},"html-num-fmt":function(b){return Ua(b,a,Va,rb)}},function(b, +c){M.type.order[b+a+"-pre"]=c;b.match(/^html\-/)&&(M.type.search[b+a]=M.type.search.html)})}function fc(a){return function(){var b=[Ta(this[u.ext.iApiIndex])].concat(Array.prototype.slice.call(arguments));return u.ext.internal[a].apply(this,b)}}var u=function(a,b){if(this instanceof u)return l(a).DataTable(b);b=a;this.$=function(f,g){return this.api(!0).$(f,g)};this._=function(f,g){return this.api(!0).rows(f,g).data()};this.api=function(f){return f?new B(Ta(this[M.iApiIndex])):new B(this)};this.fnAddData= +function(f,g){var k=this.api(!0);f=Array.isArray(f)&&(Array.isArray(f[0])||l.isPlainObject(f[0]))?k.rows.add(f):k.row.add(f);(g===q||g)&&k.draw();return f.flatten().toArray()};this.fnAdjustColumnSizing=function(f){var g=this.api(!0).columns.adjust(),k=g.settings()[0],m=k.oScroll;f===q||f?g.draw(!1):(""!==m.sX||""!==m.sY)&&Ha(k)};this.fnClearTable=function(f){var g=this.api(!0).clear();(f===q||f)&&g.draw()};this.fnClose=function(f){this.api(!0).row(f).child.hide()};this.fnDeleteRow=function(f,g,k){var m= +this.api(!0);f=m.rows(f);var n=f.settings()[0],p=n.aoData[f[0][0]];f.remove();g&&g.call(this,n,p);(k===q||k)&&m.draw();return p};this.fnDestroy=function(f){this.api(!0).destroy(f)};this.fnDraw=function(f){this.api(!0).draw(f)};this.fnFilter=function(f,g,k,m,n,p){n=this.api(!0);null===g||g===q?n.search(f,k,m,p):n.column(g).search(f,k,m,p);n.draw()};this.fnGetData=function(f,g){var k=this.api(!0);if(f!==q){var m=f.nodeName?f.nodeName.toLowerCase():"";return g!==q||"td"==m||"th"==m?k.cell(f,g).data(): +k.row(f).data()||null}return k.data().toArray()};this.fnGetNodes=function(f){var g=this.api(!0);return f!==q?g.row(f).node():g.rows().nodes().flatten().toArray()};this.fnGetPosition=function(f){var g=this.api(!0),k=f.nodeName.toUpperCase();return"TR"==k?g.row(f).index():"TD"==k||"TH"==k?(f=g.cell(f).index(),[f.row,f.columnVisible,f.column]):null};this.fnIsOpen=function(f){return this.api(!0).row(f).child.isShown()};this.fnOpen=function(f,g,k){return this.api(!0).row(f).child(g,k).show().child()[0]}; +this.fnPageChange=function(f,g){f=this.api(!0).page(f);(g===q||g)&&f.draw(!1)};this.fnSetColumnVis=function(f,g,k){f=this.api(!0).column(f).visible(g);(k===q||k)&&f.columns.adjust().draw()};this.fnSettings=function(){return Ta(this[M.iApiIndex])};this.fnSort=function(f){this.api(!0).order(f).draw()};this.fnSortListener=function(f,g,k){this.api(!0).order.listener(f,g,k)};this.fnUpdate=function(f,g,k,m,n){var p=this.api(!0);k===q||null===k?p.row(g).data(f):p.cell(g,k).data(f);(n===q||n)&&p.columns.adjust(); +(m===q||m)&&p.draw();return 0};this.fnVersionCheck=M.fnVersionCheck;var c=this,d=b===q,e=this.length;d&&(b={});this.oApi=this.internal=M.internal;for(var h in u.ext.internal)h&&(this[h]=fc(h));this.each(function(){var f={},g=1").appendTo(t));r.nTHead=H[0];var 
ea=t.children("tbody");0===ea.length&&(ea=l("").insertAfter(H));r.nTBody=ea[0];H=t.children("tfoot");0===H.length&&0").appendTo(t));0===H.length||0===H.children().length?t.addClass(C.sNoFooter):0/g,vc=/^\d{2,4}[\.\/\-]\d{1,2}[\.\/\-]\d{1,2}([T ]{1}\d{1,2}[:\.]\d{2}([\.:]\d{2})?)?$/,wc=/(\/|\.|\*|\+|\?|\||\(|\)|\[|\]|\{|\}|\\|\$|\^|\-)/g,rb=/['\u00A0,$£€¥%\u2009\u202F\u20BD\u20a9\u20BArfkɃΞ]/gi,Z=function(a){return a&&!0!==a&&"-"!==a?!1:!0},hc= +function(a){var b=parseInt(a,10);return!isNaN(b)&&isFinite(a)?b:null},ic=function(a,b){sb[b]||(sb[b]=new RegExp(jb(b),"g"));return"string"===typeof a&&"."!==b?a.replace(/\./g,"").replace(sb[b],"."):a},tb=function(a,b,c){var d="string"===typeof a;if(Z(a))return!0;b&&d&&(a=ic(a,b));c&&d&&(a=a.replace(rb,""));return!isNaN(parseFloat(a))&&isFinite(a)},jc=function(a,b,c){return Z(a)?!0:Z(a)||"string"===typeof a?tb(a.replace(Va,""),b,c)?!0:null:null},U=function(a,b,c){var d=[],e=0,h=a.length;if(c!==q)for(;e< +h;e++)a[e]&&a[e][b]&&d.push(a[e][b][c]);else for(;ea.length)){var b=a.slice().sort();for(var c=b[0],d=1,e=b.length;d< +e;d++){if(b[d]===c){b=!1;break a}c=b[d]}}b=!0}if(b)return a.slice();b=[];e=a.length;var h,f=0;d=0;a:for(;d")[0],tc=Qa.textContent!==q,uc=/<.*?>/g,hb=u.util.throttle,nc=[],N=Array.prototype,xc=function(a){var b,c=u.settings,d=l.map(c,function(h,f){return h.nTable});if(a){if(a.nTable&&a.oApi)return[a];if(a.nodeName&&"table"===a.nodeName.toLowerCase()){var e= +l.inArray(a,d);return-1!==e?[c[e]]:null}if(a&&"function"===typeof a.settings)return a.settings().toArray();"string"===typeof a?b=l(a):a instanceof l&&(b=a)}else return[];if(b)return b.map(function(h){e=l.inArray(this,d);return-1!==e?c[e]:null}).toArray()};var B=function(a,b){if(!(this instanceof B))return new B(a,b);var c=[],d=function(f){(f=xc(f))&&c.push.apply(c,f)};if(Array.isArray(a))for(var e=0,h=a.length;ea?new B(b[a],this[a]):null},filter:function(a){var b=[];if(N.filter)b=N.filter.call(this,a,this);else for(var c=0,d=this.length;c").addClass(g),l("td",k).addClass(g).html(f)[0].colSpan=oa(a),e.push(k[0]))};h(c,d);b._details&&b._details.detach();b._details=l(e);b._detailsShow&&b._details.insertAfter(b.nTr)},qc=u.util.throttle(function(a){Ca(a[0])},500),xb=function(a,b){var c=a.context;c.length&&(a=c[0].aoData[b!== +q?b:a[0]])&&a._details&&(a._details.remove(),a._detailsShow=q,a._details=q,l(a.nTr).removeClass("dt-hasChild"),qc(c))},rc=function(a,b){var c=a.context;if(c.length&&a.length){var d=c[0].aoData[a[0]];d._details&&((d._detailsShow=b)?(d._details.insertAfter(d.nTr),l(d.nTr).addClass("dt-hasChild")):(d._details.detach(),l(d.nTr).removeClass("dt-hasChild")),F(c[0],null,"childRow",[b,a.row(a[0])]),Ac(c[0]),qc(c))}},Ac=function(a){var b=new B(a),c=a.aoData;b.off("draw.dt.DT_details column-visibility.dt.DT_details destroy.dt.DT_details"); +0g){var n=l.map(d,function(p,t){return p.bVisible?t:null});return[n[n.length+g]]}return[ta(a,g)];case "name":return l.map(e,function(p,t){return p===m[1]?t:null});default:return[]}if(f.nodeName&&f._DT_CellIndex)return[f._DT_CellIndex.column];g=l(h).filter(f).map(function(){return l.inArray(this,h)}).toArray();if(g.length||!f.nodeName)return g;g=l(f).closest("*[data-dt-column]");return g.length?[g.data("dt-column")]:[]},a,c)}; +y("columns()",function(a,b){a===q?a="":l.isPlainObject(a)&&(b=a,a="");b=vb(b);var c=this.iterator("table",function(d){return Cc(d,a,b)},1);c.selector.cols=a;c.selector.opts=b;return c});J("columns().header()","column().header()",function(a,b){return this.iterator("column",function(c,d){return 
c.aoColumns[d].nTh},1)});J("columns().footer()","column().footer()",function(a,b){return this.iterator("column",function(c,d){return c.aoColumns[d].nTf},1)});J("columns().data()","column().data()",function(){return this.iterator("column-rows", +sc,1)});J("columns().dataSrc()","column().dataSrc()",function(){return this.iterator("column",function(a,b){return a.aoColumns[b].mData},1)});J("columns().cache()","column().cache()",function(a){return this.iterator("column-rows",function(b,c,d,e,h){return Ea(b.aoData,h,"search"===a?"_aFilterData":"_aSortData",c)},1)});J("columns().nodes()","column().nodes()",function(){return this.iterator("column-rows",function(a,b,c,d,e){return Ea(a.aoData,e,"anCells",b)},1)});J("columns().visible()","column().visible()", +function(a,b){var c=this,d=this.iterator("column",function(e,h){if(a===q)return e.aoColumns[h].bVisible;var f=e.aoColumns,g=f[h],k=e.aoData,m;if(a!==q&&g.bVisible!==a){if(a){var n=l.inArray(!0,U(f,"bVisible"),h+1);f=0;for(m=k.length;fd;return!0};u.isDataTable=u.fnIsDataTable=function(a){var b=l(a).get(0),c=!1;if(a instanceof u.Api)return!0;l.each(u.settings,function(d,e){d=e.nScrollHead?l("table",e.nScrollHead)[0]:null;var h=e.nScrollFoot?l("table",e.nScrollFoot)[0]:null;if(e.nTable===b||d===b||h===b)c=!0});return c};u.tables=u.fnTables=function(a){var b= +!1;l.isPlainObject(a)&&(b=a.api,a=a.visible);var c=l.map(u.settings,function(d){if(!a||a&&l(d.nTable).is(":visible"))return d.nTable});return b?new B(c):c};u.camelToHungarian=P;y("$()",function(a,b){b=this.rows(b).nodes();b=l(b);return l([].concat(b.filter(a).toArray(),b.find(a).toArray()))});l.each(["on","one","off"],function(a,b){y(b+"()",function(){var c=Array.prototype.slice.call(arguments);c[0]=l.map(c[0].split(/\s/),function(e){return e.match(/\.dt\b/)?e:e+".dt"}).join(" ");var d=l(this.tables().nodes()); +d[b].apply(d,c);return this})});y("clear()",function(){return this.iterator("table",function(a){Ka(a)})});y("settings()",function(){return new B(this.context,this.context)});y("init()",function(){var a=this.context;return a.length?a[0].oInit:null});y("data()",function(){return this.iterator("table",function(a){return U(a.aoData,"_aData")}).flatten()});y("destroy()",function(a){a=a||!1;return this.iterator("table",function(b){var c=b.nTableWrapper.parentNode,d=b.oClasses,e=b.nTable,h=b.nTBody,f=b.nTHead, +g=b.nTFoot,k=l(e);h=l(h);var m=l(b.nTableWrapper),n=l.map(b.aoData,function(t){return t.nTr}),p;b.bDestroying=!0;F(b,"aoDestroyCallback","destroy",[b]);a||(new B(b)).columns().visible(!0);m.off(".DT").find(":not(tbody *)").off(".DT");l(z).off(".DT-"+b.sInstance);e!=f.parentNode&&(k.children("thead").detach(),k.append(f));g&&e!=g.parentNode&&(k.children("tfoot").detach(),k.append(g));b.aaSorting=[];b.aaSortingFixed=[];Sa(b);l(n).removeClass(b.asStripeClasses.join(" "));l("th, td",f).removeClass(d.sSortable+ +" "+d.sSortableAsc+" "+d.sSortableDesc+" "+d.sSortableNone);h.children().detach();h.append(n);f=a?"remove":"detach";k[f]();m[f]();!a&&c&&(c.insertBefore(e,b.nTableReinsertBefore),k.css("width",b.sDestroyWidth).removeClass(d.sTable),(p=b.asDestroyStripes.length)&&h.children().each(function(t){l(this).addClass(b.asDestroyStripes[t%p])}));c=l.inArray(b,u.settings);-1!==c&&u.settings.splice(c,1)})});l.each(["column","row","cell"],function(a,b){y(b+"s().every()",function(c){var d=this.selector.opts,e= +this;return this.iterator(b,function(h,f,g,k,m){c.call(e[b](f,"cell"===b?g:d,"cell"===b?d:q),f,g,k,m)})})});y("i18n()",function(a,b,c){var 
d=this.context[0];a=na(a)(d.oLanguage);a===q&&(a=b);c!==q&&l.isPlainObject(a)&&(a=a[c]!==q?a[c]:a._);return a.replace("%d",c)});u.version="1.11.5";u.settings=[];u.models={};u.models.oSearch={bCaseInsensitive:!0,sSearch:"",bRegex:!1,bSmart:!0,"return":!1};u.models.oRow={nTr:null,anCells:null,_aData:[],_aSortData:null,_aFilterData:null,_sFilterRow:null,_sRowStripe:"", +src:null,idx:-1};u.models.oColumn={idx:null,aDataSort:null,asSorting:null,bSearchable:null,bSortable:null,bVisible:null,_sManualType:null,_bAttrSrc:!1,fnCreatedCell:null,fnGetData:null,fnSetData:null,mData:null,mRender:null,nTh:null,nTf:null,sClass:null,sContentPadding:null,sDefaultContent:null,sName:null,sSortDataType:"std",sSortingClass:null,sSortingClassJUI:null,sTitle:null,sType:null,sWidth:null,sWidthOrig:null};u.defaults={aaData:null,aaSorting:[[0,"asc"]],aaSortingFixed:[],ajax:null,aLengthMenu:[10, +25,50,100],aoColumns:null,aoColumnDefs:null,aoSearchCols:[],asStripeClasses:null,bAutoWidth:!0,bDeferRender:!1,bDestroy:!1,bFilter:!0,bInfo:!0,bLengthChange:!0,bPaginate:!0,bProcessing:!1,bRetrieve:!1,bScrollCollapse:!1,bServerSide:!1,bSort:!0,bSortMulti:!0,bSortCellsTop:!1,bSortClasses:!0,bStateSave:!1,fnCreatedRow:null,fnDrawCallback:null,fnFooterCallback:null,fnFormatNumber:function(a){return a.toString().replace(/\B(?=(\d{3})+(?!\d))/g,this.oLanguage.sThousands)},fnHeaderCallback:null,fnInfoCallback:null, +fnInitComplete:null,fnPreDrawCallback:null,fnRowCallback:null,fnServerData:null,fnServerParams:null,fnStateLoadCallback:function(a){try{return JSON.parse((-1===a.iStateDuration?sessionStorage:localStorage).getItem("DataTables_"+a.sInstance+"_"+location.pathname))}catch(b){return{}}},fnStateLoadParams:null,fnStateLoaded:null,fnStateSaveCallback:function(a,b){try{(-1===a.iStateDuration?sessionStorage:localStorage).setItem("DataTables_"+a.sInstance+"_"+location.pathname,JSON.stringify(b))}catch(c){}}, +fnStateSaveParams:null,iStateDuration:7200,iDeferLoading:null,iDisplayLength:10,iDisplayStart:0,iTabIndex:0,oClasses:{},oLanguage:{oAria:{sSortAscending:": activate to sort column ascending",sSortDescending:": activate to sort column descending"},oPaginate:{sFirst:"First",sLast:"Last",sNext:"Next",sPrevious:"Previous"},sEmptyTable:"No data available in table",sInfo:"Showing _START_ to _END_ of _TOTAL_ entries",sInfoEmpty:"Showing 0 to 0 of 0 entries",sInfoFiltered:"(filtered from _MAX_ total entries)", +sInfoPostFix:"",sDecimal:"",sThousands:",",sLengthMenu:"Show _MENU_ entries",sLoadingRecords:"Loading...",sProcessing:"Processing...",sSearch:"Search:",sSearchPlaceholder:"",sUrl:"",sZeroRecords:"No matching records found"},oSearch:l.extend({},u.models.oSearch),sAjaxDataProp:"data",sAjaxSource:null,sDom:"lfrtip",searchDelay:null,sPaginationType:"simple_numbers",sScrollX:"",sScrollXInner:"",sScrollY:"",sServerMethod:"GET",renderer:null,rowId:"DT_RowId"};E(u.defaults);u.defaults.column={aDataSort:null, +iDataSort:-1,asSorting:["asc","desc"],bSearchable:!0,bSortable:!0,bVisible:!0,fnCreatedCell:null,mData:null,mRender:null,sCellType:"td",sClass:"",sContentPadding:"",sDefaultContent:null,sName:"",sSortDataType:"std",sTitle:null,sType:null,sWidth:null};E(u.defaults.column);u.models.oSettings={oFeatures:{bAutoWidth:null,bDeferRender:null,bFilter:null,bInfo:null,bLengthChange:null,bPaginate:null,bProcessing:null,bServerSide:null,bSort:null,bSortMulti:null,bSortClasses:null,bStateSave:null},oScroll:{bCollapse:null, 
+iBarWidth:0,sX:null,sXInner:null,sY:null},oLanguage:{fnInfoCallback:null},oBrowser:{bScrollOversize:!1,bScrollbarLeft:!1,bBounding:!1,barWidth:0},ajax:null,aanFeatures:[],aoData:[],aiDisplay:[],aiDisplayMaster:[],aIds:{},aoColumns:[],aoHeader:[],aoFooter:[],oPreviousSearch:{},aoPreSearchCols:[],aaSorting:null,aaSortingFixed:[],asStripeClasses:null,asDestroyStripes:[],sDestroyWidth:0,aoRowCallback:[],aoHeaderCallback:[],aoFooterCallback:[],aoDrawCallback:[],aoRowCreatedCallback:[],aoPreDrawCallback:[], +aoInitComplete:[],aoStateSaveParams:[],aoStateLoadParams:[],aoStateLoaded:[],sTableId:"",nTable:null,nTHead:null,nTFoot:null,nTBody:null,nTableWrapper:null,bDeferLoading:!1,bInitialised:!1,aoOpenRows:[],sDom:null,searchDelay:null,sPaginationType:"two_button",iStateDuration:0,aoStateSave:[],aoStateLoad:[],oSavedState:null,oLoadedState:null,sAjaxSource:null,sAjaxDataProp:null,jqXHR:null,json:q,oAjaxData:q,fnServerData:null,aoServerParams:[],sServerMethod:null,fnFormatNumber:null,aLengthMenu:null,iDraw:0, +bDrawing:!1,iDrawError:-1,_iDisplayLength:10,_iDisplayStart:0,_iRecordsTotal:0,_iRecordsDisplay:0,oClasses:{},bFiltered:!1,bSorted:!1,bSortCellsTop:null,oInit:null,aoDestroyCallback:[],fnRecordsTotal:function(){return"ssp"==Q(this)?1*this._iRecordsTotal:this.aiDisplayMaster.length},fnRecordsDisplay:function(){return"ssp"==Q(this)?1*this._iRecordsDisplay:this.aiDisplay.length},fnDisplayEnd:function(){var a=this._iDisplayLength,b=this._iDisplayStart,c=b+a,d=this.aiDisplay.length,e=this.oFeatures,h= +e.bPaginate;return e.bServerSide?!1===h||-1===a?b+d:Math.min(b+a,this._iRecordsDisplay):!h||c>d||-1===a?d:c},oInstance:null,sInstance:null,iTabIndex:0,nScrollHead:null,nScrollFoot:null,aLastSort:[],oPlugins:{},rowIdFn:null,rowId:null};u.ext=M={buttons:{},classes:{},builder:"-source-",errMode:"alert",feature:[],search:[],selector:{cell:[],column:[],row:[]},internal:{},legacy:{ajax:null},pager:{},renderer:{pageButton:{},header:{}},order:{},type:{detect:[],search:{},order:{}},_unique:0,fnVersionCheck:u.fnVersionCheck, +iApiIndex:0,oJUIClasses:{},sVersion:u.version};l.extend(M,{afnFiltering:M.search,aTypes:M.type.detect,ofnSearch:M.type.search,oSort:M.type.order,afnSortData:M.order,aoFeatures:M.feature,oApi:M.internal,oStdClasses:M.classes,oPagination:M.pager});l.extend(u.ext.classes,{sTable:"dataTable",sNoFooter:"no-footer",sPageButton:"paginate_button",sPageButtonActive:"current",sPageButtonDisabled:"disabled",sStripeOdd:"odd",sStripeEven:"even",sRowEmpty:"dataTables_empty",sWrapper:"dataTables_wrapper",sFilter:"dataTables_filter", +sInfo:"dataTables_info",sPaging:"dataTables_paginate paging_",sLength:"dataTables_length",sProcessing:"dataTables_processing",sSortAsc:"sorting_asc",sSortDesc:"sorting_desc",sSortable:"sorting",sSortableAsc:"sorting_desc_disabled",sSortableDesc:"sorting_asc_disabled",sSortableNone:"sorting_disabled",sSortColumn:"sorting_",sFilterInput:"",sLengthSelect:"",sScrollWrapper:"dataTables_scroll",sScrollHead:"dataTables_scrollHead",sScrollHeadInner:"dataTables_scrollHeadInner",sScrollBody:"dataTables_scrollBody", +sScrollFoot:"dataTables_scrollFoot",sScrollFootInner:"dataTables_scrollFootInner",sHeaderTH:"",sFooterTH:"",sSortJUIAsc:"",sSortJUIDesc:"",sSortJUI:"",sSortJUIAscAllowed:"",sSortJUIDescAllowed:"",sSortJUIWrapper:"",sSortIcon:"",sJUIHeader:"",sJUIFooter:""});var 
ec=u.ext.pager;l.extend(ec,{simple:function(a,b){return["previous","next"]},full:function(a,b){return["first","previous","next","last"]},numbers:function(a,b){return[Da(a,b)]},simple_numbers:function(a,b){return["previous",Da(a,b),"next"]}, +full_numbers:function(a,b){return["first","previous",Da(a,b),"next","last"]},first_last_numbers:function(a,b){return["first",Da(a,b),"last"]},_numbers:Da,numbers_length:7});l.extend(!0,u.ext.renderer,{pageButton:{_:function(a,b,c,d,e,h){var f=a.oClasses,g=a.oLanguage.oPaginate,k=a.oLanguage.oAria.paginate||{},m,n,p=0,t=function(x,w){var r,C=f.sPageButtonDisabled,G=function(I){Ra(a,I.data.action,!0)};var aa=0;for(r=w.length;aa").appendTo(x); +t(O,L)}else{m=null;n=L;O=a.iTabIndex;switch(L){case "ellipsis":x.append('');break;case "first":m=g.sFirst;0===e&&(O=-1,n+=" "+C);break;case "previous":m=g.sPrevious;0===e&&(O=-1,n+=" "+C);break;case "next":m=g.sNext;if(0===h||e===h-1)O=-1,n+=" "+C;break;case "last":m=g.sLast;if(0===h||e===h-1)O=-1,n+=" "+C;break;default:m=a.fnFormatNumber(L+1),n=e===L?f.sPageButtonActive:""}null!==m&&(O=l("",{"class":f.sPageButton+" "+n,"aria-controls":a.sTableId,"aria-label":k[L], +"data-dt-idx":p,tabindex:O,id:0===c&&"string"===typeof L?a.sTableId+"_"+L:null}).html(m).appendTo(x),ob(O,{action:L},G),p++)}}};try{var v=l(b).find(A.activeElement).data("dt-idx")}catch(x){}t(l(b).empty(),d);v!==q&&l(b).find("[data-dt-idx="+v+"]").trigger("focus")}}});l.extend(u.ext.type.detect,[function(a,b){b=b.oLanguage.sDecimal;return tb(a,b)?"num"+b:null},function(a,b){if(a&&!(a instanceof Date)&&!vc.test(a))return null;b=Date.parse(a);return null!==b&&!isNaN(b)||Z(a)?"date":null},function(a, +b){b=b.oLanguage.sDecimal;return tb(a,b,!0)?"num-fmt"+b:null},function(a,b){b=b.oLanguage.sDecimal;return jc(a,b)?"html-num"+b:null},function(a,b){b=b.oLanguage.sDecimal;return jc(a,b,!0)?"html-num-fmt"+b:null},function(a,b){return Z(a)||"string"===typeof a&&-1!==a.indexOf("<")?"html":null}]);l.extend(u.ext.type.search,{html:function(a){return Z(a)?a:"string"===typeof a?a.replace(gc," ").replace(Va,""):""},string:function(a){return Z(a)?a:"string"===typeof a?a.replace(gc," "):a}});var Ua=function(a, +b,c,d){if(0!==a&&(!a||"-"===a))return-Infinity;b&&(a=ic(a,b));a.replace&&(c&&(a=a.replace(c,"")),d&&(a=a.replace(d,"")));return 1*a};l.extend(M.type.order,{"date-pre":function(a){a=Date.parse(a);return isNaN(a)?-Infinity:a},"html-pre":function(a){return Z(a)?"":a.replace?a.replace(/<.*?>/g,"").toLowerCase():a+""},"string-pre":function(a){return Z(a)?"":"string"===typeof a?a.toLowerCase():a.toString?a.toString():""},"string-asc":function(a,b){return ab?1:0},"string-desc":function(a,b){return a< +b?1:a>b?-1:0}});Xa("");l.extend(!0,u.ext.renderer,{header:{_:function(a,b,c,d){l(a.nTable).on("order.dt.DT",function(e,h,f,g){a===h&&(e=c.idx,b.removeClass(d.sSortAsc+" "+d.sSortDesc).addClass("asc"==g[e]?d.sSortAsc:"desc"==g[e]?d.sSortDesc:c.sSortingClass))})},jqueryui:function(a,b,c,d){l("
    ").addClass(d.sSortJUIWrapper).append(b.contents()).append(l("").addClass(d.sSortIcon+" "+c.sSortingClassJUI)).appendTo(b);l(a.nTable).on("order.dt.DT",function(e,h,f,g){a===h&&(e=c.idx,b.removeClass(d.sSortAsc+ +" "+d.sSortDesc).addClass("asc"==g[e]?d.sSortAsc:"desc"==g[e]?d.sSortDesc:c.sSortingClass),b.find("span."+d.sSortIcon).removeClass(d.sSortJUIAsc+" "+d.sSortJUIDesc+" "+d.sSortJUI+" "+d.sSortJUIAscAllowed+" "+d.sSortJUIDescAllowed).addClass("asc"==g[e]?d.sSortJUIAsc:"desc"==g[e]?d.sSortJUIDesc:c.sSortingClassJUI))})}}});var yb=function(a){Array.isArray(a)&&(a=a.join(","));return"string"===typeof a?a.replace(/&/g,"&").replace(//g,">").replace(/"/g,"""):a};u.render= +{number:function(a,b,c,d,e){return{display:function(h){if("number"!==typeof h&&"string"!==typeof h)return h;var f=0>h?"-":"",g=parseFloat(h);if(isNaN(g))return yb(h);g=g.toFixed(c);h=Math.abs(g);g=parseInt(h,10);h=c?b+(h-g).toFixed(c).substring(2):"";0===g&&0===parseFloat(h)&&(f="");return f+(d||"")+g.toString().replace(/\B(?=(\d{3})+(?!\d))/g,a)+h+(e||"")}}},text:function(){return{display:yb,filter:yb}}};l.extend(u.ext.internal,{_fnExternApiFunc:fc,_fnBuildAjax:Oa,_fnAjaxUpdate:Gb,_fnAjaxParameters:Pb, +_fnAjaxUpdateDraw:Qb,_fnAjaxDataSrc:za,_fnAddColumn:Ya,_fnColumnOptions:Ga,_fnAdjustColumnSizing:sa,_fnVisibleToColumnIndex:ta,_fnColumnIndexToVisible:ua,_fnVisbleColumns:oa,_fnGetColumns:Ia,_fnColumnTypes:$a,_fnApplyColumnDefs:Db,_fnHungarianMap:E,_fnCamelToHungarian:P,_fnLanguageCompat:ma,_fnBrowserDetect:Bb,_fnAddData:ia,_fnAddTr:Ja,_fnNodeToDataIndex:function(a,b){return b._DT_RowIndex!==q?b._DT_RowIndex:null},_fnNodeToColumnIndex:function(a,b,c){return l.inArray(c,a.aoData[b].anCells)},_fnGetCellData:T, +_fnSetCellData:Eb,_fnSplitObjNotation:cb,_fnGetObjectDataFn:na,_fnSetObjectDataFn:ha,_fnGetDataMaster:db,_fnClearTable:Ka,_fnDeleteIndex:La,_fnInvalidate:va,_fnGetRowElements:bb,_fnCreateTr:ab,_fnBuildHead:Fb,_fnDrawHead:xa,_fnDraw:ja,_fnReDraw:ka,_fnAddOptionsHtml:Ib,_fnDetectHeader:wa,_fnGetUniqueThs:Na,_fnFeatureHtmlFilter:Kb,_fnFilterComplete:ya,_fnFilterCustom:Tb,_fnFilterColumn:Sb,_fnFilter:Rb,_fnFilterCreateSearch:ib,_fnEscapeRegex:jb,_fnFilterData:Ub,_fnFeatureHtmlInfo:Nb,_fnUpdateInfo:Xb, +_fnInfoMacros:Yb,_fnInitialise:Aa,_fnInitComplete:Pa,_fnLengthChange:kb,_fnFeatureHtmlLength:Jb,_fnFeatureHtmlPaginate:Ob,_fnPageChange:Ra,_fnFeatureHtmlProcessing:Lb,_fnProcessingDisplay:V,_fnFeatureHtmlTable:Mb,_fnScrollDraw:Ha,_fnApplyToChildren:ca,_fnCalculateColumnWidths:Za,_fnThrottle:hb,_fnConvertToWidth:Zb,_fnGetWidestNode:$b,_fnGetMaxLenString:ac,_fnStringToCss:K,_fnSortFlatten:pa,_fnSort:Hb,_fnSortAria:cc,_fnSortListener:nb,_fnSortAttachListener:fb,_fnSortingClasses:Sa,_fnSortData:bc,_fnSaveState:Ca, +_fnLoadState:dc,_fnImplementState:pb,_fnSettingsFromNode:Ta,_fnLog:da,_fnMap:X,_fnBindAction:ob,_fnCallbackReg:R,_fnCallbackFire:F,_fnLengthOverflow:lb,_fnRenderer:gb,_fnDataSource:Q,_fnRowAttributes:eb,_fnExtend:qb,_fnCalculateEnd:function(){}});l.fn.dataTable=u;u.$=l;l.fn.dataTableSettings=u.settings;l.fn.dataTableExt=u.ext;l.fn.DataTable=function(a){return l(this).dataTable(a).api()};l.each(u,function(a,b){l.fn.DataTable[a]=b});return u}); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/federation/federation.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/federation/federation.js index b3b1e9d392c..ac8e3efe11e 100644 --- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/federation/federation.js +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/federation/federation.js @@ -28,7 +28,7 @@ $(document).ready(function() { var capabilityArr = scTableData.filter(item => (item.subcluster === row.id())); var capabilityObj = JSON.parse(capabilityArr[0].capability).clusterMetrics; row.child( - '' + + '
    ' + ' ' + ' "); } - @Test public void testEnumAttrs() { + @Test + void testEnumAttrs() { Hamlet h = newHamlet(). meta_http("Content-type", "text/html; charset=utf-8"). title("test enum attrs"). @@ -109,7 +119,8 @@ public class TestHamlet { verify(out).print(" rel=\"start index\""); } - @Test public void testScriptStyle() { + @Test + void testScriptStyle() { Hamlet h = newHamlet(). script("a.js").script("b.js"). style("h1 { font-size: 1.2em }"); @@ -121,7 +132,8 @@ public class TestHamlet { verify(out).print(" type=\"text/css\""); } - @Test public void testPreformatted() { + @Test + void testPreformatted() { Hamlet h = newHamlet(). div(). i("inline before pre"). @@ -144,7 +156,8 @@ public class TestHamlet { @Override public void renderPartial() {} } - @Test public void testSubViews() { + @Test + void testSubViews() { Hamlet h = newHamlet(). title("test sub-views"). div("#view1").__(TestView1.class).__(). @@ -153,8 +166,8 @@ public class TestHamlet { PrintWriter out = h.getWriter(); out.flush(); assertEquals(0, h.nestLevel); - verify(out).print("["+ TestView1.class.getName() +"]"); - verify(out).print("["+ TestView2.class.getName() +"]"); + verify(out).print("[" + TestView1.class.getName() + "]"); + verify(out).print("[" + TestView2.class.getName() + "]"); } static Hamlet newHamlet() { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestHamletImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestHamletImpl.java index 3b19aa30c3d..6a3e90b5689 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestHamletImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestHamletImpl.java @@ -18,19 +18,29 @@ package org.apache.hadoop.yarn.webapp.hamlet2; import java.io.PrintWriter; -import org.junit.Test; -import static org.junit.Assert.assertEquals; -import static org.mockito.Mockito.*; +import org.junit.jupiter.api.Test; -import org.apache.hadoop.yarn.webapp.hamlet2.HamletSpec.*; +import org.apache.hadoop.yarn.webapp.hamlet2.HamletSpec.CoreAttrs; +import org.apache.hadoop.yarn.webapp.hamlet2.HamletSpec.H1; +import org.apache.hadoop.yarn.webapp.hamlet2.HamletSpec.LINK; +import org.apache.hadoop.yarn.webapp.hamlet2.HamletSpec.SCRIPT; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.verifyNoMoreInteractions; public class TestHamletImpl { /** * Test the generic implementation methods * @see TestHamlet for Hamlet syntax */ - @Test public void testGeneric() { + @Test + void testGeneric() { PrintWriter out = spy(new PrintWriter(System.out)); HamletImpl hi = new HamletImpl(out, 0, false); hi. 
@@ -66,7 +76,8 @@ public class TestHamletImpl { verify(out, never()).print(""); } - @Test public void testSetSelector() { + @Test + void testSetSelector() { CoreAttrs e = mock(CoreAttrs.class); HamletImpl.setSelector(e, "#id.class"); @@ -81,7 +92,8 @@ public class TestHamletImpl { verify(t).__("heading"); } - @Test public void testSetLinkHref() { + @Test + void testSetLinkHref() { LINK link = mock(LINK.class); HamletImpl.setLinkHref(link, "uri"); HamletImpl.setLinkHref(link, "style.css"); @@ -93,7 +105,8 @@ public class TestHamletImpl { verifyNoMoreInteractions(link); } - @Test public void testSetScriptSrc() { + @Test + void testSetScriptSrc() { SCRIPT script = mock(SCRIPT.class); HamletImpl.setScriptSrc(script, "uri"); HamletImpl.setScriptSrc(script, "script.js"); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestParseSelector.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestParseSelector.java index e2141e6942c..a340f1491f3 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestParseSelector.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestParseSelector.java @@ -17,41 +17,51 @@ */ package org.apache.hadoop.yarn.webapp.hamlet2; -import org.junit.Test; - -import static org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl.*; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNull; +import org.junit.jupiter.api.Test; import org.apache.hadoop.yarn.webapp.WebAppException; +import static org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl.S_CLASS; +import static org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl.S_ID; +import static org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl.parseSelector; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertThrows; + public class TestParseSelector { - @Test public void testNormal() { + @Test + void testNormal() { String[] res = parseSelector("#id.class"); assertEquals("id", res[S_ID]); assertEquals("class", res[S_CLASS]); } - @Test public void testMultiClass() { + @Test + void testMultiClass() { String[] res = parseSelector("#id.class1.class2"); assertEquals("id", res[S_ID]); assertEquals("class1 class2", res[S_CLASS]); } - @Test public void testMissingId() { + @Test + void testMissingId() { String[] res = parseSelector(".class"); assertNull(res[S_ID]); assertEquals("class", res[S_CLASS]); } - @Test public void testMissingClass() { + @Test + void testMissingClass() { String[] res = parseSelector("#id"); assertEquals("id", res[S_ID]); assertNull(res[S_CLASS]); } - @Test(expected=WebAppException.class) public void testMissingAll() { - parseSelector(""); + @Test + void testMissingAll() { + assertThrows(WebAppException.class, () -> { + parseSelector(""); + }); } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlockForTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlockForTest.java index 57e6c81659b..e171ee1fc56 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlockForTest.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlockForTest.java @@ -20,7 +20,6 @@ package org.apache.hadoop.yarn.webapp.log; import java.util.HashMap; import java.util.Map; - import javax.servlet.http.HttpServletRequest; import org.apache.hadoop.conf.Configuration; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/test/TestWebAppTests.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/test/TestWebAppTests.java index e2f2bfa1b25..aef8ea463e6 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/test/TestWebAppTests.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/test/TestWebAppTests.java @@ -18,23 +18,26 @@ package org.apache.hadoop.yarn.webapp.test; -import com.google.inject.AbstractModule; -import com.google.inject.Injector; -import com.google.inject.servlet.RequestScoped; import java.io.PrintWriter; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; -import org.junit.Test; -import static org.junit.Assert.*; -import static org.mockito.Mockito.*; -import org.slf4j.LoggerFactory; +import com.google.inject.AbstractModule; +import com.google.inject.Injector; +import com.google.inject.servlet.RequestScoped; +import org.junit.jupiter.api.Test; import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import static org.junit.jupiter.api.Assertions.assertNotSame; +import static org.junit.jupiter.api.Assertions.assertSame; +import static org.mockito.Mockito.verify; public class TestWebAppTests { static final Logger LOG = LoggerFactory.getLogger(TestWebAppTests.class); - @Test public void testInstances() throws Exception { + @Test + void testInstances() throws Exception { Injector injector = WebAppTests.createMockInjector(this); HttpServletRequest req = injector.getInstance(HttpServletRequest.class); HttpServletResponse res = injector.getInstance(HttpServletResponse.class); @@ -61,24 +64,27 @@ public class TestWebAppTests { static class FooBar extends Bar { } - @Test public void testCreateInjector() throws Exception { + @Test + void testCreateInjector() throws Exception { Bar bar = new Bar(); Injector injector = WebAppTests.createMockInjector(Foo.class, bar); logInstances(injector.getInstance(HttpServletRequest.class), - injector.getInstance(HttpServletResponse.class), - injector.getInstance(HttpServletResponse.class).getWriter()); + injector.getInstance(HttpServletResponse.class), + injector.getInstance(HttpServletResponse.class).getWriter()); assertSame(bar, injector.getInstance(Foo.class)); } - @Test public void testCreateInjector2() { + @Test + void testCreateInjector2() { final FooBar foobar = new FooBar(); Bar bar = new Bar(); Injector injector = WebAppTests.createMockInjector(Foo.class, bar, new AbstractModule() { - @Override protected void configure() { - bind(Bar.class).toInstance(foobar); - } - }); + @Override + protected void configure() { + bind(Bar.class).toInstance(foobar); + } + }); assertNotSame(bar, injector.getInstance(Bar.class)); assertSame(foobar, injector.getInstance(Bar.class)); } @@ -87,11 +93,12 @@ public class TestWebAppTests { static class ScopeTest { } - @Test public void testRequestScope() { + @Test + void testRequestScope() { Injector injector = WebAppTests.createMockInjector(this); assertSame(injector.getInstance(ScopeTest.class), - 
injector.getInstance(ScopeTest.class)); + injector.getInstance(ScopeTest.class)); } private void logInstances(HttpServletRequest req, HttpServletResponse res, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/test/WebAppTests.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/test/WebAppTests.java index da009d4d28e..41c56ce1b61 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/test/WebAppTests.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/test/WebAppTests.java @@ -18,30 +18,29 @@ package org.apache.hadoop.yarn.webapp.test; +import java.io.IOException; +import java.io.PrintWriter; +import java.lang.reflect.Method; +import java.util.Map; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; + +import com.google.inject.AbstractModule; +import com.google.inject.Guice; +import com.google.inject.Injector; +import com.google.inject.Module; +import com.google.inject.Provides; +import com.google.inject.Scopes; +import com.google.inject.servlet.RequestScoped; + import org.apache.hadoop.yarn.webapp.Controller; import org.apache.hadoop.yarn.webapp.SubView; import org.apache.hadoop.yarn.webapp.View; import org.apache.hadoop.yarn.webapp.WebAppException; -import java.lang.reflect.Method; -import java.util.Map; - -import com.google.inject.Module; -import com.google.inject.Scopes; -import com.google.inject.servlet.RequestScoped; -import com.google.inject.AbstractModule; -import com.google.inject.Guice; -import com.google.inject.Injector; -import com.google.inject.Provides; - -import java.io.IOException; -import java.io.PrintWriter; - -import javax.servlet.http.HttpServletResponse; -import javax.servlet.http.HttpServletRequest; - - -import static org.mockito.Mockito.*; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.when; public class WebAppTests { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/util/TestWebAppUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/util/TestWebAppUtils.java index d6f78b18c1b..2d9c39aba8b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/util/TestWebAppUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/util/TestWebAppUtils.java @@ -18,14 +18,18 @@ package org.apache.hadoop.yarn.webapp.util; -import static org.junit.Assert.assertArrayEquals; -import static org.junit.Assert.assertEquals; - import java.io.File; import java.io.IOException; import java.net.UnknownHostException; import java.util.HashMap; import java.util.Map; +import javax.servlet.http.HttpServletRequest; + +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.http.HttpServer2; @@ -35,13 +39,10 @@ import org.apache.hadoop.security.alias.CredentialProvider; import org.apache.hadoop.security.alias.CredentialProviderFactory; import org.apache.hadoop.security.alias.JavaKeyStoreProvider; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import 
org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Assert; -import org.junit.Test; -import org.mockito.Mockito; -import javax.servlet.http.HttpServletRequest; +import static org.junit.jupiter.api.Assertions.assertArrayEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNull; public class TestWebAppUtils { private static final String RM1_NODE_ID = "rm1"; @@ -53,7 +54,7 @@ public class TestWebAppUtils { private static final String anyIpAddress = "1.2.3.4"; private static Map savedStaticResolution = new HashMap<>(); - @BeforeClass + @BeforeAll public static void initializeDummyHostnameResolution() throws Exception { String previousIpAddress; for (String hostName : dummyHostNames) { @@ -64,7 +65,7 @@ public class TestWebAppUtils { } } - @AfterClass + @AfterAll public static void restoreDummyHostnameResolution() throws Exception { for (Map.Entry hostnameToIpEntry : savedStaticResolution.entrySet()) { NetUtils.addStaticResolution(hostnameToIpEntry.getKey(), hostnameToIpEntry.getValue()); @@ -72,7 +73,7 @@ public class TestWebAppUtils { } @Test - public void TestRMWebAppURLRemoteAndLocal() throws UnknownHostException { + void TestRMWebAppURLRemoteAndLocal() throws UnknownHostException { Configuration configuration = new Configuration(); final String rmAddress = "host1:8088"; configuration.set(YarnConfiguration.RM_WEBAPP_ADDRESS, rmAddress); @@ -84,30 +85,32 @@ public class TestWebAppUtils { configuration.set(YarnConfiguration.RM_HA_IDS, RM1_NODE_ID + "," + RM2_NODE_ID); String rmRemoteUrl = WebAppUtils.getResolvedRemoteRMWebAppURLWithoutScheme(configuration); - Assert.assertEquals("ResolvedRemoteRMWebAppUrl should resolve to the first HA RM address", rm1Address, rmRemoteUrl); + assertEquals(rm1Address, rmRemoteUrl, + "ResolvedRemoteRMWebAppUrl should resolve to the first HA RM address"); String rmLocalUrl = WebAppUtils.getResolvedRMWebAppURLWithoutScheme(configuration); - Assert.assertEquals("ResolvedRMWebAppUrl should resolve to the default RM webapp address", rmAddress, rmLocalUrl); + assertEquals(rmAddress, rmLocalUrl, + "ResolvedRMWebAppUrl should resolve to the default RM webapp address"); } @Test - public void testGetPassword() throws Exception { + void testGetPassword() throws Exception { Configuration conf = provisionCredentialsForSSL(); // use WebAppUtils as would be used by loadSslConfiguration - Assert.assertEquals("keypass", + assertEquals("keypass", WebAppUtils.getPassword(conf, WebAppUtils.WEB_APP_KEY_PASSWORD_KEY)); - Assert.assertEquals("storepass", + assertEquals("storepass", WebAppUtils.getPassword(conf, WebAppUtils.WEB_APP_KEYSTORE_PASSWORD_KEY)); - Assert.assertEquals("trustpass", + assertEquals("trustpass", WebAppUtils.getPassword(conf, WebAppUtils.WEB_APP_TRUSTSTORE_PASSWORD_KEY)); // let's make sure that a password that doesn't exist returns null - Assert.assertEquals(null, WebAppUtils.getPassword(conf,"invalid-alias")); + assertNull(WebAppUtils.getPassword(conf, "invalid-alias")); } @Test - public void testLoadSslConfiguration() throws Exception { + void testLoadSslConfiguration() throws Exception { Configuration conf = provisionCredentialsForSSL(); TestBuilder builder = (TestBuilder) new TestBuilder(); @@ -116,12 +119,12 @@ public class TestWebAppUtils { String keypass = "keypass"; String storepass = "storepass"; - String trustpass = "trustpass"; + String trustpass = "trustpass"; // make sure we get the right passwords in the builder - assertEquals(keypass, 
((TestBuilder)builder).keypass); - assertEquals(storepass, ((TestBuilder)builder).keystorePassword); - assertEquals(trustpass, ((TestBuilder)builder).truststorePassword); + assertEquals(keypass, ((TestBuilder) builder).keypass); + assertEquals(storepass, ((TestBuilder) builder).keystorePassword); + assertEquals(trustpass, ((TestBuilder) builder).truststorePassword); } protected Configuration provisionCredentialsForSSL() throws IOException, @@ -145,11 +148,11 @@ public class TestWebAppUtils { char[] trustpass = {'t', 'r', 'u', 's', 't', 'p', 'a', 's', 's'}; // ensure that we get nulls when the key isn't there - assertEquals(null, provider.getCredentialEntry( + assertNull(provider.getCredentialEntry( WebAppUtils.WEB_APP_KEY_PASSWORD_KEY)); - assertEquals(null, provider.getCredentialEntry( + assertNull(provider.getCredentialEntry( WebAppUtils.WEB_APP_KEYSTORE_PASSWORD_KEY)); - assertEquals(null, provider.getCredentialEntry( + assertNull(provider.getCredentialEntry( WebAppUtils.WEB_APP_TRUSTSTORE_PASSWORD_KEY)); // create new aliases @@ -180,7 +183,7 @@ public class TestWebAppUtils { } @Test - public void testAppendQueryParams() throws Exception { + void testAppendQueryParams() throws Exception { HttpServletRequest request = Mockito.mock(HttpServletRequest.class); String targetUri = "/test/path"; Mockito.when(request.getCharacterEncoding()).thenReturn(null); @@ -194,12 +197,12 @@ public class TestWebAppUtils { for (Map.Entry entry : paramResultMap.entrySet()) { Mockito.when(request.getQueryString()).thenReturn(entry.getKey()); String uri = WebAppUtils.appendQueryParams(request, targetUri); - Assert.assertEquals(entry.getValue(), uri); + assertEquals(entry.getValue(), uri); } } @Test - public void testGetHtmlEscapedURIWithQueryString() throws Exception { + void testGetHtmlEscapedURIWithQueryString() throws Exception { HttpServletRequest request = Mockito.mock(HttpServletRequest.class); String targetUri = "/test/path"; Mockito.when(request.getCharacterEncoding()).thenReturn(null); @@ -214,7 +217,7 @@ public class TestWebAppUtils { for (Map.Entry entry : paramResultMap.entrySet()) { Mockito.when(request.getQueryString()).thenReturn(entry.getKey()); String uri = WebAppUtils.getHtmlEscapedURIWithQueryString(request); - Assert.assertEquals(entry.getValue(), uri); + assertEquals(entry.getValue(), uri); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/util/TestWebServiceClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/util/TestWebServiceClient.java index 99a5783628e..b51dcf88bcb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/util/TestWebServiceClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/util/TestWebServiceClient.java @@ -22,6 +22,8 @@ import java.net.HttpURLConnection; import java.net.URI; import java.net.URL; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.http.HttpServer2; @@ -29,8 +31,9 @@ import org.apache.hadoop.http.TestHttpServer.EchoServlet; import org.apache.hadoop.net.NetUtils; import org.apache.hadoop.security.ssl.KeyStoreTestUtil; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static 
org.junit.jupiter.api.Assertions.assertNotNull; public class TestWebServiceClient { @@ -43,17 +46,17 @@ public class TestWebServiceClient { static final String SERVLET_PATH_ECHO = "/" + SERVLET_NAME_ECHO; @Test - public void testGetWebServiceClient() throws Exception { + void testGetWebServiceClient() throws Exception { Configuration conf = new Configuration(); conf.set(YarnConfiguration.YARN_HTTP_POLICY_KEY, "HTTPS_ONLY"); WebServiceClient.initialize(conf); WebServiceClient client = WebServiceClient.getWebServiceClient(); - Assert.assertNotNull(client.getSSLFactory()); + assertNotNull(client.getSSLFactory()); WebServiceClient.destroy(); } @Test - public void testCreateClient() throws Exception { + void testCreateClient() throws Exception { Configuration conf = new Configuration(); conf.set(YarnConfiguration.YARN_HTTP_POLICY_KEY, "HTTPS_ONLY"); File base = new File(BASEDIR); @@ -91,7 +94,7 @@ public class TestWebServiceClient { WebServiceClient client = WebServiceClient.getWebServiceClient(); HttpURLConnection conn = client.getHttpURLConnectionFactory() .getHttpURLConnection(u); - Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode()); + assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode()); WebServiceClient.destroy(); server.stop(); FileUtil.fullyDelete(new File(BASEDIR)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestCommonViews.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestCommonViews.java index a29b152e9f4..028bc6ad18c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestCommonViews.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestCommonViews.java @@ -19,36 +19,36 @@ package org.apache.hadoop.yarn.webapp.view; import com.google.inject.Injector; +import org.junit.jupiter.api.Test; import org.apache.hadoop.yarn.webapp.ResponseInfo; import org.apache.hadoop.yarn.webapp.test.WebAppTests; -import org.apache.hadoop.yarn.webapp.view.ErrorPage; -import org.apache.hadoop.yarn.webapp.view.FooterBlock; -import org.apache.hadoop.yarn.webapp.view.HeaderBlock; -import org.apache.hadoop.yarn.webapp.view.JQueryUI; - -import org.junit.Test; public class TestCommonViews { - @Test public void testErrorPage() { + @Test + void testErrorPage() { Injector injector = WebAppTests.testPage(ErrorPage.class); } - @Test public void testHeaderBlock() { + @Test + void testHeaderBlock() { WebAppTests.testBlock(HeaderBlock.class); } - @Test public void testFooterBlock() { + @Test + void testFooterBlock() { WebAppTests.testBlock(FooterBlock.class); } - @Test public void testJQueryUI() { + @Test + void testJQueryUI() { WebAppTests.testBlock(JQueryUI.class); } - @Test public void testInfoBlock() { + @Test + void testInfoBlock() { Injector injector = WebAppTests.createMockInjector(this); ResponseInfo info = injector.getInstance(ResponseInfo.class); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestHtmlBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestHtmlBlock.java index e510dd57ba8..de9bd454de5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestHtmlBlock.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestHtmlBlock.java @@ -18,15 +18,16 @@ package org.apache.hadoop.yarn.webapp.view; -import com.google.inject.Injector; - import java.io.PrintWriter; +import com.google.inject.Injector; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.webapp.WebAppException; import org.apache.hadoop.yarn.webapp.test.WebAppTests; -import org.junit.Test; -import static org.mockito.Mockito.*; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.mockito.Mockito.verify; public class TestHtmlBlock { public static class TestBlock extends HtmlBlock { @@ -54,7 +55,8 @@ public class TestHtmlBlock { } } - @Test public void testUsual() { + @Test + void testUsual() { Injector injector = WebAppTests.testBlock(TestBlock.class); PrintWriter out = injector.getInstance(PrintWriter.class); @@ -62,11 +64,17 @@ public class TestHtmlBlock { verify(out).print("test note"); } - @Test(expected=WebAppException.class) public void testShortBlock() { - WebAppTests.testBlock(ShortBlock.class); + @Test + void testShortBlock() { + assertThrows(WebAppException.class, () -> { + WebAppTests.testBlock(ShortBlock.class); + }); } - @Test(expected=WebAppException.class) public void testShortPage() { - WebAppTests.testPage(ShortPage.class); + @Test + void testShortPage() { + assertThrows(WebAppException.class, () -> { + WebAppTests.testPage(ShortPage.class); + }); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestHtmlPage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestHtmlPage.java index beed31fb478..ce8bbc9d851 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestHtmlPage.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestHtmlPage.java @@ -18,16 +18,17 @@ package org.apache.hadoop.yarn.webapp.view; -import com.google.inject.Injector; - import java.io.PrintWriter; +import com.google.inject.Injector; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.webapp.MimeType; import org.apache.hadoop.yarn.webapp.WebAppException; import org.apache.hadoop.yarn.webapp.test.WebAppTests; -import org.junit.Test; -import static org.mockito.Mockito.*; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.mockito.Mockito.verify; public class TestHtmlPage { @@ -49,7 +50,8 @@ public class TestHtmlPage { } } - @Test public void testUsual() { + @Test + void testUsual() { Injector injector = WebAppTests.testPage(TestView.class); PrintWriter out = injector.getInstance(PrintWriter.class); @@ -64,7 +66,10 @@ public class TestHtmlPage { verify(out).print("test note"); } - @Test(expected=WebAppException.class) public void testShort() { - WebAppTests.testPage(ShortView.class); + @Test + void testShort() { + assertThrows(WebAppException.class, () -> { + WebAppTests.testPage(ShortView.class); + }); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestInfoBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestInfoBlock.java index 751aa2cabe4..1f0a4a6467e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestInfoBlock.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestInfoBlock.java @@ -21,13 +21,15 @@ package org.apache.hadoop.yarn.webapp.view; import java.io.PrintWriter; import java.io.StringWriter; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.apache.hadoop.yarn.webapp.ResponseInfo; import org.apache.hadoop.yarn.webapp.test.WebAppTests; -import org.junit.Before; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestInfoBlock { @@ -86,30 +88,32 @@ public class TestInfoBlock { } } - @Before + @BeforeEach public void setup() { sw = new StringWriter(); pw = new PrintWriter(sw); } - @Test(timeout=60000L) - public void testMultilineInfoBlock() throws Exception{ + @Test + @Timeout(60000L) + void testMultilineInfoBlock() throws Exception { WebAppTests.testBlock(MultilineInfoBlock.class); TestInfoBlock.pw.flush(); String output = TestInfoBlock.sw.toString().replaceAll(" +", " "); String expectedMultilineData1 = String.format("%n" - + " %n" - + " %n"); + + " %n" + + " %n"); String expectedMultilineData2 = String.format("%n" - + " %n %n
[Extraction residue: markup embedded in the surrounding hunks was stripped, so the tail of the
TestInfoBlock hunk (the expected multi-line table output strings) and a web-UI JavaScript hunk
(@@ -42,11 +42,12 @@ $(document).ready(function() {) cannot be reproduced verbatim here. The
recoverable change in the JavaScript hunk: in the "Application Metrics" / "Resource Metrics"
panel, the Memory rows are now built from capabilityArr[0].totalmemory, reservedmemory,
availablememory, allocatedmemory and pendingmemory (labelled "Total Memory", "Reserved Memory",
"Available Memory", "Allocated Memory", "Pending Memory"), replacing the former
capabilityObj.totalMB, reservedMB, availableMB, allocatedMB and pendingMB fields; the
VirtualCores rows (TotalVirtualCores, ReservedVirtualCores, ...) still read from capabilityObj.
The yarn-default.xml hunk that follows likewise lost its <property>/<name>/<value>/<description>
tags, but its property names, default values and descriptions remain readable.]
    ' + diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml index 5132d4199af..80672fb1cc8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml @@ -2459,6 +2459,24 @@ 0 + + Provide local directory which includes the related jar file as well as all the dependencies’ + jar file. We could specify the single jar file or use ${local_dir_to_jar}/* to load all + jars under the dep directory. + yarn.nodemanager.aux-services.%s.classpath + NONE + + + + Normally this configuration shouldn't be manually set. The class would be loaded from customized + classpath if it does not belong to system-classes. By default, the package org.apache.hadoop is part of the + system-classes. If for example the class CustomAuxService is in the package org.apache.hadoop, it will not be + loaded from the customized classpath. To solve this, either the package of CustomAuxService could be changed, + or a separate system-classes could be configured which excludes the package org.apache.hadoop. + yarn.nodemanager.aux-services.%s.system-classes + + + No. of ms to wait between sending a SIGTERM and SIGKILL to a container yarn.nodemanager.sleep-delay-before-sigkill.ms @@ -3727,6 +3745,26 @@ yarnfederation/ + + + The number of retries to clear the app in the FederationStateStore, + the default value is 1, that is, after the app fails to clean up, it will retry the cleanup again. + + yarn.federation.state-store.clean-up-retry-count + 1 + + + + + Clear the sleep time of App retry in FederationStateStore. + When the app fails to clean up, + it will sleep for a period of time and then try to clean up. + The default value is 1s. + + yarn.federation.state-store.clean-up-retry-sleep-time + 1s + + @@ -5006,7 +5044,18 @@ Default is 100 - + + + yarn.router.webapp.cross-origin.enabled + false + + Flag to enable cross-origin (CORS) support for Yarn Router. + For Yarn Router, also add + org.apache.hadoop.security.HttpCrossOriginFilterInitializer to the + configuration hadoop.http.filter.initializers in core-site.xml. + + + yarn.federation.state-store.max-applications 1000 @@ -5016,4 +5065,56 @@ + + yarn.router.interceptor.user-thread-pool.minimum-pool-size + 5 + + This configurable is used to set the corePoolSize(minimumPoolSize) + of the thread pool of the interceptor. + Default is 5. + + + + + yarn.router.interceptor.user-thread-pool.maximum-pool-size + 5 + + This configuration is used to set the default value of maximumPoolSize + of the thread pool of the interceptor. + Default is 5. + + + + + yarn.router.interceptor.user-thread-pool.keep-alive-time + 0s + + This configurable is used to set the keepAliveTime of the thread pool of the interceptor. + Default is 0s. + + + + + yarn.router.submit.interval.time + 10ms + + The interval Time between calling different subCluster requests. + Default is 10ms. + + + + + yarn.router.interceptor.allow-partial-result.enable + false + + This configuration represents whether to allow the interceptor to + return partial SubCluster results. + If true, we will ignore the exception to some subClusters during the calling process, + and return result. + If false, if an exception occurs in a subCluster during the calling process, + an exception will be thrown directly. + Default is false. 
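For illustration only, a minimal sketch (not part of the patch) of how the properties documented in the yarn-default.xml hunk above might be set programmatically. The auxiliary-service name "mysvc", the classpath, the class name and the chosen values are invented; the "%s" in the aux-service property names is replaced by the configured service name.

// Sketch only: service name, paths and values below are hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NewYarnPropertiesSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();

    // "%s" in the aux-service property names stands for the service name.
    String service = "mysvc"; // hypothetical auxiliary service
    conf.set("yarn.nodemanager.aux-services", service);
    // Either a single jar or a wildcard directory, per the description above.
    conf.set(String.format("yarn.nodemanager.aux-services.%s.classpath", service),
        "/opt/mysvc/lib/*");
    // A system-classes list that omits org.apache.hadoop, so a service class in that
    // package can be loaded from the custom classpath (see the description above).
    conf.set(String.format("yarn.nodemanager.aux-services.%s.system-classes", service),
        "java.,javax.");

    // Federation state-store clean-up retries and Router-side settings (invented values).
    conf.setInt("yarn.federation.state-store.clean-up-retry-count", 2);
    conf.set("yarn.federation.state-store.clean-up-retry-sleep-time", "2s");
    // Per the description above, cross-origin support also needs
    // org.apache.hadoop.security.HttpCrossOriginFilterInitializer added to
    // hadoop.http.filter.initializers in core-site.xml.
    conf.setBoolean("yarn.router.webapp.cross-origin.enabled", true);
    conf.setInt("yarn.router.interceptor.user-thread-pool.minimum-pool-size", 5);
    conf.setInt("yarn.router.interceptor.user-thread-pool.maximum-pool-size", 10);
    conf.setBoolean("yarn.router.interceptor.allow-partial-result.enable", false);

    System.out.println(conf.get(
        String.format("yarn.nodemanager.aux-services.%s.classpath", service)));
  }
}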
+ + + diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/MockApps.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/MockApps.java index f59f00e6e85..11d54d6e6ae 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/MockApps.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/MockApps.java @@ -20,11 +20,11 @@ package org.apache.hadoop.yarn; import java.util.Iterator; +import org.apache.hadoop.thirdparty.com.google.common.collect.Iterators; + import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.YarnApplicationState; -import org.apache.hadoop.thirdparty.com.google.common.collect.Iterators; - /** * Utilities to generate fake test apps */ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLaunchRPC.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLaunchRPC.java index 279a37b7d19..94f1e520e4f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLaunchRPC.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLaunchRPC.java @@ -24,8 +24,10 @@ import java.net.SocketTimeoutException; import java.util.ArrayList; import java.util.List; +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.ipc.Server; import org.apache.hadoop.net.NetUtils; @@ -35,12 +37,12 @@ import org.apache.hadoop.yarn.api.ContainerManagementProtocol; import org.apache.hadoop.yarn.api.protocolrecords.CommitResponse; import org.apache.hadoop.yarn.api.protocolrecords.ContainerUpdateRequest; import org.apache.hadoop.yarn.api.protocolrecords.ContainerUpdateResponse; +import org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesRequest; +import org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesResponse; import org.apache.hadoop.yarn.api.protocolrecords.GetLocalizationStatusesRequest; import org.apache.hadoop.yarn.api.protocolrecords.GetLocalizationStatusesResponse; import org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceRequest; import org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceResponse; -import org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesRequest; -import org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesResponse; import org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerRequest; import org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerResponse; import org.apache.hadoop.yarn.api.protocolrecords.ResourceLocalizationRequest; @@ -70,8 +72,9 @@ import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; import org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC; import org.apache.hadoop.yarn.ipc.YarnRPC; import org.apache.hadoop.yarn.security.ContainerTokenIdentifier; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.fail; /* * Test that the container launcher rpc times out properly. 
This is used @@ -86,7 +89,7 @@ public class TestContainerLaunchRPC { .getRecordFactory(null); @Test - public void testHadoopProtoRPCTimeout() throws Exception { + void testHadoopProtoRPCTimeout() throws Exception { testRPCTimeout(HadoopYarnProtoRPC.class.getName()); } @@ -136,16 +139,15 @@ public class TestContainerLaunchRPC { proxy.startContainers(allRequests); } catch (Exception e) { LOG.info(StringUtils.stringifyException(e)); - Assert.assertEquals("Error, exception is not: " - + SocketTimeoutException.class.getName(), - SocketTimeoutException.class.getName(), e.getClass().getName()); + assertEquals(SocketTimeoutException.class.getName(), e.getClass().getName(), + "Error, exception is not: " + SocketTimeoutException.class.getName()); return; } } finally { server.stop(); } - Assert.fail("timeout exception should have occurred!"); + fail("timeout exception should have occurred!"); } public static Token newContainerToken(NodeId nodeId, byte[] password, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLogAppender.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLogAppender.java index 6b8e537a4c5..26acfd7bad8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLogAppender.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerLogAppender.java @@ -18,14 +18,15 @@ package org.apache.hadoop.yarn; +import org.junit.jupiter.api.Test; + import org.apache.log4j.Logger; import org.apache.log4j.PatternLayout; -import org.junit.Test; public class TestContainerLogAppender { @Test - public void testAppendInClose() throws Exception { + void testAppendInClose() throws Exception { final ContainerLogAppender claAppender = new ContainerLogAppender(); claAppender.setName("testCLA"); claAppender.setLayout(new PatternLayout("%-5p [%t]: %m%n")); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerResourceIncreaseRPC.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerResourceIncreaseRPC.java index c3dac914672..e615036d813 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerResourceIncreaseRPC.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestContainerResourceIncreaseRPC.java @@ -18,8 +18,16 @@ package org.apache.hadoop.yarn; +import java.io.IOException; +import java.net.InetSocketAddress; +import java.net.SocketTimeoutException; +import java.util.ArrayList; +import java.util.List; + +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.ipc.Server; import org.apache.hadoop.net.NetUtils; @@ -59,14 +67,9 @@ import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC; import org.apache.hadoop.yarn.ipc.YarnRPC; import org.apache.hadoop.yarn.security.ContainerTokenIdentifier; -import org.junit.Assert; -import org.junit.Test; -import java.io.IOException; -import java.net.InetSocketAddress; -import java.net.SocketTimeoutException; -import java.util.ArrayList; -import java.util.List; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.fail; 
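The hunks in this patch repeat the same JUnit 4 to JUnit 5 conversions. The self-contained sketch below (class name, method names and values are invented, not taken from the patch) shows the three recurring idioms: @Test(expected = ...) becomes assertThrows, @Test(timeout = ...) becomes @Timeout, and the assertion message moves from the first to the last argument.

// Sketch of the JUnit 4 -> JUnit 5 idioms applied throughout these hunks.
import java.util.concurrent.TimeUnit;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

public class JUnit5MigrationSketch {

  // JUnit 4: @Test(timeout = 60000L) public void testWithTimeout() { ... }
  @Test
  @Timeout(value = 60, unit = TimeUnit.SECONDS)
  void testWithTimeout() {
    // JUnit 4: Assert.assertEquals("message", expected, actual);
    // JUnit 5: the failure message moves to the last parameter.
    assertEquals(4, 2 + 2, "2 + 2 should be 4");
  }

  // JUnit 4: @Test(expected = IllegalStateException.class) public void testThrows() { ... }
  @Test
  void testThrows() {
    assertThrows(IllegalStateException.class, () -> {
      throw new IllegalStateException("expected failure");
    });
  }
}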
/* * Test that the container resource increase rpc times out properly. @@ -78,7 +81,7 @@ public class TestContainerResourceIncreaseRPC { TestContainerResourceIncreaseRPC.class); @Test - public void testHadoopProtoRPCTimeout() throws Exception { + void testHadoopProtoRPCTimeout() throws Exception { testRPCTimeout(HadoopYarnProtoRPC.class.getName()); } @@ -122,15 +125,14 @@ public class TestContainerResourceIncreaseRPC { proxy.updateContainer(request); } catch (Exception e) { LOG.info(StringUtils.stringifyException(e)); - Assert.assertEquals("Error, exception is not: " - + SocketTimeoutException.class.getName(), - SocketTimeoutException.class.getName(), e.getClass().getName()); + assertEquals(SocketTimeoutException.class.getName(), e.getClass().getName(), + "Error, exception is not: " + SocketTimeoutException.class.getName()); return; } } finally { server.stop(); } - Assert.fail("timeout exception should have occurred!"); + fail("timeout exception should have occurred!"); } public static Token newContainerToken(NodeId nodeId, byte[] password, @@ -157,11 +159,9 @@ public class TestContainerResourceIncreaseRPC { } @Override - public StopContainersResponse - stopContainers(StopContainersRequest requests) throws YarnException, - IOException { - Exception e = new Exception("Dummy function", new Exception( - "Dummy function cause")); + public StopContainersResponse stopContainers(StopContainersRequest requests) + throws YarnException, IOException { + Exception e = new Exception("Dummy function", new Exception("Dummy function cause")); throw new YarnException(e); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRPCFactories.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRPCFactories.java index 765b165e90f..6c283c66744 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRPCFactories.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRPCFactories.java @@ -21,7 +21,7 @@ package org.apache.hadoop.yarn; import java.io.IOException; import java.net.InetSocketAddress; -import org.junit.Assert; +import org.junit.jupiter.api.Test; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.ipc.Server; @@ -37,16 +37,16 @@ import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import org.apache.hadoop.yarn.factories.impl.pb.RpcClientFactoryPBImpl; import org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.fail; public class TestRPCFactories { - - - + + @Test - public void test() { + void test() { testPbServerFactory(); - + testPbClientFactory(); } @@ -64,7 +64,7 @@ public class TestRPCFactories { server.start(); } catch (YarnRuntimeException e) { e.printStackTrace(); - Assert.fail("Failed to create server"); + fail("Failed to create server"); } finally { if (server != null) { server.stop(); @@ -92,12 +92,12 @@ public class TestRPCFactories { amrmClient = (ApplicationMasterProtocol) RpcClientFactoryPBImpl.get().getClient(ApplicationMasterProtocol.class, 1, NetUtils.getConnectAddress(server), conf); } catch (YarnRuntimeException e) { e.printStackTrace(); - Assert.fail("Failed to create client"); + fail("Failed to create client"); } } catch (YarnRuntimeException e) { e.printStackTrace(); - Assert.fail("Failed to create server"); + fail("Failed 
to create server"); } finally { if (server != null) { server.stop(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRecordFactory.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRecordFactory.java index 9492988c946..8be77ba6288 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRecordFactory.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRecordFactory.java @@ -18,39 +18,41 @@ package org.apache.hadoop.yarn; -import org.junit.Assert; +import org.junit.jupiter.api.Test; -import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; -import org.apache.hadoop.yarn.factories.RecordFactory; -import org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl; import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest; import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse; import org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateRequestPBImpl; import org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl; -import org.junit.Test; +import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; +import org.apache.hadoop.yarn.factories.RecordFactory; +import org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.fail; public class TestRecordFactory { - + @Test - public void testPbRecordFactory() { + void testPbRecordFactory() { RecordFactory pbRecordFactory = RecordFactoryPBImpl.get(); - + try { AllocateResponse response = pbRecordFactory.newRecordInstance(AllocateResponse.class); - Assert.assertEquals(AllocateResponsePBImpl.class, response.getClass()); + assertEquals(AllocateResponsePBImpl.class, response.getClass()); } catch (YarnRuntimeException e) { e.printStackTrace(); - Assert.fail("Failed to crete record"); + fail("Failed to crete record"); } - + try { AllocateRequest response = pbRecordFactory.newRecordInstance(AllocateRequest.class); - Assert.assertEquals(AllocateRequestPBImpl.class, response.getClass()); + assertEquals(AllocateRequestPBImpl.class, response.getClass()); } catch (YarnRuntimeException e) { e.printStackTrace(); - Assert.fail("Failed to crete record"); + fail("Failed to crete record"); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRpcFactoryProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRpcFactoryProvider.java index 005a71bbd97..a8cc98e4e45 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRpcFactoryProvider.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestRpcFactoryProvider.java @@ -18,7 +18,7 @@ package org.apache.hadoop.yarn; -import org.junit.Assert; +import org.junit.jupiter.api.Test; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.conf.YarnConfiguration; @@ -28,21 +28,23 @@ import org.apache.hadoop.yarn.factories.RpcServerFactory; import org.apache.hadoop.yarn.factories.impl.pb.RpcClientFactoryPBImpl; import org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl; import org.apache.hadoop.yarn.factory.providers.RpcFactoryProvider; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import 
static org.junit.jupiter.api.Assertions.fail; public class TestRpcFactoryProvider { @Test - public void testFactoryProvider() { + void testFactoryProvider() { Configuration conf = new Configuration(); RpcClientFactory clientFactory = null; RpcServerFactory serverFactory = null; - - + + clientFactory = RpcFactoryProvider.getClientFactory(conf); serverFactory = RpcFactoryProvider.getServerFactory(conf); - Assert.assertEquals(RpcClientFactoryPBImpl.class, clientFactory.getClass()); - Assert.assertEquals(RpcServerFactoryPBImpl.class, serverFactory.getClass()); + assertEquals(RpcClientFactoryPBImpl.class, clientFactory.getClass()); + assertEquals(RpcServerFactoryPBImpl.class, serverFactory.getClass()); conf.set(YarnConfiguration.IPC_CLIENT_FACTORY_CLASS, "unknown"); conf.set(YarnConfiguration.IPC_SERVER_FACTORY_CLASS, "unknown"); @@ -50,28 +52,30 @@ public class TestRpcFactoryProvider { try { clientFactory = RpcFactoryProvider.getClientFactory(conf); - Assert.fail("Expected an exception - unknown serializer"); + fail("Expected an exception - unknown serializer"); } catch (YarnRuntimeException e) { } try { serverFactory = RpcFactoryProvider.getServerFactory(conf); - Assert.fail("Expected an exception - unknown serializer"); + fail("Expected an exception - unknown serializer"); } catch (YarnRuntimeException e) { } - + conf = new Configuration(); conf.set(YarnConfiguration.IPC_CLIENT_FACTORY_CLASS, "NonExistantClass"); conf.set(YarnConfiguration.IPC_SERVER_FACTORY_CLASS, RpcServerFactoryPBImpl.class.getName()); - + try { clientFactory = RpcFactoryProvider.getClientFactory(conf); - Assert.fail("Expected an exception - unknown class"); + fail("Expected an exception - unknown class"); } catch (YarnRuntimeException e) { } try { serverFactory = RpcFactoryProvider.getServerFactory(conf); } catch (YarnRuntimeException e) { - Assert.fail("Error while loading factory using reflection: [" + RpcServerFactoryPBImpl.class.getName() + "]"); + fail( + "Error while loading factory using reflection: [" + RpcServerFactoryPBImpl.class.getName() + + "]"); } } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestYarnUncaughtExceptionHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestYarnUncaughtExceptionHandler.java index 05bcdb982c4..e0201cfcd15 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestYarnUncaughtExceptionHandler.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/TestYarnUncaughtExceptionHandler.java @@ -18,18 +18,20 @@ package org.apache.hadoop.yarn; -import static org.junit.Assert.assertSame; -import static org.mockito.Mockito.spy; -import static org.mockito.Mockito.verify; +import org.junit.jupiter.api.Test; import org.apache.hadoop.util.ExitUtil; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertSame; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.verify; public class TestYarnUncaughtExceptionHandler { private static final YarnUncaughtExceptionHandler exHandler = new YarnUncaughtExceptionHandler(); + /** * Throw {@code YarnRuntimeException} inside thread and * check {@code YarnUncaughtExceptionHandler} instance @@ -37,7 +39,7 @@ public class TestYarnUncaughtExceptionHandler { * @throws InterruptedException */ @Test - public void 
testUncaughtExceptionHandlerWithRuntimeException() + void testUncaughtExceptionHandlerWithRuntimeException() throws InterruptedException { final YarnUncaughtExceptionHandler spyYarnHandler = spy(exHandler); final YarnRuntimeException yarnException = new YarnRuntimeException( @@ -67,7 +69,7 @@ public class TestYarnUncaughtExceptionHandler { * @throws InterruptedException */ @Test - public void testUncaughtExceptionHandlerWithError() + void testUncaughtExceptionHandlerWithError() throws InterruptedException { ExitUtil.disableSystemExit(); final YarnUncaughtExceptionHandler spyErrorHandler = spy(exHandler); @@ -96,7 +98,7 @@ public class TestYarnUncaughtExceptionHandler { * @throws InterruptedException */ @Test - public void testUncaughtExceptionHandlerWithOutOfMemoryError() + void testUncaughtExceptionHandlerWithOutOfMemoryError() throws InterruptedException { ExitUtil.disableSystemHalt(); final YarnUncaughtExceptionHandler spyOomHandler = spy(exHandler); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/BasePBImplRecordsTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/BasePBImplRecordsTest.java index 5697923c997..e9ac044affc 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/BasePBImplRecordsTest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/BasePBImplRecordsTest.java @@ -17,26 +17,38 @@ */ package org.apache.hadoop.yarn.api; +import java.lang.reflect.Array; +import java.lang.reflect.Constructor; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; +import java.lang.reflect.ParameterizedType; +import java.lang.reflect.Type; +import java.nio.ByteBuffer; +import java.util.EnumSet; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Random; +import java.util.Set; + +import org.apache.hadoop.thirdparty.com.google.common.collect.Maps; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + import org.apache.commons.lang3.Range; +import org.apache.hadoop.io.Text; import org.apache.hadoop.util.Lists; import org.apache.hadoop.util.Sets; import org.apache.hadoop.yarn.api.resource.PlacementConstraint; import org.apache.hadoop.yarn.api.resource.PlacementConstraints; -import org.apache.hadoop.thirdparty.com.google.common.collect.Maps; - -import org.junit.Assert; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import java.lang.reflect.*; -import java.nio.ByteBuffer; -import java.util.*; - import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints - .PlacementTargets.allocationTag; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag; import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetIn; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.fail; /** * Generic helper class to validate protocol records. 
@@ -80,6 +92,8 @@ public class BasePBImplRecordsTest { 'a' + rand.nextInt(26)); } else if (type.equals(Float.class)) { return rand.nextFloat(); + } else if (type.equals(Text.class)) { + return new Text('a' + String.valueOf(rand.nextInt(1000000))); } else if (type instanceof Class) { Class clazz = (Class)type; if (clazz.isArray()) { @@ -167,7 +181,7 @@ public class BasePBImplRecordsTest { " does not have newInstance method"); } Object [] args = new Object[paramTypes.length]; - for (int i=0;i 0); - - Assert.assertTrue(a1.hashCode() == a5.hashCode()); - Assert.assertFalse(a1.hashCode() == a2.hashCode()); - Assert.assertFalse(a1.hashCode() == a3.hashCode()); - Assert.assertFalse(a1.hashCode() == a4.hashCode()); - + + assertEquals(a1, a5); + assertNotEquals(a1, a2); + assertNotEquals(a1, a3); + assertNotEquals(a1, a4); + + assertTrue(a1.compareTo(a5) == 0); + assertTrue(a1.compareTo(a2) < 0); + assertTrue(a1.compareTo(a3) < 0); + assertTrue(a1.compareTo(a4) > 0); + + assertTrue(a1.hashCode() == a5.hashCode()); + assertFalse(a1.hashCode() == a2.hashCode()); + assertFalse(a1.hashCode() == a3.hashCode()); + assertFalse(a1.hashCode() == a4.hashCode()); + long ts = System.currentTimeMillis(); ApplicationAttemptId a6 = createAppAttemptId(ts, 543627, 33492611); - Assert.assertEquals("appattempt_10_0001_000001", a1.toString()); - Assert.assertEquals("appattempt_" + ts + "_543627_33492611", a6.toString()); + assertEquals("appattempt_10_0001_000001", a1.toString()); + assertEquals("appattempt_" + ts + "_543627_33492611", a6.toString()); } private ApplicationAttemptId createAppAttemptId( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestApplicationId.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestApplicationId.java index ea25a64c95d..d084468b3ac 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestApplicationId.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestApplicationId.java @@ -18,36 +18,40 @@ package org.apache.hadoop.yarn.api; -import org.junit.Assert; +import org.junit.jupiter.api.Test; import org.apache.hadoop.yarn.api.records.ApplicationId; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestApplicationId { @Test - public void testApplicationId() { + void testApplicationId() { ApplicationId a1 = ApplicationId.newInstance(10l, 1); ApplicationId a2 = ApplicationId.newInstance(10l, 2); ApplicationId a3 = ApplicationId.newInstance(10l, 1); ApplicationId a4 = ApplicationId.newInstance(8l, 3); - Assert.assertFalse(a1.equals(a2)); - Assert.assertFalse(a1.equals(a4)); - Assert.assertTrue(a1.equals(a3)); + assertNotEquals(a1, a2); + assertNotEquals(a1, a4); + assertEquals(a1, a3); - Assert.assertTrue(a1.compareTo(a2) < 0); - Assert.assertTrue(a1.compareTo(a3) == 0); - Assert.assertTrue(a1.compareTo(a4) > 0); + assertTrue(a1.compareTo(a2) < 0); + assertTrue(a1.compareTo(a3) == 0); + assertTrue(a1.compareTo(a4) > 0); + + assertTrue(a1.hashCode() == a3.hashCode()); + assertFalse(a1.hashCode() == a2.hashCode()); + assertFalse(a2.hashCode() == a4.hashCode()); - Assert.assertTrue(a1.hashCode() == a3.hashCode()); - Assert.assertFalse(a1.hashCode() == 
a2.hashCode()); - Assert.assertFalse(a2.hashCode() == a4.hashCode()); - long ts = System.currentTimeMillis(); ApplicationId a5 = ApplicationId.newInstance(ts, 45436343); - Assert.assertEquals("application_10_0001", a1.toString()); - Assert.assertEquals("application_" + ts + "_45436343", a5.toString()); + assertEquals("application_10_0001", a1.toString()); + assertEquals("application_" + ts + "_45436343", a5.toString()); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestApplicatonReport.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestApplicatonReport.java index ea39a4ccdba..c0904ec09fa 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestApplicatonReport.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestApplicatonReport.java @@ -18,6 +18,8 @@ package org.apache.hadoop.yarn.api; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ApplicationReport; @@ -25,13 +27,15 @@ import org.apache.hadoop.yarn.api.records.FinalApplicationStatus; import org.apache.hadoop.yarn.api.records.Priority; import org.apache.hadoop.yarn.api.records.YarnApplicationState; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotSame; +import static org.junit.jupiter.api.Assertions.assertNull; public class TestApplicatonReport { @Test - public void testApplicationReport() { + void testApplicationReport() { long timestamp = System.currentTimeMillis(); ApplicationReport appReport1 = createApplicationReport(1, 1, timestamp); @@ -39,15 +43,15 @@ public class TestApplicatonReport { createApplicationReport(1, 1, timestamp); ApplicationReport appReport3 = createApplicationReport(1, 1, timestamp); - Assert.assertEquals(appReport1, appReport2); - Assert.assertEquals(appReport2, appReport3); + assertEquals(appReport1, appReport2); + assertEquals(appReport2, appReport3); appReport1.setApplicationId(null); - Assert.assertNull(appReport1.getApplicationId()); - Assert.assertNotSame(appReport1, appReport2); + assertNull(appReport1.getApplicationId()); + assertNotSame(appReport1, appReport2); appReport2.setCurrentApplicationAttemptId(null); - Assert.assertNull(appReport2.getCurrentApplicationAttemptId()); - Assert.assertNotSame(appReport2, appReport3); - Assert.assertNull(appReport1.getAMRMToken()); + assertNull(appReport2.getCurrentApplicationAttemptId()); + assertNotSame(appReport2, appReport3); + assertNull(appReport1.getAMRMToken()); } protected static ApplicationReport createApplicationReport( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestContainerId.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestContainerId.java index 1643301072b..73d585d8f68 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestContainerId.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestContainerId.java @@ -19,57 +19,61 @@ package org.apache.hadoop.yarn.api; -import org.junit.Assert; +import 
org.junit.jupiter.api.Test; import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ContainerId; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestContainerId { @Test - public void testContainerId() { + void testContainerId() { ContainerId c1 = newContainerId(1, 1, 10l, 1); ContainerId c2 = newContainerId(1, 1, 10l, 2); ContainerId c3 = newContainerId(1, 1, 10l, 1); ContainerId c4 = newContainerId(1, 3, 10l, 1); ContainerId c5 = newContainerId(1, 3, 8l, 1); - Assert.assertTrue(c1.equals(c3)); - Assert.assertFalse(c1.equals(c2)); - Assert.assertFalse(c1.equals(c4)); - Assert.assertFalse(c1.equals(c5)); + assertEquals(c1, c3); + assertNotEquals(c1, c2); + assertNotEquals(c1, c4); + assertNotEquals(c1, c5); - Assert.assertTrue(c1.compareTo(c3) == 0); - Assert.assertTrue(c1.compareTo(c2) < 0); - Assert.assertTrue(c1.compareTo(c4) < 0); - Assert.assertTrue(c1.compareTo(c5) > 0); + assertTrue(c1.compareTo(c3) == 0); + assertTrue(c1.compareTo(c2) < 0); + assertTrue(c1.compareTo(c4) < 0); + assertTrue(c1.compareTo(c5) > 0); + + assertTrue(c1.hashCode() == c3.hashCode()); + assertFalse(c1.hashCode() == c2.hashCode()); + assertFalse(c1.hashCode() == c4.hashCode()); + assertFalse(c1.hashCode() == c5.hashCode()); - Assert.assertTrue(c1.hashCode() == c3.hashCode()); - Assert.assertFalse(c1.hashCode() == c2.hashCode()); - Assert.assertFalse(c1.hashCode() == c4.hashCode()); - Assert.assertFalse(c1.hashCode() == c5.hashCode()); - long ts = System.currentTimeMillis(); ContainerId c6 = newContainerId(36473, 4365472, ts, 25645811); - Assert.assertEquals("container_10_0001_01_000001", c1.toString()); - Assert.assertEquals(25645811, 0xffffffffffL & c6.getContainerId()); - Assert.assertEquals(0, c6.getContainerId() >> 40); - Assert.assertEquals("container_" + ts + "_36473_4365472_25645811", + assertEquals("container_10_0001_01_000001", c1.toString()); + assertEquals(25645811, 0xffffffffffL & c6.getContainerId()); + assertEquals(0, c6.getContainerId() >> 40); + assertEquals("container_" + ts + "_36473_4365472_25645811", c6.toString()); ContainerId c7 = newContainerId(36473, 4365472, ts, 4298334883325L); - Assert.assertEquals(999799999997L, 0xffffffffffL & c7.getContainerId()); - Assert.assertEquals(3, c7.getContainerId() >> 40); - Assert.assertEquals( + assertEquals(999799999997L, 0xffffffffffL & c7.getContainerId()); + assertEquals(3, c7.getContainerId() >> 40); + assertEquals( "container_e03_" + ts + "_36473_4365472_999799999997", c7.toString()); ContainerId c8 = newContainerId(36473, 4365472, ts, 844424930131965L); - Assert.assertEquals(1099511627773L, 0xffffffffffL & c8.getContainerId()); - Assert.assertEquals(767, c8.getContainerId() >> 40); - Assert.assertEquals( + assertEquals(1099511627773L, 0xffffffffffL & c8.getContainerId()); + assertEquals(767, c8.getContainerId() >> 40); + assertEquals( "container_e767_" + ts + "_36473_4365472_1099511627773", c8.toString()); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestGetApplicationsRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestGetApplicationsRequest.java index c46c2bc0a9b..b91df652693 100644 
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestGetApplicationsRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestGetApplicationsRequest.java @@ -21,93 +21,86 @@ import java.util.EnumSet; import java.util.HashSet; import java.util.Set; +import org.junit.jupiter.api.Test; + import org.apache.commons.lang3.Range; import org.apache.hadoop.yarn.api.protocolrecords.ApplicationsRequestScope; import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest; import org.apache.hadoop.yarn.api.protocolrecords.impl.pb.GetApplicationsRequestPBImpl; import org.apache.hadoop.yarn.api.records.YarnApplicationState; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; public class TestGetApplicationsRequest { @Test - public void testGetApplicationsRequest(){ + void testGetApplicationsRequest() { GetApplicationsRequest request = GetApplicationsRequest.newInstance(); - - EnumSet appStates = - EnumSet.of(YarnApplicationState.ACCEPTED); + + EnumSet appStates = + EnumSet.of(YarnApplicationState.ACCEPTED); request.setApplicationStates(appStates); - + Set tags = new HashSet(); tags.add("tag1"); request.setApplicationTags(tags); - + Set types = new HashSet(); types.add("type1"); request.setApplicationTypes(types); - + long startBegin = System.currentTimeMillis(); long startEnd = System.currentTimeMillis() + 1; request.setStartRange(startBegin, startEnd); long finishBegin = System.currentTimeMillis() + 2; long finishEnd = System.currentTimeMillis() + 3; request.setFinishRange(finishBegin, finishEnd); - + long limit = 100L; request.setLimit(limit); - + Set queues = new HashSet(); queues.add("queue1"); request.setQueues(queues); - - + + Set users = new HashSet(); users.add("user1"); request.setUsers(users); - + ApplicationsRequestScope scope = ApplicationsRequestScope.ALL; request.setScope(scope); - + GetApplicationsRequest requestFromProto = new GetApplicationsRequestPBImpl( - ((GetApplicationsRequestPBImpl)request).getProto()); - + ((GetApplicationsRequestPBImpl) request).getProto()); + // verify the whole record equals with original record - Assert.assertEquals(requestFromProto, request); + assertEquals(requestFromProto, request); // verify all properties are the same as original request - Assert.assertEquals( - "ApplicationStates from proto is not the same with original request", - requestFromProto.getApplicationStates(), appStates); - - Assert.assertEquals( - "ApplicationTags from proto is not the same with original request", - requestFromProto.getApplicationTags(), tags); - - Assert.assertEquals( - "ApplicationTypes from proto is not the same with original request", - requestFromProto.getApplicationTypes(), types); - - Assert.assertEquals( - "StartRange from proto is not the same with original request", - requestFromProto.getStartRange(), Range.between(startBegin, startEnd)); - - Assert.assertEquals( - "FinishRange from proto is not the same with original request", - requestFromProto.getFinishRange(), - Range.between(finishBegin, finishEnd)); - - Assert.assertEquals( - "Limit from proto is not the same with original request", - requestFromProto.getLimit(), limit); - - Assert.assertEquals( - "Queues from proto is not the same with original request", - requestFromProto.getQueues(), queues); - - Assert.assertEquals( - "Users from proto is not the same with original request", - requestFromProto.getUsers(), users); + 
assertEquals(requestFromProto.getApplicationStates(), appStates, + "ApplicationStates from proto is not the same with original request"); + + assertEquals(requestFromProto.getApplicationTags(), tags, + "ApplicationTags from proto is not the same with original request"); + + assertEquals(requestFromProto.getApplicationTypes(), types, + "ApplicationTypes from proto is not the same with original request"); + + assertEquals(requestFromProto.getStartRange(), Range.between(startBegin, startEnd), + "StartRange from proto is not the same with original request"); + + assertEquals(requestFromProto.getFinishRange(), Range.between(finishBegin, finishEnd), + "FinishRange from proto is not the same with original request"); + + assertEquals(requestFromProto.getLimit(), limit, + "Limit from proto is not the same with original request"); + + assertEquals(requestFromProto.getQueues(), queues, + "Queues from proto is not the same with original request"); + + assertEquals(requestFromProto.getUsers(), users, + "Users from proto is not the same with original request"); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestNodeId.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestNodeId.java index 32d31a30b02..2a4092d915a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestNodeId.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestNodeId.java @@ -18,32 +18,36 @@ package org.apache.hadoop.yarn.api; -import org.junit.Assert; +import org.junit.jupiter.api.Test; import org.apache.hadoop.yarn.api.records.NodeId; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestNodeId { @Test - public void testNodeId() { + void testNodeId() { NodeId nodeId1 = NodeId.newInstance("10.18.52.124", 8041); NodeId nodeId2 = NodeId.newInstance("10.18.52.125", 8038); NodeId nodeId3 = NodeId.newInstance("10.18.52.124", 8041); NodeId nodeId4 = NodeId.newInstance("10.18.52.124", 8039); - Assert.assertTrue(nodeId1.equals(nodeId3)); - Assert.assertFalse(nodeId1.equals(nodeId2)); - Assert.assertFalse(nodeId3.equals(nodeId4)); + assertEquals(nodeId1, nodeId3); + assertNotEquals(nodeId1, nodeId2); + assertNotEquals(nodeId3, nodeId4); - Assert.assertTrue(nodeId1.compareTo(nodeId3) == 0); - Assert.assertTrue(nodeId1.compareTo(nodeId2) < 0); - Assert.assertTrue(nodeId3.compareTo(nodeId4) > 0); + assertTrue(nodeId1.compareTo(nodeId3) == 0); + assertTrue(nodeId1.compareTo(nodeId2) < 0); + assertTrue(nodeId3.compareTo(nodeId4) > 0); - Assert.assertTrue(nodeId1.hashCode() == nodeId3.hashCode()); - Assert.assertFalse(nodeId1.hashCode() == nodeId2.hashCode()); - Assert.assertFalse(nodeId3.hashCode() == nodeId4.hashCode()); + assertTrue(nodeId1.hashCode() == nodeId3.hashCode()); + assertFalse(nodeId1.hashCode() == nodeId2.hashCode()); + assertFalse(nodeId3.hashCode() == nodeId4.hashCode()); - Assert.assertEquals("10.18.52.124:8041", nodeId1.toString()); + assertEquals("10.18.52.124:8041", nodeId1.toString()); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java index 2edd7b74792..3bc76cbfe05 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java @@ -16,9 +16,15 @@ * limitations under the License. */ package org.apache.hadoop.yarn.api; + import java.io.IOException; import java.util.Arrays; +import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Disabled; +import org.junit.jupiter.api.Test; + import org.apache.commons.lang3.Range; import org.apache.hadoop.security.proto.SecurityProtos.CancelDelegationTokenRequestProto; import org.apache.hadoop.security.proto.SecurityProtos.CancelDelegationTokenResponseProto; @@ -133,8 +139,8 @@ import org.apache.hadoop.yarn.api.records.LocalResource; import org.apache.hadoop.yarn.api.records.LogAggregationContext; import org.apache.hadoop.yarn.api.records.NMToken; import org.apache.hadoop.yarn.api.records.NodeAttribute; -import org.apache.hadoop.yarn.api.records.NodeAttributeKey; import org.apache.hadoop.yarn.api.records.NodeAttributeInfo; +import org.apache.hadoop.yarn.api.records.NodeAttributeKey; import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.api.records.NodeLabel; import org.apache.hadoop.yarn.api.records.NodeReport; @@ -189,8 +195,8 @@ import org.apache.hadoop.yarn.api.records.impl.pb.EnhancedHeadroomPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.ExecutionTypeRequestPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.LocalResourcePBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.NMTokenPBImpl; -import org.apache.hadoop.yarn.api.records.impl.pb.NodeAttributeKeyPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.NodeAttributeInfoPBImpl; +import org.apache.hadoop.yarn.api.records.impl.pb.NodeAttributeKeyPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.NodeAttributePBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.NodeIdPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.NodeLabelPBImpl; @@ -201,8 +207,8 @@ import org.apache.hadoop.yarn.api.records.impl.pb.PreemptionContractPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.PreemptionMessagePBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.PreemptionResourceRequestPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.PriorityPBImpl; -import org.apache.hadoop.yarn.api.records.impl.pb.QueueInfoPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.QueueConfigurationsPBImpl; +import org.apache.hadoop.yarn.api.records.impl.pb.QueueInfoPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.QueueUserACLInfoPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.ResourceBlacklistRequestPBImpl; import org.apache.hadoop.yarn.api.records.impl.pb.ResourceOptionPBImpl; @@ -232,8 +238,8 @@ import org.apache.hadoop.yarn.proto.YarnProtos.ContainerRetryContextProto; import org.apache.hadoop.yarn.proto.YarnProtos.ContainerStatusProto; import org.apache.hadoop.yarn.proto.YarnProtos.ExecutionTypeRequestProto; import org.apache.hadoop.yarn.proto.YarnProtos.LocalResourceProto; -import org.apache.hadoop.yarn.proto.YarnProtos.NodeAttributeKeyProto; import org.apache.hadoop.yarn.proto.YarnProtos.NodeAttributeInfoProto; +import 
org.apache.hadoop.yarn.proto.YarnProtos.NodeAttributeKeyProto; import org.apache.hadoop.yarn.proto.YarnProtos.NodeAttributeProto; import org.apache.hadoop.yarn.proto.YarnProtos.NodeIdProto; import org.apache.hadoop.yarn.proto.YarnProtos.NodeLabelProto; @@ -245,8 +251,8 @@ import org.apache.hadoop.yarn.proto.YarnProtos.PreemptionContractProto; import org.apache.hadoop.yarn.proto.YarnProtos.PreemptionMessageProto; import org.apache.hadoop.yarn.proto.YarnProtos.PreemptionResourceRequestProto; import org.apache.hadoop.yarn.proto.YarnProtos.PriorityProto; -import org.apache.hadoop.yarn.proto.YarnProtos.QueueInfoProto; import org.apache.hadoop.yarn.proto.YarnProtos.QueueConfigurationsProto; +import org.apache.hadoop.yarn.proto.YarnProtos.QueueInfoProto; import org.apache.hadoop.yarn.proto.YarnProtos.QueueUserACLInfoProto; import org.apache.hadoop.yarn.proto.YarnProtos.ResourceBlacklistRequestProto; import org.apache.hadoop.yarn.proto.YarnProtos.ResourceOptionProto; @@ -374,19 +380,15 @@ import org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.ReplaceLabelsOn import org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.UpdateNodeResourceRequestPBImpl; import org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.UpdateNodeResourceResponsePBImpl; import org.apache.hadoop.yarn.util.resource.Resources; -import org.junit.Assert; -import org.junit.BeforeClass; -import org.junit.Ignore; -import org.junit.Test; -import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet; +import static org.junit.jupiter.api.Assertions.assertNotNull; /** * Test class for YARN API protocol records. */ public class TestPBImplRecords extends BasePBImplRecordsTest { - @BeforeClass + @BeforeAll public static void setup() throws Exception { typeValueCache.put(Range.class, Range.between(1000L, 2000L)); typeValueCache.put(URL.class, URL.newInstance( @@ -474,326 +476,326 @@ public class TestPBImplRecords extends BasePBImplRecordsTest { } @Test - public void testAllocateRequestPBImpl() throws Exception { + void testAllocateRequestPBImpl() throws Exception { validatePBImplRecord(AllocateRequestPBImpl.class, AllocateRequestProto.class); } @Test - public void testAllocateResponsePBImpl() throws Exception { + void testAllocateResponsePBImpl() throws Exception { validatePBImplRecord(AllocateResponsePBImpl.class, AllocateResponseProto.class); } @Test - public void testCancelDelegationTokenRequestPBImpl() throws Exception { + void testCancelDelegationTokenRequestPBImpl() throws Exception { validatePBImplRecord(CancelDelegationTokenRequestPBImpl.class, CancelDelegationTokenRequestProto.class); } @Test - public void testCancelDelegationTokenResponsePBImpl() throws Exception { + void testCancelDelegationTokenResponsePBImpl() throws Exception { validatePBImplRecord(CancelDelegationTokenResponsePBImpl.class, CancelDelegationTokenResponseProto.class); } @Test - public void testFinishApplicationMasterRequestPBImpl() throws Exception { + void testFinishApplicationMasterRequestPBImpl() throws Exception { validatePBImplRecord(FinishApplicationMasterRequestPBImpl.class, FinishApplicationMasterRequestProto.class); } @Test - public void testFinishApplicationMasterResponsePBImpl() throws Exception { + void testFinishApplicationMasterResponsePBImpl() throws Exception { validatePBImplRecord(FinishApplicationMasterResponsePBImpl.class, FinishApplicationMasterResponseProto.class); } @Test - public void testGetApplicationAttemptReportRequestPBImpl() throws Exception { + void testGetApplicationAttemptReportRequestPBImpl() 
throws Exception { validatePBImplRecord(GetApplicationAttemptReportRequestPBImpl.class, GetApplicationAttemptReportRequestProto.class); } @Test - public void testGetApplicationAttemptReportResponsePBImpl() throws Exception { + void testGetApplicationAttemptReportResponsePBImpl() throws Exception { validatePBImplRecord(GetApplicationAttemptReportResponsePBImpl.class, GetApplicationAttemptReportResponseProto.class); } @Test - public void testGetApplicationAttemptsRequestPBImpl() throws Exception { + void testGetApplicationAttemptsRequestPBImpl() throws Exception { validatePBImplRecord(GetApplicationAttemptsRequestPBImpl.class, GetApplicationAttemptsRequestProto.class); } @Test - public void testGetApplicationAttemptsResponsePBImpl() throws Exception { + void testGetApplicationAttemptsResponsePBImpl() throws Exception { validatePBImplRecord(GetApplicationAttemptsResponsePBImpl.class, GetApplicationAttemptsResponseProto.class); } @Test - public void testGetApplicationReportRequestPBImpl() throws Exception { + void testGetApplicationReportRequestPBImpl() throws Exception { validatePBImplRecord(GetApplicationReportRequestPBImpl.class, GetApplicationReportRequestProto.class); } @Test - public void testGetApplicationReportResponsePBImpl() throws Exception { + void testGetApplicationReportResponsePBImpl() throws Exception { validatePBImplRecord(GetApplicationReportResponsePBImpl.class, GetApplicationReportResponseProto.class); } @Test - public void testGetApplicationsRequestPBImpl() throws Exception { + void testGetApplicationsRequestPBImpl() throws Exception { validatePBImplRecord(GetApplicationsRequestPBImpl.class, GetApplicationsRequestProto.class); } @Test - public void testGetApplicationsResponsePBImpl() throws Exception { + void testGetApplicationsResponsePBImpl() throws Exception { validatePBImplRecord(GetApplicationsResponsePBImpl.class, GetApplicationsResponseProto.class); } @Test - public void testGetClusterMetricsRequestPBImpl() throws Exception { + void testGetClusterMetricsRequestPBImpl() throws Exception { validatePBImplRecord(GetClusterMetricsRequestPBImpl.class, GetClusterMetricsRequestProto.class); } @Test - public void testGetClusterMetricsResponsePBImpl() throws Exception { + void testGetClusterMetricsResponsePBImpl() throws Exception { validatePBImplRecord(GetClusterMetricsResponsePBImpl.class, GetClusterMetricsResponseProto.class); } @Test - public void testGetClusterNodesRequestPBImpl() throws Exception { + void testGetClusterNodesRequestPBImpl() throws Exception { validatePBImplRecord(GetClusterNodesRequestPBImpl.class, GetClusterNodesRequestProto.class); } @Test - public void testGetClusterNodesResponsePBImpl() throws Exception { + void testGetClusterNodesResponsePBImpl() throws Exception { validatePBImplRecord(GetClusterNodesResponsePBImpl.class, GetClusterNodesResponseProto.class); } @Test - public void testGetContainerReportRequestPBImpl() throws Exception { + void testGetContainerReportRequestPBImpl() throws Exception { validatePBImplRecord(GetContainerReportRequestPBImpl.class, GetContainerReportRequestProto.class); } @Test - public void testGetContainerReportResponsePBImpl() throws Exception { + void testGetContainerReportResponsePBImpl() throws Exception { validatePBImplRecord(GetContainerReportResponsePBImpl.class, GetContainerReportResponseProto.class); } @Test - public void testGetContainersRequestPBImpl() throws Exception { + void testGetContainersRequestPBImpl() throws Exception { validatePBImplRecord(GetContainersRequestPBImpl.class, 
GetContainersRequestProto.class); } @Test - public void testGetContainersResponsePBImpl() throws Exception { + void testGetContainersResponsePBImpl() throws Exception { validatePBImplRecord(GetContainersResponsePBImpl.class, GetContainersResponseProto.class); } @Test - public void testGetContainerStatusesRequestPBImpl() throws Exception { + void testGetContainerStatusesRequestPBImpl() throws Exception { validatePBImplRecord(GetContainerStatusesRequestPBImpl.class, GetContainerStatusesRequestProto.class); } @Test - public void testGetContainerStatusesResponsePBImpl() throws Exception { + void testGetContainerStatusesResponsePBImpl() throws Exception { validatePBImplRecord(GetContainerStatusesResponsePBImpl.class, GetContainerStatusesResponseProto.class); } @Test - public void testGetDelegationTokenRequestPBImpl() throws Exception { + void testGetDelegationTokenRequestPBImpl() throws Exception { validatePBImplRecord(GetDelegationTokenRequestPBImpl.class, GetDelegationTokenRequestProto.class); } @Test - public void testGetDelegationTokenResponsePBImpl() throws Exception { + void testGetDelegationTokenResponsePBImpl() throws Exception { validatePBImplRecord(GetDelegationTokenResponsePBImpl.class, GetDelegationTokenResponseProto.class); } @Test - public void testGetNewApplicationRequestPBImpl() throws Exception { + void testGetNewApplicationRequestPBImpl() throws Exception { validatePBImplRecord(GetNewApplicationRequestPBImpl.class, GetNewApplicationRequestProto.class); } @Test - public void testGetNewApplicationResponsePBImpl() throws Exception { + void testGetNewApplicationResponsePBImpl() throws Exception { validatePBImplRecord(GetNewApplicationResponsePBImpl.class, GetNewApplicationResponseProto.class); } @Test - public void testGetQueueInfoRequestPBImpl() throws Exception { + void testGetQueueInfoRequestPBImpl() throws Exception { validatePBImplRecord(GetQueueInfoRequestPBImpl.class, GetQueueInfoRequestProto.class); } @Test - public void testGetQueueInfoResponsePBImpl() throws Exception { + void testGetQueueInfoResponsePBImpl() throws Exception { validatePBImplRecord(GetQueueInfoResponsePBImpl.class, GetQueueInfoResponseProto.class); } @Test - public void testGetQueueUserAclsInfoRequestPBImpl() throws Exception { + void testGetQueueUserAclsInfoRequestPBImpl() throws Exception { validatePBImplRecord(GetQueueUserAclsInfoRequestPBImpl.class, GetQueueUserAclsInfoRequestProto.class); } @Test - public void testGetQueueUserAclsInfoResponsePBImpl() throws Exception { + void testGetQueueUserAclsInfoResponsePBImpl() throws Exception { validatePBImplRecord(GetQueueUserAclsInfoResponsePBImpl.class, GetQueueUserAclsInfoResponseProto.class); } @Test - public void testKillApplicationRequestPBImpl() throws Exception { + void testKillApplicationRequestPBImpl() throws Exception { validatePBImplRecord(KillApplicationRequestPBImpl.class, KillApplicationRequestProto.class); } @Test - public void testKillApplicationResponsePBImpl() throws Exception { + void testKillApplicationResponsePBImpl() throws Exception { validatePBImplRecord(KillApplicationResponsePBImpl.class, KillApplicationResponseProto.class); } @Test - public void testMoveApplicationAcrossQueuesRequestPBImpl() throws Exception { + void testMoveApplicationAcrossQueuesRequestPBImpl() throws Exception { validatePBImplRecord(MoveApplicationAcrossQueuesRequestPBImpl.class, MoveApplicationAcrossQueuesRequestProto.class); } @Test - public void testMoveApplicationAcrossQueuesResponsePBImpl() throws Exception { + void 
testMoveApplicationAcrossQueuesResponsePBImpl() throws Exception { validatePBImplRecord(MoveApplicationAcrossQueuesResponsePBImpl.class, MoveApplicationAcrossQueuesResponseProto.class); } @Test - public void testRegisterApplicationMasterRequestPBImpl() throws Exception { + void testRegisterApplicationMasterRequestPBImpl() throws Exception { validatePBImplRecord(RegisterApplicationMasterRequestPBImpl.class, RegisterApplicationMasterRequestProto.class); } @Test - public void testRegisterApplicationMasterResponsePBImpl() throws Exception { + void testRegisterApplicationMasterResponsePBImpl() throws Exception { validatePBImplRecord(RegisterApplicationMasterResponsePBImpl.class, RegisterApplicationMasterResponseProto.class); } @Test - public void testRenewDelegationTokenRequestPBImpl() throws Exception { + void testRenewDelegationTokenRequestPBImpl() throws Exception { validatePBImplRecord(RenewDelegationTokenRequestPBImpl.class, RenewDelegationTokenRequestProto.class); } @Test - public void testRenewDelegationTokenResponsePBImpl() throws Exception { + void testRenewDelegationTokenResponsePBImpl() throws Exception { validatePBImplRecord(RenewDelegationTokenResponsePBImpl.class, RenewDelegationTokenResponseProto.class); } @Test - public void testStartContainerRequestPBImpl() throws Exception { + void testStartContainerRequestPBImpl() throws Exception { validatePBImplRecord(StartContainerRequestPBImpl.class, StartContainerRequestProto.class); } @Test - public void testStartContainersRequestPBImpl() throws Exception { + void testStartContainersRequestPBImpl() throws Exception { validatePBImplRecord(StartContainersRequestPBImpl.class, StartContainersRequestProto.class); } @Test - public void testStartContainersResponsePBImpl() throws Exception { + void testStartContainersResponsePBImpl() throws Exception { validatePBImplRecord(StartContainersResponsePBImpl.class, StartContainersResponseProto.class); } @Test - public void testStopContainersRequestPBImpl() throws Exception { + void testStopContainersRequestPBImpl() throws Exception { validatePBImplRecord(StopContainersRequestPBImpl.class, StopContainersRequestProto.class); } @Test - public void testStopContainersResponsePBImpl() throws Exception { + void testStopContainersResponsePBImpl() throws Exception { validatePBImplRecord(StopContainersResponsePBImpl.class, StopContainersResponseProto.class); } @Test - public void testIncreaseContainersResourceRequestPBImpl() throws Exception { + void testIncreaseContainersResourceRequestPBImpl() throws Exception { validatePBImplRecord(IncreaseContainersResourceRequestPBImpl.class, IncreaseContainersResourceRequestProto.class); } @Test - public void testIncreaseContainersResourceResponsePBImpl() throws Exception { + void testIncreaseContainersResourceResponsePBImpl() throws Exception { validatePBImplRecord(IncreaseContainersResourceResponsePBImpl.class, IncreaseContainersResourceResponseProto.class); } @Test - public void testSubmitApplicationRequestPBImpl() throws Exception { + void testSubmitApplicationRequestPBImpl() throws Exception { validatePBImplRecord(SubmitApplicationRequestPBImpl.class, SubmitApplicationRequestProto.class); } @Test - public void testSubmitApplicationResponsePBImpl() throws Exception { + void testSubmitApplicationResponsePBImpl() throws Exception { validatePBImplRecord(SubmitApplicationResponsePBImpl.class, SubmitApplicationResponseProto.class); } - @Test - @Ignore // ignore cause ApplicationIdPBImpl is immutable - public void testApplicationAttemptIdPBImpl() throws Exception { + @Test 
+ @Disabled + void testApplicationAttemptIdPBImpl() throws Exception { validatePBImplRecord(ApplicationAttemptIdPBImpl.class, ApplicationAttemptIdProto.class); } @Test - public void testApplicationAttemptReportPBImpl() throws Exception { + void testApplicationAttemptReportPBImpl() throws Exception { validatePBImplRecord(ApplicationAttemptReportPBImpl.class, ApplicationAttemptReportProto.class); } - @Test - @Ignore // ignore cause ApplicationIdPBImpl is immutable - public void testApplicationIdPBImpl() throws Exception { + @Test + @Disabled + void testApplicationIdPBImpl() throws Exception { validatePBImplRecord(ApplicationIdPBImpl.class, ApplicationIdProto.class); } @Test - public void testApplicationReportPBImpl() throws Exception { + void testApplicationReportPBImpl() throws Exception { validatePBImplRecord(ApplicationReportPBImpl.class, ApplicationReportProto.class); } @Test - public void testApplicationResourceUsageReportPBImpl() throws Exception { + void testApplicationResourceUsageReportPBImpl() throws Exception { excludedPropertiesMap.put(ApplicationResourceUsageReportPBImpl.class.getClass(), Arrays.asList("PreemptedResourceSecondsMap", "ResourceSecondsMap")); validatePBImplRecord(ApplicationResourceUsageReportPBImpl.class, @@ -801,550 +803,550 @@ public class TestPBImplRecords extends BasePBImplRecordsTest { } @Test - public void testApplicationSubmissionContextPBImpl() throws Exception { + void testApplicationSubmissionContextPBImpl() throws Exception { validatePBImplRecord(ApplicationSubmissionContextPBImpl.class, ApplicationSubmissionContextProto.class); - + ApplicationSubmissionContext ctx = ApplicationSubmissionContext.newInstance(null, null, null, null, null, false, false, 0, Resources.none(), null, false, null, null); - - Assert.assertNotNull(ctx.getResource()); + + assertNotNull(ctx.getResource()); } - @Test - @Ignore // ignore cause ApplicationIdPBImpl is immutable - public void testContainerIdPBImpl() throws Exception { + @Test + @Disabled + void testContainerIdPBImpl() throws Exception { validatePBImplRecord(ContainerIdPBImpl.class, ContainerIdProto.class); } @Test - public void testContainerRetryPBImpl() throws Exception { + void testContainerRetryPBImpl() throws Exception { validatePBImplRecord(ContainerRetryContextPBImpl.class, ContainerRetryContextProto.class); } @Test - public void testContainerLaunchContextPBImpl() throws Exception { + void testContainerLaunchContextPBImpl() throws Exception { validatePBImplRecord(ContainerLaunchContextPBImpl.class, ContainerLaunchContextProto.class); } @Test - public void testResourceLocalizationRequest() throws Exception { + void testResourceLocalizationRequest() throws Exception { validatePBImplRecord(ResourceLocalizationRequestPBImpl.class, YarnServiceProtos.ResourceLocalizationRequestProto.class); } @Test - public void testResourceLocalizationResponse() throws Exception { + void testResourceLocalizationResponse() throws Exception { validatePBImplRecord(ResourceLocalizationResponsePBImpl.class, YarnServiceProtos.ResourceLocalizationResponseProto.class); } @Test - public void testContainerPBImpl() throws Exception { + void testContainerPBImpl() throws Exception { validatePBImplRecord(ContainerPBImpl.class, ContainerProto.class); } @Test - public void testContainerReportPBImpl() throws Exception { + void testContainerReportPBImpl() throws Exception { validatePBImplRecord(ContainerReportPBImpl.class, ContainerReportProto.class); } @Test - public void testUpdateContainerRequestPBImpl() throws Exception { + void 
testUpdateContainerRequestPBImpl() throws Exception { validatePBImplRecord(UpdateContainerRequestPBImpl.class, YarnServiceProtos.UpdateContainerRequestProto.class); } @Test - public void testContainerStatusPBImpl() throws Exception { + void testContainerStatusPBImpl() throws Exception { validatePBImplRecord(ContainerStatusPBImpl.class, ContainerStatusProto.class); } @Test - public void testLocalResourcePBImpl() throws Exception { + void testLocalResourcePBImpl() throws Exception { validatePBImplRecord(LocalResourcePBImpl.class, LocalResourceProto.class); } @Test - public void testNMTokenPBImpl() throws Exception { + void testNMTokenPBImpl() throws Exception { validatePBImplRecord(NMTokenPBImpl.class, NMTokenProto.class); } - @Test - @Ignore // ignore cause ApplicationIdPBImpl is immutable - public void testNodeIdPBImpl() throws Exception { + @Test + @Disabled + void testNodeIdPBImpl() throws Exception { validatePBImplRecord(NodeIdPBImpl.class, NodeIdProto.class); } @Test - public void testNodeReportPBImpl() throws Exception { + void testNodeReportPBImpl() throws Exception { validatePBImplRecord(NodeReportPBImpl.class, NodeReportProto.class); } @Test - public void testPreemptionContainerPBImpl() throws Exception { + void testPreemptionContainerPBImpl() throws Exception { validatePBImplRecord(PreemptionContainerPBImpl.class, PreemptionContainerProto.class); } @Test - public void testPreemptionContractPBImpl() throws Exception { + void testPreemptionContractPBImpl() throws Exception { validatePBImplRecord(PreemptionContractPBImpl.class, PreemptionContractProto.class); } @Test - public void testPreemptionMessagePBImpl() throws Exception { + void testPreemptionMessagePBImpl() throws Exception { validatePBImplRecord(PreemptionMessagePBImpl.class, PreemptionMessageProto.class); } @Test - public void testPreemptionResourceRequestPBImpl() throws Exception { + void testPreemptionResourceRequestPBImpl() throws Exception { validatePBImplRecord(PreemptionResourceRequestPBImpl.class, PreemptionResourceRequestProto.class); } @Test - public void testPriorityPBImpl() throws Exception { + void testPriorityPBImpl() throws Exception { validatePBImplRecord(PriorityPBImpl.class, PriorityProto.class); } @Test - public void testQueueInfoPBImpl() throws Exception { + void testQueueInfoPBImpl() throws Exception { validatePBImplRecord(QueueInfoPBImpl.class, QueueInfoProto.class); } @Test - public void testQueueConfigurationsPBImpl() throws Exception{ + void testQueueConfigurationsPBImpl() throws Exception { validatePBImplRecord(QueueConfigurationsPBImpl.class, QueueConfigurationsProto.class); } @Test - public void testQueueUserACLInfoPBImpl() throws Exception { + void testQueueUserACLInfoPBImpl() throws Exception { validatePBImplRecord(QueueUserACLInfoPBImpl.class, QueueUserACLInfoProto.class); } @Test - public void testResourceBlacklistRequestPBImpl() throws Exception { + void testResourceBlacklistRequestPBImpl() throws Exception { validatePBImplRecord(ResourceBlacklistRequestPBImpl.class, ResourceBlacklistRequestProto.class); } - @Test - @Ignore // ignore as ResourceOptionPBImpl is immutable - public void testResourceOptionPBImpl() throws Exception { + @Test + @Disabled + void testResourceOptionPBImpl() throws Exception { validatePBImplRecord(ResourceOptionPBImpl.class, ResourceOptionProto.class); } @Test - public void testResourcePBImpl() throws Exception { + void testResourcePBImpl() throws Exception { validatePBImplRecord(ResourcePBImpl.class, ResourceProto.class); } @Test - public void 
testResourceRequestPBImpl() throws Exception { + void testResourceRequestPBImpl() throws Exception { validatePBImplRecord(ResourceRequestPBImpl.class, ResourceRequestProto.class); } @Test - public void testResourceSizingPBImpl() throws Exception { + void testResourceSizingPBImpl() throws Exception { validatePBImplRecord(ResourceSizingPBImpl.class, ResourceSizingProto.class); } @Test - public void testSchedulingRequestPBImpl() throws Exception { + void testSchedulingRequestPBImpl() throws Exception { validatePBImplRecord(SchedulingRequestPBImpl.class, SchedulingRequestProto.class); } @Test - public void testSerializedExceptionPBImpl() throws Exception { + void testSerializedExceptionPBImpl() throws Exception { validatePBImplRecord(SerializedExceptionPBImpl.class, SerializedExceptionProto.class); } @Test - public void testStrictPreemptionContractPBImpl() throws Exception { + void testStrictPreemptionContractPBImpl() throws Exception { validatePBImplRecord(StrictPreemptionContractPBImpl.class, StrictPreemptionContractProto.class); } @Test - public void testTokenPBImpl() throws Exception { + void testTokenPBImpl() throws Exception { validatePBImplRecord(TokenPBImpl.class, TokenProto.class); } @Test - public void testURLPBImpl() throws Exception { + void testURLPBImpl() throws Exception { validatePBImplRecord(URLPBImpl.class, URLProto.class); } @Test - public void testYarnClusterMetricsPBImpl() throws Exception { + void testYarnClusterMetricsPBImpl() throws Exception { validatePBImplRecord(YarnClusterMetricsPBImpl.class, YarnClusterMetricsProto.class); } @Test - public void testRefreshAdminAclsRequestPBImpl() throws Exception { + void testRefreshAdminAclsRequestPBImpl() throws Exception { validatePBImplRecord(RefreshAdminAclsRequestPBImpl.class, RefreshAdminAclsRequestProto.class); } @Test - public void testRefreshAdminAclsResponsePBImpl() throws Exception { + void testRefreshAdminAclsResponsePBImpl() throws Exception { validatePBImplRecord(RefreshAdminAclsResponsePBImpl.class, RefreshAdminAclsResponseProto.class); } @Test - public void testRefreshNodesRequestPBImpl() throws Exception { + void testRefreshNodesRequestPBImpl() throws Exception { validatePBImplRecord(RefreshNodesRequestPBImpl.class, RefreshNodesRequestProto.class); } @Test - public void testRefreshNodesResponsePBImpl() throws Exception { + void testRefreshNodesResponsePBImpl() throws Exception { validatePBImplRecord(RefreshNodesResponsePBImpl.class, RefreshNodesResponseProto.class); } @Test - public void testRefreshQueuesRequestPBImpl() throws Exception { + void testRefreshQueuesRequestPBImpl() throws Exception { validatePBImplRecord(RefreshQueuesRequestPBImpl.class, RefreshQueuesRequestProto.class); } @Test - public void testRefreshQueuesResponsePBImpl() throws Exception { + void testRefreshQueuesResponsePBImpl() throws Exception { validatePBImplRecord(RefreshQueuesResponsePBImpl.class, RefreshQueuesResponseProto.class); } @Test - public void testRefreshNodesResourcesRequestPBImpl() throws Exception { + void testRefreshNodesResourcesRequestPBImpl() throws Exception { validatePBImplRecord(RefreshNodesResourcesRequestPBImpl.class, RefreshNodesResourcesRequestProto.class); } @Test - public void testRefreshNodesResourcesResponsePBImpl() throws Exception { + void testRefreshNodesResourcesResponsePBImpl() throws Exception { validatePBImplRecord(RefreshNodesResourcesResponsePBImpl.class, RefreshNodesResourcesResponseProto.class); } @Test - public void testRefreshServiceAclsRequestPBImpl() throws Exception { + void 
testRefreshServiceAclsRequestPBImpl() throws Exception { validatePBImplRecord(RefreshServiceAclsRequestPBImpl.class, RefreshServiceAclsRequestProto.class); } @Test - public void testRefreshServiceAclsResponsePBImpl() throws Exception { + void testRefreshServiceAclsResponsePBImpl() throws Exception { validatePBImplRecord(RefreshServiceAclsResponsePBImpl.class, RefreshServiceAclsResponseProto.class); } @Test - public void testRefreshSuperUserGroupsConfigurationRequestPBImpl() + void testRefreshSuperUserGroupsConfigurationRequestPBImpl() throws Exception { validatePBImplRecord(RefreshSuperUserGroupsConfigurationRequestPBImpl.class, RefreshSuperUserGroupsConfigurationRequestProto.class); } @Test - public void testRefreshSuperUserGroupsConfigurationResponsePBImpl() + void testRefreshSuperUserGroupsConfigurationResponsePBImpl() throws Exception { validatePBImplRecord(RefreshSuperUserGroupsConfigurationResponsePBImpl.class, RefreshSuperUserGroupsConfigurationResponseProto.class); } @Test - public void testRefreshUserToGroupsMappingsRequestPBImpl() throws Exception { + void testRefreshUserToGroupsMappingsRequestPBImpl() throws Exception { validatePBImplRecord(RefreshUserToGroupsMappingsRequestPBImpl.class, RefreshUserToGroupsMappingsRequestProto.class); } @Test - public void testRefreshUserToGroupsMappingsResponsePBImpl() throws Exception { + void testRefreshUserToGroupsMappingsResponsePBImpl() throws Exception { validatePBImplRecord(RefreshUserToGroupsMappingsResponsePBImpl.class, RefreshUserToGroupsMappingsResponseProto.class); } @Test - public void testUpdateNodeResourceRequestPBImpl() throws Exception { + void testUpdateNodeResourceRequestPBImpl() throws Exception { validatePBImplRecord(UpdateNodeResourceRequestPBImpl.class, UpdateNodeResourceRequestProto.class); } @Test - public void testUpdateNodeResourceResponsePBImpl() throws Exception { + void testUpdateNodeResourceResponsePBImpl() throws Exception { validatePBImplRecord(UpdateNodeResourceResponsePBImpl.class, UpdateNodeResourceResponseProto.class); } @Test - public void testReservationSubmissionRequestPBImpl() throws Exception { + void testReservationSubmissionRequestPBImpl() throws Exception { validatePBImplRecord(ReservationSubmissionRequestPBImpl.class, ReservationSubmissionRequestProto.class); } @Test - public void testReservationSubmissionResponsePBImpl() throws Exception { + void testReservationSubmissionResponsePBImpl() throws Exception { validatePBImplRecord(ReservationSubmissionResponsePBImpl.class, ReservationSubmissionResponseProto.class); } @Test - public void testReservationUpdateRequestPBImpl() throws Exception { + void testReservationUpdateRequestPBImpl() throws Exception { validatePBImplRecord(ReservationUpdateRequestPBImpl.class, ReservationUpdateRequestProto.class); } @Test - public void testReservationUpdateResponsePBImpl() throws Exception { + void testReservationUpdateResponsePBImpl() throws Exception { validatePBImplRecord(ReservationUpdateResponsePBImpl.class, ReservationUpdateResponseProto.class); } @Test - public void testReservationDeleteRequestPBImpl() throws Exception { + void testReservationDeleteRequestPBImpl() throws Exception { validatePBImplRecord(ReservationDeleteRequestPBImpl.class, ReservationDeleteRequestProto.class); } @Test - public void testReservationDeleteResponsePBImpl() throws Exception { + void testReservationDeleteResponsePBImpl() throws Exception { validatePBImplRecord(ReservationDeleteResponsePBImpl.class, ReservationDeleteResponseProto.class); } @Test - public void 
testReservationListRequestPBImpl() throws Exception { + void testReservationListRequestPBImpl() throws Exception { validatePBImplRecord(ReservationListRequestPBImpl.class, - ReservationListRequestProto.class); + ReservationListRequestProto.class); } @Test - public void testReservationListResponsePBImpl() throws Exception { + void testReservationListResponsePBImpl() throws Exception { validatePBImplRecord(ReservationListResponsePBImpl.class, - ReservationListResponseProto.class); + ReservationListResponseProto.class); } @Test - public void testAddToClusterNodeLabelsRequestPBImpl() throws Exception { + void testAddToClusterNodeLabelsRequestPBImpl() throws Exception { validatePBImplRecord(AddToClusterNodeLabelsRequestPBImpl.class, AddToClusterNodeLabelsRequestProto.class); } - + @Test - public void testAddToClusterNodeLabelsResponsePBImpl() throws Exception { + void testAddToClusterNodeLabelsResponsePBImpl() throws Exception { validatePBImplRecord(AddToClusterNodeLabelsResponsePBImpl.class, AddToClusterNodeLabelsResponseProto.class); } - + @Test - public void testRemoveFromClusterNodeLabelsRequestPBImpl() throws Exception { + void testRemoveFromClusterNodeLabelsRequestPBImpl() throws Exception { validatePBImplRecord(RemoveFromClusterNodeLabelsRequestPBImpl.class, RemoveFromClusterNodeLabelsRequestProto.class); } - + @Test - public void testRemoveFromClusterNodeLabelsResponsePBImpl() throws Exception { + void testRemoveFromClusterNodeLabelsResponsePBImpl() throws Exception { validatePBImplRecord(RemoveFromClusterNodeLabelsResponsePBImpl.class, RemoveFromClusterNodeLabelsResponseProto.class); } - + @Test - public void testGetClusterNodeLabelsRequestPBImpl() throws Exception { + void testGetClusterNodeLabelsRequestPBImpl() throws Exception { validatePBImplRecord(GetClusterNodeLabelsRequestPBImpl.class, GetClusterNodeLabelsRequestProto.class); } @Test - public void testGetClusterNodeLabelsResponsePBImpl() throws Exception { + void testGetClusterNodeLabelsResponsePBImpl() throws Exception { validatePBImplRecord(GetClusterNodeLabelsResponsePBImpl.class, GetClusterNodeLabelsResponseProto.class); } - + @Test - public void testReplaceLabelsOnNodeRequestPBImpl() throws Exception { + void testReplaceLabelsOnNodeRequestPBImpl() throws Exception { validatePBImplRecord(ReplaceLabelsOnNodeRequestPBImpl.class, ReplaceLabelsOnNodeRequestProto.class); } @Test - public void testReplaceLabelsOnNodeResponsePBImpl() throws Exception { + void testReplaceLabelsOnNodeResponsePBImpl() throws Exception { validatePBImplRecord(ReplaceLabelsOnNodeResponsePBImpl.class, ReplaceLabelsOnNodeResponseProto.class); } - + @Test - public void testGetNodeToLabelsRequestPBImpl() throws Exception { + void testGetNodeToLabelsRequestPBImpl() throws Exception { validatePBImplRecord(GetNodesToLabelsRequestPBImpl.class, GetNodesToLabelsRequestProto.class); } @Test - public void testGetNodeToLabelsResponsePBImpl() throws Exception { + void testGetNodeToLabelsResponsePBImpl() throws Exception { validatePBImplRecord(GetNodesToLabelsResponsePBImpl.class, GetNodesToLabelsResponseProto.class); } @Test - public void testGetLabelsToNodesRequestPBImpl() throws Exception { + void testGetLabelsToNodesRequestPBImpl() throws Exception { validatePBImplRecord(GetLabelsToNodesRequestPBImpl.class, GetLabelsToNodesRequestProto.class); } @Test - public void testGetLabelsToNodesResponsePBImpl() throws Exception { + void testGetLabelsToNodesResponsePBImpl() throws Exception { validatePBImplRecord(GetLabelsToNodesResponsePBImpl.class, 
GetLabelsToNodesResponseProto.class); } - + @Test - public void testNodeLabelAttributesPBImpl() throws Exception { + void testNodeLabelAttributesPBImpl() throws Exception { validatePBImplRecord(NodeLabelPBImpl.class, NodeLabelProto.class); } - + @Test - public void testCheckForDecommissioningNodesRequestPBImpl() throws Exception { + void testCheckForDecommissioningNodesRequestPBImpl() throws Exception { validatePBImplRecord(CheckForDecommissioningNodesRequestPBImpl.class, CheckForDecommissioningNodesRequestProto.class); } @Test - public void testCheckForDecommissioningNodesResponsePBImpl() throws Exception { + void testCheckForDecommissioningNodesResponsePBImpl() throws Exception { validatePBImplRecord(CheckForDecommissioningNodesResponsePBImpl.class, CheckForDecommissioningNodesResponseProto.class); } @Test - public void testExecutionTypeRequestPBImpl() throws Exception { + void testExecutionTypeRequestPBImpl() throws Exception { validatePBImplRecord(ExecutionTypeRequestPBImpl.class, ExecutionTypeRequestProto.class); } @Test - public void testGetAllResourceProfilesResponsePBImpl() throws Exception { + void testGetAllResourceProfilesResponsePBImpl() throws Exception { validatePBImplRecord(GetAllResourceProfilesResponsePBImpl.class, GetAllResourceProfilesResponseProto.class); } @Test - public void testGetResourceProfileRequestPBImpl() throws Exception { + void testGetResourceProfileRequestPBImpl() throws Exception { validatePBImplRecord(GetResourceProfileRequestPBImpl.class, GetResourceProfileRequestProto.class); } @Test - public void testGetResourceProfileResponsePBImpl() throws Exception { + void testGetResourceProfileResponsePBImpl() throws Exception { validatePBImplRecord(GetResourceProfileResponsePBImpl.class, GetResourceProfileResponseProto.class); } @Test - public void testResourceTypesInfoPBImpl() throws Exception { + void testResourceTypesInfoPBImpl() throws Exception { validatePBImplRecord(ResourceTypeInfoPBImpl.class, YarnProtos.ResourceTypeInfoProto.class); } @Test - public void testGetAllResourceTypesInfoRequestPBImpl() throws Exception { + void testGetAllResourceTypesInfoRequestPBImpl() throws Exception { validatePBImplRecord(GetAllResourceTypeInfoRequestPBImpl.class, YarnServiceProtos.GetAllResourceTypeInfoRequestProto.class); } @Test - public void testGetAllResourceTypesInfoResponsePBImpl() throws Exception { + void testGetAllResourceTypesInfoResponsePBImpl() throws Exception { validatePBImplRecord(GetAllResourceTypeInfoResponsePBImpl.class, YarnServiceProtos.GetAllResourceTypeInfoResponseProto.class); } @Test - public void testNodeAttributeKeyPBImpl() throws Exception { + void testNodeAttributeKeyPBImpl() throws Exception { validatePBImplRecord(NodeAttributeKeyPBImpl.class, NodeAttributeKeyProto.class); } @Test - public void testNodeToAttributeValuePBImpl() throws Exception { + void testNodeToAttributeValuePBImpl() throws Exception { validatePBImplRecord(NodeToAttributeValuePBImpl.class, NodeToAttributeValueProto.class); } @Test - public void testNodeAttributePBImpl() throws Exception { + void testNodeAttributePBImpl() throws Exception { validatePBImplRecord(NodeAttributePBImpl.class, NodeAttributeProto.class); } @Test - public void testNodeAttributeInfoPBImpl() throws Exception { + void testNodeAttributeInfoPBImpl() throws Exception { validatePBImplRecord(NodeAttributeInfoPBImpl.class, NodeAttributeInfoProto.class); } @Test - public void testNodeToAttributesPBImpl() throws Exception { + void testNodeToAttributesPBImpl() throws Exception { 
validatePBImplRecord(NodeToAttributesPBImpl.class, NodeToAttributesProto.class); } @Test - public void testNodesToAttributesMappingRequestPBImpl() throws Exception { + void testNodesToAttributesMappingRequestPBImpl() throws Exception { validatePBImplRecord(NodesToAttributesMappingRequestPBImpl.class, NodesToAttributesMappingRequestProto.class); } @Test - public void testGetAttributesToNodesRequestPBImpl() throws Exception { + void testGetAttributesToNodesRequestPBImpl() throws Exception { validatePBImplRecord(GetAttributesToNodesRequestPBImpl.class, YarnServiceProtos.GetAttributesToNodesRequestProto.class); } @Test - public void testGetAttributesToNodesResponsePBImpl() throws Exception { + void testGetAttributesToNodesResponsePBImpl() throws Exception { validatePBImplRecord(GetAttributesToNodesResponsePBImpl.class, YarnServiceProtos.GetAttributesToNodesResponseProto.class); } @Test - public void testGetClusterNodeAttributesRequestPBImpl() throws Exception { + void testGetClusterNodeAttributesRequestPBImpl() throws Exception { validatePBImplRecord(GetClusterNodeAttributesRequestPBImpl.class, YarnServiceProtos.GetClusterNodeAttributesRequestProto.class); } @Test - public void testGetClusterNodeAttributesResponsePBImpl() throws Exception { + void testGetClusterNodeAttributesResponsePBImpl() throws Exception { validatePBImplRecord(GetClusterNodeAttributesResponsePBImpl.class, YarnServiceProtos.GetClusterNodeAttributesResponseProto.class); } @Test - public void testGetNodesToAttributesRequestPBImpl() throws Exception { + void testGetNodesToAttributesRequestPBImpl() throws Exception { validatePBImplRecord(GetNodesToAttributesRequestPBImpl.class, YarnServiceProtos.GetNodesToAttributesRequestProto.class); } @Test - public void testGetNodesToAttributesResponsePBImpl() throws Exception { + void testGetNodesToAttributesResponsePBImpl() throws Exception { validatePBImplRecord(GetNodesToAttributesResponsePBImpl.class, YarnServiceProtos.GetNodesToAttributesResponseProto.class); } @Test - public void testGetEnhancedHeadroomPBImpl() throws Exception { + void testGetEnhancedHeadroomPBImpl() throws Exception { validatePBImplRecord(EnhancedHeadroomPBImpl.class, YarnServiceProtos.EnhancedHeadroomProto.class); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPlacementConstraintPBConversion.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPlacementConstraintPBConversion.java index bd245e29ce9..300fc6a42b0 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPlacementConstraintPBConversion.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPlacementConstraintPBConversion.java @@ -18,17 +18,10 @@ package org.apache.hadoop.yarn.api; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.RACK; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.cardinality; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.maxCardinality; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.or; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetCardinality; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetIn; -import static 
org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag; - import java.util.Iterator; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.api.pb.PlacementConstraintFromProtoConverter; import org.apache.hadoop.yarn.api.pb.PlacementConstraintToProtoConverter; import org.apache.hadoop.yarn.api.resource.PlacementConstraint; @@ -40,8 +33,18 @@ import org.apache.hadoop.yarn.proto.YarnProtos.CompositePlacementConstraintProto import org.apache.hadoop.yarn.proto.YarnProtos.CompositePlacementConstraintProto.CompositeType; import org.apache.hadoop.yarn.proto.YarnProtos.PlacementConstraintProto; import org.apache.hadoop.yarn.proto.YarnProtos.SimplePlacementConstraintProto; -import org.junit.Assert; -import org.junit.Test; + +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.RACK; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.cardinality; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.maxCardinality; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.or; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetCardinality; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetIn; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; /** * Test class for {@link PlacementConstraintToProtoConverter} and @@ -50,10 +53,10 @@ import org.junit.Test; public class TestPlacementConstraintPBConversion { @Test - public void testTargetConstraintProtoConverter() { + void testTargetConstraintProtoConverter() { AbstractConstraint sConstraintExpr = targetIn(NODE, allocationTag("hbase-m")); - Assert.assertTrue(sConstraintExpr instanceof SingleConstraint); + assertTrue(sConstraintExpr instanceof SingleConstraint); SingleConstraint single = (SingleConstraint) sConstraintExpr; PlacementConstraint sConstraint = PlacementConstraints.build(sConstraintExpr); @@ -63,14 +66,14 @@ public class TestPlacementConstraintPBConversion { new PlacementConstraintToProtoConverter(sConstraint); PlacementConstraintProto protoConstraint = toProtoConverter.convert(); - Assert.assertTrue(protoConstraint.hasSimpleConstraint()); - Assert.assertFalse(protoConstraint.hasCompositeConstraint()); + assertTrue(protoConstraint.hasSimpleConstraint()); + assertFalse(protoConstraint.hasCompositeConstraint()); SimplePlacementConstraintProto sProto = protoConstraint.getSimpleConstraint(); - Assert.assertEquals(single.getScope(), sProto.getScope()); - Assert.assertEquals(single.getMinCardinality(), sProto.getMinCardinality()); - Assert.assertEquals(single.getMaxCardinality(), sProto.getMaxCardinality()); - Assert.assertEquals(single.getTargetExpressions().size(), + assertEquals(single.getScope(), sProto.getScope()); + assertEquals(single.getMinCardinality(), sProto.getMinCardinality()); + assertEquals(single.getMaxCardinality(), sProto.getMaxCardinality()); + assertEquals(single.getTargetExpressions().size(), sProto.getTargetExpressionsList().size()); // Convert from proto. 
@@ -79,21 +82,21 @@ public class TestPlacementConstraintPBConversion { PlacementConstraint newConstraint = fromProtoConverter.convert(); AbstractConstraint newConstraintExpr = newConstraint.getConstraintExpr(); - Assert.assertTrue(newConstraintExpr instanceof SingleConstraint); + assertTrue(newConstraintExpr instanceof SingleConstraint); SingleConstraint newSingle = (SingleConstraint) newConstraintExpr; - Assert.assertEquals(single.getScope(), newSingle.getScope()); - Assert.assertEquals(single.getMinCardinality(), + assertEquals(single.getScope(), newSingle.getScope()); + assertEquals(single.getMinCardinality(), newSingle.getMinCardinality()); - Assert.assertEquals(single.getMaxCardinality(), + assertEquals(single.getMaxCardinality(), newSingle.getMaxCardinality()); - Assert.assertEquals(single.getTargetExpressions(), + assertEquals(single.getTargetExpressions(), newSingle.getTargetExpressions()); } @Test - public void testCardinalityConstraintProtoConverter() { + void testCardinalityConstraintProtoConverter() { AbstractConstraint sConstraintExpr = cardinality(RACK, 3, 10); - Assert.assertTrue(sConstraintExpr instanceof SingleConstraint); + assertTrue(sConstraintExpr instanceof SingleConstraint); SingleConstraint single = (SingleConstraint) sConstraintExpr; PlacementConstraint sConstraint = PlacementConstraints.build(sConstraintExpr); @@ -111,17 +114,17 @@ public class TestPlacementConstraintPBConversion { PlacementConstraint newConstraint = fromProtoConverter.convert(); AbstractConstraint newConstraintExpr = newConstraint.getConstraintExpr(); - Assert.assertTrue(newConstraintExpr instanceof SingleConstraint); + assertTrue(newConstraintExpr instanceof SingleConstraint); SingleConstraint newSingle = (SingleConstraint) newConstraintExpr; compareSimpleConstraints(single, newSingle); } @Test - public void testCompositeConstraintProtoConverter() { + void testCompositeConstraintProtoConverter() { AbstractConstraint constraintExpr = or(targetIn(RACK, allocationTag("spark")), maxCardinality(NODE, 3), targetCardinality(RACK, 2, 10, allocationTag("zk"))); - Assert.assertTrue(constraintExpr instanceof Or); + assertTrue(constraintExpr instanceof Or); PlacementConstraint constraint = PlacementConstraints.build(constraintExpr); Or orExpr = (Or) constraintExpr; @@ -130,14 +133,14 @@ public class TestPlacementConstraintPBConversion { new PlacementConstraintToProtoConverter(constraint); PlacementConstraintProto protoConstraint = toProtoConverter.convert(); - Assert.assertFalse(protoConstraint.hasSimpleConstraint()); - Assert.assertTrue(protoConstraint.hasCompositeConstraint()); + assertFalse(protoConstraint.hasSimpleConstraint()); + assertTrue(protoConstraint.hasCompositeConstraint()); CompositePlacementConstraintProto cProto = protoConstraint.getCompositeConstraint(); - Assert.assertEquals(CompositeType.OR, cProto.getCompositeType()); - Assert.assertEquals(3, cProto.getChildConstraintsCount()); - Assert.assertEquals(0, cProto.getTimedChildConstraintsCount()); + assertEquals(CompositeType.OR, cProto.getCompositeType()); + assertEquals(3, cProto.getChildConstraintsCount()); + assertEquals(0, cProto.getTimedChildConstraintsCount()); Iterator orChildren = orExpr.getChildren().iterator(); Iterator orProtoChildren = cProto.getChildConstraintsList().iterator(); @@ -153,9 +156,9 @@ public class TestPlacementConstraintPBConversion { PlacementConstraint newConstraint = fromProtoConverter.convert(); AbstractConstraint newConstraintExpr = newConstraint.getConstraintExpr(); - Assert.assertTrue(newConstraintExpr 
instanceof Or); + assertTrue(newConstraintExpr instanceof Or); Or newOrExpr = (Or) newConstraintExpr; - Assert.assertEquals(3, newOrExpr.getChildren().size()); + assertEquals(3, newOrExpr.getChildren().size()); orChildren = orExpr.getChildren().iterator(); Iterator newOrChildren = newOrExpr.getChildren().iterator(); @@ -169,26 +172,26 @@ public class TestPlacementConstraintPBConversion { private void compareSimpleConstraintToProto(SingleConstraint constraint, PlacementConstraintProto proto) { - Assert.assertTrue(proto.hasSimpleConstraint()); - Assert.assertFalse(proto.hasCompositeConstraint()); + assertTrue(proto.hasSimpleConstraint()); + assertFalse(proto.hasCompositeConstraint()); SimplePlacementConstraintProto sProto = proto.getSimpleConstraint(); - Assert.assertEquals(constraint.getScope(), sProto.getScope()); - Assert.assertEquals(constraint.getMinCardinality(), + assertEquals(constraint.getScope(), sProto.getScope()); + assertEquals(constraint.getMinCardinality(), sProto.getMinCardinality()); - Assert.assertEquals(constraint.getMaxCardinality(), + assertEquals(constraint.getMaxCardinality(), sProto.getMaxCardinality()); - Assert.assertEquals(constraint.getTargetExpressions().size(), + assertEquals(constraint.getTargetExpressions().size(), sProto.getTargetExpressionsList().size()); } private void compareSimpleConstraints(SingleConstraint single, SingleConstraint newSingle) { - Assert.assertEquals(single.getScope(), newSingle.getScope()); - Assert.assertEquals(single.getMinCardinality(), + assertEquals(single.getScope(), newSingle.getScope()); + assertEquals(single.getMinCardinality(), newSingle.getMinCardinality()); - Assert.assertEquals(single.getMaxCardinality(), + assertEquals(single.getMaxCardinality(), newSingle.getMaxCardinality()); - Assert.assertEquals(single.getTargetExpressions(), + assertEquals(single.getTargetExpressions(), newSingle.getTargetExpressions()); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourcePBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourcePBImpl.java index c92e73f44fd..f564932bd40 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourcePBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourcePBImpl.java @@ -20,6 +20,10 @@ package org.apache.hadoop.yarn.api; import java.io.File; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.api.records.ResourceInformation; @@ -29,20 +33,19 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.proto.YarnProtos; import org.apache.hadoop.yarn.util.resource.ResourceUtils; import org.apache.hadoop.yarn.util.resource.TestResourceUtils; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; /** * Test class to handle various proto related tests for 
resources. */ public class TestResourcePBImpl { - @Before + @BeforeEach public void setup() throws Exception { ResourceUtils.resetResourceTypes(); @@ -51,7 +54,7 @@ public class TestResourcePBImpl { TestResourceUtils.setupResourceTypes(conf, resourceTypesFile); } - @After + @AfterEach public void teardown() { Configuration conf = new YarnConfiguration(); File source = new File( @@ -63,80 +66,80 @@ public class TestResourcePBImpl { } @Test - public void testEmptyResourcePBInit() throws Exception { + void testEmptyResourcePBInit() throws Exception { Resource res = new ResourcePBImpl(); // Assert to check it sets resource value and unit to default. - Assert.assertEquals(0, res.getMemorySize()); - Assert.assertEquals(ResourceInformation.MEMORY_MB.getUnits(), + assertEquals(0, res.getMemorySize()); + assertEquals(ResourceInformation.MEMORY_MB.getUnits(), res.getResourceInformation(ResourceInformation.MEMORY_MB.getName()) .getUnits()); - Assert.assertEquals(ResourceInformation.VCORES.getUnits(), + assertEquals(ResourceInformation.VCORES.getUnits(), res.getResourceInformation(ResourceInformation.VCORES.getName()) .getUnits()); } @Test - public void testResourcePBInitFromOldPB() throws Exception { + void testResourcePBInitFromOldPB() throws Exception { YarnProtos.ResourceProto proto = YarnProtos.ResourceProto.newBuilder().setMemory(1024).setVirtualCores(3) .build(); // Assert to check it sets resource value and unit to default. Resource res = new ResourcePBImpl(proto); - Assert.assertEquals(1024, res.getMemorySize()); - Assert.assertEquals(3, res.getVirtualCores()); - Assert.assertEquals(ResourceInformation.MEMORY_MB.getUnits(), + assertEquals(1024, res.getMemorySize()); + assertEquals(3, res.getVirtualCores()); + assertEquals(ResourceInformation.MEMORY_MB.getUnits(), res.getResourceInformation(ResourceInformation.MEMORY_MB.getName()) .getUnits()); - Assert.assertEquals(ResourceInformation.VCORES.getUnits(), + assertEquals(ResourceInformation.VCORES.getUnits(), res.getResourceInformation(ResourceInformation.VCORES.getName()) .getUnits()); } @Test @SuppressWarnings("deprecation") - public void testGetMemory() { + void testGetMemory() { Resource res = new ResourcePBImpl(); long memorySize = Integer.MAX_VALUE + 1L; res.setMemorySize(memorySize); - assertEquals("No need to cast if both are long", memorySize, - res.getMemorySize()); - assertEquals("Cast to Integer.MAX_VALUE if the long is greater than " - + "Integer.MAX_VALUE", Integer.MAX_VALUE, res.getMemory()); + assertEquals(memorySize, res.getMemorySize(), "No need to cast if both are long"); + assertEquals(Integer.MAX_VALUE, res.getMemory(), + "Cast to Integer.MAX_VALUE if the long is greater than " + "Integer.MAX_VALUE"); } @Test - public void testGetVirtualCores() { + void testGetVirtualCores() { Resource res = new ResourcePBImpl(); long vcores = Integer.MAX_VALUE + 1L; res.getResourceInformation("vcores").setValue(vcores); - assertEquals("No need to cast if both are long", vcores, - res.getResourceInformation("vcores").getValue()); - assertEquals("Cast to Integer.MAX_VALUE if the long is greater than " - + "Integer.MAX_VALUE", Integer.MAX_VALUE, res.getVirtualCores()); + assertEquals(vcores, + res.getResourceInformation("vcores").getValue(), + "No need to cast if both are long"); + assertEquals(Integer.MAX_VALUE, res.getVirtualCores(), + "Cast to Integer.MAX_VALUE if the long is greater than " + "Integer.MAX_VALUE"); } @Test - public void testResourcePBWithExtraResources() throws Exception { + void testResourcePBWithExtraResources() throws 
Exception { //Resource 'resource1' has been passed as 4T //4T should be converted to 4000G YarnProtos.ResourceInformationProto riProto = YarnProtos.ResourceInformationProto.newBuilder().setType( YarnProtos.ResourceTypeInfoProto.newBuilder(). - setName("resource1").setType( + setName("resource1").setType( YarnProtos.ResourceTypesProto.COUNTABLE).getType()). - setValue(4).setUnits("T").setKey("resource1").build(); + setValue(4).setUnits("T").setKey("resource1").build(); YarnProtos.ResourceProto proto = YarnProtos.ResourceProto.newBuilder().setMemory(1024). - setVirtualCores(3).addResourceValueMap(riProto).build(); + setVirtualCores(3).addResourceValueMap(riProto).build(); Resource res = new ResourcePBImpl(proto); - Assert.assertEquals(4000, + assertEquals(4000, res.getResourceInformation("resource1").getValue()); - Assert.assertEquals("G", + assertEquals("G", res.getResourceInformation("resource1").getUnits()); //Resource 'resource2' has been passed as 4M @@ -144,18 +147,18 @@ public class TestResourcePBImpl { YarnProtos.ResourceInformationProto riProto1 = YarnProtos.ResourceInformationProto.newBuilder().setType( YarnProtos.ResourceTypeInfoProto.newBuilder(). - setName("resource2").setType( + setName("resource2").setType( YarnProtos.ResourceTypesProto.COUNTABLE).getType()). - setValue(4).setUnits("M").setKey("resource2").build(); + setValue(4).setUnits("M").setKey("resource2").build(); YarnProtos.ResourceProto proto1 = YarnProtos.ResourceProto.newBuilder().setMemory(1024). - setVirtualCores(3).addResourceValueMap(riProto1).build(); + setVirtualCores(3).addResourceValueMap(riProto1).build(); Resource res1 = new ResourcePBImpl(proto1); - Assert.assertEquals(4000000000L, + assertEquals(4000000000L, res1.getResourceInformation("resource2").getValue()); - Assert.assertEquals("m", + assertEquals("m", res1.getResourceInformation("resource2").getUnits()); //Resource 'resource1' has been passed as 3M @@ -163,23 +166,23 @@ public class TestResourcePBImpl { YarnProtos.ResourceInformationProto riProto2 = YarnProtos.ResourceInformationProto.newBuilder().setType( YarnProtos.ResourceTypeInfoProto.newBuilder(). - setName("resource1").setType( + setName("resource1").setType( YarnProtos.ResourceTypesProto.COUNTABLE).getType()). - setValue(3).setUnits("M").setKey("resource1").build(); + setValue(3).setUnits("M").setKey("resource1").build(); YarnProtos.ResourceProto proto2 = YarnProtos.ResourceProto.newBuilder().setMemory(1024). 
- setVirtualCores(3).addResourceValueMap(riProto2).build(); + setVirtualCores(3).addResourceValueMap(riProto2).build(); Resource res2 = new ResourcePBImpl(proto2); - Assert.assertEquals(0, + assertEquals(0, res2.getResourceInformation("resource1").getValue()); - Assert.assertEquals("G", + assertEquals("G", res2.getResourceInformation("resource1").getUnits()); } @Test - public void testResourceTags() { + void testResourceTags() { YarnProtos.ResourceInformationProto riProto = YarnProtos.ResourceInformationProto.newBuilder() .setType( @@ -201,19 +204,19 @@ public class TestResourcePBImpl { .build(); Resource res = new ResourcePBImpl(proto); - Assert.assertNotNull(res.getResourceInformation("yarn.io/test-volume")); - Assert.assertEquals(10, + assertNotNull(res.getResourceInformation("yarn.io/test-volume")); + assertEquals(10, res.getResourceInformation("yarn.io/test-volume") .getValue()); - Assert.assertEquals("G", + assertEquals("G", res.getResourceInformation("yarn.io/test-volume") .getUnits()); - Assert.assertEquals(3, + assertEquals(3, res.getResourceInformation("yarn.io/test-volume") .getTags().size()); - Assert.assertFalse(res.getResourceInformation("yarn.io/test-volume") + assertFalse(res.getResourceInformation("yarn.io/test-volume") .getTags().isEmpty()); - Assert.assertTrue(res.getResourceInformation("yarn.io/test-volume") + assertTrue(res.getResourceInformation("yarn.io/test-volume") .getAttributes().isEmpty()); boolean protoConvertExpected = false; @@ -225,13 +228,13 @@ public class TestResourcePBImpl { && pf.getTagsCount() == 3; } } - Assert.assertTrue("Expecting resource's protobuf message" - + " contains 0 attributes and 3 tags", - protoConvertExpected); + assertTrue(protoConvertExpected, + "Expecting resource's protobuf message" + + " contains 0 attributes and 3 tags"); } @Test - public void testResourceAttributes() { + void testResourceAttributes() { YarnProtos.ResourceInformationProto riProto = YarnProtos.ResourceInformationProto.newBuilder() .setType( @@ -260,19 +263,19 @@ public class TestResourcePBImpl { .build(); Resource res = new ResourcePBImpl(proto); - Assert.assertNotNull(res.getResourceInformation("yarn.io/test-volume")); - Assert.assertEquals(10, + assertNotNull(res.getResourceInformation("yarn.io/test-volume")); + assertEquals(10, res.getResourceInformation("yarn.io/test-volume") .getValue()); - Assert.assertEquals("G", + assertEquals("G", res.getResourceInformation("yarn.io/test-volume") .getUnits()); - Assert.assertEquals(2, + assertEquals(2, res.getResourceInformation("yarn.io/test-volume") .getAttributes().size()); - Assert.assertTrue(res.getResourceInformation("yarn.io/test-volume") + assertTrue(res.getResourceInformation("yarn.io/test-volume") .getTags().isEmpty()); - Assert.assertFalse(res.getResourceInformation("yarn.io/test-volume") + assertFalse(res.getResourceInformation("yarn.io/test-volume") .getAttributes().isEmpty()); boolean protoConvertExpected = false; @@ -284,20 +287,20 @@ public class TestResourcePBImpl { && pf.getTagsCount() == 0; } } - Assert.assertTrue("Expecting resource's protobuf message" - + " contains 2 attributes and 0 tags", - protoConvertExpected); + assertTrue(protoConvertExpected, + "Expecting resource's protobuf message" + + " contains 2 attributes and 0 tags"); } @Test - public void testParsingResourceTags() { + void testParsingResourceTags() { ResourceInformation info = ResourceUtils.getResourceTypes().get("resource3"); - Assert.assertTrue(info.getAttributes().isEmpty()); - Assert.assertFalse(info.getTags().isEmpty()); + 
assertTrue(info.getAttributes().isEmpty()); + assertFalse(info.getTags().isEmpty()); assertThat(info.getTags()).hasSize(2); info.getTags().remove("resource3_tag_1"); info.getTags().remove("resource3_tag_2"); - Assert.assertTrue(info.getTags().isEmpty()); + assertTrue(info.getTags().isEmpty()); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourceRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourceRequest.java index aef838cd17e..d3f3fe90d48 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourceRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestResourceRequest.java @@ -17,13 +17,15 @@ */ package org.apache.hadoop.yarn.api; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.api.records.ExecutionType; import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest; import org.apache.hadoop.yarn.api.records.Priority; import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.api.records.ResourceRequest; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertNotEquals; /** * The class to test {@link ResourceRequest}. @@ -31,7 +33,7 @@ import org.junit.Test; public class TestResourceRequest { @Test - public void testEqualsOnExecutionTypeRequest() { + void testEqualsOnExecutionTypeRequest() { ResourceRequest resourceRequestA = ResourceRequest.newInstance(Priority.newInstance(0), "localhost", Resource.newInstance(1024, 1), 1, false, "", @@ -42,6 +44,6 @@ public class TestResourceRequest { Resource.newInstance(1024, 1), 1, false, "", ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED, false)); - Assert.assertFalse(resourceRequestA.equals(resourceRequestB)); + assertNotEquals(resourceRequestA, resourceRequestB); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestTimelineEntityGroupId.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestTimelineEntityGroupId.java index 55b149640d3..952b7fede76 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestTimelineEntityGroupId.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestTimelineEntityGroupId.java @@ -18,15 +18,20 @@ package org.apache.hadoop.yarn.api; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.timeline.TimelineEntityGroupId; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestTimelineEntityGroupId { @Test - public void testTimelineEntityGroupId() { + void testTimelineEntityGroupId() { ApplicationId appId1 = ApplicationId.newInstance(1234, 1); ApplicationId appId2 = ApplicationId.newInstance(1234, 2); TimelineEntityGroupId group1 = TimelineEntityGroupId.newInstance(appId1, "1"); @@ -34,19 +39,19 @@ public class TestTimelineEntityGroupId { TimelineEntityGroupId group3 = TimelineEntityGroupId.newInstance(appId2, "1"); 
TimelineEntityGroupId group4 = TimelineEntityGroupId.newInstance(appId1, "1"); - Assert.assertTrue(group1.equals(group4)); - Assert.assertFalse(group1.equals(group2)); - Assert.assertFalse(group1.equals(group3)); + assertEquals(group1, group4); + assertNotEquals(group1, group2); + assertNotEquals(group1, group3); - Assert.assertTrue(group1.compareTo(group4) == 0); - Assert.assertTrue(group1.compareTo(group2) < 0); - Assert.assertTrue(group1.compareTo(group3) < 0); + assertTrue(group1.compareTo(group4) == 0); + assertTrue(group1.compareTo(group2) < 0); + assertTrue(group1.compareTo(group3) < 0); - Assert.assertTrue(group1.hashCode() == group4.hashCode()); - Assert.assertFalse(group1.hashCode() == group2.hashCode()); - Assert.assertFalse(group1.hashCode() == group3.hashCode()); + assertTrue(group1.hashCode() == group4.hashCode()); + assertFalse(group1.hashCode() == group2.hashCode()); + assertFalse(group1.hashCode() == group3.hashCode()); - Assert.assertEquals("timelineEntityGroupId_1234_1_1", group1.toString()); - Assert.assertEquals(TimelineEntityGroupId.fromString("timelineEntityGroupId_1234_1_1"), group1); + assertEquals("timelineEntityGroupId_1234_1_1", group1.toString()); + assertEquals(TimelineEntityGroupId.fromString("timelineEntityGroupId_1234_1_1"), group1); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/TestGetApplicationsRequestPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/TestGetApplicationsRequestPBImpl.java index 35f9aa54e64..beea949b5e5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/TestGetApplicationsRequestPBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/TestGetApplicationsRequestPBImpl.java @@ -18,57 +18,64 @@ package org.apache.hadoop.yarn.api.protocolrecords.impl.pb; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotEquals; - import java.util.ArrayList; import java.util.Collection; import java.util.List; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.MethodSource; + import org.apache.hadoop.util.Sets; import org.apache.hadoop.yarn.proto.YarnServiceProtos.GetApplicationsRequestProto; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; -import org.junit.runners.Parameterized.Parameter; -import org.junit.runners.Parameterized.Parameters; -@RunWith(Parameterized.class) +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotEquals; + public class TestGetApplicationsRequestPBImpl { - @Parameter @SuppressWarnings("checkstyle:visibilitymodifier") public GetApplicationsRequestPBImpl impl; - @Test - public void testAppTagsLowerCaseConversionDefault() { - impl.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); - impl.getApplicationTags().forEach(s -> - assertEquals(s, s.toLowerCase())); + @MethodSource("data") + @ParameterizedTest + void testAppTagsLowerCaseConversionDefault( + GetApplicationsRequestPBImpl applicationsRequestPBImpl) { + initTestGetApplicationsRequestPBImpl(applicationsRequestPBImpl); + applicationsRequestPBImpl.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); + applicationsRequestPBImpl.getApplicationTags().forEach(s -> assertEquals(s, 
s.toLowerCase())); } - @Test - public void testAppTagsLowerCaseConversionDisabled() { + @MethodSource("data") + @ParameterizedTest + void testAppTagsLowerCaseConversionDisabled( + GetApplicationsRequestPBImpl applicationsRequestPBImpl) { + initTestGetApplicationsRequestPBImpl(applicationsRequestPBImpl); GetApplicationsRequestPBImpl.setForceLowerCaseTags(false); - impl.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); - impl.getApplicationTags().forEach(s -> - assertNotEquals(s, s.toLowerCase())); + applicationsRequestPBImpl.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); + applicationsRequestPBImpl.getApplicationTags() + .forEach(s -> assertNotEquals(s, s.toLowerCase())); } - @Test - public void testAppTagsLowerCaseConversionEnabled() { + @MethodSource("data") + @ParameterizedTest + void testAppTagsLowerCaseConversionEnabled( + GetApplicationsRequestPBImpl applicationsRequestPBImpl) { + initTestGetApplicationsRequestPBImpl(applicationsRequestPBImpl); GetApplicationsRequestPBImpl.setForceLowerCaseTags(true); - impl.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); - impl.getApplicationTags().forEach(s -> - assertEquals(s, s.toLowerCase())); + applicationsRequestPBImpl.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); + applicationsRequestPBImpl.getApplicationTags().forEach(s -> assertEquals(s, s.toLowerCase())); } - @Parameters public static Collection<Object[]> data() { List<Object[]> list = new ArrayList<>(); - list.add(new Object[] {new GetApplicationsRequestPBImpl()}); - list.add(new Object[] {new GetApplicationsRequestPBImpl( + list.add(new Object[]{new GetApplicationsRequestPBImpl()}); + list.add(new Object[]{new GetApplicationsRequestPBImpl( GetApplicationsRequestProto.newBuilder().build())}); return list; } + + public void initTestGetApplicationsRequestPBImpl( + GetApplicationsRequestPBImpl applicationsRequestPBImpl) { + this.impl = applicationsRequestPBImpl; + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java index a2b05708326..27c05e082e5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java @@ -18,54 +18,58 @@ package org.apache.hadoop.yarn.api.records; -import org.junit.Assert; -import org.junit.Test; - import java.util.HashMap; import java.util.Map; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; + public class TestResourceUtilization { @Test - public void testResourceUtilization() { + void testResourceUtilization() { ResourceUtilization u1 = ResourceUtilization.newInstance(10, 20, 0.5f); ResourceUtilization u2 = ResourceUtilization.newInstance(u1); ResourceUtilization u3 = ResourceUtilization.newInstance(10, 20, 0.5f); ResourceUtilization u4 = ResourceUtilization.newInstance(20, 20, 0.5f); ResourceUtilization u5 = ResourceUtilization.newInstance(30, 40, 0.8f); - Assert.assertEquals(u1, u2); - Assert.assertEquals(u1, u3); - Assert.assertNotEquals(u1, u4); - Assert.assertNotEquals(u2, u5); - Assert.assertNotEquals(u4, u5); +
assertEquals(u1, u2); + assertEquals(u1, u3); + assertNotEquals(u1, u4); + assertNotEquals(u2, u5); + assertNotEquals(u4, u5); - Assert.assertTrue(u1.hashCode() == u2.hashCode()); - Assert.assertTrue(u1.hashCode() == u3.hashCode()); - Assert.assertFalse(u1.hashCode() == u4.hashCode()); - Assert.assertFalse(u2.hashCode() == u5.hashCode()); - Assert.assertFalse(u4.hashCode() == u5.hashCode()); + assertTrue(u1.hashCode() == u2.hashCode()); + assertTrue(u1.hashCode() == u3.hashCode()); + assertFalse(u1.hashCode() == u4.hashCode()); + assertFalse(u2.hashCode() == u5.hashCode()); + assertFalse(u4.hashCode() == u5.hashCode()); - Assert.assertTrue(u1.getPhysicalMemory() == 10); - Assert.assertFalse(u1.getVirtualMemory() == 10); - Assert.assertTrue(u1.getCPU() == 0.5f); + assertTrue(u1.getPhysicalMemory() == 10); + assertFalse(u1.getVirtualMemory() == 10); + assertTrue(u1.getCPU() == 0.5f); - Assert.assertEquals("", u1.toString()); u1.addTo(10, 0, 0.0f); - Assert.assertNotEquals(u1, u2); - Assert.assertEquals(u1, u4); + assertNotEquals(u1, u2); + assertEquals(u1, u4); u1.addTo(10, 20, 0.3f); - Assert.assertEquals(u1, u5); + assertEquals(u1, u5); u1.subtractFrom(10, 20, 0.3f); - Assert.assertEquals(u1, u4); + assertEquals(u1, u4); u1.subtractFrom(10, 0, 0.0f); - Assert.assertEquals(u1, u3); + assertEquals(u1, u3); } @Test - public void testResourceUtilizationWithCustomResource() { + void testResourceUtilizationWithCustomResource() { Map customResources = new HashMap<>(); customResources.put(ResourceInformation.GPU_URI, 5.0f); ResourceUtilization u1 = ResourceUtilization. @@ -78,35 +82,35 @@ public class TestResourceUtilization { ResourceUtilization u5 = ResourceUtilization. newInstance(30, 40, 0.8f, customResources); - Assert.assertEquals(u1, u2); - Assert.assertEquals(u1, u3); - Assert.assertNotEquals(u1, u4); - Assert.assertNotEquals(u2, u5); - Assert.assertNotEquals(u4, u5); + assertEquals(u1, u2); + assertEquals(u1, u3); + assertNotEquals(u1, u4); + assertNotEquals(u2, u5); + assertNotEquals(u4, u5); - Assert.assertTrue(u1.hashCode() == u2.hashCode()); - Assert.assertTrue(u1.hashCode() == u3.hashCode()); - Assert.assertFalse(u1.hashCode() == u4.hashCode()); - Assert.assertFalse(u2.hashCode() == u5.hashCode()); - Assert.assertFalse(u4.hashCode() == u5.hashCode()); + assertTrue(u1.hashCode() == u2.hashCode()); + assertTrue(u1.hashCode() == u3.hashCode()); + assertFalse(u1.hashCode() == u4.hashCode()); + assertFalse(u2.hashCode() == u5.hashCode()); + assertFalse(u4.hashCode() == u5.hashCode()); - Assert.assertTrue(u1.getPhysicalMemory() == 10); - Assert.assertFalse(u1.getVirtualMemory() == 10); - Assert.assertTrue(u1.getCPU() == 0.5f); - Assert.assertTrue(u1. + assertTrue(u1.getPhysicalMemory() == 10); + assertFalse(u1.getVirtualMemory() == 10); + assertTrue(u1.getCPU() == 0.5f); + assertTrue(u1. 
getCustomResource(ResourceInformation.GPU_URI) == 5.0f); - Assert.assertEquals("", u1.toString()); u1.addTo(10, 0, 0.0f); - Assert.assertNotEquals(u1, u2); - Assert.assertEquals(u1, u4); + assertNotEquals(u1, u2); + assertEquals(u1, u4); u1.addTo(10, 20, 0.3f); - Assert.assertEquals(u1, u5); + assertEquals(u1, u5); u1.subtractFrom(10, 20, 0.3f); - Assert.assertEquals(u1, u4); + assertEquals(u1, u4); u1.subtractFrom(10, 0, 0.0f); - Assert.assertEquals(u1, u3); + assertEquals(u1, u3); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestApplicationClientProtocolRecords.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestApplicationClientProtocolRecords.java index 6c51516434e..4ce6646a716 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestApplicationClientProtocolRecords.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestApplicationClientProtocolRecords.java @@ -25,6 +25,8 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.fs.Path; import org.apache.hadoop.io.DataOutputBuffer; import org.apache.hadoop.security.Credentials; @@ -36,8 +38,10 @@ import org.apache.hadoop.yarn.api.records.LocalResourceVisibility; import org.apache.hadoop.yarn.api.records.URL; import org.apache.hadoop.yarn.factories.RecordFactory; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; public class TestApplicationClientProtocolRecords { @@ -47,7 +51,7 @@ public class TestApplicationClientProtocolRecords { * */ @Test - public void testCLCPBImplNullEnv() throws IOException { + void testCLCPBImplNullEnv() throws IOException { Map localResources = Collections.emptyMap(); Map environment = new HashMap(); List commands = Collections.emptyList(); @@ -68,7 +72,7 @@ public class TestApplicationClientProtocolRecords { ContainerLaunchContext clcProto = new ContainerLaunchContextPBImpl( ((ContainerLaunchContextPBImpl) clc).getProto()); - Assert.assertEquals("", + assertEquals("", clcProto.getEnvironment().get("testCLCPBImplNullEnv")); } @@ -78,7 +82,7 @@ public class TestApplicationClientProtocolRecords { * local resource URL. 
*/ @Test - public void testCLCPBImplNullResourceURL() throws IOException { + void testCLCPBImplNullResourceURL() throws IOException { RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); try { LocalResource rsrc_alpha = recordFactory.newRecordInstance(LocalResource.class); @@ -92,9 +96,9 @@ public class TestApplicationClientProtocolRecords { localResources.put("null_url_resource", rsrc_alpha); ContainerLaunchContext containerLaunchContext = recordFactory.newRecordInstance(ContainerLaunchContext.class); containerLaunchContext.setLocalResources(localResources); - Assert.fail("Setting an invalid local resource should be an error!"); + fail("Setting an invalid local resource should be an error!"); } catch (NullPointerException e) { - Assert.assertTrue(e.getMessage().contains("Null resource URL for local resource")); + assertTrue(e.getMessage().contains("Null resource URL for local resource")); } } @@ -103,7 +107,7 @@ public class TestApplicationClientProtocolRecords { * local resource type. */ @Test - public void testCLCPBImplNullResourceType() throws IOException { + void testCLCPBImplNullResourceType() throws IOException { RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); try { LocalResource resource = recordFactory.newRecordInstance(LocalResource.class); @@ -117,9 +121,9 @@ public class TestApplicationClientProtocolRecords { localResources.put("null_type_resource", resource); ContainerLaunchContext containerLaunchContext = recordFactory.newRecordInstance(ContainerLaunchContext.class); containerLaunchContext.setLocalResources(localResources); - Assert.fail("Setting an invalid local resource should be an error!"); + fail("Setting an invalid local resource should be an error!"); } catch (NullPointerException e) { - Assert.assertTrue(e.getMessage().contains("Null resource type for local resource")); + assertTrue(e.getMessage().contains("Null resource type for local resource")); } } @@ -128,7 +132,7 @@ public class TestApplicationClientProtocolRecords { * local resource type. 
*/ @Test - public void testCLCPBImplNullResourceVisibility() throws IOException { + void testCLCPBImplNullResourceVisibility() throws IOException { RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); try { LocalResource resource = recordFactory.newRecordInstance(LocalResource.class); @@ -142,9 +146,9 @@ public class TestApplicationClientProtocolRecords { localResources.put("null_visibility_resource", resource); ContainerLaunchContext containerLaunchContext = recordFactory.newRecordInstance(ContainerLaunchContext.class); containerLaunchContext.setLocalResources(localResources); - Assert.fail("Setting an invalid local resource should be an error!"); + fail("Setting an invalid local resource should be an error!"); } catch (NullPointerException e) { - Assert.assertTrue(e.getMessage().contains("Null resource visibility for local resource")); + assertTrue(e.getMessage().contains("Null resource visibility for local resource")); } } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestApplicationSubmissionContextPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestApplicationSubmissionContextPBImpl.java index b6b2dbb6fb3..806c0725bdb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestApplicationSubmissionContextPBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestApplicationSubmissionContextPBImpl.java @@ -18,57 +18,66 @@ package org.apache.hadoop.yarn.api.records.impl.pb; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotEquals; - import java.util.ArrayList; import java.util.Collection; import java.util.List; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.MethodSource; + import org.apache.hadoop.util.Sets; import org.apache.hadoop.yarn.proto.YarnProtos.ApplicationSubmissionContextProto; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; -import org.junit.runners.Parameterized.Parameter; -import org.junit.runners.Parameterized.Parameters; -@RunWith(Parameterized.class) +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotEquals; + public class TestApplicationSubmissionContextPBImpl { - @Parameter @SuppressWarnings("checkstyle:visibilitymodifier") public ApplicationSubmissionContextPBImpl impl; - @Test - public void testAppTagsLowerCaseConversionDefault() { - impl.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); - impl.getApplicationTags().forEach(s -> - assertEquals(s, s.toLowerCase())); + @MethodSource("data") + @ParameterizedTest + void testAppTagsLowerCaseConversionDefault( + ApplicationSubmissionContextPBImpl applicationSubmissionContextPB) { + initTestApplicationSubmissionContextPBImpl(applicationSubmissionContextPB); + applicationSubmissionContextPB.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); + applicationSubmissionContextPB.getApplicationTags() + .forEach(s -> assertEquals(s, s.toLowerCase())); } - @Test - public void testAppTagsLowerCaseConversionDisabled() { + @MethodSource("data") + @ParameterizedTest + void testAppTagsLowerCaseConversionDisabled( + ApplicationSubmissionContextPBImpl applicationSubmissionContextPB) { + 
initTestApplicationSubmissionContextPBImpl(applicationSubmissionContextPB); ApplicationSubmissionContextPBImpl.setForceLowerCaseTags(false); - impl.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); - impl.getApplicationTags().forEach(s -> - assertNotEquals(s, s.toLowerCase())); + applicationSubmissionContextPB.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); + applicationSubmissionContextPB.getApplicationTags() + .forEach(s -> assertNotEquals(s, s.toLowerCase())); } - @Test - public void testAppTagsLowerCaseConversionEnabled() { + @MethodSource("data") + @ParameterizedTest + void testAppTagsLowerCaseConversionEnabled( + ApplicationSubmissionContextPBImpl applicationSubmissionContextPB) { + initTestApplicationSubmissionContextPBImpl(applicationSubmissionContextPB); ApplicationSubmissionContextPBImpl.setForceLowerCaseTags(true); - impl.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); - impl.getApplicationTags().forEach(s -> - assertEquals(s, s.toLowerCase())); + applicationSubmissionContextPB.setApplicationTags(Sets.newHashSet("ABcd", "efgH")); + applicationSubmissionContextPB.getApplicationTags() + .forEach(s -> assertEquals(s, s.toLowerCase())); } - @Parameters public static Collection data() { List list = new ArrayList<>(); - list.add(new Object[] {new ApplicationSubmissionContextPBImpl()}); - list.add(new Object[] {new ApplicationSubmissionContextPBImpl( + list.add(new Object[]{new ApplicationSubmissionContextPBImpl()}); + list.add(new Object[]{new ApplicationSubmissionContextPBImpl( ApplicationSubmissionContextProto.newBuilder().build())}); return list; } + + public void initTestApplicationSubmissionContextPBImpl( + ApplicationSubmissionContextPBImpl applicationSubmissionContextPB) { + this.impl = applicationSubmissionContextPB; + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestProtoUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestProtoUtils.java index b3330694837..6d1f20f269e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestProtoUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestProtoUtils.java @@ -17,20 +17,21 @@ */ package org.apache.hadoop.yarn.api.records.impl.pb; -import static org.junit.Assert.*; - import java.util.stream.Stream; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.api.records.ContainerState; import org.apache.hadoop.yarn.api.records.ContainerSubState; import org.apache.hadoop.yarn.proto.YarnProtos.ContainerStateProto; import org.apache.hadoop.yarn.proto.YarnProtos.ContainerSubStateProto; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.fail; public class TestProtoUtils { @Test - public void testConvertFromOrToProtoFormat() { + void testConvertFromOrToProtoFormat() { // Check if utility has all enum values try { Stream.of(ContainerState.values()) diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestSerializedExceptionPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestSerializedExceptionPBImpl.java index ecfa63e0329..d4bfb318fed 100644 --- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestSerializedExceptionPBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestSerializedExceptionPBImpl.java @@ -20,71 +20,75 @@ package org.apache.hadoop.yarn.api.records.impl.pb; import java.nio.channels.ClosedChannelException; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import org.apache.hadoop.yarn.proto.YarnProtos.SerializedExceptionProto; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.fail; public class TestSerializedExceptionPBImpl { @Test - public void testSerializedException() throws Exception { + void testSerializedException() throws Exception { SerializedExceptionPBImpl orig = new SerializedExceptionPBImpl(); orig.init(new Exception("test exception")); SerializedExceptionProto proto = orig.getProto(); SerializedExceptionPBImpl deser = new SerializedExceptionPBImpl(proto); - Assert.assertEquals(orig, deser); - Assert.assertEquals(orig.getMessage(), deser.getMessage()); - Assert.assertEquals(orig.getRemoteTrace(), deser.getRemoteTrace()); - Assert.assertEquals(orig.getCause(), deser.getCause()); + assertEquals(orig, deser); + assertEquals(orig.getMessage(), deser.getMessage()); + assertEquals(orig.getRemoteTrace(), deser.getRemoteTrace()); + assertEquals(orig.getCause(), deser.getCause()); } @Test - public void testDeserialize() throws Exception { + void testDeserialize() throws Exception { Exception ex = new Exception("test exception"); SerializedExceptionPBImpl pb = new SerializedExceptionPBImpl(); try { pb.deSerialize(); - Assert.fail("deSerialze should throw YarnRuntimeException"); + fail("deSerialze should throw YarnRuntimeException"); } catch (YarnRuntimeException e) { - Assert.assertEquals(ClassNotFoundException.class, + assertEquals(ClassNotFoundException.class, e.getCause().getClass()); } pb.init(ex); - Assert.assertEquals(ex.toString(), pb.deSerialize().toString()); + assertEquals(ex.toString(), pb.deSerialize().toString()); } @Test - public void testDeserializeWithDefaultConstructor() { + void testDeserializeWithDefaultConstructor() { // Init SerializedException with an Exception with default constructor. 
ClosedChannelException ex = new ClosedChannelException(); SerializedExceptionPBImpl pb = new SerializedExceptionPBImpl(); pb.init(ex); - Assert.assertEquals(ex.getClass(), pb.deSerialize().getClass()); + assertEquals(ex.getClass(), pb.deSerialize().getClass()); } @Test - public void testBeforeInit() throws Exception { + void testBeforeInit() throws Exception { SerializedExceptionProto defaultProto = SerializedExceptionProto.newBuilder().build(); SerializedExceptionPBImpl pb1 = new SerializedExceptionPBImpl(); - Assert.assertNull(pb1.getCause()); + assertNull(pb1.getCause()); SerializedExceptionPBImpl pb2 = new SerializedExceptionPBImpl(); - Assert.assertEquals(defaultProto, pb2.getProto()); + assertEquals(defaultProto, pb2.getProto()); SerializedExceptionPBImpl pb3 = new SerializedExceptionPBImpl(); - Assert.assertEquals(defaultProto.getTrace(), pb3.getRemoteTrace()); + assertEquals(defaultProto.getTrace(), pb3.getRemoteTrace()); } @Test - public void testThrowableDeserialization() { + void testThrowableDeserialization() { // java.lang.Error should also be serializable Error ex = new Error(); SerializedExceptionPBImpl pb = new SerializedExceptionPBImpl(); pb.init(ex); - Assert.assertEquals(ex.getClass(), pb.deSerialize().getClass()); + assertEquals(ex.getClass(), pb.deSerialize().getClass()); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/timeline/TestTimelineRecords.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/timeline/TestTimelineRecords.java index 0de8200d63b..7973f4ea3af 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/timeline/TestTimelineRecords.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/timeline/TestTimelineRecords.java @@ -27,12 +27,16 @@ import java.util.Set; import java.util.TreeMap; import java.util.WeakHashMap; +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError; import org.apache.hadoop.yarn.util.timeline.TimelineUtils; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestTimelineRecords { @@ -40,7 +44,7 @@ public class TestTimelineRecords { LoggerFactory.getLogger(TestTimelineRecords.class); @Test - public void testEntities() throws Exception { + void testEntities() throws Exception { TimelineEntities entities = new TimelineEntities(); for (int j = 0; j < 2; ++j) { TimelineEntity entity = new TimelineEntity(); @@ -67,27 +71,27 @@ public class TestTimelineRecords { LOG.info("Entities in JSON:"); LOG.info(TimelineUtils.dumpTimelineRecordtoJSON(entities, true)); - Assert.assertEquals(2, entities.getEntities().size()); + assertEquals(2, entities.getEntities().size()); TimelineEntity entity1 = entities.getEntities().get(0); - Assert.assertEquals("entity id 0", entity1.getEntityId()); - Assert.assertEquals("entity type 0", entity1.getEntityType()); - Assert.assertEquals(2, entity1.getRelatedEntities().size()); - Assert.assertEquals(2, entity1.getEvents().size()); - Assert.assertEquals(2, entity1.getPrimaryFilters().size()); - Assert.assertEquals(2, entity1.getOtherInfo().size()); - 
Assert.assertEquals("domain id 0", entity1.getDomainId()); + assertEquals("entity id 0", entity1.getEntityId()); + assertEquals("entity type 0", entity1.getEntityType()); + assertEquals(2, entity1.getRelatedEntities().size()); + assertEquals(2, entity1.getEvents().size()); + assertEquals(2, entity1.getPrimaryFilters().size()); + assertEquals(2, entity1.getOtherInfo().size()); + assertEquals("domain id 0", entity1.getDomainId()); TimelineEntity entity2 = entities.getEntities().get(1); - Assert.assertEquals("entity id 1", entity2.getEntityId()); - Assert.assertEquals("entity type 1", entity2.getEntityType()); - Assert.assertEquals(2, entity2.getRelatedEntities().size()); - Assert.assertEquals(2, entity2.getEvents().size()); - Assert.assertEquals(2, entity2.getPrimaryFilters().size()); - Assert.assertEquals(2, entity2.getOtherInfo().size()); - Assert.assertEquals("domain id 1", entity2.getDomainId()); + assertEquals("entity id 1", entity2.getEntityId()); + assertEquals("entity type 1", entity2.getEntityType()); + assertEquals(2, entity2.getRelatedEntities().size()); + assertEquals(2, entity2.getEvents().size()); + assertEquals(2, entity2.getPrimaryFilters().size()); + assertEquals(2, entity2.getOtherInfo().size()); + assertEquals("domain id 1", entity2.getDomainId()); } @Test - public void testEvents() throws Exception { + void testEvents() throws Exception { TimelineEvents events = new TimelineEvents(); for (int j = 0; j < 2; ++j) { TimelineEvents.EventsOfOneEntity partEvents = @@ -107,31 +111,31 @@ public class TestTimelineRecords { LOG.info("Events in JSON:"); LOG.info(TimelineUtils.dumpTimelineRecordtoJSON(events, true)); - Assert.assertEquals(2, events.getAllEvents().size()); + assertEquals(2, events.getAllEvents().size()); TimelineEvents.EventsOfOneEntity partEvents1 = events.getAllEvents().get(0); - Assert.assertEquals("entity id 0", partEvents1.getEntityId()); - Assert.assertEquals("entity type 0", partEvents1.getEntityType()); - Assert.assertEquals(2, partEvents1.getEvents().size()); + assertEquals("entity id 0", partEvents1.getEntityId()); + assertEquals("entity type 0", partEvents1.getEntityType()); + assertEquals(2, partEvents1.getEvents().size()); TimelineEvent event11 = partEvents1.getEvents().get(0); - Assert.assertEquals("event type 0", event11.getEventType()); - Assert.assertEquals(2, event11.getEventInfo().size()); + assertEquals("event type 0", event11.getEventType()); + assertEquals(2, event11.getEventInfo().size()); TimelineEvent event12 = partEvents1.getEvents().get(1); - Assert.assertEquals("event type 1", event12.getEventType()); - Assert.assertEquals(2, event12.getEventInfo().size()); + assertEquals("event type 1", event12.getEventType()); + assertEquals(2, event12.getEventInfo().size()); TimelineEvents.EventsOfOneEntity partEvents2 = events.getAllEvents().get(1); - Assert.assertEquals("entity id 1", partEvents2.getEntityId()); - Assert.assertEquals("entity type 1", partEvents2.getEntityType()); - Assert.assertEquals(2, partEvents2.getEvents().size()); + assertEquals("entity id 1", partEvents2.getEntityId()); + assertEquals("entity type 1", partEvents2.getEntityType()); + assertEquals(2, partEvents2.getEvents().size()); TimelineEvent event21 = partEvents2.getEvents().get(0); - Assert.assertEquals("event type 0", event21.getEventType()); - Assert.assertEquals(2, event21.getEventInfo().size()); + assertEquals("event type 0", event21.getEventType()); + assertEquals(2, event21.getEventInfo().size()); TimelineEvent event22 = partEvents2.getEvents().get(1); - 
Assert.assertEquals("event type 1", event22.getEventType()); - Assert.assertEquals(2, event22.getEventInfo().size()); + assertEquals("event type 1", event22.getEventType()); + assertEquals(2, event22.getEventInfo().size()); } @Test - public void testTimelinePutErrors() throws Exception { + void testTimelinePutErrors() throws Exception { TimelinePutResponse TimelinePutErrors = new TimelinePutResponse(); TimelinePutError error1 = new TimelinePutError(); error1.setEntityId("entity id 1"); @@ -149,23 +153,23 @@ public class TestTimelineRecords { LOG.info("Errors in JSON:"); LOG.info(TimelineUtils.dumpTimelineRecordtoJSON(TimelinePutErrors, true)); - Assert.assertEquals(3, TimelinePutErrors.getErrors().size()); + assertEquals(3, TimelinePutErrors.getErrors().size()); TimelinePutError e = TimelinePutErrors.getErrors().get(0); - Assert.assertEquals(error1.getEntityId(), e.getEntityId()); - Assert.assertEquals(error1.getEntityType(), e.getEntityType()); - Assert.assertEquals(error1.getErrorCode(), e.getErrorCode()); + assertEquals(error1.getEntityId(), e.getEntityId()); + assertEquals(error1.getEntityType(), e.getEntityType()); + assertEquals(error1.getErrorCode(), e.getErrorCode()); e = TimelinePutErrors.getErrors().get(1); - Assert.assertEquals(error1.getEntityId(), e.getEntityId()); - Assert.assertEquals(error1.getEntityType(), e.getEntityType()); - Assert.assertEquals(error1.getErrorCode(), e.getErrorCode()); + assertEquals(error1.getEntityId(), e.getEntityId()); + assertEquals(error1.getEntityType(), e.getEntityType()); + assertEquals(error1.getErrorCode(), e.getErrorCode()); e = TimelinePutErrors.getErrors().get(2); - Assert.assertEquals(error2.getEntityId(), e.getEntityId()); - Assert.assertEquals(error2.getEntityType(), e.getEntityType()); - Assert.assertEquals(error2.getErrorCode(), e.getErrorCode()); + assertEquals(error2.getEntityId(), e.getEntityId()); + assertEquals(error2.getEntityType(), e.getEntityType()); + assertEquals(error2.getErrorCode(), e.getErrorCode()); } @Test - public void testTimelineDomain() throws Exception { + void testTimelineDomain() throws Exception { TimelineDomains domains = new TimelineDomains(); TimelineDomain domain = null; @@ -185,25 +189,25 @@ public class TestTimelineRecords { LOG.info("Domain in JSON:"); LOG.info(TimelineUtils.dumpTimelineRecordtoJSON(domains, true)); - Assert.assertEquals(2, domains.getDomains().size()); + assertEquals(2, domains.getDomains().size()); for (int i = 0; i < domains.getDomains().size(); ++i) { domain = domains.getDomains().get(i); - Assert.assertEquals("test id " + (i + 1), domain.getId()); - Assert.assertEquals("test description " + (i + 1), + assertEquals("test id " + (i + 1), domain.getId()); + assertEquals("test description " + (i + 1), domain.getDescription()); - Assert.assertEquals("test owner " + (i + 1), domain.getOwner()); - Assert.assertEquals("test_reader_user_" + (i + 1) + + assertEquals("test owner " + (i + 1), domain.getOwner()); + assertEquals("test_reader_user_" + (i + 1) + " test_reader_group+" + (i + 1), domain.getReaders()); - Assert.assertEquals("test_writer_user_" + (i + 1) + + assertEquals("test_writer_user_" + (i + 1) + " test_writer_group+" + (i + 1), domain.getWriters()); - Assert.assertEquals(new Long(0L), domain.getCreatedTime()); - Assert.assertEquals(new Long(1L), domain.getModifiedTime()); + assertEquals(Long.valueOf(0L), domain.getCreatedTime()); + assertEquals(Long.valueOf(1L), domain.getModifiedTime()); } } @Test - public void testMapInterfaceOrTimelineRecords() throws Exception { + void 
testMapInterfaceOrTimelineRecords() throws Exception { TimelineEntity entity = new TimelineEntity(); List<Map<String, Set<Object>>> primaryFiltersList = new ArrayList<Map<String, Set<Object>>>(); @@ -284,36 +288,36 @@ public class TestTimelineRecords { } private static void assertPrimaryFilters(TimelineEntity entity) { - Assert.assertNotNull(entity.getPrimaryFilters()); - Assert.assertNotNull(entity.getPrimaryFiltersJAXB()); - Assert.assertTrue(entity.getPrimaryFilters() instanceof HashMap); - Assert.assertTrue(entity.getPrimaryFiltersJAXB() instanceof HashMap); - Assert.assertEquals( + assertNotNull(entity.getPrimaryFilters()); + assertNotNull(entity.getPrimaryFiltersJAXB()); + assertTrue(entity.getPrimaryFilters() instanceof HashMap); + assertTrue(entity.getPrimaryFiltersJAXB() instanceof HashMap); + assertEquals( entity.getPrimaryFilters(), entity.getPrimaryFiltersJAXB()); } private static void assertRelatedEntities(TimelineEntity entity) { - Assert.assertNotNull(entity.getRelatedEntities()); - Assert.assertNotNull(entity.getRelatedEntitiesJAXB()); - Assert.assertTrue(entity.getRelatedEntities() instanceof HashMap); - Assert.assertTrue(entity.getRelatedEntitiesJAXB() instanceof HashMap); - Assert.assertEquals( + assertNotNull(entity.getRelatedEntities()); + assertNotNull(entity.getRelatedEntitiesJAXB()); + assertTrue(entity.getRelatedEntities() instanceof HashMap); + assertTrue(entity.getRelatedEntitiesJAXB() instanceof HashMap); + assertEquals( entity.getRelatedEntities(), entity.getRelatedEntitiesJAXB()); } private static void assertOtherInfo(TimelineEntity entity) { - Assert.assertNotNull(entity.getOtherInfo()); - Assert.assertNotNull(entity.getOtherInfoJAXB()); - Assert.assertTrue(entity.getOtherInfo() instanceof HashMap); - Assert.assertTrue(entity.getOtherInfoJAXB() instanceof HashMap); - Assert.assertEquals(entity.getOtherInfo(), entity.getOtherInfoJAXB()); + assertNotNull(entity.getOtherInfo()); + assertNotNull(entity.getOtherInfoJAXB()); + assertTrue(entity.getOtherInfo() instanceof HashMap); + assertTrue(entity.getOtherInfoJAXB() instanceof HashMap); + assertEquals(entity.getOtherInfo(), entity.getOtherInfoJAXB()); } private static void assertEventInfo(TimelineEvent event) { - Assert.assertNotNull(event); - Assert.assertNotNull(event.getEventInfoJAXB()); - Assert.assertTrue(event.getEventInfo() instanceof HashMap); - Assert.assertTrue(event.getEventInfoJAXB() instanceof HashMap); - Assert.assertEquals(event.getEventInfo(), event.getEventInfoJAXB()); + assertNotNull(event); + assertNotNull(event.getEventInfoJAXB()); + assertTrue(event.getEventInfo() instanceof HashMap); + assertTrue(event.getEventInfoJAXB() instanceof HashMap); + assertEquals(event.getEventInfo(), event.getEventInfoJAXB()); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/timelineservice/TestTimelineServiceRecords.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/timelineservice/TestTimelineServiceRecords.java index b488a652ae2..b822835337a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/timelineservice/TestTimelineServiceRecords.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/timelineservice/TestTimelineServiceRecords.java @@ -17,30 +17,36 @@ */ package org.apache.hadoop.yarn.api.records.timelineservice; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import
org.apache.hadoop.yarn.api.records.ApplicationAttemptId; -import org.apache.hadoop.yarn.api.records.ApplicationId; -import org.apache.hadoop.yarn.api.records.ContainerId; -import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; -import org.apache.hadoop.yarn.util.timeline.TimelineUtils; -import org.junit.Test; -import org.junit.Assert; - import java.util.Arrays; import java.util.Collections; import java.util.HashMap; import java.util.Iterator; import java.util.Map; +import org.junit.jupiter.api.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; +import org.apache.hadoop.yarn.api.records.ApplicationId; +import org.apache.hadoop.yarn.api.records.ContainerId; +import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; +import org.apache.hadoop.yarn.util.timeline.TimelineUtils; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; + public class TestTimelineServiceRecords { private static final Logger LOG = LoggerFactory.getLogger(TestTimelineServiceRecords.class); @Test - public void testTimelineEntities() throws Exception { + void testTimelineEntities() throws Exception { TimelineEntity entity = new TimelineEntity(); entity.setType("test type 1"); entity.setId("test id 1"); @@ -48,7 +54,7 @@ public class TestTimelineServiceRecords { entity.addInfo("test info key 2", Arrays.asList("test info value 2", "test info value 3")); entity.addInfo("test info key 3", true); - Assert.assertTrue( + assertTrue( entity.getInfo().get("test info key 3") instanceof Boolean); entity.addConfig("test config key 1", "test config value 1"); entity.addConfig("test config key 2", "test config value 2"); @@ -59,43 +65,43 @@ public class TestTimelineServiceRecords { metric1.addValue(1L, 1.0F); metric1.addValue(3L, 3.0D); metric1.addValue(2L, 2); - Assert.assertEquals(TimelineMetric.Type.TIME_SERIES, metric1.getType()); + assertEquals(TimelineMetric.Type.TIME_SERIES, metric1.getType()); Iterator<Map.Entry<Long, Number>> itr = metric1.getValues().entrySet().iterator(); Map.Entry<Long, Number> entry = itr.next(); - Assert.assertEquals(new Long(3L), entry.getKey()); - Assert.assertEquals(3.0D, entry.getValue()); + assertEquals(Long.valueOf(3L), entry.getKey()); + assertEquals(3.0D, entry.getValue()); entry = itr.next(); - Assert.assertEquals(new Long(2L), entry.getKey()); - Assert.assertEquals(2, entry.getValue()); + assertEquals(Long.valueOf(2L), entry.getKey()); + assertEquals(2, entry.getValue()); entry = itr.next(); - Assert.assertEquals(new Long(1L), entry.getKey()); - Assert.assertEquals(1.0F, entry.getValue()); - Assert.assertFalse(itr.hasNext()); + assertEquals(Long.valueOf(1L), entry.getKey()); + assertEquals(1.0F, entry.getValue()); + assertFalse(itr.hasNext()); entity.addMetric(metric1); TimelineMetric metric2 = new TimelineMetric(TimelineMetric.Type.SINGLE_VALUE); metric2.setId("test metric id 1"); metric2.addValue(3L, (short) 3); - Assert.assertEquals(TimelineMetric.Type.SINGLE_VALUE, metric2.getType()); - Assert.assertTrue( + assertEquals(TimelineMetric.Type.SINGLE_VALUE, metric2.getType()); + assertTrue( metric2.getValues().values().iterator().next() instanceof Short); Map<Long, Number> points = new HashMap<>(); points.put(4L, 4.0D); points.put(5L, 5.0D); try {
metric2.setValues(points); - Assert.fail(); + fail(); } catch (IllegalArgumentException e) { - Assert.assertTrue(e.getMessage().contains( + assertTrue(e.getMessage().contains( "Values cannot contain more than one point in")); } try { metric2.addValues(points); - Assert.fail(); + fail(); } catch (IllegalArgumentException e) { - Assert.assertTrue(e.getMessage().contains( + assertTrue(e.getMessage().contains( "Values cannot contain more than one point in")); } entity.addMetric(metric2); @@ -104,9 +110,8 @@ public class TestTimelineServiceRecords { new TimelineMetric(TimelineMetric.Type.SINGLE_VALUE); metric3.setId("test metric id 1"); metric3.addValue(4L, (short) 4); - Assert.assertEquals("metric3 should equal to metric2! ", metric3, metric2); - Assert.assertNotEquals("metric1 should not equal to metric2! ", - metric1, metric2); + assertEquals(metric3, metric2, "metric3 should equal to metric2! "); + assertNotEquals(metric1, metric2, "metric1 should not equal to metric2! "); TimelineEvent event1 = new TimelineEvent(); event1.setId("test event id 1"); @@ -114,7 +119,7 @@ public class TestTimelineServiceRecords { event1.addInfo("test info key 2", Arrays.asList("test info value 2", "test info value 3")); event1.addInfo("test info key 3", true); - Assert.assertTrue( + assertTrue( event1.getInfo().get("test info key 3") instanceof Boolean); event1.setTimestamp(1L); entity.addEvent(event1); @@ -125,19 +130,17 @@ public class TestTimelineServiceRecords { event2.addInfo("test info key 2", Arrays.asList("test info value 2", "test info value 3")); event2.addInfo("test info key 3", true); - Assert.assertTrue( + assertTrue( event2.getInfo().get("test info key 3") instanceof Boolean); event2.setTimestamp(2L); entity.addEvent(event2); - Assert.assertFalse("event1 should not equal to event2! ", - event1.equals(event2)); + assertNotEquals(event1, event2); TimelineEvent event3 = new TimelineEvent(); event3.setId("test event id 1"); event3.setTimestamp(1L); - Assert.assertEquals("event1 should equal to event3! ", event3, event1); - Assert.assertNotEquals("event1 should not equal to event2! ", - event1, event2); + assertEquals(event3, event1, "event1 should equal to event3! "); + assertNotEquals(event1, event2, "event1 should not equal to event2! "); entity.setCreatedTime(0L); entity.addRelatesToEntity("test type 2", "test id 2"); @@ -153,25 +156,22 @@ public class TestTimelineServiceRecords { entities.addEntity(entity2); LOG.info(TimelineUtils.dumpTimelineRecordtoJSON(entities, true)); - Assert.assertFalse("entity 1 should not be valid without type and id", - entity1.isValid()); + assertFalse(entity1.isValid(), + "entity 1 should not be valid without type and id"); entity1.setId("test id 2"); entity1.setType("test type 2"); entity2.setId("test id 1"); entity2.setType("test type 1"); - Assert.assertEquals("Timeline entity should equal to entity2! ", - entity, entity2); - Assert.assertNotEquals("entity1 should not equal to entity! ", - entity1, entity); - Assert.assertEquals("entity should be less than entity1! ", - entity1.compareTo(entity), 1); - Assert.assertEquals("entity's hash code should be -28727840 but not " - + entity.hashCode(), entity.hashCode(), -28727840); + assertEquals(entity, entity2, "Timeline entity should equal to entity2! "); + assertNotEquals(entity1, entity, "entity1 should not equal to entity! "); + assertEquals(entity1.compareTo(entity), 1, "entity should be less than entity1! 
"); + assertEquals(entity.hashCode(), -28727840, "entity's hash code should be -28727840 but not " + + entity.hashCode()); } @Test - public void testFirstClassCitizenEntities() throws Exception { + void testFirstClassCitizenEntities() throws Exception { UserEntity user = new UserEntity(); user.setId("test user id"); @@ -245,49 +245,49 @@ public class TestTimelineServiceRecords { // Check parent/children APIs - Assert.assertNotNull(app1.getParent()); - Assert.assertEquals(flow2.getType(), app1.getParent().getType()); - Assert.assertEquals(flow2.getId(), app1.getParent().getId()); + assertNotNull(app1.getParent()); + assertEquals(flow2.getType(), app1.getParent().getType()); + assertEquals(flow2.getId(), app1.getParent().getId()); app1.addInfo(ApplicationEntity.PARENT_INFO_KEY, "invalid parent object"); try { app1.getParent(); - Assert.fail(); + fail(); } catch (Exception e) { - Assert.assertTrue(e instanceof YarnRuntimeException); - Assert.assertTrue(e.getMessage().contains( + assertTrue(e instanceof YarnRuntimeException); + assertTrue(e.getMessage().contains( "Parent info is invalid identifier object")); } - Assert.assertNotNull(app1.getChildren()); - Assert.assertEquals(1, app1.getChildren().size()); - Assert.assertEquals( + assertNotNull(app1.getChildren()); + assertEquals(1, app1.getChildren().size()); + assertEquals( appAttempt.getType(), app1.getChildren().iterator().next().getType()); - Assert.assertEquals( + assertEquals( appAttempt.getId(), app1.getChildren().iterator().next().getId()); app1.addInfo(ApplicationEntity.CHILDREN_INFO_KEY, Collections.singletonList("invalid children set")); try { app1.getChildren(); - Assert.fail(); + fail(); } catch (Exception e) { - Assert.assertTrue(e instanceof YarnRuntimeException); - Assert.assertTrue(e.getMessage().contains( + assertTrue(e instanceof YarnRuntimeException); + assertTrue(e.getMessage().contains( "Children info is invalid identifier set")); } app1.addInfo(ApplicationEntity.CHILDREN_INFO_KEY, Collections.singleton("invalid child object")); try { app1.getChildren(); - Assert.fail(); + fail(); } catch (Exception e) { - Assert.assertTrue(e instanceof YarnRuntimeException); - Assert.assertTrue(e.getMessage().contains( + assertTrue(e instanceof YarnRuntimeException); + assertTrue(e.getMessage().contains( "Children info contains invalid identifier object")); } } @Test - public void testUser() throws Exception { + void testUser() throws Exception { UserEntity user = new UserEntity(); user.setId("test user id"); user.addInfo("test info key 1", "test info value 1"); @@ -296,7 +296,7 @@ public class TestTimelineServiceRecords { } @Test - public void testQueue() throws Exception { + void testQueue() throws Exception { QueueEntity queue = new QueueEntity(); queue.setId("test queue id"); queue.addInfo("test info key 1", "test info value 1"); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintTransformations.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintTransformations.java index 557b82fe257..0c40fa924a2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintTransformations.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintTransformations.java @@ -18,18 +18,12 @@ package org.apache.hadoop.yarn.api.resource; -import 
static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.RACK; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.maxCardinality; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.or; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetCardinality; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetIn; -import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag; - import java.util.Arrays; import java.util.HashSet; import java.util.List; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.api.resource.PlacementConstraint.AbstractConstraint; import org.apache.hadoop.yarn.api.resource.PlacementConstraint.CardinalityConstraint; import org.apache.hadoop.yarn.api.resource.PlacementConstraint.Or; @@ -39,8 +33,16 @@ import org.apache.hadoop.yarn.api.resource.PlacementConstraint.TargetConstraint. import org.apache.hadoop.yarn.api.resource.PlacementConstraintTransformations.SingleConstraintTransformer; import org.apache.hadoop.yarn.api.resource.PlacementConstraintTransformations.SpecializedConstraintTransformer; import org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets; -import org.junit.Assert; -import org.junit.Test; + +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.RACK; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.maxCardinality; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.or; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetCardinality; +import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetIn; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; /** * Test class for {@link PlacementConstraintTransformations}. 
@@ -48,10 +50,10 @@ import org.junit.Test; public class TestPlacementConstraintTransformations { @Test - public void testTargetConstraint() { + void testTargetConstraint() { AbstractConstraint sConstraintExpr = targetIn(NODE, allocationTag("hbase-m")); - Assert.assertTrue(sConstraintExpr instanceof SingleConstraint); + assertTrue(sConstraintExpr instanceof SingleConstraint); PlacementConstraint sConstraint = PlacementConstraints.build(sConstraintExpr); @@ -61,17 +63,17 @@ public class TestPlacementConstraintTransformations { PlacementConstraint tConstraint = specTransformer.transform(); AbstractConstraint tConstraintExpr = tConstraint.getConstraintExpr(); - Assert.assertTrue(tConstraintExpr instanceof TargetConstraint); + assertTrue(tConstraintExpr instanceof TargetConstraint); SingleConstraint single = (SingleConstraint) sConstraintExpr; TargetConstraint target = (TargetConstraint) tConstraintExpr; // Make sure the expression string is consistent // before and after transforming - Assert.assertEquals(single.toString(), target.toString()); - Assert.assertEquals(single.getScope(), target.getScope()); - Assert.assertEquals(TargetOperator.IN, target.getOp()); - Assert.assertEquals(single.getTargetExpressions(), + assertEquals(single.toString(), target.toString()); + assertEquals(single.getScope(), target.getScope()); + assertEquals(TargetOperator.IN, target.getOp()); + assertEquals(single.getTargetExpressions(), target.getTargetExpressions()); // Transform from specialized TargetConstraint to SimpleConstraint @@ -80,18 +82,18 @@ public class TestPlacementConstraintTransformations { sConstraint = singleTransformer.transform(); sConstraintExpr = sConstraint.getConstraintExpr(); - Assert.assertTrue(sConstraintExpr instanceof SingleConstraint); + assertTrue(sConstraintExpr instanceof SingleConstraint); single = (SingleConstraint) sConstraintExpr; - Assert.assertEquals(target.getScope(), single.getScope()); - Assert.assertEquals(1, single.getMinCardinality()); - Assert.assertEquals(Integer.MAX_VALUE, single.getMaxCardinality()); - Assert.assertEquals(single.getTargetExpressions(), + assertEquals(target.getScope(), single.getScope()); + assertEquals(1, single.getMinCardinality()); + assertEquals(Integer.MAX_VALUE, single.getMaxCardinality()); + assertEquals(single.getTargetExpressions(), target.getTargetExpressions()); } @Test - public void testCardinalityConstraint() { + void testCardinalityConstraint() { CardinalityConstraint cardinality = new CardinalityConstraint(RACK, 3, 10, new HashSet<>(Arrays.asList("hb"))); PlacementConstraint cConstraint = PlacementConstraints.build(cardinality); @@ -102,27 +104,27 @@ public class TestPlacementConstraintTransformations { PlacementConstraint sConstraint = singleTransformer.transform(); AbstractConstraint sConstraintExpr = sConstraint.getConstraintExpr(); - Assert.assertTrue(sConstraintExpr instanceof SingleConstraint); + assertTrue(sConstraintExpr instanceof SingleConstraint); SingleConstraint single = (SingleConstraint) sConstraintExpr; // Make sure the consistent expression string is consistent // before and after transforming - Assert.assertEquals(single.toString(), cardinality.toString()); - Assert.assertEquals(cardinality.getScope(), single.getScope()); - Assert.assertEquals(cardinality.getMinCardinality(), + assertEquals(single.toString(), cardinality.toString()); + assertEquals(cardinality.getScope(), single.getScope()); + assertEquals(cardinality.getMinCardinality(), single.getMinCardinality()); - 
Assert.assertEquals(cardinality.getMaxCardinality(), + assertEquals(cardinality.getMaxCardinality(), single.getMaxCardinality()); - Assert.assertEquals( + assertEquals( new HashSet<>(Arrays.asList(PlacementTargets.allocationTag("hb"))), single.getTargetExpressions()); } @Test - public void testTargetCardinalityConstraint() { + void testTargetCardinalityConstraint() { AbstractConstraint constraintExpr = targetCardinality(RACK, 3, 10, allocationTag("zk")); - Assert.assertTrue(constraintExpr instanceof SingleConstraint); + assertTrue(constraintExpr instanceof SingleConstraint); PlacementConstraint constraint = PlacementConstraints.build(constraintExpr); // Apply transformation. Should be a no-op. @@ -131,19 +133,19 @@ public class TestPlacementConstraintTransformations { PlacementConstraint newConstraint = specTransformer.transform(); // The constraint expression should be the same. - Assert.assertEquals(constraintExpr, newConstraint.getConstraintExpr()); + assertEquals(constraintExpr, newConstraint.getConstraintExpr()); } @Test - public void testCompositeConstraint() { + void testCompositeConstraint() { AbstractConstraint constraintExpr = or(targetIn(RACK, allocationTag("spark")), maxCardinality(NODE, 3), targetCardinality(RACK, 2, 10, allocationTag("zk"))); - Assert.assertTrue(constraintExpr instanceof Or); + assertTrue(constraintExpr instanceof Or); PlacementConstraint constraint = PlacementConstraints.build(constraintExpr); Or orExpr = (Or) constraintExpr; for (AbstractConstraint child : orExpr.getChildren()) { - Assert.assertTrue(child instanceof SingleConstraint); + assertTrue(child instanceof SingleConstraint); } // Apply transformation. Should transform target and cardinality constraints @@ -154,19 +156,19 @@ public class TestPlacementConstraintTransformations { Or specOrExpr = (Or) specConstraint.getConstraintExpr(); List specChildren = specOrExpr.getChildren(); - Assert.assertEquals(3, specChildren.size()); - Assert.assertTrue(specChildren.get(0) instanceof TargetConstraint); - Assert.assertTrue(specChildren.get(1) instanceof SingleConstraint); - Assert.assertTrue(specChildren.get(2) instanceof SingleConstraint); + assertEquals(3, specChildren.size()); + assertTrue(specChildren.get(0) instanceof TargetConstraint); + assertTrue(specChildren.get(1) instanceof SingleConstraint); + assertTrue(specChildren.get(2) instanceof SingleConstraint); // Transform from specialized TargetConstraint to SimpleConstraint SingleConstraintTransformer singleTransformer = new SingleConstraintTransformer(specConstraint); PlacementConstraint simConstraint = singleTransformer.transform(); - Assert.assertTrue(simConstraint.getConstraintExpr() instanceof Or); + assertTrue(simConstraint.getConstraintExpr() instanceof Or); Or simOrExpr = (Or) specConstraint.getConstraintExpr(); for (AbstractConstraint child : simOrExpr.getChildren()) { - Assert.assertTrue(child instanceof SingleConstraint); + assertTrue(child instanceof SingleConstraint); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/TestClientRMProxy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/TestClientRMProxy.java index 6c31fea7d56..b5e86779cd2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/TestClientRMProxy.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/TestClientRMProxy.java @@ -18,6 +18,12 @@ package 
org.apache.hadoop.yarn.client; +import java.io.IOException; +import java.net.InetSocketAddress; +import java.security.PrivilegedExceptionAction; + +import org.junit.jupiter.api.Test; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.io.Text; import org.apache.hadoop.security.UserGroupInformation; @@ -29,20 +35,15 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC; import org.apache.hadoop.yarn.ipc.YarnRPC; import org.apache.hadoop.yarn.util.Records; -import org.junit.Assert; -import org.junit.Test; -import java.io.IOException; -import java.net.InetSocketAddress; -import java.security.PrivilegedExceptionAction; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; public class TestClientRMProxy { @Test - public void testGetRMDelegationTokenService() { + void testGetRMDelegationTokenService() { String defaultRMAddress = YarnConfiguration.DEFAULT_RM_ADDRESS; YarnConfiguration conf = new YarnConfiguration(); @@ -51,8 +52,8 @@ public class TestClientRMProxy { String[] services = tokenService.toString().split(","); assertEquals(1, services.length); for (String service : services) { - assertTrue("Incorrect token service name", - service.contains(defaultRMAddress)); + assertTrue(service.contains(defaultRMAddress), + "Incorrect token service name"); } // HA is enabled @@ -66,13 +67,13 @@ public class TestClientRMProxy { services = tokenService.toString().split(","); assertEquals(2, services.length); for (String service : services) { - assertTrue("Incorrect token service name", - service.contains(defaultRMAddress)); + assertTrue(service.contains(defaultRMAddress), + "Incorrect token service name"); } } @Test - public void testGetAMRMTokenService() { + void testGetAMRMTokenService() { String defaultRMAddress = YarnConfiguration.DEFAULT_RM_SCHEDULER_ADDRESS; YarnConfiguration conf = new YarnConfiguration(); @@ -81,8 +82,8 @@ public class TestClientRMProxy { String[] services = tokenService.toString().split(","); assertEquals(1, services.length); for (String service : services) { - assertTrue("Incorrect token service name", - service.contains(defaultRMAddress)); + assertTrue(service.contains(defaultRMAddress), + "Incorrect token service name"); } // HA is enabled @@ -96,8 +97,8 @@ public class TestClientRMProxy { services = tokenService.toString().split(","); assertEquals(2, services.length); for (String service : services) { - assertTrue("Incorrect token service name", - service.contains(defaultRMAddress)); + assertTrue(service.contains(defaultRMAddress), + "Incorrect token service name"); } } @@ -109,7 +110,7 @@ public class TestClientRMProxy { * @throws Exception an Exception occurred */ @Test - public void testProxyUserCorrectUGI() throws Exception { + void testProxyUserCorrectUGI() throws Exception { final YarnConfiguration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.RM_HA_ENABLED, true); conf.set(YarnConfiguration.RM_HA_IDS, "rm1,rm2"); @@ -129,7 +130,7 @@ public class TestClientRMProxy { UserGroupInformation realUser = UserGroupInformation.getCurrentUser(); UserGroupInformation proxyUser = UserGroupInformation.createProxyUserForTesting("proxy", realUser, - new String[] 
{"group1"}); + new String[]{"group1"}); // Create the RMProxy using the proxyUser ApplicationClientProtocol rmProxy = proxyUser.doAs( @@ -163,7 +164,7 @@ public class TestClientRMProxy { UGICapturingHadoopYarnProtoRPC.lastCurrentUser; assertNotNull(lastCurrentUser); assertEquals("proxy", lastCurrentUser.getShortUserName()); - Assert.assertEquals(UserGroupInformation.AuthenticationMethod.PROXY, + assertEquals(UserGroupInformation.AuthenticationMethod.PROXY, lastCurrentUser.getAuthenticationMethod()); assertEquals(UserGroupInformation.getCurrentUser(), lastCurrentUser.getRealUser()); @@ -187,7 +188,7 @@ public class TestClientRMProxy { try { currentUser = UserGroupInformation.getCurrentUser(); } catch (IOException ioe) { - Assert.fail("Unable to get current user\n" + fail("Unable to get current user\n" + StringUtils.stringifyException(ioe)); } lastCurrentUser = currentUser; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClient.java index e7110dda9ff..507cac61332 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClient.java @@ -18,22 +18,19 @@ package org.apache.hadoop.yarn.client.api.impl; -import static org.mockito.ArgumentMatchers.any; -import static org.mockito.Mockito.doReturn; -import static org.mockito.Mockito.doThrow; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.never; -import static org.mockito.Mockito.spy; -import static org.mockito.Mockito.when; -import static org.mockito.Mockito.verify; -import static org.mockito.Mockito.times; - import java.io.IOException; import java.net.ConnectException; import java.net.SocketTimeoutException; import java.net.URI; import java.security.PrivilegedExceptionAction; +import com.sun.jersey.api.client.Client; +import com.sun.jersey.api.client.ClientHandlerException; +import com.sun.jersey.api.client.ClientResponse; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.http.HttpConfig.Policy; @@ -42,7 +39,6 @@ import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.ssl.KeyStoreTestUtil; import org.apache.hadoop.security.token.Token; import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager; -import static org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.SSL_MONITORING_THREAD_NAME; import org.apache.hadoop.test.TestGenericTestUtils; import org.apache.hadoop.yarn.api.records.timeline.TimelineDomain; import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities; @@ -52,14 +48,22 @@ import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; -import com.sun.jersey.api.client.Client; -import com.sun.jersey.api.client.ClientHandlerException; 
-import com.sun.jersey.api.client.ClientResponse; +import static org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.SSL_MONITORING_THREAD_NAME; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; public class TestTimelineClient { @@ -68,7 +72,7 @@ public class TestTimelineClient { private String keystoresDir; private String sslConfDir; - @Before + @BeforeEach public void setup() { YarnConfiguration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); @@ -76,7 +80,7 @@ public class TestTimelineClient { client = createTimelineClient(conf); } - @After + @AfterEach public void tearDown() throws Exception { if (client != null) { client.stop(); @@ -87,113 +91,113 @@ public class TestTimelineClient { } @Test - public void testPostEntities() throws Exception { + void testPostEntities() throws Exception { mockEntityClientResponse(spyTimelineWriter, ClientResponse.Status.OK, - false, false); + false, false); try { TimelinePutResponse response = client.putEntities(generateEntity()); - Assert.assertEquals(0, response.getErrors().size()); + assertEquals(0, response.getErrors().size()); } catch (YarnException e) { - Assert.fail("Exception is not expected"); + fail("Exception is not expected"); } } @Test - public void testPostEntitiesWithError() throws Exception { + void testPostEntitiesWithError() throws Exception { mockEntityClientResponse(spyTimelineWriter, ClientResponse.Status.OK, true, - false); + false); try { TimelinePutResponse response = client.putEntities(generateEntity()); - Assert.assertEquals(1, response.getErrors().size()); - Assert.assertEquals("test entity id", response.getErrors().get(0) + assertEquals(1, response.getErrors().size()); + assertEquals("test entity id", response.getErrors().get(0) .getEntityId()); - Assert.assertEquals("test entity type", response.getErrors().get(0) + assertEquals("test entity type", response.getErrors().get(0) .getEntityType()); - Assert.assertEquals(TimelinePutResponse.TimelinePutError.IO_EXCEPTION, + assertEquals(TimelinePutResponse.TimelinePutError.IO_EXCEPTION, response.getErrors().get(0).getErrorCode()); } catch (YarnException e) { - Assert.fail("Exception is not expected"); + fail("Exception is not expected"); } } @Test - public void testPostIncompleteEntities() throws Exception { + void testPostIncompleteEntities() throws Exception { try { client.putEntities(new TimelineEntity()); - Assert.fail("Exception should have been thrown"); + fail("Exception should have been thrown"); } catch (YarnException e) { } } @Test - public void testPostEntitiesNoResponse() throws Exception { + void testPostEntitiesNoResponse() throws Exception { mockEntityClientResponse(spyTimelineWriter, - ClientResponse.Status.INTERNAL_SERVER_ERROR, false, false); + ClientResponse.Status.INTERNAL_SERVER_ERROR, false, false); try { client.putEntities(generateEntity()); - Assert.fail("Exception is expected"); + 
fail("Exception is expected"); } catch (YarnException e) { - Assert.assertTrue(e.getMessage().contains( + assertTrue(e.getMessage().contains( "Failed to get the response from the timeline server.")); } } @Test - public void testPostEntitiesConnectionRefused() throws Exception { + void testPostEntitiesConnectionRefused() throws Exception { mockEntityClientResponse(spyTimelineWriter, null, false, true); try { client.putEntities(generateEntity()); - Assert.fail("RuntimeException is expected"); + fail("RuntimeException is expected"); } catch (RuntimeException re) { - Assert.assertTrue(re instanceof ClientHandlerException); + assertTrue(re instanceof ClientHandlerException); } } @Test - public void testPutDomain() throws Exception { + void testPutDomain() throws Exception { mockDomainClientResponse(spyTimelineWriter, ClientResponse.Status.OK, false); try { client.putDomain(generateDomain()); } catch (YarnException e) { - Assert.fail("Exception is not expected"); + fail("Exception is not expected"); } } @Test - public void testPutDomainNoResponse() throws Exception { + void testPutDomainNoResponse() throws Exception { mockDomainClientResponse(spyTimelineWriter, ClientResponse.Status.FORBIDDEN, false); try { client.putDomain(generateDomain()); - Assert.fail("Exception is expected"); + fail("Exception is expected"); } catch (YarnException e) { - Assert.assertTrue(e.getMessage().contains( + assertTrue(e.getMessage().contains( "Failed to get the response from the timeline server.")); } } @Test - public void testPutDomainConnectionRefused() throws Exception { + void testPutDomainConnectionRefused() throws Exception { mockDomainClientResponse(spyTimelineWriter, null, true); try { client.putDomain(generateDomain()); - Assert.fail("RuntimeException is expected"); + fail("RuntimeException is expected"); } catch (RuntimeException re) { - Assert.assertTrue(re instanceof ClientHandlerException); + assertTrue(re instanceof ClientHandlerException); } } @Test - public void testCheckRetryCount() throws Exception { + void testCheckRetryCount() throws Exception { try { YarnConfiguration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); conf.setInt(YarnConfiguration.TIMELINE_SERVICE_CLIENT_MAX_RETRIES, - -2); + -2); createTimelineClient(conf); - Assert.fail(); - } catch(IllegalArgumentException e) { - Assert.assertTrue(e.getMessage().contains( + fail(); + } catch (IllegalArgumentException e) { + assertTrue(e.getMessage().contains( YarnConfiguration.TIMELINE_SERVICE_CLIENT_MAX_RETRIES)); } @@ -201,46 +205,46 @@ public class TestTimelineClient { YarnConfiguration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); conf.setLong(YarnConfiguration.TIMELINE_SERVICE_CLIENT_RETRY_INTERVAL_MS, - 0); + 0); createTimelineClient(conf); - Assert.fail(); - } catch(IllegalArgumentException e) { - Assert.assertTrue(e.getMessage().contains( + fail(); + } catch (IllegalArgumentException e) { + assertTrue(e.getMessage().contains( YarnConfiguration.TIMELINE_SERVICE_CLIENT_RETRY_INTERVAL_MS)); } int newMaxRetries = 5; long newIntervalMs = 500; YarnConfiguration conf = new YarnConfiguration(); conf.setInt(YarnConfiguration.TIMELINE_SERVICE_CLIENT_MAX_RETRIES, - newMaxRetries); + newMaxRetries); conf.setLong(YarnConfiguration.TIMELINE_SERVICE_CLIENT_RETRY_INTERVAL_MS, - newIntervalMs); + newIntervalMs); conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); TimelineClientImpl client = createTimelineClient(conf); try { // This call 
should fail because there is no timeline server client.putEntities(generateEntity()); - Assert.fail("Exception expected! " + fail("Exception expected! " + "Timeline server should be off to run this test. "); } catch (RuntimeException ce) { - Assert.assertTrue( - "Handler exception for reason other than retry: " + ce.getMessage(), - ce.getMessage().contains("Connection retries limit exceeded")); + assertTrue( + ce.getMessage().contains("Connection retries limit exceeded"), + "Handler exception for reason other than retry: " + ce.getMessage()); // we would expect this exception here, check if the client has retried - Assert.assertTrue("Retry filter didn't perform any retries! ", - client.connector.connectionRetry.getRetired()); + assertTrue(client.connector.connectionRetry.getRetired(), + "Retry filter didn't perform any retries! "); } } @Test - public void testDelegationTokenOperationsRetry() throws Exception { + void testDelegationTokenOperationsRetry() throws Exception { int newMaxRetries = 5; long newIntervalMs = 500; YarnConfiguration conf = new YarnConfiguration(); conf.setInt(YarnConfiguration.TIMELINE_SERVICE_CLIENT_MAX_RETRIES, - newMaxRetries); + newMaxRetries); conf.setLong(YarnConfiguration.TIMELINE_SERVICE_CLIENT_RETRY_INTERVAL_MS, - newIntervalMs); + newIntervalMs); conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); // use kerberos to bypass the issue in HADOOP-11215 conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, @@ -260,7 +264,7 @@ public class TestTimelineClient { try { // try getting a delegation token client.getDelegationToken( - UserGroupInformation.getCurrentUser().getShortUserName()); + UserGroupInformation.getCurrentUser().getShortUserName()); assertFail(); } catch (RuntimeException ce) { assertException(client, ce); @@ -323,7 +327,7 @@ public class TestTimelineClient { * @throws Exception */ @Test - public void testDelegationTokenDisabledOnSimpleAuth() throws Exception { + void testDelegationTokenDisabledOnSimpleAuth() throws Exception { YarnConfiguration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); conf.set(YarnConfiguration.TIMELINE_HTTP_AUTH_TYPE, "simple"); @@ -336,15 +340,15 @@ public class TestTimelineClient { // try getting a delegation token Token identifierToken = tClient.getDelegationToken( - UserGroupInformation.getCurrentUser().getShortUserName()); + UserGroupInformation.getCurrentUser().getShortUserName()); // Get a null token when using simple auth - Assert.assertNull(identifierToken); + assertNull(identifierToken); // try renew a delegation token Token dummyToken = new Token<>(); long renewTime = tClient.renewDelegationToken(dummyToken); // Get invalid expiration time so that RM skips renewal - Assert.assertEquals(renewTime, -1); + assertEquals(renewTime, -1); // try cancel a delegation token tClient.cancelDelegationToken(dummyToken); @@ -356,17 +360,16 @@ public class TestTimelineClient { } private static void assertFail() { - Assert.fail("Exception expected! " + fail("Exception expected! 
" + "Timeline server should be off to run this test."); } private void assertException(TimelineClientImpl client, RuntimeException ce) { - Assert.assertTrue( - "Handler exception for reason other than retry: " + ce.toString(), ce - .getMessage().contains("Connection retries limit exceeded")); + assertTrue(ce.getMessage().contains("Connection retries limit exceeded"), + "Handler exception for reason other than retry: " + ce.toString()); // we would expect this exception here, check if the client has retried - Assert.assertTrue("Retry filter didn't perform any retries! ", - client.connector.connectionRetry.getRetired()); + assertTrue(client.connector.connectionRetry.getRetired(), + "Retry filter didn't perform any retries! "); } public static ClientResponse mockEntityClientResponse( @@ -495,7 +498,7 @@ public class TestTimelineClient { } @Test - public void testTimelineClientCleanup() throws Exception { + void testTimelineClientCleanup() throws Exception { YarnConfiguration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); conf.setInt(YarnConfiguration.TIMELINE_SERVICE_CLIENT_MAX_RETRIES, 0); @@ -520,7 +523,7 @@ public class TestTimelineClient { reloaderThread = thread; } } - Assert.assertTrue("Reloader is not alive", reloaderThread.isAlive()); + assertTrue(reloaderThread.isAlive(), "Reloader is not alive"); client.close(); @@ -532,11 +535,11 @@ public class TestTimelineClient { } Thread.sleep(1000); } - Assert.assertFalse("Reloader is still alive", reloaderStillAlive); + assertFalse(reloaderStillAlive, "Reloader is still alive"); } @Test - public void testTimelineConnectorDestroy() { + void testTimelineConnectorDestroy() { YarnConfiguration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); TimelineClientImpl client = createTimelineClient(conf); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientForATS1_5.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientForATS1_5.java index 26dd7f42bc5..4d4e412e732 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientForATS1_5.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientForATS1_5.java @@ -18,20 +18,18 @@ package org.apache.hadoop.yarn.client.api.impl; -import static org.mockito.ArgumentMatchers.any; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.when; -import static org.mockito.Mockito.verify; -import static org.mockito.Mockito.spy; -import static org.mockito.Mockito.times; -import static org.mockito.Mockito.reset; - import java.io.File; import java.io.IOException; import java.net.URI; +import com.sun.jersey.api.client.Client; +import com.sun.jersey.api.client.ClientResponse; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileContext; import org.apache.hadoop.fs.Path; @@ -42,13 +40,16 @@ import org.apache.hadoop.yarn.api.records.timeline.TimelineDomain; import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity; import org.apache.hadoop.yarn.api.records.timeline.TimelineEntityGroupId; import 
org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; -import com.sun.jersey.api.client.Client; -import com.sun.jersey.api.client.ClientResponse; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.reset; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; public class TestTimelineClientForATS1_5 { @@ -61,7 +62,7 @@ public class TestTimelineClientForATS1_5 { private TimelineWriter spyTimelineWriter; private UserGroupInformation authUgi; - @Before + @BeforeEach public void setup() throws Exception { localFS = FileContext.getLocalFSFileContext(); localActiveDir = @@ -85,7 +86,7 @@ public class TestTimelineClientForATS1_5 { return conf; } - @After + @AfterEach public void tearDown() throws Exception { if (client != null) { client.stop(); @@ -94,13 +95,13 @@ public class TestTimelineClientForATS1_5 { } @Test - public void testPostEntities() throws Exception { + void testPostEntities() throws Exception { client = createTimelineClient(getConfigurations()); verifyForPostEntities(false); } @Test - public void testPostEntitiesToKeepUnderUserDir() throws Exception { + void testPostEntitiesToKeepUnderUserDir() throws Exception { YarnConfiguration conf = getConfigurations(); conf.setBoolean( YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_WITH_USER_DIR, @@ -137,7 +138,7 @@ public class TestTimelineClientForATS1_5 { TimelineEntity[] entityTDB = new TimelineEntity[1]; entityTDB[0] = entities[0]; verify(spyTimelineWriter, times(1)).putEntities(entityTDB); - Assert.assertTrue(localFS.util().exists( + assertTrue(localFS.util().exists( new Path(getAppAttemptDir(attemptId1, storeInsideUserDir), "summarylog-" + attemptId1.toString()))); @@ -152,32 +153,32 @@ public class TestTimelineClientForATS1_5 { client.putEntities(attemptId2, groupId2, entities); verify(spyTimelineWriter, times(0)).putEntities( any(TimelineEntity[].class)); - Assert.assertTrue(localFS.util().exists( + assertTrue(localFS.util().exists( new Path(getAppAttemptDir(attemptId2, storeInsideUserDir), "summarylog-" + attemptId2.toString()))); - Assert.assertTrue(localFS.util().exists( + assertTrue(localFS.util().exists( new Path(getAppAttemptDir(attemptId2, storeInsideUserDir), "entitylog-" + groupId.toString()))); - Assert.assertTrue(localFS.util().exists( + assertTrue(localFS.util().exists( new Path(getAppAttemptDir(attemptId2, storeInsideUserDir), "entitylog-" + groupId2.toString()))); reset(spyTimelineWriter); } catch (Exception e) { - Assert.fail("Exception is not expected. " + e); + fail("Exception is not expected. 
" + e); } } @Test - public void testPutDomain() { + void testPutDomain() { client = createTimelineClient(getConfigurations()); verifyForPutDomain(false); } @Test - public void testPutDomainToKeepUnderUserDir() { + void testPutDomainToKeepUnderUserDir() { YarnConfiguration conf = getConfigurations(); conf.setBoolean( YarnConfiguration.TIMELINE_SERVICE_ENTITYGROUP_FS_STORE_WITH_USER_DIR, @@ -200,12 +201,12 @@ public class TestTimelineClientForATS1_5 { client.putDomain(attemptId1, domain); verify(spyTimelineWriter, times(0)).putDomain(domain); - Assert.assertTrue(localFS.util() + assertTrue(localFS.util() .exists(new Path(getAppAttemptDir(attemptId1, storeInsideUserDir), "domainlog-" + attemptId1.toString()))); reset(spyTimelineWriter); } catch (Exception e) { - Assert.fail("Exception is not expected." + e); + fail("Exception is not expected." + e); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientV2Impl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientV2Impl.java index 8d437af3076..a26b4bf0a67 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientV2Impl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClientV2Impl.java @@ -18,20 +18,19 @@ package org.apache.hadoop.yarn.client.api.impl; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotEquals; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertNull; - import java.io.IOException; import java.net.URI; import java.util.ArrayList; import java.util.List; - import javax.ws.rs.core.MultivaluedMap; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.TestInfo; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + import org.apache.hadoop.io.Text; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.yarn.api.records.ApplicationId; @@ -42,12 +41,13 @@ import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.TestName; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; public class TestTimelineClientV2Impl { private static final Logger LOG = @@ -56,20 +56,25 @@ public class TestTimelineClientV2Impl { private static final long TIME_TO_SLEEP = 150L; private static final String EXCEPTION_MSG = "Exception in the content"; - @Before - public void setup() { + @BeforeEach + public void setup(TestInfo testInfo) { conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); conf.setFloat(YarnConfiguration.TIMELINE_SERVICE_VERSION, 2.0f); 
conf.setInt(YarnConfiguration.NUMBER_OF_ASYNC_ENTITIES_TO_MERGE, 3); - if (!currTestName.getMethodName() + if (!testInfo.getDisplayName() .contains("testRetryOnConnectionFailure")) { client = createTimelineClient(conf); } } - @Rule - public TestName currTestName = new TestName(); + @Test + void getTestInfo(TestInfo testInfo) { + System.out.println(testInfo.getDisplayName()); + System.out.println(testInfo.getTestMethod()); + System.out.println(testInfo.getTestClass()); + System.out.println(testInfo.getTags()); + } private YarnConfiguration conf; private TestV2TimelineClient createTimelineClient(YarnConfiguration config) { @@ -116,8 +121,8 @@ public class TestTimelineClientV2Impl { private List publishedEntities; public TimelineEntities getPublishedEntities(int putIndex) { - Assert.assertTrue("Not So many entities Published", - putIndex < publishedEntities.size()); + assertTrue(putIndex < publishedEntities.size(), + "Not So many entities Published"); return publishedEntities.get(putIndex); } @@ -152,7 +157,7 @@ public class TestTimelineClientV2Impl { } @Test - public void testExceptionMultipleRetry() { + void testExceptionMultipleRetry() { TestV2TimelineClientForExceptionHandling c = new TestV2TimelineClientForExceptionHandling( ApplicationId.newInstance(0, 0)); @@ -165,42 +170,42 @@ public class TestTimelineClientV2Impl { try { c.putEntities(new TimelineEntity()); } catch (IOException e) { - Assert.fail("YARN exception is expected"); + fail("YARN exception is expected"); } catch (YarnException e) { Throwable cause = e.getCause(); - Assert.assertTrue("IOException is expected", - cause instanceof IOException); - Assert.assertTrue("YARN exception is expected", - cause.getMessage().contains( - "TimelineClient has reached to max retry times : " + maxRetries)); + assertTrue(cause instanceof IOException, + "IOException is expected"); + assertTrue(cause.getMessage().contains( + "TimelineClient has reached to max retry times : " + maxRetries), + "YARN exception is expected"); } c.setThrowYarnException(true); try { c.putEntities(new TimelineEntity()); } catch (IOException e) { - Assert.fail("YARN exception is expected"); + fail("YARN exception is expected"); } catch (YarnException e) { Throwable cause = e.getCause(); - Assert.assertTrue("YARN exception is expected", - cause instanceof YarnException); - Assert.assertTrue("YARN exception is expected", - cause.getMessage().contains(EXCEPTION_MSG)); + assertTrue(cause instanceof YarnException, + "YARN exception is expected"); + assertTrue(cause.getMessage().contains(EXCEPTION_MSG), + "YARN exception is expected"); } c.stop(); } @Test - public void testPostEntities() throws Exception { + void testPostEntities() throws Exception { try { client.putEntities(generateEntity("1")); } catch (YarnException e) { - Assert.fail("Exception is not expected"); + fail("Exception is not expected"); } } @Test - public void testASyncCallMerge() throws Exception { + void testASyncCallMerge() throws Exception { client.setSleepBeforeReturn(true); try { client.putEntitiesAsync(generateEntity("1")); @@ -209,7 +214,7 @@ public class TestTimelineClientV2Impl { client.putEntitiesAsync(generateEntity("2")); client.putEntitiesAsync(generateEntity("3")); } catch (YarnException e) { - Assert.fail("Exception is not expected"); + fail("Exception is not expected"); } for (int i = 0; i < 4; i++) { if (client.getNumOfTimelineEntitiesPublished() == 2) { @@ -217,20 +222,24 @@ public class TestTimelineClientV2Impl { } Thread.sleep(TIME_TO_SLEEP); } - Assert.assertEquals("two merged 
TimelineEntities needs to be published", 2, - client.getNumOfTimelineEntitiesPublished()); + assertEquals(2, + client.getNumOfTimelineEntitiesPublished(), + "two merged TimelineEntities needs to be published"); TimelineEntities secondPublishedEntities = client.getPublishedEntities(1); - Assert.assertEquals( - "Merged TimelineEntities Object needs to 2 TimelineEntity Object", 2, - secondPublishedEntities.getEntities().size()); - Assert.assertEquals("Order of Async Events Needs to be FIFO", "2", - secondPublishedEntities.getEntities().get(0).getId()); - Assert.assertEquals("Order of Async Events Needs to be FIFO", "3", - secondPublishedEntities.getEntities().get(1).getId()); + assertEquals( + 2, + secondPublishedEntities.getEntities().size(), + "Merged TimelineEntities Object needs to 2 TimelineEntity Object"); + assertEquals("2", + secondPublishedEntities.getEntities().get(0).getId(), + "Order of Async Events Needs to be FIFO"); + assertEquals("3", + secondPublishedEntities.getEntities().get(1).getId(), + "Order of Async Events Needs to be FIFO"); } @Test - public void testSyncCall() throws Exception { + void testSyncCall() throws Exception { try { // sync entity should not be be merged with Async client.putEntities(generateEntity("1")); @@ -239,7 +248,7 @@ public class TestTimelineClientV2Impl { // except for the sync call above 2 should be merged client.putEntities(generateEntity("4")); } catch (YarnException e) { - Assert.fail("Exception is not expected"); + fail("Exception is not expected"); } for (int i = 0; i < 4; i++) { if (client.getNumOfTimelineEntitiesPublished() == 3) { @@ -253,57 +262,65 @@ public class TestTimelineClientV2Impl { int lastPublishIndex = asyncPushesMerged ? 2 : 3; TimelineEntities firstPublishedEntities = client.getPublishedEntities(0); - Assert.assertEquals("sync entities should not be merged with async", 1, - firstPublishedEntities.getEntities().size()); + assertEquals(1, + firstPublishedEntities.getEntities().size(), + "sync entities should not be merged with async"); // async push does not guarantee a merge but is FIFO if (asyncPushesMerged) { TimelineEntities secondPublishedEntities = client.getPublishedEntities(1); - Assert.assertEquals( - "async entities should be merged before publishing sync", 2, - secondPublishedEntities.getEntities().size()); - Assert.assertEquals("Order of Async Events Needs to be FIFO", "2", - secondPublishedEntities.getEntities().get(0).getId()); - Assert.assertEquals("Order of Async Events Needs to be FIFO", "3", - secondPublishedEntities.getEntities().get(1).getId()); + assertEquals( + 2, + secondPublishedEntities.getEntities().size(), + "async entities should be merged before publishing sync"); + assertEquals("2", + secondPublishedEntities.getEntities().get(0).getId(), + "Order of Async Events Needs to be FIFO"); + assertEquals("3", + secondPublishedEntities.getEntities().get(1).getId(), + "Order of Async Events Needs to be FIFO"); } else { TimelineEntities secondAsyncPublish = client.getPublishedEntities(1); - Assert.assertEquals("Order of Async Events Needs to be FIFO", "2", - secondAsyncPublish.getEntities().get(0).getId()); + assertEquals("2", + secondAsyncPublish.getEntities().get(0).getId(), + "Order of Async Events Needs to be FIFO"); TimelineEntities thirdAsyncPublish = client.getPublishedEntities(2); - Assert.assertEquals("Order of Async Events Needs to be FIFO", "3", - thirdAsyncPublish.getEntities().get(0).getId()); + assertEquals("3", + thirdAsyncPublish.getEntities().get(0).getId(), + "Order of Async Events Needs to 
be FIFO"); } // test the last entity published is sync put TimelineEntities thirdPublishedEntities = client.getPublishedEntities(lastPublishIndex); - Assert.assertEquals("sync entities had to be published at the last", 1, - thirdPublishedEntities.getEntities().size()); - Assert.assertEquals("Expected last sync Event is not proper", "4", - thirdPublishedEntities.getEntities().get(0).getId()); + assertEquals(1, + thirdPublishedEntities.getEntities().size(), + "sync entities had to be published at the last"); + assertEquals("4", + thirdPublishedEntities.getEntities().get(0).getId(), + "Expected last sync Event is not proper"); } @Test - public void testExceptionCalls() throws Exception { + void testExceptionCalls() throws Exception { client.setThrowYarnException(true); try { client.putEntitiesAsync(generateEntity("1")); } catch (YarnException e) { - Assert.fail("Async calls are not expected to throw exception"); + fail("Async calls are not expected to throw exception"); } try { client.putEntities(generateEntity("2")); - Assert.fail("Sync calls are expected to throw exception"); + fail("Sync calls are expected to throw exception"); } catch (YarnException e) { - Assert.assertEquals("Same exception needs to be thrown", - "ActualException", e.getCause().getMessage()); + assertEquals("ActualException", e.getCause().getMessage(), + "Same exception needs to be thrown"); } } @Test - public void testConfigurableNumberOfMerges() throws Exception { + void testConfigurableNumberOfMerges() throws Exception { client.setSleepBeforeReturn(true); try { // At max 3 entities need to be merged @@ -318,52 +335,52 @@ public class TestTimelineClientV2Impl { client.putEntitiesAsync(generateEntity("9")); client.putEntitiesAsync(generateEntity("10")); } catch (YarnException e) { - Assert.fail("No exception expected"); + fail("No exception expected"); } // not having the same logic here as it doesn't depend on how many times // events are published. Thread.sleep(2 * TIME_TO_SLEEP); printReceivedEntities(); for (TimelineEntities publishedEntities : client.publishedEntities) { - Assert.assertTrue( + assertTrue( + publishedEntities.getEntities().size() <= 3, "Number of entities should not be greater than 3 for each publish," - + " but was " + publishedEntities.getEntities().size(), - publishedEntities.getEntities().size() <= 3); + + " but was " + publishedEntities.getEntities().size()); } } @Test - public void testSetTimelineToken() throws Exception { + void testSetTimelineToken() throws Exception { UserGroupInformation ugi = UserGroupInformation.getCurrentUser(); assertEquals(0, ugi.getTokens().size()); - assertNull("Timeline token in v2 client should not be set", - client.currentTimelineToken); + assertNull(client.currentTimelineToken, + "Timeline token in v2 client should not be set"); Token token = Token.newInstance( new byte[0], "kind", new byte[0], "service"); client.setTimelineCollectorInfo(CollectorInfo.newInstance(null, token)); - assertNull("Timeline token in v2 client should not be set as token kind " + - "is unexepcted.", client.currentTimelineToken); + assertNull(client.currentTimelineToken, + "Timeline token in v2 client should not be set as token kind " + "is unexepcted."); assertEquals(0, ugi.getTokens().size()); token = Token.newInstance(new byte[0], TimelineDelegationTokenIdentifier. 
KIND_NAME.toString(), new byte[0], null); client.setTimelineCollectorInfo(CollectorInfo.newInstance(null, token)); - assertNull("Timeline token in v2 client should not be set as serice is " + - "not set.", client.currentTimelineToken); + assertNull(client.currentTimelineToken, + "Timeline token in v2 client should not be set as serice is " + "not set."); assertEquals(0, ugi.getTokens().size()); TimelineDelegationTokenIdentifier ident = new TimelineDelegationTokenIdentifier(new Text(ugi.getUserName()), - new Text("renewer"), null); + new Text("renewer"), null); ident.setSequenceNumber(1); token = Token.newInstance(ident.getBytes(), TimelineDelegationTokenIdentifier.KIND_NAME.toString(), new byte[0], "localhost:1234"); client.setTimelineCollectorInfo(CollectorInfo.newInstance(null, token)); assertEquals(1, ugi.getTokens().size()); - assertNotNull("Timeline token should be set in v2 client.", - client.currentTimelineToken); + assertNotNull(client.currentTimelineToken, + "Timeline token should be set in v2 client."); assertEquals(token, client.currentTimelineToken); ident.setSequenceNumber(20); @@ -377,7 +394,7 @@ public class TestTimelineClientV2Impl { } @Test - public void testAfterStop() throws Exception { + void testAfterStop() throws Exception { client.setSleepBeforeReturn(true); try { // At max 3 entities need to be merged @@ -388,12 +405,12 @@ public class TestTimelineClientV2Impl { client.stop(); try { client.putEntitiesAsync(generateEntity("50")); - Assert.fail("Exception expected"); + fail("Exception expected"); } catch (YarnException e) { // expected } } catch (YarnException e) { - Assert.fail("No exception expected"); + fail("No exception expected"); } // not having the same logic here as it doesn't depend on how many times // events are published. 
@@ -411,7 +428,7 @@ public class TestTimelineClientV2Impl { client.publishedEntities.get(client.publishedEntities.size() - 1); TimelineEntity timelineEntity = publishedEntities.getEntities() .get(publishedEntities.getEntities().size() - 1); - Assert.assertEquals("", "19", timelineEntity.getId()); + assertEquals("19", timelineEntity.getId(), ""); } private void printReceivedEntities() { @@ -435,7 +452,7 @@ public class TestTimelineClientV2Impl { return entity; } - @After + @AfterEach public void tearDown() { if (client != null) { client.stop(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineReaderClientImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineReaderClientImpl.java index 757aeb8c31d..975f9c74f4e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineReaderClientImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineReaderClientImpl.java @@ -18,30 +18,31 @@ package org.apache.hadoop.yarn.client.api.impl; -import static org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType.YARN_APPLICATION_ATTEMPT; -import static org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType.YARN_CONTAINER; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.when; +import java.io.IOException; +import java.net.URI; +import java.util.ArrayList; +import java.util.List; +import javax.ws.rs.core.MultivaluedMap; -import com.sun.jersey.api.client.ClientResponse; -import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; +import com.sun.jersey.api.client.ClientResponse; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ContainerId; import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity; import org.apache.hadoop.yarn.client.api.TimelineReaderClient; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; -import javax.ws.rs.core.MultivaluedMap; -import java.io.IOException; -import java.net.URI; -import java.util.ArrayList; -import java.util.List; +import static org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType.YARN_APPLICATION_ATTEMPT; +import static org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType.YARN_CONTAINER; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; /** * Test class for Timeline Reader Client. 
@@ -52,7 +53,7 @@ public class TestTimelineReaderClientImpl { "\"id\":\"appattempt_1234_0001_000001\"}"; private TimelineReaderClient client; - @Before + @BeforeEach public void setup() { client = new MockTimelineReaderClient(); Configuration conf = new YarnConfiguration(); @@ -63,64 +64,64 @@ public class TestTimelineReaderClientImpl { } @Test - public void testGetApplication() throws Exception { + void testGetApplication() throws Exception { ApplicationId applicationId = ApplicationId.fromString("application_1234_0001"); TimelineEntity entity = client.getApplicationEntity(applicationId, null, null); - Assert.assertEquals("mockApp1", entity.getId()); + assertEquals("mockApp1", entity.getId()); } @Test - public void getApplicationAttemptEntity() throws Exception { + void getApplicationAttemptEntity() throws Exception { ApplicationAttemptId attemptId = ApplicationAttemptId.fromString("appattempt_1234_0001_000001"); TimelineEntity entity = client.getApplicationAttemptEntity(attemptId, null, null); - Assert.assertEquals("mockAppAttempt1", entity.getId()); + assertEquals("mockAppAttempt1", entity.getId()); } @Test - public void getApplicationAttemptEntities() throws Exception { + void getApplicationAttemptEntities() throws Exception { ApplicationId applicationId = ApplicationId.fromString("application_1234_0001"); List entities = client.getApplicationAttemptEntities(applicationId, null, null, 0, null); - Assert.assertEquals(2, entities.size()); - Assert.assertEquals("mockAppAttempt2", entities.get(1).getId()); + assertEquals(2, entities.size()); + assertEquals("mockAppAttempt2", entities.get(1).getId()); } @Test - public void testGetContainer() throws Exception { + void testGetContainer() throws Exception { ContainerId containerId = ContainerId.fromString("container_1234_0001_01_000001"); TimelineEntity entity = client.getContainerEntity(containerId, null, null); - Assert.assertEquals("mockContainer1", entity.getId()); + assertEquals("mockContainer1", entity.getId()); } @Test - public void testGetContainers() throws Exception { + void testGetContainers() throws Exception { ApplicationId appId = ApplicationId.fromString("application_1234_0001"); List entities = client.getContainerEntities(appId, null, null, 0, null); - Assert.assertEquals(2, entities.size()); - Assert.assertEquals("mockContainer2", entities.get(1).getId()); + assertEquals(2, entities.size()); + assertEquals("mockContainer2", entities.get(1).getId()); } @Test - public void testGetContainersForAppAttempt() throws Exception { + void testGetContainersForAppAttempt() throws Exception { ApplicationId appId = ApplicationId.fromString("application_1234_0001"); List entities = client.getContainerEntities(appId, null, ImmutableMap.of("infofilters", appAttemptInfoFilter), 0, null); - Assert.assertEquals(2, entities.size()); - Assert.assertEquals("mockContainer4", entities.get(1).getId()); + assertEquals(2, entities.size()); + assertEquals("mockContainer4", entities.get(1).getId()); } - @After + @AfterEach public void tearDown() { if (client != null) { client.stop(); @@ -154,7 +155,7 @@ public class TestTimelineReaderClientImpl { when(mockClientResponse.getEntity(TimelineEntity[].class)).thenReturn( createTimelineEntities("mockContainer1", "mockContainer2")); } else if (path.contains(YARN_CONTAINER.toString()) && params.containsKey("infofilters")) { - Assert.assertEquals(encodeValue(appAttemptInfoFilter), params.get("infofilters").get(0)); + assertEquals(encodeValue(appAttemptInfoFilter), params.get("infofilters").get(0)); 
when(mockClientResponse.getEntity(TimelineEntity[].class)).thenReturn( createTimelineEntities("mockContainer3", "mockContainer4")); } else if (path.contains(YARN_APPLICATION_ATTEMPT.toString())) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/conf/TestHAUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/conf/TestHAUtil.java index fc2c1d0d335..d7d3f106156 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/conf/TestHAUtil.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/conf/TestHAUtil.java @@ -18,20 +18,20 @@ package org.apache.hadoop.yarn.conf; -import org.apache.hadoop.conf.Configuration; - -import org.apache.hadoop.util.StringUtils; -import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; -import org.junit.Before; -import org.junit.Test; - import java.util.Collection; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.util.StringUtils; +import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; public class TestHAUtil { private Configuration conf; @@ -48,7 +48,7 @@ public class TestHAUtil { private static final String RM_NODE_IDS_UNTRIMMED = RM1_NODE_ID_UNTRIMMED + "," + RM2_NODE_ID; private static final String RM_NODE_IDS = RM1_NODE_ID + "," + RM2_NODE_ID; - @Before + @BeforeEach public void setUp() { conf = new Configuration(); conf.set(YarnConfiguration.RM_HA_IDS, RM_NODE_IDS_UNTRIMMED); @@ -62,7 +62,7 @@ public class TestHAUtil { } @Test - public void testGetRMServiceId() throws Exception { + void testGetRMServiceId() throws Exception { conf.set(YarnConfiguration.RM_HA_IDS, RM1_NODE_ID + "," + RM2_NODE_ID); Collection rmhaIds = HAUtil.getRMHAIds(conf); assertEquals(2, rmhaIds.size()); @@ -73,18 +73,18 @@ public class TestHAUtil { } @Test - public void testGetRMId() throws Exception { + void testGetRMId() throws Exception { conf.set(YarnConfiguration.RM_HA_ID, RM1_NODE_ID); - assertEquals("Does not honor " + YarnConfiguration.RM_HA_ID, - RM1_NODE_ID, HAUtil.getRMHAId(conf)); + assertEquals(RM1_NODE_ID, HAUtil.getRMHAId(conf), + "Does not honor " + YarnConfiguration.RM_HA_ID); conf.clear(); - assertNull("Return null when " + YarnConfiguration.RM_HA_ID - + " is not set", HAUtil.getRMHAId(conf)); + assertNull(HAUtil.getRMHAId(conf), "Return null when " + YarnConfiguration.RM_HA_ID + + " is not set"); } @Test - public void testVerifyAndSetConfiguration() throws Exception { + void testVerifyAndSetConfiguration() throws Exception { Configuration myConf = new Configuration(conf); try { @@ -93,14 +93,12 @@ public class TestHAUtil { fail("Should not throw any exceptions."); } - assertEquals("Should be saved as Trimmed collection", - StringUtils.getStringCollection(RM_NODE_IDS), - HAUtil.getRMHAIds(myConf)); - assertEquals("Should be saved as Trimmed string", - RM1_NODE_ID, 
HAUtil.getRMHAId(myConf)); + assertEquals(StringUtils.getStringCollection(RM_NODE_IDS), + HAUtil.getRMHAIds(myConf), + "Should be saved as Trimmed collection"); + assertEquals(RM1_NODE_ID, HAUtil.getRMHAId(myConf), "Should be saved as Trimmed string"); for (String confKey : YarnConfiguration.getServiceAddressConfKeys(myConf)) { - assertEquals("RPC address not set for " + confKey, - RM1_ADDRESS, myConf.get(confKey)); + assertEquals(RM1_ADDRESS, myConf.get(confKey), "RPC address not set for " + confKey); } myConf = new Configuration(conf); @@ -108,12 +106,12 @@ public class TestHAUtil { try { HAUtil.verifyAndSetConfiguration(myConf); } catch (YarnRuntimeException e) { - assertEquals("YarnRuntimeException by verifyAndSetRMHAIds()", - HAUtil.BAD_CONFIG_MESSAGE_PREFIX + + assertEquals(HAUtil.BAD_CONFIG_MESSAGE_PREFIX + HAUtil.getInvalidValueMessage(YarnConfiguration.RM_HA_IDS, myConf.get(YarnConfiguration.RM_HA_IDS) + - "\nHA mode requires atleast two RMs"), - e.getMessage()); + "\nHA mode requires atleast two RMs"), + e.getMessage(), + "YarnRuntimeException by verifyAndSetRMHAIds()"); } myConf = new Configuration(conf); @@ -127,10 +125,10 @@ public class TestHAUtil { try { HAUtil.verifyAndSetConfiguration(myConf); } catch (YarnRuntimeException e) { - assertEquals("YarnRuntimeException by getRMId()", - HAUtil.BAD_CONFIG_MESSAGE_PREFIX + + assertEquals(HAUtil.BAD_CONFIG_MESSAGE_PREFIX + HAUtil.getNeedToSetValueMessage(YarnConfiguration.RM_HA_ID), - e.getMessage()); + e.getMessage(), + "YarnRuntimeException by getRMId()"); } myConf = new Configuration(conf); @@ -144,11 +142,11 @@ public class TestHAUtil { try { HAUtil.verifyAndSetConfiguration(myConf); } catch (YarnRuntimeException e) { - assertEquals("YarnRuntimeException by addSuffix()", - HAUtil.BAD_CONFIG_MESSAGE_PREFIX + + assertEquals(HAUtil.BAD_CONFIG_MESSAGE_PREFIX + HAUtil.getInvalidValueMessage(YarnConfiguration.RM_HA_ID, - RM_INVALID_NODE_ID), - e.getMessage()); + RM_INVALID_NODE_ID), + e.getMessage(), + "YarnRuntimeException by addSuffix()"); } myConf = new Configuration(); @@ -160,11 +158,10 @@ public class TestHAUtil { fail("Should throw YarnRuntimeException. 
by Configuration#set()"); } catch (YarnRuntimeException e) { String confKey = - HAUtil.addSuffix(YarnConfiguration.RM_ADDRESS, RM1_NODE_ID); - assertEquals("YarnRuntimeException by Configuration#set()", - HAUtil.BAD_CONFIG_MESSAGE_PREFIX + HAUtil.getNeedToSetValueMessage( - HAUtil.addSuffix(YarnConfiguration.RM_HOSTNAME, RM1_NODE_ID) - + " or " + confKey), e.getMessage()); + HAUtil.addSuffix(YarnConfiguration.RM_ADDRESS, RM1_NODE_ID); + assertEquals(HAUtil.BAD_CONFIG_MESSAGE_PREFIX + HAUtil.getNeedToSetValueMessage( + HAUtil.addSuffix(YarnConfiguration.RM_HOSTNAME, RM1_NODE_ID) + + " or " + confKey), e.getMessage(), "YarnRuntimeException by Configuration#set()"); } // simulate the case YarnConfiguration.RM_HA_IDS doesn't contain @@ -180,10 +177,10 @@ public class TestHAUtil { try { HAUtil.verifyAndSetConfiguration(myConf); } catch (YarnRuntimeException e) { - assertEquals("YarnRuntimeException by getRMId()'s validation", - HAUtil.BAD_CONFIG_MESSAGE_PREFIX + - HAUtil.getRMHAIdNeedToBeIncludedMessage("[rm2, rm3]", RM1_NODE_ID), - e.getMessage()); + assertEquals(HAUtil.BAD_CONFIG_MESSAGE_PREFIX + + HAUtil.getRMHAIdNeedToBeIncludedMessage("[rm2, rm3]", RM1_NODE_ID), + e.getMessage(), + "YarnRuntimeException by getRMId()'s validation"); } // simulate the case that no leader election is enabled @@ -196,19 +193,19 @@ public class TestHAUtil { try { HAUtil.verifyAndSetConfiguration(myConf); } catch (YarnRuntimeException e) { - assertEquals("YarnRuntimeException by getRMId()'s validation", - HAUtil.BAD_CONFIG_MESSAGE_PREFIX + HAUtil.NO_LEADER_ELECTION_MESSAGE, - e.getMessage()); + assertEquals(HAUtil.BAD_CONFIG_MESSAGE_PREFIX + HAUtil.NO_LEADER_ELECTION_MESSAGE, + e.getMessage(), + "YarnRuntimeException by getRMId()'s validation"); } } @Test - public void testGetConfKeyForRMInstance() { - assertTrue("RM instance id is not suffixed", - HAUtil.getConfKeyForRMInstance(YarnConfiguration.RM_ADDRESS, conf) - .contains(HAUtil.getRMHAId(conf))); - assertFalse("RM instance id is suffixed", - HAUtil.getConfKeyForRMInstance(YarnConfiguration.NM_ADDRESS, conf) - .contains(HAUtil.getRMHAId(conf))); + void testGetConfKeyForRMInstance() { + assertTrue(HAUtil.getConfKeyForRMInstance(YarnConfiguration.RM_ADDRESS, conf) + .contains(HAUtil.getRMHAId(conf)), + "RM instance id is not suffixed"); + assertFalse(HAUtil.getConfKeyForRMInstance(YarnConfiguration.NM_ADDRESS, conf) + .contains(HAUtil.getRMHAId(conf)), + "RM instance id is suffixed"); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfiguration.java index 212e09c02e9..b17c1806de3 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfiguration.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfiguration.java @@ -18,28 +18,29 @@ package org.apache.hadoop.yarn.conf; -import org.junit.Assert; - -import org.apache.hadoop.yarn.webapp.util.WebAppUtils; -import org.junit.Test; - import java.net.InetSocketAddress; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.assertFalse; +import org.junit.jupiter.api.Test; + +import org.apache.hadoop.yarn.webapp.util.WebAppUtils; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import 
static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotSame; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestYarnConfiguration { @Test - public void testDefaultRMWebUrl() throws Exception { + void testDefaultRMWebUrl() throws Exception { YarnConfiguration conf = new YarnConfiguration(); String rmWebUrl = WebAppUtils.getRMWebAppURLWithScheme(conf); // shouldn't have a "/" on the end of the url as all the other uri routinnes // specifically add slashes and Jetty doesn't handle double slashes. - Assert.assertNotSame("RM Web Url is not correct", "http://0.0.0.0:8088", - rmWebUrl); + assertNotSame("http://0.0.0.0:8088", + rmWebUrl, + "RM Web Url is not correct"); // test it in HA scenario conf.setBoolean(YarnConfiguration.RM_HA_ENABLED, true); @@ -47,7 +48,7 @@ public class TestYarnConfiguration { conf.set("yarn.resourcemanager.webapp.address.rm1", "10.10.10.10:18088"); conf.set("yarn.resourcemanager.webapp.address.rm2", "20.20.20.20:28088"); String rmWebUrlinHA = WebAppUtils.getRMWebAppURLWithScheme(conf); - Assert.assertEquals("http://10.10.10.10:18088", rmWebUrlinHA); + assertEquals("http://10.10.10.10:18088", rmWebUrlinHA); YarnConfiguration conf2 = new YarnConfiguration(); conf2.setBoolean(YarnConfiguration.RM_HA_ENABLED, true); @@ -55,17 +56,17 @@ public class TestYarnConfiguration { conf2.set("yarn.resourcemanager.hostname.rm1", "30.30.30.30"); conf2.set("yarn.resourcemanager.hostname.rm2", "40.40.40.40"); String rmWebUrlinHA2 = WebAppUtils.getRMWebAppURLWithScheme(conf2); - Assert.assertEquals("http://30.30.30.30:8088", rmWebUrlinHA2); + assertEquals("http://30.30.30.30:8088", rmWebUrlinHA2); rmWebUrlinHA2 = WebAppUtils.getRMWebAppURLWithScheme(conf2, 0); - Assert.assertEquals("http://30.30.30.30:8088", rmWebUrlinHA2); + assertEquals("http://30.30.30.30:8088", rmWebUrlinHA2); rmWebUrlinHA2 = WebAppUtils.getRMWebAppURLWithScheme(conf2, 1); - Assert.assertEquals("http://40.40.40.40:8088", rmWebUrlinHA2); + assertEquals("http://40.40.40.40:8088", rmWebUrlinHA2); } @Test - public void testRMWebUrlSpecified() throws Exception { + void testRMWebUrlSpecified() throws Exception { YarnConfiguration conf = new YarnConfiguration(); // seems a bit odd but right now we are forcing webapp for RM to be // RM_ADDRESS @@ -74,15 +75,15 @@ public class TestYarnConfiguration { conf.set(YarnConfiguration.RM_ADDRESS, "rmtesting:9999"); String rmWebUrl = WebAppUtils.getRMWebAppURLWithScheme(conf); String[] parts = rmWebUrl.split(":"); - Assert.assertEquals("RM Web URL Port is incrrect", 24543, - Integer.parseInt(parts[parts.length - 1])); - Assert.assertNotSame( - "RM Web Url not resolved correctly. Should not be rmtesting", - "http://rmtesting:24543", rmWebUrl); + assertEquals(24543, + Integer.parseInt(parts[parts.length - 1]), + "RM Web URL Port is incrrect"); + assertNotSame("http://rmtesting:24543", rmWebUrl, + "RM Web Url not resolved correctly. 
Should not be rmtesting"); } @Test - public void testGetSocketAddressForNMWithHA() { + void testGetSocketAddressForNMWithHA() { YarnConfiguration conf = new YarnConfiguration(); // Set NM address @@ -100,7 +101,7 @@ public class TestYarnConfiguration { } @Test - public void testGetSocketAddr() throws Exception { + void testGetSocketAddr() throws Exception { YarnConfiguration conf; InetSocketAddress resourceTrackerAddress; @@ -113,9 +114,9 @@ public class TestYarnConfiguration { YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_ADDRESS, YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT); assertEquals( - new InetSocketAddress( - YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_ADDRESS.split(":")[0], - YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT), + new InetSocketAddress( + YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_ADDRESS.split(":")[0], + YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT), resourceTrackerAddress); //with address @@ -126,9 +127,9 @@ public class TestYarnConfiguration { YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_ADDRESS, YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT); assertEquals( - new InetSocketAddress( - "10.0.0.1", - YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT), + new InetSocketAddress( + "10.0.0.1", + YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT), resourceTrackerAddress); //address and socket @@ -139,9 +140,9 @@ public class TestYarnConfiguration { YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_ADDRESS, YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT); assertEquals( - new InetSocketAddress( - "10.0.0.2", - 5001), + new InetSocketAddress( + "10.0.0.2", + 5001), resourceTrackerAddress); //bind host only @@ -153,9 +154,9 @@ public class TestYarnConfiguration { YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_ADDRESS, YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT); assertEquals( - new InetSocketAddress( - "10.0.0.3", - YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT), + new InetSocketAddress( + "10.0.0.3", + YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT), resourceTrackerAddress); //bind host and address no port @@ -167,9 +168,9 @@ public class TestYarnConfiguration { YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_ADDRESS, YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT); assertEquals( - new InetSocketAddress( - "0.0.0.0", - YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT), + new InetSocketAddress( + "0.0.0.0", + YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT), resourceTrackerAddress); //bind host and address with port @@ -181,15 +182,15 @@ public class TestYarnConfiguration { YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_ADDRESS, YarnConfiguration.DEFAULT_RM_RESOURCE_TRACKER_PORT); assertEquals( - new InetSocketAddress( - "0.0.0.0", - 5003), + new InetSocketAddress( + "0.0.0.0", + 5003), resourceTrackerAddress); } @Test - public void testUpdateConnectAddr() throws Exception { + void testUpdateConnectAddr() throws Exception { YarnConfiguration conf; InetSocketAddress resourceTrackerConnectAddress; InetSocketAddress serverAddress; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/InlineDispatcher.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/InlineDispatcher.java index cd6274afd0d..e882631d8f7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/InlineDispatcher.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/InlineDispatcher.java @@ -20,9 +20,6 @@ package org.apache.hadoop.yarn.event; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import org.apache.hadoop.yarn.event.AsyncDispatcher; -import org.apache.hadoop.yarn.event.Event; -import org.apache.hadoop.yarn.event.EventHandler; @SuppressWarnings({"unchecked", "rawtypes"}) public class InlineDispatcher extends AsyncDispatcher { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/TestAsyncDispatcher.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/TestAsyncDispatcher.java index 8b2dfa08b0d..4119542164c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/TestAsyncDispatcher.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/TestAsyncDispatcher.java @@ -27,23 +27,30 @@ import java.util.Set; import java.util.concurrent.BlockingQueue; import java.util.concurrent.LinkedBlockingQueue; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; +import org.slf4j.Logger; +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.metrics2.AbstractMetric; import org.apache.hadoop.metrics2.MetricsRecord; import org.apache.hadoop.metrics2.impl.MetricsCollectorImpl; import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; import org.apache.hadoop.test.GenericTestUtils; -import org.apache.hadoop.yarn.metrics.GenericEventTypeMetrics; -import org.slf4j.Logger; -import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; -import org.junit.Assert; -import org.junit.Test; +import org.apache.hadoop.yarn.metrics.GenericEventTypeMetrics; import static org.apache.hadoop.metrics2.lib.Interns.info; -import static org.junit.Assert.assertEquals; -import static org.mockito.Mockito.*; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; +import static org.mockito.Mockito.atLeastOnce; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; public class TestAsyncDispatcher { @@ -52,9 +59,10 @@ public class TestAsyncDispatcher { * 1. A thread which was putting event to event queue is interrupted. * 2. Event queue is empty on close. 
*/ - @SuppressWarnings({ "unchecked", "rawtypes" }) - @Test(timeout=10000) - public void testDispatcherOnCloseIfQueueEmpty() throws Exception { + @SuppressWarnings({"unchecked", "rawtypes"}) + @Test + @Timeout(10000) + void testDispatcherOnCloseIfQueueEmpty() throws Exception { BlockingQueue eventQueue = spy(new LinkedBlockingQueue()); Event event = mock(Event.class); doThrow(new InterruptedException()).when(eventQueue).put(event); @@ -66,19 +74,20 @@ public class TestAsyncDispatcher { disp.waitForEventThreadToWait(); try { disp.getEventHandler().handle(event); - Assert.fail("Expected YarnRuntimeException"); + fail("Expected YarnRuntimeException"); } catch (YarnRuntimeException e) { - Assert.assertTrue(e.getCause() instanceof InterruptedException); + assertTrue(e.getCause() instanceof InterruptedException); } // Queue should be empty and dispatcher should not hang on close - Assert.assertTrue("Event Queue should have been empty", - eventQueue.isEmpty()); + assertTrue(eventQueue.isEmpty(), + "Event Queue should have been empty"); disp.close(); } // Test dispatcher should timeout on draining events. - @Test(timeout=10000) - public void testDispatchStopOnTimeout() throws Exception { + @Test + @Timeout(10000) + void testDispatchStopOnTimeout() throws Exception { BlockingQueue eventQueue = new LinkedBlockingQueue(); eventQueue = spy(eventQueue); // simulate dispatcher is not drained. @@ -143,9 +152,10 @@ public class TestAsyncDispatcher { } // Test if drain dispatcher drains events on stop. - @SuppressWarnings({ "rawtypes" }) - @Test(timeout=10000) - public void testDrainDispatcherDrainEventsOnStop() throws Exception { + @SuppressWarnings({"rawtypes"}) + @Test + @Timeout(10000) + void testDrainDispatcherDrainEventsOnStop() throws Exception { YarnConfiguration conf = new YarnConfiguration(); conf.setInt(YarnConfiguration.DISPATCHER_DRAIN_EVENTS_TIMEOUT, 2000); BlockingQueue queue = new LinkedBlockingQueue(); @@ -161,11 +171,12 @@ public class TestAsyncDispatcher { } //Test print dispatcher details when the blocking queue is heavy - @Test(timeout = 10000) - public void testPrintDispatcherEventDetails() throws Exception { + @Test + @Timeout(10000) + void testPrintDispatcherEventDetails() throws Exception { YarnConfiguration conf = new YarnConfiguration(); conf.setInt(YarnConfiguration. - YARN_DISPATCHER_PRINT_EVENTS_INFO_THRESHOLD, 5000); + YARN_DISPATCHER_PRINT_EVENTS_INFO_THRESHOLD, 5000); Logger log = mock(Logger.class); AsyncDispatcher dispatcher = new AsyncDispatcher(); dispatcher.init(conf); @@ -190,7 +201,7 @@ public class TestAsyncDispatcher { Thread.sleep(2000); //Make sure more than one event to take verify(log, atLeastOnce()). - info("Latest dispatch event type: TestEventType"); + info("Latest dispatch event type: TestEventType"); } finally { //... 
restore logger object logger.set(null, oldLog); @@ -199,8 +210,9 @@ public class TestAsyncDispatcher { } //Test print dispatcher details when the blocking queue is heavy - @Test(timeout = 60000) - public void testPrintDispatcherEventDetailsAvoidDeadLoop() throws Exception { + @Test + @Timeout(60000) + void testPrintDispatcherEventDetailsAvoidDeadLoop() throws Exception { for (int i = 0; i < 5; i++) { testPrintDispatcherEventDetailsAvoidDeadLoopInternal(); } @@ -241,7 +253,7 @@ public class TestAsyncDispatcher { } @Test - public void testMetricsForDispatcher() throws Exception { + void testMetricsForDispatcher() throws Exception { YarnConfiguration conf = new YarnConfiguration(); AsyncDispatcher dispatcher = null; @@ -252,7 +264,7 @@ public class TestAsyncDispatcher { new GenericEventTypeMetrics.EventTypeMetricsBuilder() .setMs(DefaultMetricsSystem.instance()) .setInfo(info("GenericEventTypeMetrics for " - + TestEnum.class.getName(), + + TestEnum.class.getName(), "Metrics for " + dispatcher.getName())) .setEnumClass(TestEnum.class) .setEnums(TestEnum.class.getEnumConstants()) @@ -287,34 +299,34 @@ public class TestAsyncDispatcher { get(TestEnum.TestEventType2) == 2, 1000, 10000); // Check time spend. - Assert.assertTrue(genericEventTypeMetrics. + assertTrue(genericEventTypeMetrics. getTotalProcessingTime(TestEnum.TestEventType) - >= 1500*3); - Assert.assertTrue(genericEventTypeMetrics. + >= 1500 * 3); + assertTrue(genericEventTypeMetrics. getTotalProcessingTime(TestEnum.TestEventType) - < 1500*4); + < 1500 * 4); - Assert.assertTrue(genericEventTypeMetrics. + assertTrue(genericEventTypeMetrics. getTotalProcessingTime(TestEnum.TestEventType2) - >= 1500*2); - Assert.assertTrue(genericEventTypeMetrics. + >= 1500 * 2); + assertTrue(genericEventTypeMetrics. getTotalProcessingTime(TestEnum.TestEventType2) - < 1500*3); + < 1500 * 3); // Make sure metrics consistent. - Assert.assertEquals(Long.toString(genericEventTypeMetrics. + assertEquals(Long.toString(genericEventTypeMetrics. get(TestEnum.TestEventType)), genericEventTypeMetrics. getRegistry().get("TestEventType_event_count").toString()); - Assert.assertEquals(Long.toString(genericEventTypeMetrics. + assertEquals(Long.toString(genericEventTypeMetrics. get(TestEnum.TestEventType2)), genericEventTypeMetrics. getRegistry().get("TestEventType2_event_count").toString()); - Assert.assertEquals(Long.toString(genericEventTypeMetrics. + assertEquals(Long.toString(genericEventTypeMetrics. getTotalProcessingTime(TestEnum.TestEventType)), genericEventTypeMetrics. getRegistry().get("TestEventType_processing_time").toString()); - Assert.assertEquals(Long.toString(genericEventTypeMetrics. + assertEquals(Long.toString(genericEventTypeMetrics. getTotalProcessingTime(TestEnum.TestEventType2)), genericEventTypeMetrics. 
getRegistry().get("TestEventType2_processing_time").toString()); @@ -326,7 +338,7 @@ public class TestAsyncDispatcher { } @Test - public void testDispatcherMetricsHistogram() throws Exception { + void testDispatcherMetricsHistogram() throws Exception { YarnConfiguration conf = new YarnConfiguration(); AsyncDispatcher dispatcher = null; @@ -337,7 +349,7 @@ public class TestAsyncDispatcher { new GenericEventTypeMetrics.EventTypeMetricsBuilder() .setMs(DefaultMetricsSystem.instance()) .setInfo(info("GenericEventTypeMetrics for " - + TestEnum.class.getName(), + + TestEnum.class.getName(), "Metrics for " + dispatcher.getName())) .setEnumClass(TestEnum.class) .setEnums(TestEnum.class.getEnumConstants()) @@ -393,14 +405,13 @@ public class TestAsyncDispatcher { String metricName = metric.name(); if (expectedValues.containsKey(metricName)) { Long expectedValue = expectedValues.get(metricName); - Assert.assertEquals( - "Metric " + metricName + " doesn't have expected value", - expectedValue, metric.value()); + assertEquals(expectedValue, metric.value(), + "Metric " + metricName + " doesn't have expected value"); testResults.add(metricName); } } } - Assert.assertEquals(expectedValues.keySet(), testResults); + assertEquals(expectedValues.keySet(), testResults); } finally { dispatcher.close(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/factories/impl/pb/TestRpcClientFactoryPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/factories/impl/pb/TestRpcClientFactoryPBImpl.java index 7967c41765d..ae807ec6e48 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/factories/impl/pb/TestRpcClientFactoryPBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/factories/impl/pb/TestRpcClientFactoryPBImpl.java @@ -18,11 +18,12 @@ package org.apache.hadoop.yarn.factories.impl.pb; -import org.apache.hadoop.conf.Configuration; -import org.junit.Test; - import java.net.InetSocketAddress; +import org.junit.jupiter.api.Test; + +import org.apache.hadoop.conf.Configuration; + import static org.mockito.ArgumentMatchers.anyString; import static org.mockito.Mockito.atLeastOnce; import static org.mockito.Mockito.mock; @@ -33,7 +34,7 @@ import static org.mockito.Mockito.verify; */ public class TestRpcClientFactoryPBImpl { @Test - public void testToUseCustomClassloader() throws Exception { + void testToUseCustomClassloader() throws Exception { Configuration configuration = mock(Configuration.class); RpcClientFactoryPBImpl rpcClientFactoryPB = RpcClientFactoryPBImpl.get(); try { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/factories/impl/pb/TestRpcServerFactoryPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/factories/impl/pb/TestRpcServerFactoryPBImpl.java index c8650e169a1..bc9e1d890c2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/factories/impl/pb/TestRpcServerFactoryPBImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/factories/impl/pb/TestRpcServerFactoryPBImpl.java @@ -18,11 +18,12 @@ package org.apache.hadoop.yarn.factories.impl.pb; -import org.apache.hadoop.conf.Configuration; -import org.junit.Test; - import java.net.InetSocketAddress; +import org.junit.jupiter.api.Test; + +import 
org.apache.hadoop.conf.Configuration; + import static org.mockito.ArgumentMatchers.anyString; import static org.mockito.Mockito.atLeastOnce; import static org.mockito.Mockito.mock; @@ -33,7 +34,7 @@ import static org.mockito.Mockito.verify; */ public class TestRpcServerFactoryPBImpl { @Test - public void testToUseCustomClassloader() throws Exception { + void testToUseCustomClassloader() throws Exception { Configuration configuration = mock(Configuration.class); RpcServerFactoryPBImpl rpcServerFactoryPB = RpcServerFactoryPBImpl.get(); try { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/ipc/TestRPCUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/ipc/TestRPCUtil.java index 73380741d24..8ffb7deac0b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/ipc/TestRPCUtil.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/ipc/TestRPCUtil.java @@ -21,71 +21,71 @@ package org.apache.hadoop.yarn.ipc; import java.io.FileNotFoundException; import java.io.IOException; -import org.junit.Assert; +import org.apache.hadoop.thirdparty.protobuf.ServiceException; +import org.junit.jupiter.api.Test; import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.yarn.exceptions.YarnException; -import org.junit.Test; -import org.apache.hadoop.thirdparty.protobuf.ServiceException; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestRPCUtil { @Test - public void testUnknownExceptionUnwrapping() { + void testUnknownExceptionUnwrapping() { Class exception = YarnException.class; String className = "UnknownException.class"; verifyRemoteExceptionUnwrapping(exception, className); } @Test - public void testRemoteIOExceptionUnwrapping() { + void testRemoteIOExceptionUnwrapping() { Class exception = IOException.class; verifyRemoteExceptionUnwrapping(exception, exception.getName()); } @Test - public void testRemoteIOExceptionDerivativeUnwrapping() { + void testRemoteIOExceptionDerivativeUnwrapping() { // Test IOException sub-class Class exception = FileNotFoundException.class; verifyRemoteExceptionUnwrapping(exception, exception.getName()); } @Test - public void testRemoteYarnExceptionUnwrapping() { + void testRemoteYarnExceptionUnwrapping() { Class exception = YarnException.class; verifyRemoteExceptionUnwrapping(exception, exception.getName()); } @Test - public void testRemoteYarnExceptionDerivativeUnwrapping() { + void testRemoteYarnExceptionDerivativeUnwrapping() { Class exception = YarnTestException.class; verifyRemoteExceptionUnwrapping(exception, exception.getName()); } @Test - public void testRemoteRuntimeExceptionUnwrapping() { + void testRemoteRuntimeExceptionUnwrapping() { Class exception = NullPointerException.class; verifyRemoteExceptionUnwrapping(exception, exception.getName()); } @Test - public void testUnexpectedRemoteExceptionUnwrapping() { + void testUnexpectedRemoteExceptionUnwrapping() { // Non IOException, YarnException thrown by the remote side. Class exception = Exception.class; verifyRemoteExceptionUnwrapping(RemoteException.class, exception.getName()); } - + @Test - public void testRemoteYarnExceptionWithoutStringConstructor() { + void testRemoteYarnExceptionWithoutStringConstructor() { // Derivatives of YarnException should always define a string constructor. 
Class exception = YarnTestExceptionNoConstructor.class; verifyRemoteExceptionUnwrapping(RemoteException.class, exception.getName()); } - + @Test - public void testRPCServiceExceptionUnwrapping() { + void testRPCServiceExceptionUnwrapping() { String message = "ServiceExceptionMessage"; ServiceException se = new ServiceException(message); @@ -96,12 +96,12 @@ public class TestRPCUtil { t = thrown; } - Assert.assertTrue(IOException.class.isInstance(t)); - Assert.assertTrue(t.getMessage().contains(message)); + assertTrue(IOException.class.isInstance(t)); + assertTrue(t.getMessage().contains(message)); } @Test - public void testRPCIOExceptionUnwrapping() { + void testRPCIOExceptionUnwrapping() { String message = "DirectIOExceptionMessage"; IOException ioException = new FileNotFoundException(message); ServiceException se = new ServiceException(ioException); @@ -112,12 +112,12 @@ public class TestRPCUtil { } catch (Throwable thrown) { t = thrown; } - Assert.assertTrue(FileNotFoundException.class.isInstance(t)); - Assert.assertTrue(t.getMessage().contains(message)); + assertTrue(FileNotFoundException.class.isInstance(t)); + assertTrue(t.getMessage().contains(message)); } @Test - public void testRPCRuntimeExceptionUnwrapping() { + void testRPCRuntimeExceptionUnwrapping() { String message = "RPCRuntimeExceptionUnwrapping"; RuntimeException re = new NullPointerException(message); ServiceException se = new ServiceException(re); @@ -129,8 +129,8 @@ public class TestRPCUtil { t = thrown; } - Assert.assertTrue(NullPointerException.class.isInstance(t)); - Assert.assertTrue(t.getMessage().contains(message)); + assertTrue(NullPointerException.class.isInstance(t)); + assertTrue(t.getMessage().contains(message)); } private void verifyRemoteExceptionUnwrapping( @@ -147,11 +147,10 @@ public class TestRPCUtil { t = thrown; } - Assert.assertTrue("Expected exception [" + expectedLocalException - + "] but found " + t, expectedLocalException.isInstance(t)); - Assert.assertTrue( - "Expected message [" + message + "] but found " + t.getMessage(), t - .getMessage().contains(message)); + assertTrue(expectedLocalException.isInstance(t), "Expected exception [" + expectedLocalException + + "] but found " + t); + assertTrue(t.getMessage().contains(message), + "Expected message [" + message + "] but found " + t.getMessage()); } private static class YarnTestException extends YarnException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/LogAggregationTestUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/LogAggregationTestUtils.java index 3cd563a6489..e0010a373b6 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/LogAggregationTestUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/LogAggregationTestUtils.java @@ -18,13 +18,13 @@ package org.apache.hadoop.yarn.logaggregation; +import java.util.List; + import org.apache.commons.lang3.StringUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController; -import java.util.List; - import static org.apache.hadoop.yarn.conf.YarnConfiguration.LOG_AGGREGATION_FILE_CONTROLLER_FMT; import static org.apache.hadoop.yarn.conf.YarnConfiguration.LOG_AGGREGATION_REMOTE_APP_LOG_DIR_FMT; import static 
org.apache.hadoop.yarn.conf.YarnConfiguration.LOG_AGGREGATION_REMOTE_APP_LOG_DIR_SUFFIX_FMT; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogDeletionService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogDeletionService.java index 285ac43322a..fa5a5870c4e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogDeletionService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogDeletionService.java @@ -18,6 +18,15 @@ package org.apache.hadoop.yarn.logaggregation; +import java.io.IOException; +import java.net.URI; +import java.util.Arrays; +import java.util.List; + +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + import org.apache.commons.lang3.tuple.Pair; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; @@ -32,14 +41,6 @@ import org.apache.hadoop.yarn.logaggregation.testutils.LogAggregationTestcase; import org.apache.hadoop.yarn.logaggregation.testutils.LogAggregationTestcaseBuilder; import org.apache.hadoop.yarn.logaggregation.testutils.LogAggregationTestcaseBuilder.AppDescriptor; import org.apache.log4j.Level; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - -import java.io.IOException; -import java.net.URI; -import java.util.Arrays; -import java.util.List; import static org.apache.hadoop.yarn.conf.YarnConfiguration.LOG_AGGREGATION_FILE_CONTROLLER_FMT; import static org.apache.hadoop.yarn.logaggregation.LogAggregationTestUtils.enableFileControllers; @@ -64,12 +65,12 @@ public class TestAggregatedLogDeletionService { LogAggregationTFileController.class); public static final List ALL_FILE_CONTROLLER_NAMES = Arrays.asList(I_FILE, T_FILE); - @BeforeClass + @BeforeAll public static void beforeClass() { org.apache.log4j.Logger.getRootLogger().setLevel(Level.DEBUG); } - @Before + @BeforeEach public void closeFilesystems() throws IOException { // prevent the same mockfs instance from being reused due to FS cache FileSystem.closeAll(); @@ -91,7 +92,7 @@ public class TestAggregatedLogDeletionService { } @Test - public void testDeletion() throws Exception { + void testDeletion() throws Exception { long now = System.currentTimeMillis(); long toDeleteTime = now - (2000 * 1000); long toKeepTime = now - (1500 * 1000); @@ -99,36 +100,36 @@ public class TestAggregatedLogDeletionService { Configuration conf = setupConfiguration(1800, -1); long timeout = 2000L; LogAggregationTestcaseBuilder.create(conf) - .withRootPath(ROOT) - .withRemoteRootLogPath(REMOTE_ROOT_LOG_DIR) - .withUserDir(USER_ME, toKeepTime) - .withSuffixDir(SUFFIX, toDeleteTime) - .withBucketDir(toDeleteTime) - .withApps(Lists.newArrayList( - new AppDescriptor(toDeleteTime, Lists.newArrayList()), - new AppDescriptor(toDeleteTime, Lists.newArrayList( - Pair.of(DIR_HOST1, toDeleteTime), - Pair.of(DIR_HOST2, toKeepTime))), - new AppDescriptor(toDeleteTime, Lists.newArrayList( - Pair.of(DIR_HOST1, toDeleteTime), - Pair.of(DIR_HOST2, toDeleteTime))), - new AppDescriptor(toDeleteTime, Lists.newArrayList( - Pair.of(DIR_HOST1, toDeleteTime), - Pair.of(DIR_HOST2, toKeepTime))))) - .withFinishedApps(1, 2, 3) - .withRunningApps(4) - .injectExceptionForAppDirDeletion(3) - .build() - 
.startDeletionService() - .verifyAppDirsDeleted(timeout, 1, 3) - .verifyAppDirsNotDeleted(timeout, 2, 4) - .verifyAppFileDeleted(4, 1, timeout) - .verifyAppFileNotDeleted(4, 2, timeout) - .teardown(1); + .withRootPath(ROOT) + .withRemoteRootLogPath(REMOTE_ROOT_LOG_DIR) + .withUserDir(USER_ME, toKeepTime) + .withSuffixDir(SUFFIX, toDeleteTime) + .withBucketDir(toDeleteTime) + .withApps(Lists.newArrayList( + new AppDescriptor(toDeleteTime, Lists.newArrayList()), + new AppDescriptor(toDeleteTime, Lists.newArrayList( + Pair.of(DIR_HOST1, toDeleteTime), + Pair.of(DIR_HOST2, toKeepTime))), + new AppDescriptor(toDeleteTime, Lists.newArrayList( + Pair.of(DIR_HOST1, toDeleteTime), + Pair.of(DIR_HOST2, toDeleteTime))), + new AppDescriptor(toDeleteTime, Lists.newArrayList( + Pair.of(DIR_HOST1, toDeleteTime), + Pair.of(DIR_HOST2, toKeepTime))))) + .withFinishedApps(1, 2, 3) + .withRunningApps(4) + .injectExceptionForAppDirDeletion(3) + .build() + .startDeletionService() + .verifyAppDirsDeleted(timeout, 1, 3) + .verifyAppDirsNotDeleted(timeout, 2, 4) + .verifyAppFileDeleted(4, 1, timeout) + .verifyAppFileNotDeleted(4, 2, timeout) + .teardown(1); } @Test - public void testRefreshLogRetentionSettings() throws Exception { + void testRefreshLogRetentionSettings() throws Exception { long now = System.currentTimeMillis(); long before2000Secs = now - (2000 * 1000); long before50Secs = now - (50 * 1000); @@ -138,51 +139,51 @@ public class TestAggregatedLogDeletionService { Configuration conf = setupConfiguration(1800, 1); LogAggregationTestcase testcase = LogAggregationTestcaseBuilder.create(conf) - .withRootPath(ROOT) - .withRemoteRootLogPath(REMOTE_ROOT_LOG_DIR) - .withUserDir(USER_ME, before50Secs) - .withSuffixDir(SUFFIX, before50Secs) - .withBucketDir(before50Secs) - .withApps(Lists.newArrayList( - //Set time last modified of app1Dir directory and its files to before2000Secs - new AppDescriptor(before2000Secs, Lists.newArrayList( - Pair.of(DIR_HOST1, before2000Secs))), - //Set time last modified of app1Dir directory and its files to before50Secs - new AppDescriptor(before50Secs, Lists.newArrayList( - Pair.of(DIR_HOST1, before50Secs)))) - ) - .withFinishedApps(1, 2) - .withRunningApps() - .build(); - + .withRootPath(ROOT) + .withRemoteRootLogPath(REMOTE_ROOT_LOG_DIR) + .withUserDir(USER_ME, before50Secs) + .withSuffixDir(SUFFIX, before50Secs) + .withBucketDir(before50Secs) + .withApps(Lists.newArrayList( + //Set time last modified of app1Dir directory and its files to before2000Secs + new AppDescriptor(before2000Secs, Lists.newArrayList( + Pair.of(DIR_HOST1, before2000Secs))), + //Set time last modified of app1Dir directory and its files to before50Secs + new AppDescriptor(before50Secs, Lists.newArrayList( + Pair.of(DIR_HOST1, before50Secs)))) + ) + .withFinishedApps(1, 2) + .withRunningApps() + .build(); + testcase - .startDeletionService() - //app1Dir would be deleted since it is done above log retention period - .verifyAppDirDeleted(1, 10000L) - //app2Dir is not expected to be deleted since it is below the threshold - .verifyAppDirNotDeleted(2, 3000L); + .startDeletionService() + //app1Dir would be deleted since it is done above log retention period + .verifyAppDirDeleted(1, 10000L) + //app2Dir is not expected to be deleted since it is below the threshold + .verifyAppDirNotDeleted(2, 3000L); //Now, let's change the log aggregation retention configs conf.setInt(YarnConfiguration.LOG_AGGREGATION_RETAIN_SECONDS, 50); conf.setInt(YarnConfiguration.LOG_AGGREGATION_RETAIN_CHECK_INTERVAL_SECONDS, - 
checkIntervalSeconds); + checkIntervalSeconds); testcase - //We have not called refreshLogSettings, hence don't expect to see - // the changed conf values - .verifyCheckIntervalMilliSecondsNotEqualTo(checkIntervalMilliSeconds) - //refresh the log settings - .refreshLogRetentionSettings() - //Check interval time should reflect the new value - .verifyCheckIntervalMilliSecondsEqualTo(checkIntervalMilliSeconds) - //app2Dir should be deleted since it falls above the threshold - .verifyAppDirDeleted(2, 10000L) - //Close expected 2 times: once for refresh and once for stopping - .teardown(2); + //We have not called refreshLogSettings, hence don't expect to see + // the changed conf values + .verifyCheckIntervalMilliSecondsNotEqualTo(checkIntervalMilliSeconds) + //refresh the log settings + .refreshLogRetentionSettings() + //Check interval time should reflect the new value + .verifyCheckIntervalMilliSecondsEqualTo(checkIntervalMilliSeconds) + //app2Dir should be deleted since it falls above the threshold + .verifyAppDirDeleted(2, 10000L) + //Close expected 2 times: once for refresh and once for stopping + .teardown(2); } - + @Test - public void testCheckInterval() throws Exception { + void testCheckInterval() throws Exception { long now = System.currentTimeMillis(); long toDeleteTime = now - TEN_DAYS_IN_SECONDS * 1000; @@ -192,32 +193,32 @@ public class TestAggregatedLogDeletionService { FileSystem.closeAll(); LogAggregationTestcaseBuilder.create(conf) - .withRootPath(ROOT) - .withRemoteRootLogPath(REMOTE_ROOT_LOG_DIR) - .withUserDir(USER_ME, now) - .withSuffixDir(SUFFIX, now) - .withBucketDir(now) - .withApps(Lists.newArrayList( - new AppDescriptor(now, - Lists.newArrayList(Pair.of(DIR_HOST1, now))), - new AppDescriptor(now))) - .withFinishedApps(1) - .withRunningApps() - .build() - .startDeletionService() - .verifyAnyPathListedAtLeast(4, 10000L) - .verifyAppDirNotDeleted(1, NO_TIMEOUT) - // modify the timestamp of the logs and verify if it is picked up quickly - .changeModTimeOfApp(1, toDeleteTime) - .changeModTimeOfAppLogDir(1, 1, toDeleteTime) - .changeModTimeOfBucketDir(toDeleteTime) - .reinitAllPaths() - .verifyAppDirDeleted(1, 10000L) - .teardown(1); + .withRootPath(ROOT) + .withRemoteRootLogPath(REMOTE_ROOT_LOG_DIR) + .withUserDir(USER_ME, now) + .withSuffixDir(SUFFIX, now) + .withBucketDir(now) + .withApps(Lists.newArrayList( + new AppDescriptor(now, + Lists.newArrayList(Pair.of(DIR_HOST1, now))), + new AppDescriptor(now))) + .withFinishedApps(1) + .withRunningApps() + .build() + .startDeletionService() + .verifyAnyPathListedAtLeast(4, 10000L) + .verifyAppDirNotDeleted(1, NO_TIMEOUT) + // modify the timestamp of the logs and verify if it is picked up quickly + .changeModTimeOfApp(1, toDeleteTime) + .changeModTimeOfAppLogDir(1, 1, toDeleteTime) + .changeModTimeOfBucketDir(toDeleteTime) + .reinitAllPaths() + .verifyAppDirDeleted(1, 10000L) + .teardown(1); } @Test - public void testRobustLogDeletion() throws Exception { + void testRobustLogDeletion() throws Exception { Configuration conf = setupConfiguration(TEN_DAYS_IN_SECONDS, 1); // prevent us from picking up the same mockfs instance from another test @@ -225,26 +226,26 @@ public class TestAggregatedLogDeletionService { long modTime = 0L; LogAggregationTestcaseBuilder.create(conf) - .withRootPath(ROOT) - .withRemoteRootLogPath(REMOTE_ROOT_LOG_DIR) - .withUserDir(USER_ME, modTime) - .withSuffixDir(SUFFIX, modTime) - .withBucketDir(modTime, "0") - .withApps(Lists.newArrayList( - new AppDescriptor(modTime), - new AppDescriptor(modTime), - 
new AppDescriptor(modTime, Lists.newArrayList(Pair.of(DIR_HOST1, modTime))))) - .withAdditionalAppDirs(Lists.newArrayList(Pair.of("application_a", modTime))) - .withFinishedApps(1, 3) - .withRunningApps() - .injectExceptionForAppDirDeletion(1) - .build() - .runDeletionTask(TEN_DAYS_IN_SECONDS) - .verifyAppDirDeleted(3, NO_TIMEOUT); + .withRootPath(ROOT) + .withRemoteRootLogPath(REMOTE_ROOT_LOG_DIR) + .withUserDir(USER_ME, modTime) + .withSuffixDir(SUFFIX, modTime) + .withBucketDir(modTime, "0") + .withApps(Lists.newArrayList( + new AppDescriptor(modTime), + new AppDescriptor(modTime), + new AppDescriptor(modTime, Lists.newArrayList(Pair.of(DIR_HOST1, modTime))))) + .withAdditionalAppDirs(Lists.newArrayList(Pair.of("application_a", modTime))) + .withFinishedApps(1, 3) + .withRunningApps() + .injectExceptionForAppDirDeletion(1) + .build() + .runDeletionTask(TEN_DAYS_IN_SECONDS) + .verifyAppDirDeleted(3, NO_TIMEOUT); } @Test - public void testDeletionTwoControllers() throws IOException { + void testDeletionTwoControllers() throws IOException { long now = System.currentTimeMillis(); long toDeleteTime = now - (2000 * 1000); long toKeepTime = now - (1500 * 1000); @@ -252,48 +253,48 @@ public class TestAggregatedLogDeletionService { Configuration conf = setupConfiguration(1800, -1); enableFileControllers(conf, REMOTE_ROOT_LOG_DIR, ALL_FILE_CONTROLLERS, - ALL_FILE_CONTROLLER_NAMES); + ALL_FILE_CONTROLLER_NAMES); long timeout = 2000L; LogAggregationTestcaseBuilder.create(conf) - .withRootPath(ROOT) - .withRemoteRootLogPath(REMOTE_ROOT_LOG_DIR) - .withBothFileControllers() - .withUserDir(USER_ME, toKeepTime) - .withSuffixDir(SUFFIX, toDeleteTime) - .withBucketDir(toDeleteTime) - .withApps(//Apps for TFile - Lists.newArrayList( - new AppDescriptor(T_FILE, toDeleteTime, Lists.newArrayList()), - new AppDescriptor(T_FILE, toDeleteTime, Lists.newArrayList( - Pair.of(DIR_HOST1, toDeleteTime), - Pair.of(DIR_HOST2, toKeepTime))), - new AppDescriptor(T_FILE, toDeleteTime, Lists.newArrayList( - Pair.of(DIR_HOST1, toDeleteTime), - Pair.of(DIR_HOST2, toDeleteTime))), - new AppDescriptor(T_FILE, toDeleteTime, Lists.newArrayList( - Pair.of(DIR_HOST1, toDeleteTime), - Pair.of(DIR_HOST2, toKeepTime))), - //Apps for IFile - new AppDescriptor(I_FILE, toDeleteTime, Lists.newArrayList()), - new AppDescriptor(I_FILE, toDeleteTime, Lists.newArrayList( - Pair.of(DIR_HOST1, toDeleteTime), - Pair.of(DIR_HOST2, toKeepTime))), - new AppDescriptor(I_FILE, toDeleteTime, Lists.newArrayList( - Pair.of(DIR_HOST1, toDeleteTime), - Pair.of(DIR_HOST2, toDeleteTime))), - new AppDescriptor(I_FILE, toDeleteTime, Lists.newArrayList( - Pair.of(DIR_HOST1, toDeleteTime), - Pair.of(DIR_HOST2, toKeepTime))))) - .withFinishedApps(1, 2, 3, 5, 6, 7) - .withRunningApps(4, 8) - .injectExceptionForAppDirDeletion(3, 6) - .build() - .startDeletionService() - .verifyAppDirsDeleted(timeout, 1, 3, 5, 7) - .verifyAppDirsNotDeleted(timeout, 2, 4, 6, 8) - .verifyAppFilesDeleted(timeout, Lists.newArrayList(Pair.of(4, 1), Pair.of(8, 1))) - .verifyAppFilesNotDeleted(timeout, Lists.newArrayList(Pair.of(4, 2), Pair.of(8, 2))) - .teardown(1); + .withRootPath(ROOT) + .withRemoteRootLogPath(REMOTE_ROOT_LOG_DIR) + .withBothFileControllers() + .withUserDir(USER_ME, toKeepTime) + .withSuffixDir(SUFFIX, toDeleteTime) + .withBucketDir(toDeleteTime) + .withApps(//Apps for TFile + Lists.newArrayList( + new AppDescriptor(T_FILE, toDeleteTime, Lists.newArrayList()), + new AppDescriptor(T_FILE, toDeleteTime, Lists.newArrayList( + Pair.of(DIR_HOST1, toDeleteTime), + 
Pair.of(DIR_HOST2, toKeepTime))), + new AppDescriptor(T_FILE, toDeleteTime, Lists.newArrayList( + Pair.of(DIR_HOST1, toDeleteTime), + Pair.of(DIR_HOST2, toDeleteTime))), + new AppDescriptor(T_FILE, toDeleteTime, Lists.newArrayList( + Pair.of(DIR_HOST1, toDeleteTime), + Pair.of(DIR_HOST2, toKeepTime))), + //Apps for IFile + new AppDescriptor(I_FILE, toDeleteTime, Lists.newArrayList()), + new AppDescriptor(I_FILE, toDeleteTime, Lists.newArrayList( + Pair.of(DIR_HOST1, toDeleteTime), + Pair.of(DIR_HOST2, toKeepTime))), + new AppDescriptor(I_FILE, toDeleteTime, Lists.newArrayList( + Pair.of(DIR_HOST1, toDeleteTime), + Pair.of(DIR_HOST2, toDeleteTime))), + new AppDescriptor(I_FILE, toDeleteTime, Lists.newArrayList( + Pair.of(DIR_HOST1, toDeleteTime), + Pair.of(DIR_HOST2, toKeepTime))))) + .withFinishedApps(1, 2, 3, 5, 6, 7) + .withRunningApps(4, 8) + .injectExceptionForAppDirDeletion(3, 6) + .build() + .startDeletionService() + .verifyAppDirsDeleted(timeout, 1, 3, 5, 7) + .verifyAppDirsNotDeleted(timeout, 2, 4, 6, 8) + .verifyAppFilesDeleted(timeout, Lists.newArrayList(Pair.of(4, 1), Pair.of(8, 1))) + .verifyAppFilesNotDeleted(timeout, Lists.newArrayList(Pair.of(4, 2), Pair.of(8, 2))) + .teardown(1); } static class MockFileSystem extends FilterFileSystem { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java index bf20fb74292..5a4beca990c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java @@ -18,10 +18,6 @@ package org.apache.hadoop.yarn.logaggregation; -import static org.mockito.Mockito.spy; -import static org.mockito.Mockito.when; -import static org.mockito.Mockito.doThrow; - import java.io.BufferedReader; import java.io.DataInputStream; import java.io.File; @@ -41,9 +37,14 @@ import java.util.Arrays; import java.util.Collections; import java.util.concurrent.CountDownLatch; -import org.junit.Assert; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Assumptions; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.fs.FileStatus; @@ -62,10 +63,15 @@ import org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogReader; import org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogValue; import org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogWriter; import org.apache.hadoop.yarn.util.Times; -import org.junit.After; -import org.junit.Assume; -import org.junit.Before; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.when; public class TestAggregatedLogFormat { @@ -84,8 
+90,8 @@ public class TestAggregatedLogFormat { } } - @Before - @After + @BeforeEach + @AfterEach public void cleanupTestDir() throws Exception { Path workDirPath = new Path(testWorkDir.getAbsolutePath()); LOG.info("Cleaning test directory [" + workDirPath + "]"); @@ -97,7 +103,7 @@ public class TestAggregatedLogFormat { //appending to logs @Test - public void testForCorruptedAggregatedLogs() throws Exception { + void testForCorruptedAggregatedLogs() throws Exception { Configuration conf = new Configuration(); File workDir = new File(testWorkDir, "testReadAcontainerLogs1"); Path remoteAppLogFile = @@ -112,7 +118,7 @@ public class TestAggregatedLogFormat { long numChars = 950000; writeSrcFileAndALog(srcFilePath, "stdout", numChars, remoteAppLogFile, - srcFileRoot, testContainerId); + srcFileRoot, testContainerId); LogReader logReader = new LogReader(conf, remoteAppLogFile); LogKey rLogKey = new LogKey(); @@ -121,8 +127,8 @@ public class TestAggregatedLogFormat { try { LogReader.readAcontainerLogs(dis, writer); } catch (Exception e) { - if(e.toString().contains("NumberFormatException")) { - Assert.fail("Aggregated logs are corrupted."); + if (e.toString().contains("NumberFormatException")) { + fail("Aggregated logs are corrupted."); } } @@ -134,10 +140,10 @@ public class TestAggregatedLogFormat { // Trying to read a corrupted log file created above should cause // log reading to fail below with an IOException. logReader = new LogReader(conf, remoteAppLogFile); - Assert.fail("Expect IOException from reading corrupt aggregated logs."); + fail("Expect IOException from reading corrupt aggregated logs."); } catch (IOException ioe) { DataInputStream dIS = logReader.next(rLogKey); - Assert.assertNull("Input stream not available for reading", dIS); + assertNull(dIS, "Input stream not available for reading"); } } @@ -198,7 +204,7 @@ public class TestAggregatedLogFormat { } @Test - public void testReadAcontainerLogs1() throws Exception { + void testReadAcontainerLogs1() throws Exception { //Verify the output generated by readAContainerLogs(DataInputStream, Writer, logUploadedTime) testReadAcontainerLog(true); @@ -250,12 +256,10 @@ public class TestAggregatedLogFormat { logWriter.append(logKey, spyLogValue); } - // make sure permission are correct on the file - FileStatus fsStatus = fs.getFileStatus(remoteAppLogFile); - Assert.assertEquals("permissions on log aggregation file are wrong", - FsPermission.createImmutable((short) 0640), fsStatus.getPermission()); - + FileStatus fsStatus = fs.getFileStatus(remoteAppLogFile); + assertEquals(FsPermission.createImmutable((short) 0640), fsStatus.getPermission(), + "permissions on log aggregation file are wrong"); LogReader logReader = new LogReader(conf, remoteAppLogFile); LogKey rLogKey = new LogKey(); DataInputStream dis = logReader.next(rLogKey); @@ -283,24 +287,24 @@ public class TestAggregatedLogFormat { + numChars + ("\n").length() + ("End of LogType:stdout" + System.lineSeparator() + System.lineSeparator()).length(); - Assert.assertTrue("LogType not matched", s.contains("LogType:stdout")); - Assert.assertTrue("log file:stderr should not be aggregated.", !s.contains("LogType:stderr")); - Assert.assertTrue("log file:logs should not be aggregated.", !s.contains("LogType:logs")); - Assert.assertTrue("LogLength not matched", s.contains("LogLength:" + numChars)); - Assert.assertTrue("Log Contents not matched", s.contains("Log Contents")); + assertTrue(s.contains("LogType:stdout"), "LogType not matched"); + assertTrue(!s.contains("LogType:stderr"), "log 
file:stderr should not be aggregated."); + assertTrue(!s.contains("LogType:logs"), "log file:logs should not be aggregated."); + assertTrue(s.contains("LogLength:" + numChars), "LogLength not matched"); + assertTrue(s.contains("Log Contents"), "Log Contents not matched"); StringBuilder sb = new StringBuilder(); for (int i = 0 ; i < numChars ; i++) { sb.append(filler); } String expectedContent = sb.toString(); - Assert.assertTrue("Log content incorrect", s.contains(expectedContent)); + assertTrue(s.contains(expectedContent), "Log content incorrect"); - Assert.assertEquals(expectedLength, s.length()); + assertEquals(expectedLength, s.length()); } @Test - public void testZeroLengthLog() throws IOException { + void testZeroLengthLog() throws IOException { Configuration conf = new Configuration(); File workDir = new File(testWorkDir, "testZeroLength"); Path remoteAppLogFile = new Path(workDir.getAbsolutePath(), @@ -332,17 +336,18 @@ public class TestAggregatedLogFormat { Writer writer = new StringWriter(); LogReader.readAcontainerLogs(dis, writer); - Assert.assertEquals("LogType:stdout\n" + + assertEquals("LogType:stdout\n" + "LogLength:0\n" + "Log Contents:\n\n" + "End of LogType:stdout\n\n", writer.toString()); } - @Test(timeout=10000) - public void testContainerLogsFileAccess() throws IOException { + @Test + @Timeout(10000) + void testContainerLogsFileAccess() throws IOException { // This test will run only if NativeIO is enabled as SecureIOUtils // require it to be enabled. - Assume.assumeTrue(NativeIO.isAvailable()); + Assumptions.assumeTrue(NativeIO.isAvailable()); Configuration conf = new Configuration(); conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos"); @@ -415,28 +420,28 @@ public class TestAggregatedLogFormat { String stdoutFile1 = StringUtils.join( File.separator, - Arrays.asList(new String[] { + Arrays.asList(new String[]{ workDir.getAbsolutePath(), "srcFiles", testContainerId1.getApplicationAttemptId().getApplicationId() - .toString(), testContainerId1.toString(), stderr })); + .toString(), testContainerId1.toString(), stderr})); // The file: stdout is expected to be aggregated. 
String stdoutFile2 = StringUtils.join( File.separator, - Arrays.asList(new String[] { + Arrays.asList(new String[]{ workDir.getAbsolutePath(), "srcFiles", testContainerId1.getApplicationAttemptId().getApplicationId() - .toString(), testContainerId1.toString(), stdout })); + .toString(), testContainerId1.toString(), stdout})); String message2 = "Owner '" + expectedOwner + "' for path " + stdoutFile2 + " did not match expected owner '" + ugi.getShortUserName() + "'"; - - Assert.assertFalse(line.contains(message2)); - Assert.assertFalse(line.contains(data + testContainerId1.toString() + + assertFalse(line.contains(message2)); + assertFalse(line.contains(data + testContainerId1.toString() + stderr)); - Assert.assertTrue(line.contains(data + testContainerId1.toString() + assertTrue(line.contains(data + testContainerId1.toString() + stdout)); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogsBlock.java index 66008a82323..27125602304 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogsBlock.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogsBlock.java @@ -28,9 +28,11 @@ import java.util.Arrays; import java.util.HashMap; import java.util.List; import java.util.Map; - import javax.servlet.http.HttpServletRequest; +import com.google.inject.Inject; +import org.junit.jupiter.api.Test; + import org.apache.commons.io.FileUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileUtil; @@ -49,19 +51,17 @@ import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileCo import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerContext; import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory; import org.apache.hadoop.yarn.logaggregation.filecontroller.tfile.TFileAggregatedLogsBlock; -import org.apache.hadoop.yarn.webapp.YarnWebParams; import org.apache.hadoop.yarn.webapp.View.ViewContext; +import org.apache.hadoop.yarn.webapp.YarnWebParams; import org.apache.hadoop.yarn.webapp.log.AggregatedLogsBlockForTest; import org.apache.hadoop.yarn.webapp.view.BlockForTest; import org.apache.hadoop.yarn.webapp.view.HtmlBlock; import org.apache.hadoop.yarn.webapp.view.HtmlBlockForTest; -import org.junit.Test; -import static org.mockito.Mockito.*; - -import com.google.inject.Inject; - -import static org.junit.Assert.*; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; /** * Test AggregatedLogsBlock. AggregatedLogsBlock should check user, aggregate a @@ -73,7 +73,7 @@ public class TestAggregatedLogsBlock { * Bad user. 
User 'owner' is trying to read logs without access */ @Test - public void testAccessDenied() throws Exception { + void testAccessDenied() throws Exception { FileUtil.fullyDelete(new File("target/logs")); Configuration configuration = getConfiguration(); @@ -89,7 +89,7 @@ public class TestAggregatedLogsBlock { HtmlBlock.Block block = new BlockForTest(html, printWriter, 10, false); TFileAggregatedLogsBlockForTest aggregatedBlock = getTFileAggregatedLogsBlockForTest(configuration, "owner", - "container_0_0001_01_000001", "localhost:1234"); + "container_0_0001_01_000001", "localhost:1234"); aggregatedBlock.render(block); block.getWriter().flush(); @@ -100,7 +100,7 @@ public class TestAggregatedLogsBlock { } @Test - public void testBlockContainsPortNumForUnavailableAppLog() { + void testBlockContainsPortNumForUnavailableAppLog() { FileUtil.fullyDelete(new File("target/logs")); Configuration configuration = getConfiguration(); @@ -125,7 +125,7 @@ public class TestAggregatedLogsBlock { * @throws Exception */ @Test - public void testBadLogs() throws Exception { + void testBadLogs() throws Exception { FileUtil.fullyDelete(new File("target/logs")); Configuration configuration = getConfiguration(); @@ -146,8 +146,8 @@ public class TestAggregatedLogsBlock { String out = data.toString(); assertTrue(out .contains("Logs not available for entity. Aggregation may not be " - + "complete, Check back later or try to find the container logs " - + "in the local directory of nodemanager localhost:1234")); + + "complete, Check back later or try to find the container logs " + + "in the local directory of nodemanager localhost:1234")); assertTrue(out .contains("Or see application log at http://localhost:8042")); @@ -160,7 +160,7 @@ public class TestAggregatedLogsBlock { * @throws Exception */ @Test - public void testAggregatedLogsBlock() throws Exception { + void testAggregatedLogsBlock() throws Exception { FileUtil.fullyDelete(new File("target/logs")); Configuration configuration = getConfiguration(); @@ -175,7 +175,7 @@ public class TestAggregatedLogsBlock { HtmlBlock.Block block = new BlockForTest(html, printWriter, 10, false); TFileAggregatedLogsBlockForTest aggregatedBlock = getTFileAggregatedLogsBlockForTest(configuration, "admin", - "container_0_0001_01_000001", "localhost:1234"); + "container_0_0001_01_000001", "localhost:1234"); aggregatedBlock.render(block); block.getWriter().flush(); @@ -192,7 +192,7 @@ public class TestAggregatedLogsBlock { * @throws Exception */ @Test - public void testAggregatedLogsBlockHar() throws Exception { + void testAggregatedLogsBlockHar() throws Exception { FileUtil.fullyDelete(new File("target/logs")); Configuration configuration = getConfiguration(); @@ -209,7 +209,7 @@ public class TestAggregatedLogsBlock { HtmlBlock.Block block = new BlockForTest(html, printWriter, 10, false); TFileAggregatedLogsBlockForTest aggregatedBlock = getTFileAggregatedLogsBlockForTest(configuration, "admin", - "container_1440536969523_0001_01_000001", "host1:1111"); + "container_1440536969523_0001_01_000001", "host1:1111"); aggregatedBlock.render(block); block.getWriter().flush(); @@ -238,7 +238,7 @@ public class TestAggregatedLogsBlock { * @throws Exception */ @Test - public void testNoLogs() throws Exception { + void testNoLogs() throws Exception { FileUtil.fullyDelete(new File("target/logs")); Configuration configuration = getConfiguration(); @@ -255,7 +255,7 @@ public class TestAggregatedLogsBlock { HtmlBlock.Block block = new BlockForTest(html, printWriter, 10, false); 
TFileAggregatedLogsBlockForTest aggregatedBlock = getTFileAggregatedLogsBlockForTest(configuration, "admin", - "container_0_0001_01_000001", "localhost:1234"); + "container_0_0001_01_000001", "localhost:1234"); aggregatedBlock.render(block); block.getWriter().flush(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestContainerLogsUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestContainerLogsUtils.java index 3687d023d69..0cb71b59909 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestContainerLogsUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestContainerLogsUtils.java @@ -17,8 +17,6 @@ */ package org.apache.hadoop.yarn.logaggregation; -import static org.junit.Assert.assertTrue; - import java.io.File; import java.io.FileWriter; import java.io.IOException; @@ -27,6 +25,7 @@ import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; @@ -41,6 +40,8 @@ import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileCo /** * This class contains several utility functions for log aggregation tests. + * Any assertion libraries shouldn't be used here because this class is used by + * multiple modules including MapReduce. */ public final class TestContainerLogsUtils { @@ -74,13 +75,16 @@ public final class TestContainerLogsUtils { if (fs.exists(rootLogDirPath)) { fs.delete(rootLogDirPath, true); } - assertTrue(fs.mkdirs(rootLogDirPath)); + fs.mkdirs(rootLogDirPath); + // Make sure the target dir is created. If not, FileNotFoundException is thrown + fs.getFileStatus(rootLogDirPath); Path appLogsDir = new Path(rootLogDirPath, appId.toString()); if (fs.exists(appLogsDir)) { fs.delete(appLogsDir, true); } - assertTrue(fs.mkdirs(appLogsDir)); - + fs.mkdirs(appLogsDir); + // Make sure the target dir is created. If not, FileNotFoundException is thrown + fs.getFileStatus(appLogsDir); createContainerLogInLocalDir(appLogsDir, containerToContent, fs, fileName); // upload container logs to remote log dir @@ -94,7 +98,9 @@ public final class TestContainerLogsUtils { if (fs.exists(path) && deleteRemoteLogDir) { fs.delete(path, true); } - assertTrue(fs.mkdirs(path)); + fs.mkdirs(path); + // Make sure the target dir is created. If not, FileNotFoundException is thrown + fs.getFileStatus(path); uploadContainerLogIntoRemoteDir(ugi, conf, rootLogDirList, nodeId, appId, containerToContent.keySet(), path); } @@ -110,7 +116,9 @@ public final class TestContainerLogsUtils { if (fs.exists(containerLogsDir)) { fs.delete(containerLogsDir, true); } - assertTrue(fs.mkdirs(containerLogsDir)); + fs.mkdirs(containerLogsDir); + // Make sure the target dir is created. 
If not, FileNotFoundException is thrown + fs.getFileStatus(containerLogsDir); Writer writer = new FileWriter(new File(containerLogsDir.toString(), fileName)); writer.write(content); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestLogAggregationMetaCollector.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestLogAggregationMetaCollector.java index c60635b0e2a..e7434f4b990 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestLogAggregationMetaCollector.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestLogAggregationMetaCollector.java @@ -18,19 +18,6 @@ package org.apache.hadoop.yarn.logaggregation; -import org.apache.commons.lang3.tuple.ImmutablePair; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.RemoteIterator; -import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; -import org.apache.hadoop.yarn.api.records.ApplicationId; -import org.apache.hadoop.yarn.api.records.ContainerId; -import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.apache.hadoop.yarn.logaggregation.filecontroller.FakeLogAggregationFileController; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; - import java.io.IOException; import java.time.Clock; import java.util.ArrayList; @@ -42,7 +29,23 @@ import java.util.Map; import java.util.Set; import java.util.stream.Collectors; -import static org.junit.Assert.*; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import org.apache.commons.lang3.tuple.ImmutablePair; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.RemoteIterator; +import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; +import org.apache.hadoop.yarn.api.records.ApplicationId; +import org.apache.hadoop.yarn.api.records.ContainerId; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.logaggregation.filecontroller.FakeLogAggregationFileController; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; public class TestLogAggregationMetaCollector { private static final String TEST_NODE = "TEST_NODE_1"; @@ -133,17 +136,17 @@ public class TestLogAggregationMetaCollector { } } - @Before + @BeforeEach public void setUp() throws Exception { fileController = createFileController(); } - @After + @AfterEach public void tearDown() throws Exception { } @Test - public void testAllNull() throws IOException { + void testAllNull() throws IOException { ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder request = new ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder(); request.setAppId(null); @@ -165,7 +168,7 @@ public class TestLogAggregationMetaCollector { } @Test - public void testAllSet() throws IOException { + void testAllSet() throws IOException { ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder request = new ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder(); Set fileSizeExpressions = new HashSet<>(); @@ -191,7 +194,7 @@ public class TestLogAggregationMetaCollector { } @Test - public void testSingleNodeRequest() throws IOException { + void 
testSingleNodeRequest() throws IOException { ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder request = new ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder(); request.setAppId(null); @@ -214,7 +217,7 @@ public class TestLogAggregationMetaCollector { } @Test - public void testMultipleNodeRegexRequest() throws IOException { + void testMultipleNodeRegexRequest() throws IOException { ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder request = new ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder(); request.setAppId(null); @@ -236,7 +239,7 @@ public class TestLogAggregationMetaCollector { } @Test - public void testMultipleFileRegex() throws IOException { + void testMultipleFileRegex() throws IOException { ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder request = new ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder(); request.setAppId(null); @@ -260,7 +263,7 @@ public class TestLogAggregationMetaCollector { } @Test - public void testContainerIdExactMatch() throws IOException { + void testContainerIdExactMatch() throws IOException { ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder request = new ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder(); request.setAppId(null); @@ -284,7 +287,7 @@ public class TestLogAggregationMetaCollector { } @Test - public void testMultipleFileBetweenSize() throws IOException { + void testMultipleFileBetweenSize() throws IOException { ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder request = new ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder(); Set fileSizeExpressions = new HashSet<>(); @@ -311,7 +314,7 @@ public class TestLogAggregationMetaCollector { } @Test - public void testInvalidQueryStrings() throws IOException { + void testInvalidQueryStrings() throws IOException { ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder request = new ExtendedLogMetaRequest.ExtendedLogMetaRequestBuilder(); Set fileSizeExpressions = new HashSet<>(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/FakeLogAggregationFileController.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/FakeLogAggregationFileController.java index c667d3b4fee..d28cee3b1bd 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/FakeLogAggregationFileController.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/FakeLogAggregationFileController.java @@ -18,6 +18,11 @@ package org.apache.hadoop.yarn.logaggregation.filecontroller; +import java.io.IOException; +import java.io.OutputStream; +import java.util.List; +import java.util.Map; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.yarn.api.records.ApplicationAccessType; @@ -28,11 +33,6 @@ import org.apache.hadoop.yarn.logaggregation.ContainerLogsRequest; import org.apache.hadoop.yarn.webapp.View; import org.apache.hadoop.yarn.webapp.view.HtmlBlock; -import java.io.IOException; -import java.io.OutputStream; -import java.util.List; -import java.util.Map; - public class FakeLogAggregationFileController extends LogAggregationFileController { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileController.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileController.java index 818e01129fc..fe1c5f2fa73 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileController.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileController.java @@ -18,6 +18,13 @@ package org.apache.hadoop.yarn.logaggregation.filecontroller; +import java.io.FileNotFoundException; +import java.net.URI; + +import org.junit.jupiter.api.Test; +import org.mockito.ArgumentMatcher; +import org.mockito.Mockito; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; @@ -25,15 +32,9 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.Assert; -import org.junit.Test; -import org.mockito.ArgumentMatcher; -import org.mockito.Mockito; - -import java.io.FileNotFoundException; -import java.net.URI; import static org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController.TLDIR_PERMISSIONS; +import static org.junit.jupiter.api.Assertions.assertTrue; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.argThat; import static org.mockito.ArgumentMatchers.eq; @@ -49,19 +50,19 @@ import static org.mockito.Mockito.verify; public class TestLogAggregationFileController { @Test - public void testRemoteDirCreationDefault() throws Exception { + void testRemoteDirCreationDefault() throws Exception { FileSystem fs = mock(FileSystem.class); doReturn(new URI("")).when(fs).getUri(); doThrow(FileNotFoundException.class).when(fs) - .getFileStatus(any(Path.class)); + .getFileStatus(any(Path.class)); Configuration conf = new Configuration(); LogAggregationFileController controller = mock( - LogAggregationFileController.class, Mockito.CALLS_REAL_METHODS); + LogAggregationFileController.class, Mockito.CALLS_REAL_METHODS); doReturn(fs).when(controller).getFileSystem(any(Configuration.class)); UserGroupInformation ugi = UserGroupInformation.createUserForTesting( - "yarn_user", new String[] {"yarn_group", "other_group"}); + "yarn_user", new String[]{"yarn_group", "other_group"}); UserGroupInformation.setLoginUser(ugi); controller.initialize(conf, "TFile"); @@ -71,7 +72,7 @@ public class TestLogAggregationFileController { } @Test - public void testRemoteDirCreationWithCustomGroup() throws Exception { + void testRemoteDirCreationWithCustomGroup() throws Exception { String testGroupName = "testGroup"; FileSystem fs = mock(FileSystem.class); @@ -86,7 +87,7 @@ public class TestLogAggregationFileController { doReturn(fs).when(controller).getFileSystem(any(Configuration.class)); UserGroupInformation ugi = UserGroupInformation.createUserForTesting( - "yarn_user", new String[] {"yarn_group", "other_group"}); + "yarn_user", new String[]{"yarn_group", "other_group"}); UserGroupInformation.setLoginUser(ugi); controller.initialize(conf, "TFile"); @@ -114,7 +115,7 @@ public class TestLogAggregationFileController { } @Test - public void testRemoteDirCreationWithCustomUser() throws Exception { + void testRemoteDirCreationWithCustomUser() throws Exception { FileSystem fs = mock(FileSystem.class); 
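Not part of the patch, but useful context for the hunks around this point: the recurring change from "public void test...()" to package-private "void test...()" relies on JUnit Jupiter's relaxed visibility rules, under which test classes, @Test methods and lifecycle methods only need to be non-private. A minimal sketch of the before/after shape, with illustrative names only:

// JUnit 4 required public visibility:
//   @Test
//   public void testSomething() throws Exception { ... }
// JUnit 5 (Jupiter) only requires non-private members, so the modifier is dropped:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class ExampleVisibilityTest {      // test class no longer needs to be public

  private int answer;

  @BeforeEach                      // Jupiter replacement for JUnit 4's @Before
  void setUp() {
    answer = 42;
  }

  @Test                            // package-private @Test method
  void returnsConfiguredAnswer() {
    assertEquals(42, answer);
  }
}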
doReturn(new URI("")).when(fs).getUri(); doReturn(new FileStatus(128, false, 0, 64, System.currentTimeMillis(), @@ -139,6 +140,6 @@ public class TestLogAggregationFileController { verify(fs).setPermission(argThat(new PathContainsString(".permission_check")), eq(new FsPermission(TLDIR_PERMISSIONS))); verify(fs).delete(argThat(new PathContainsString(".permission_check")), eq(false)); - Assert.assertTrue(controller.fsSupportsChmod); + assertTrue(controller.fsSupportsChmod); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileControllerFactory.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileControllerFactory.java index c1b991b9bc1..0f879616f3b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileControllerFactory.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileControllerFactory.java @@ -18,6 +18,21 @@ package org.apache.hadoop.yarn.logaggregation.filecontroller; +import java.io.File; +import java.io.FileWriter; +import java.io.IOException; +import java.io.OutputStream; +import java.io.Writer; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configured; import org.apache.hadoop.fs.FileSystem; @@ -33,25 +48,14 @@ import org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregation import org.apache.hadoop.yarn.logaggregation.filecontroller.tfile.LogAggregationTFileController; import org.apache.hadoop.yarn.webapp.View.ViewContext; import org.apache.hadoop.yarn.webapp.view.HtmlBlock.Block; -import org.junit.Before; -import org.junit.Test; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import java.io.File; -import java.io.FileWriter; -import java.io.IOException; -import java.io.OutputStream; -import java.io.Writer; -import java.util.Arrays; -import java.util.Collections; -import java.util.List; -import java.util.Map; import static org.apache.hadoop.yarn.conf.YarnConfiguration.LOG_AGGREGATION_FILE_FORMATS; import static org.apache.hadoop.yarn.logaggregation.LogAggregationTestUtils.REMOTE_LOG_ROOT; import static org.apache.hadoop.yarn.logaggregation.LogAggregationTestUtils.enableFileControllers; -import static org.junit.Assert.*; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; /** * Test LogAggregationFileControllerFactory. 
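The TestLogAggregationFileControllerFactory hunks that follow show the two most common Jupiter assertion changes: @Test(expected = ...) becomes an explicit assertThrows call, and the optional failure message moves from the first argument to the last. A short standalone sketch of both idioms, with illustrative names only (nothing here is taken from the patch itself):

import java.util.Arrays;
import java.util.List;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class ExampleAssertionStyleTest {

  // JUnit 4: @Test(expected = NumberFormatException.class) on the method.
  // Jupiter: the expected exception is asserted explicitly around the failing call.
  @Test
  void rejectsGarbageInput() {
    assertThrows(NumberFormatException.class, () -> Integer.parseInt("not-a-number"));
  }

  // JUnit 4: assertEquals("unexpected size", 2, items.size());  -- message first.
  // Jupiter: the message is the trailing argument.
  @Test
  void reportsExpectedSize() {
    List<String> items = Arrays.asList("a", "b");
    assertEquals(2, items.size(), "unexpected size");
  }
}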
@@ -79,7 +83,7 @@ public class TestLogAggregationFileControllerFactory extends Configured { private ApplicationId appId = ApplicationId.newInstance( System.currentTimeMillis(), 1); - @Before + @BeforeEach public void setup() throws IOException { Configuration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.LOG_AGGREGATION_ENABLED, true); @@ -107,100 +111,106 @@ public class TestLogAggregationFileControllerFactory extends Configured { new FileWriter(new File(logPath.toString(), "testLog"))) { writer.write("test"); } - assertTrue("The used LogAggregationFileController is not instance of " - + className.getSimpleName(), className.isInstance( - factory.getFileControllerForRead(appId, APP_OWNER))); + assertTrue(className.isInstance(factory.getFileControllerForRead(appId, APP_OWNER)), + "The used LogAggregationFileController is not instance of " + className.getSimpleName()); } finally { fs.delete(logPath, true); } } @Test - public void testDefaultLogAggregationFileControllerFactory() + void testDefaultLogAggregationFileControllerFactory() throws IOException { LogAggregationFileControllerFactory factory = new LogAggregationFileControllerFactory(getConf()); List list = factory .getConfiguredLogAggregationFileControllerList(); - assertEquals("Only one LogAggregationFileController is expected!", 1, - list.size()); - assertTrue("TFile format is expected to be the first " + - "LogAggregationFileController!", list.get(0) instanceof - LogAggregationTFileController); - assertTrue("TFile format is expected to be used for writing!", - factory.getFileControllerForWrite() instanceof - LogAggregationTFileController); + assertEquals(1, + list.size(), + "Only one LogAggregationFileController is expected!"); + assertTrue(list.get(0) instanceof + LogAggregationTFileController, "TFile format is expected to be the first " + + "LogAggregationFileController!"); + assertTrue(factory.getFileControllerForWrite() instanceof + LogAggregationTFileController, + "TFile format is expected to be used for writing!"); verifyFileControllerInstance(factory, LogAggregationTFileController.class); } - @Test(expected = Exception.class) - public void testLogAggregationFileControllerFactoryClassNotSet() { - Configuration conf = getConf(); - conf.set(LOG_AGGREGATION_FILE_FORMATS, "TestLogAggregationFileController"); - new LogAggregationFileControllerFactory(conf); - fail("TestLogAggregationFileController's class was not set, " + - "but the factory creation did not fail."); + @Test + void testLogAggregationFileControllerFactoryClassNotSet() { + assertThrows(Exception.class, () -> { + Configuration conf = getConf(); + conf.set(LOG_AGGREGATION_FILE_FORMATS, "TestLogAggregationFileController"); + new LogAggregationFileControllerFactory(conf); + fail("TestLogAggregationFileController's class was not set, " + + "but the factory creation did not fail."); + }); } @Test - public void testLogAggregationFileControllerFactory() throws Exception { + void testLogAggregationFileControllerFactory() throws Exception { enableFileControllers(getConf(), ALL_FILE_CONTROLLERS, ALL_FILE_CONTROLLER_NAMES); LogAggregationFileControllerFactory factory = new LogAggregationFileControllerFactory(getConf()); List list = factory.getConfiguredLogAggregationFileControllerList(); - assertEquals("The expected number of LogAggregationFileController " + - "is not 3!", 3, list.size()); - assertTrue("Test format is expected to be the first " + - "LogAggregationFileController!", list.get(0) instanceof - TestLogAggregationFileController); - assertTrue("IFile 
format is expected to be the second " + - "LogAggregationFileController!", list.get(1) instanceof - LogAggregationIndexedFileController); - assertTrue("TFile format is expected to be the first " + - "LogAggregationFileController!", list.get(2) instanceof - LogAggregationTFileController); - assertTrue("Test format is expected to be used for writing!", - factory.getFileControllerForWrite() instanceof - TestLogAggregationFileController); + assertEquals(3, list.size(), "The expected number of LogAggregationFileController " + + "is not 3!"); + assertTrue(list.get(0) instanceof + TestLogAggregationFileController, "Test format is expected to be the first " + + "LogAggregationFileController!"); + assertTrue(list.get(1) instanceof + LogAggregationIndexedFileController, "IFile format is expected to be the second " + + "LogAggregationFileController!"); + assertTrue(list.get(2) instanceof + LogAggregationTFileController, "TFile format is expected to be the first " + + "LogAggregationFileController!"); + assertTrue(factory.getFileControllerForWrite() instanceof + TestLogAggregationFileController, + "Test format is expected to be used for writing!"); verifyFileControllerInstance(factory, TestLogAggregationFileController.class); } @Test - public void testClassConfUsed() { + void testClassConfUsed() { enableFileControllers(getConf(), Collections.singletonList(LogAggregationTFileController.class), Collections.singletonList("TFile")); LogAggregationFileControllerFactory factory = new LogAggregationFileControllerFactory(getConf()); LogAggregationFileController fc = factory.getFileControllerForWrite(); - assertEquals(WRONG_ROOT_LOG_DIR_MSG, "target/app-logs/TFile", - fc.getRemoteRootLogDir().toString()); - assertEquals(WRONG_ROOT_LOG_DIR_SUFFIX_MSG, "TFile", - fc.getRemoteRootLogDirSuffix()); + assertEquals("target/app-logs/TFile", + fc.getRemoteRootLogDir().toString(), + WRONG_ROOT_LOG_DIR_MSG); + assertEquals("TFile", + fc.getRemoteRootLogDirSuffix(), + WRONG_ROOT_LOG_DIR_SUFFIX_MSG); } @Test - public void testNodemanagerConfigurationIsUsed() { + void testNodemanagerConfigurationIsUsed() { Configuration conf = getConf(); conf.set(LOG_AGGREGATION_FILE_FORMATS, "TFile"); LogAggregationFileControllerFactory factory = new LogAggregationFileControllerFactory(conf); LogAggregationFileController fc = factory.getFileControllerForWrite(); - assertEquals(WRONG_ROOT_LOG_DIR_MSG, "target/app-logs/default", - fc.getRemoteRootLogDir().toString()); - assertEquals(WRONG_ROOT_LOG_DIR_SUFFIX_MSG, "log-tfile", - fc.getRemoteRootLogDirSuffix()); + assertEquals("target/app-logs/default", + fc.getRemoteRootLogDir().toString(), + WRONG_ROOT_LOG_DIR_MSG); + assertEquals("log-tfile", + fc.getRemoteRootLogDirSuffix(), + WRONG_ROOT_LOG_DIR_SUFFIX_MSG); } @Test - public void testDefaultConfUsed() { + void testDefaultConfUsed() { Configuration conf = getConf(); conf.unset(YarnConfiguration.NM_REMOTE_APP_LOG_DIR); conf.unset(YarnConfiguration.NM_REMOTE_APP_LOG_DIR_SUFFIX); @@ -210,10 +220,12 @@ public class TestLogAggregationFileControllerFactory extends Configured { new LogAggregationFileControllerFactory(getConf()); LogAggregationFileController fc = factory.getFileControllerForWrite(); - assertEquals(WRONG_ROOT_LOG_DIR_MSG, "/tmp/logs", - fc.getRemoteRootLogDir().toString()); - assertEquals(WRONG_ROOT_LOG_DIR_SUFFIX_MSG, "logs-tfile", - fc.getRemoteRootLogDirSuffix()); + assertEquals("/tmp/logs", + fc.getRemoteRootLogDir().toString(), + WRONG_ROOT_LOG_DIR_MSG); + assertEquals("logs-tfile", + fc.getRemoteRootLogDirSuffix(), + 
WRONG_ROOT_LOG_DIR_SUFFIX_MSG); } private static class TestLogAggregationFileController diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/TestLogAggregationIndexedFileController.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/TestLogAggregationIndexedFileController.java index dfb19d49078..cd178382b52 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/TestLogAggregationIndexedFileController.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/TestLogAggregationIndexedFileController.java @@ -32,6 +32,12 @@ import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Set; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configured; import org.apache.hadoop.fs.FSDataOutputStream; @@ -48,27 +54,24 @@ import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ContainerId; import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogKey; +import org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogValue; +import org.apache.hadoop.yarn.logaggregation.ContainerLogFileInfo; import org.apache.hadoop.yarn.logaggregation.ContainerLogMeta; import org.apache.hadoop.yarn.logaggregation.ContainerLogsRequest; import org.apache.hadoop.yarn.logaggregation.ExtendedLogMetaRequest; import org.apache.hadoop.yarn.logaggregation.LogAggregationUtils; -import org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogKey; -import org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogValue; -import org.apache.hadoop.yarn.logaggregation.ContainerLogFileInfo; import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController; import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerContext; import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory; import org.apache.hadoop.yarn.util.Clock; import org.apache.hadoop.yarn.util.ControlledClock; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; @@ -104,7 +107,7 @@ public class TestLogAggregationIndexedFileController return conf; } - @Before + @BeforeEach public void setUp() throws IOException { setConf(getTestConf()); appId = ApplicationId.newInstance(123456, 1); @@ -122,14 +125,15 @@ public class TestLogAggregationIndexedFileController System.setErr(sysErr); } - @After + @AfterEach public void 
teardown() throws Exception { fs.delete(rootLocalLogDirPath, true); fs.delete(new Path(remoteLogDir), true); } - @Test(timeout = 15000) - public void testLogAggregationIndexFileFormat() throws Exception { + @Test + @Timeout(15000) + void testLogAggregationIndexFileFormat() throws Exception { if (fs.exists(rootLocalLogDirPath)) { fs.delete(rootLocalLogDirPath, true); } @@ -150,7 +154,7 @@ public class TestLogAggregationIndexedFileController LogKey key1 = new LogKey(containerId.toString()); - for(String logType : logTypes) { + for (String logType : logTypes) { File file = createAndWriteLocalLogFile(containerId, appLogsDir, logType); files.add(file); @@ -162,24 +166,23 @@ public class TestLogAggregationIndexedFileController final ControlledClock clock = new ControlledClock(); clock.setTime(System.currentTimeMillis()); - LogAggregationIndexedFileController fileFormat - = new LogAggregationIndexedFileController() { - private int rollOverCheck = 0; - @Override - public Clock getSystemClock() { - return clock; - } + LogAggregationIndexedFileController fileFormat = new LogAggregationIndexedFileController() { + private int rollOverCheck = 0; - @Override - public boolean isRollover(final FileContext fc, - final Path candidate) throws IOException { - rollOverCheck++; - if (rollOverCheck >= 3) { - return true; - } - return false; - } - }; + @Override + public Clock getSystemClock() { + return clock; + } + + @Override + public boolean isRollover(final FileContext fc, final Path candidate) throws IOException { + rollOverCheck++; + if (rollOverCheck >= 3) { + return true; + } + return false; + } + }; fileFormat.initialize(getConf(), "Indexed"); @@ -238,7 +241,7 @@ public class TestLogAggregationIndexedFileController factoryConf.set("yarn.log-aggregation.file-formats", "Indexed"); factoryConf.set("yarn.log-aggregation.file-controller.Indexed.class", "org.apache.hadoop.yarn.logaggregation.filecontroller.ifile" - + ".LogAggregationIndexedFileController"); + + ".LogAggregationIndexedFileController"); LogAggregationFileControllerFactory factory = new LogAggregationFileControllerFactory(factoryConf); LogAggregationFileController fileController = factory @@ -255,9 +258,9 @@ public class TestLogAggregationIndexedFileController // create a checksum file Path checksumFile = new Path(fileFormat.getRemoteAppLogDir( - appId, USER_UGI.getShortUserName()), + appId, USER_UGI.getShortUserName()), LogAggregationUtils.getNodeString(nodeId) - + LogAggregationIndexedFileController.CHECK_SUM_FILE_SUFFIX); + + LogAggregationIndexedFileController.CHECK_SUM_FILE_SUFFIX); FSDataOutputStream fInput = null; try { String nodeName = logPath.getName() + "_" + clock.getTime(); @@ -330,7 +333,7 @@ public class TestLogAggregationIndexedFileController fileFormat.postWrite(context); fileFormat.closeWriter(); meta = fileFormat.readAggregatedLogsMeta( - logRequest); + logRequest); assertThat(meta.size()).isEqualTo(2); for (ContainerLogMeta log : meta) { assertEquals(containerId.toString(), log.getContainerId()); @@ -380,8 +383,9 @@ public class TestLogAggregationIndexedFileController sysOutStream.reset(); } - @Test(timeout = 15000) - public void testFetchApplictionLogsHar() throws Exception { + @Test + @Timeout(15000) + void testFetchApplictionLogsHar() throws Exception { List newLogTypes = new ArrayList<>(); newLogTypes.add("syslog"); newLogTypes.add("stdout"); @@ -472,7 +476,7 @@ public class TestLogAggregationIndexedFileController } @Test - public void testGetRollOverLogMaxSize() { + void testGetRollOverLogMaxSize() { String 
fileControllerName = "testController"; String remoteDirConf = String.format( YarnConfiguration.LOG_AGGREGATION_REMOTE_APP_LOG_DIR_FMT, @@ -500,7 +504,7 @@ public class TestLogAggregationIndexedFileController } @Test - public void testGetLogMetaFilesOfNode() throws Exception { + void testGetLogMetaFilesOfNode() throws Exception { if (fs.exists(rootLocalLogDirPath)) { fs.delete(rootLocalLogDirPath, true); } @@ -521,7 +525,7 @@ public class TestLogAggregationIndexedFileController LogKey key1 = new LogKey(containerId.toString()); - for(String logType : logTypes) { + for (String logType : logTypes) { File file = createAndWriteLocalLogFile(containerId, appLogsDir, logType); files.add(file); @@ -566,7 +570,7 @@ public class TestLogAggregationIndexedFileController final ControlledClock clock = new ControlledClock(); clock.setTime(System.currentTimeMillis()); Path checksumFile = new Path(fileFormat.getRemoteAppLogDir( - appId, USER_UGI.getShortUserName()), + appId, USER_UGI.getShortUserName()), LogAggregationUtils.getNodeString(nodeId) + LogAggregationIndexedFileController.CHECK_SUM_FILE_SUFFIX); FSDataOutputStream fInput = null; @@ -593,11 +597,11 @@ public class TestLogAggregationIndexedFileController if (node.getPath().getName().contains( LogAggregationIndexedFileController.CHECK_SUM_FILE_SUFFIX)) { - assertTrue("Checksum node files should not contain any logs", - metas.isEmpty()); + assertTrue(metas.isEmpty(), + "Checksum node files should not contain any logs"); } else { - assertFalse("Non-checksum node files should contain log files", - metas.isEmpty()); + assertFalse(metas.isEmpty(), + "Non-checksum node files should contain log files"); assertEquals(4, metas.values().stream().findFirst().get().size()); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/AggregatedLogDeletionServiceForTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/AggregatedLogDeletionServiceForTest.java index 76ec8aab537..88f2d42a11d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/AggregatedLogDeletionServiceForTest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/AggregatedLogDeletionServiceForTest.java @@ -18,14 +18,14 @@ package org.apache.hadoop.yarn.logaggregation.testutils; +import java.io.IOException; +import java.util.List; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.api.ApplicationClientProtocol; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.logaggregation.AggregatedLogDeletionService; -import java.io.IOException; -import java.util.List; - import static org.apache.hadoop.yarn.logaggregation.testutils.MockRMClientUtils.createMockRMClient; public class AggregatedLogDeletionServiceForTest extends AggregatedLogDeletionService { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/LogAggregationTestcase.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/LogAggregationTestcase.java index f2074f8c8e6..2af74f8cbb8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/LogAggregationTestcase.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/LogAggregationTestcase.java @@ -18,6 +18,19 @@ package org.apache.hadoop.yarn.logaggregation.testutils; +import java.io.Closeable; +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; +import java.util.stream.Collectors; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + import org.apache.commons.lang3.tuple.Pair; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileStatus; @@ -30,26 +43,20 @@ import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.logaggregation.AggregatedLogDeletionService; import org.apache.hadoop.yarn.logaggregation.AggregatedLogDeletionService.LogDeletionTask; import org.apache.hadoop.yarn.logaggregation.testutils.LogAggregationTestcaseBuilder.AppDescriptor; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import java.io.Closeable; -import java.io.IOException; -import java.util.ArrayList; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Objects; -import java.util.Set; -import java.util.stream.Collectors; - -import static org.apache.hadoop.yarn.logaggregation.testutils.FileStatusUtils.*; +import static org.apache.hadoop.yarn.logaggregation.testutils.FileStatusUtils.createDirBucketDirLogPathWithFileStatus; +import static org.apache.hadoop.yarn.logaggregation.testutils.FileStatusUtils.createDirLogPathWithFileStatus; +import static org.apache.hadoop.yarn.logaggregation.testutils.FileStatusUtils.createFileLogPathWithFileStatus; +import static org.apache.hadoop.yarn.logaggregation.testutils.FileStatusUtils.createPathWithFileStatusForAppId; import static org.apache.hadoop.yarn.logaggregation.testutils.LogAggregationTestcaseBuilder.NO_TIMEOUT; import static org.apache.hadoop.yarn.logaggregation.testutils.MockRMClientUtils.createMockRMClient; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; import static org.mockito.ArgumentMatchers.any; -import static org.mockito.Mockito.*; +import static org.mockito.Mockito.timeout; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; public class LogAggregationTestcase { private static final Logger LOG = LoggerFactory.getLogger(LogAggregationTestcase.class); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/LogAggregationTestcaseBuilder.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/LogAggregationTestcaseBuilder.java index f532dddce0f..3b56bcf37cf 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/LogAggregationTestcaseBuilder.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/LogAggregationTestcaseBuilder.java @@ -18,6 +18,12 @@ package org.apache.hadoop.yarn.logaggregation.testutils; +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + import 
org.apache.commons.compress.utils.Lists; import org.apache.commons.lang3.tuple.Pair; import org.apache.hadoop.conf.Configuration; @@ -27,12 +33,6 @@ import org.apache.hadoop.security.AccessControlException; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.logaggregation.LogAggregationUtils; -import java.io.IOException; -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - import static org.apache.hadoop.yarn.logaggregation.TestAggregatedLogDeletionService.ALL_FILE_CONTROLLER_NAMES; public class LogAggregationTestcaseBuilder { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/MockRMClientUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/MockRMClientUtils.java index 6eb1eb1ecbe..33c5d8bd9b4 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/MockRMClientUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/testutils/MockRMClientUtils.java @@ -18,6 +18,8 @@ package org.apache.hadoop.yarn.logaggregation.testutils; +import java.util.List; + import org.apache.hadoop.test.MockitoUtil; import org.apache.hadoop.yarn.api.ApplicationClientProtocol; import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest; @@ -26,8 +28,6 @@ import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ApplicationReport; import org.apache.hadoop.yarn.api.records.YarnApplicationState; -import java.util.List; - import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/NodeLabelTestBase.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/NodeLabelTestBase.java index 798d8835c81..de6bcf9af40 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/NodeLabelTestBase.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/NodeLabelTestBase.java @@ -25,19 +25,23 @@ import java.util.Map; import java.util.Map.Entry; import java.util.Set; +import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; + import org.apache.hadoop.util.Sets; import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.api.records.NodeLabel; -import org.junit.Assert; -import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertTrue; public class NodeLabelTestBase { public static void assertMapEquals(Map> expected, ImmutableMap> actual) { - Assert.assertEquals(expected.size(), actual.size()); + assertEquals(expected.size(), actual.size()); for (NodeId k : expected.keySet()) { - Assert.assertTrue(actual.containsKey(k)); + assertTrue(actual.containsKey(k)); assertCollectionEquals(expected.get(k), actual.get(k)); } } @@ -45,9 +49,9 @@ public class NodeLabelTestBase { public static void assertLabelInfoMapEquals( Map> expected, ImmutableMap> actual) { 
- Assert.assertEquals(expected.size(), actual.size()); + assertEquals(expected.size(), actual.size()); for (NodeId k : expected.keySet()) { - Assert.assertTrue(actual.containsKey(k)); + assertTrue(actual.containsKey(k)); assertNLCollectionEquals(expected.get(k), actual.get(k)); } } @@ -55,13 +59,13 @@ public class NodeLabelTestBase { public static void assertLabelsToNodesEquals( Map> expected, ImmutableMap> actual) { - Assert.assertEquals(expected.size(), actual.size()); + assertEquals(expected.size(), actual.size()); for (String k : expected.keySet()) { - Assert.assertTrue(actual.containsKey(k)); + assertTrue(actual.containsKey(k)); Set expectedS1 = new HashSet<>(expected.get(k)); Set actualS2 = new HashSet<>(actual.get(k)); - Assert.assertEquals(expectedS1, actualS2); - Assert.assertTrue(expectedS1.containsAll(actualS2)); + assertEquals(expectedS1, actualS2); + assertTrue(expectedS1.containsAll(actualS2)); } } @@ -86,7 +90,7 @@ public class NodeLabelTestBase { public static void assertMapContains(Map> expected, ImmutableMap> actual) { for (NodeId k : actual.keySet()) { - Assert.assertTrue(expected.containsKey(k)); + assertTrue(expected.containsKey(k)); assertCollectionEquals(expected.get(k), actual.get(k)); } } @@ -94,28 +98,28 @@ public class NodeLabelTestBase { public static void assertCollectionEquals(Collection expected, Collection actual) { if (expected == null) { - Assert.assertNull(actual); + assertNull(actual); } else { - Assert.assertNotNull(actual); + assertNotNull(actual); } Set expectedSet = new HashSet<>(expected); Set actualSet = new HashSet<>(actual); - Assert.assertEquals(expectedSet, actualSet); - Assert.assertTrue(expectedSet.containsAll(actualSet)); + assertEquals(expectedSet, actualSet); + assertTrue(expectedSet.containsAll(actualSet)); } public static void assertNLCollectionEquals(Collection expected, Collection actual) { if (expected == null) { - Assert.assertNull(actual); + assertNull(actual); } else { - Assert.assertNotNull(actual); + assertNotNull(actual); } Set expectedSet = new HashSet<>(expected); Set actualSet = new HashSet<>(actual); - Assert.assertEquals(expectedSet, actualSet); - Assert.assertTrue(expectedSet.containsAll(actualSet)); + assertEquals(expectedSet, actualSet); + assertTrue(expectedSet.containsAll(actualSet)); } @SuppressWarnings("unchecked") @@ -150,13 +154,13 @@ public class NodeLabelTestBase { public static void assertLabelsInfoToNodesEquals( Map> expected, ImmutableMap> actual) { - Assert.assertEquals(expected.size(), actual.size()); + assertEquals(expected.size(), actual.size()); for (NodeLabel k : expected.keySet()) { - Assert.assertTrue(actual.containsKey(k)); + assertTrue(actual.containsKey(k)); Set expectedS1 = new HashSet<>(expected.get(k)); Set actualS2 = new HashSet<>(actual.get(k)); - Assert.assertEquals(expectedS1, actualS2); - Assert.assertTrue(expectedS1.containsAll(actualS2)); + assertEquals(expectedS1, actualS2); + assertTrue(expectedS1.containsAll(actualS2)); } } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestCommonNodeLabelsManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestCommonNodeLabelsManager.java index 5a5cab85b47..ab4650fd189 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestCommonNodeLabelsManager.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestCommonNodeLabelsManager.java @@ -18,8 +18,6 @@ package org.apache.hadoop.yarn.nodelabels; -import static org.junit.Assert.assertTrue; - import java.io.IOException; import java.util.Arrays; import java.util.Collection; @@ -27,24 +25,31 @@ import java.util.HashSet; import java.util.Map; import java.util.Set; +import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; +import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; + import org.apache.commons.lang3.StringUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.util.Sets; import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.api.records.NodeLabel; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; -import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; -import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; public class TestCommonNodeLabelsManager extends NodeLabelTestBase { DummyCommonNodeLabelsManager mgr = null; - @Before + @BeforeEach public void before() { mgr = new DummyCommonNodeLabelsManager(); Configuration conf = new YarnConfiguration(); @@ -53,13 +58,14 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { mgr.start(); } - @After + @AfterEach public void after() { mgr.stop(); } - @Test(timeout = 5000) - public void testAddRemovelabel() throws Exception { + @Test + @Timeout(5000) + void testAddRemovelabel() throws Exception { // Add some label mgr.addToCluserNodeLabelsWithDefaultExclusivity(ImmutableSet.of("hello")); verifyNodeLabelAdded(Sets.newHashSet("hello"), mgr.lastAddedlabels); @@ -68,23 +74,23 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("hello1", "world1")); verifyNodeLabelAdded(Sets.newHashSet("hello1", "world1"), mgr.lastAddedlabels); - Assert.assertTrue(mgr.getClusterNodeLabelNames().containsAll( + assertTrue(mgr.getClusterNodeLabelNames().containsAll( Sets.newHashSet("hello", "world", "hello1", "world1"))); try { mgr.addToCluserNodeLabels(Arrays.asList(NodeLabel.newInstance("hello1", false))); - Assert.fail("IOException not thrown on exclusivity change of labels"); + fail("IOException not thrown on exclusivity change of labels"); } catch (Exception e) { - Assert.assertTrue("IOException is expected when exclusivity is modified", - e instanceof IOException); + assertTrue(e instanceof IOException, + "IOException is expected when exclusivity is modified"); } try { mgr.addToCluserNodeLabels(Arrays.asList(NodeLabel.newInstance("hello1", true))); } catch (Exception e) { - Assert.assertFalse( - "IOException not expected when no change in exclusivity", - e instanceof IOException); + assertFalse( + e instanceof IOException, + "IOException not expected when no change in exclusivity"); } // 
try to remove null, empty and non-existed label, should fail for (String p : Arrays.asList(null, CommonNodeLabelsManager.NO_LABEL, "xx")) { @@ -94,42 +100,45 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { } catch (IOException e) { caught = true; } - Assert.assertTrue("remove label should fail " - + "when label is null/empty/non-existed", caught); + assertTrue(caught, "remove label should fail " + + "when label is null/empty/non-existed"); } // Remove some label mgr.removeFromClusterNodeLabels(Arrays.asList("hello")); assertCollectionEquals(Sets.newHashSet("hello"), mgr.lastRemovedlabels); - Assert.assertTrue(mgr.getClusterNodeLabelNames().containsAll( + assertTrue(mgr.getClusterNodeLabelNames().containsAll( Arrays.asList("world", "hello1", "world1"))); mgr.removeFromClusterNodeLabels(Arrays .asList("hello1", "world1", "world")); - Assert.assertTrue(mgr.lastRemovedlabels.containsAll(Sets.newHashSet( + assertTrue(mgr.lastRemovedlabels.containsAll(Sets.newHashSet( "hello1", "world1", "world"))); - Assert.assertTrue(mgr.getClusterNodeLabelNames().isEmpty()); + assertTrue(mgr.getClusterNodeLabelNames().isEmpty()); } - @Test(timeout = 5000) - public void testAddlabelWithCase() throws Exception { + @Test + @Timeout(5000) + void testAddlabelWithCase() throws Exception { // Add some label, case will not ignore here mgr.addToCluserNodeLabelsWithDefaultExclusivity(ImmutableSet.of("HeLlO")); verifyNodeLabelAdded(Sets.newHashSet("HeLlO"), mgr.lastAddedlabels); - Assert.assertFalse(mgr.getClusterNodeLabelNames().containsAll( + assertFalse(mgr.getClusterNodeLabelNames().containsAll( Arrays.asList("hello"))); } - @Test(timeout = 5000) - public void testAddlabelWithExclusivity() throws Exception { + @Test + @Timeout(5000) + void testAddlabelWithExclusivity() throws Exception { // Add some label, case will not ignore here mgr.addToCluserNodeLabels(Arrays.asList(NodeLabel.newInstance("a", false), NodeLabel.newInstance("b", true))); - Assert.assertFalse(mgr.isExclusiveNodeLabel("a")); - Assert.assertTrue(mgr.isExclusiveNodeLabel("b")); + assertFalse(mgr.isExclusiveNodeLabel("a")); + assertTrue(mgr.isExclusiveNodeLabel("b")); } - @Test(timeout = 5000) - public void testAddInvalidlabel() throws IOException { + @Test + @Timeout(5000) + void testAddInvalidlabel() throws IOException { boolean caught = false; try { Set set = new HashSet(); @@ -138,7 +147,7 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { } catch (IOException e) { caught = true; } - Assert.assertTrue("null label should not add to repo", caught); + assertTrue(caught, "null label should not add to repo"); caught = false; try { @@ -147,7 +156,7 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { caught = true; } - Assert.assertTrue("empty label should not add to repo", caught); + assertTrue(caught, "empty label should not add to repo"); caught = false; try { @@ -155,7 +164,7 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { } catch (IOException e) { caught = true; } - Assert.assertTrue("invalid label character should not add to repo", caught); + assertTrue(caught, "invalid label character should not add to repo"); caught = false; try { @@ -163,7 +172,7 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { } catch (IOException e) { caught = true; } - Assert.assertTrue("too long label should not add to repo", caught); + assertTrue(caught, "too long label should not add to repo"); caught = false; try { @@ -171,7 +180,7 @@ public class 
TestCommonNodeLabelsManager extends NodeLabelTestBase { } catch (IOException e) { caught = true; } - Assert.assertTrue("label cannot start with \"-\"", caught); + assertTrue(caught, "label cannot start with \"-\""); caught = false; try { @@ -179,28 +188,29 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { } catch (IOException e) { caught = true; } - Assert.assertTrue("label cannot start with \"_\"", caught); - + assertTrue(caught, "label cannot start with \"_\""); + caught = false; try { mgr.addToCluserNodeLabelsWithDefaultExclusivity(ImmutableSet.of("a^aabbb")); } catch (IOException e) { caught = true; } - Assert.assertTrue("label cannot contains other chars like ^[] ...", caught); - + assertTrue(caught, "label cannot contains other chars like ^[] ..."); + caught = false; try { mgr.addToCluserNodeLabelsWithDefaultExclusivity(ImmutableSet.of("aa[a]bbb")); } catch (IOException e) { caught = true; } - Assert.assertTrue("label cannot contains other chars like ^[] ...", caught); + assertTrue(caught, "label cannot contains other chars like ^[] ..."); } - @SuppressWarnings({ "unchecked", "rawtypes" }) - @Test(timeout = 5000) - public void testAddReplaceRemoveLabelsOnNodes() throws Exception { + @SuppressWarnings({"unchecked", "rawtypes"}) + @Test + @Timeout(5000) + void testAddReplaceRemoveLabelsOnNodes() throws Exception { // set a label on a node, but label doesn't exist boolean caught = false; try { @@ -208,8 +218,8 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { } catch (IOException e) { caught = true; } - Assert.assertTrue("trying to set a label to a node but " - + "label doesn't exist in repository should fail", caught); + assertTrue(caught, "trying to set a label to a node but " + + "label doesn't exist in repository should fail"); // set a label on a node, but node is null or empty try { @@ -218,7 +228,7 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { } catch (IOException e) { caught = true; } - Assert.assertTrue("trying to add a empty node but succeeded", caught); + assertTrue(caught, "trying to add a empty node but succeeded"); // set node->label one by one mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p1", "p2", "p3")); @@ -263,15 +273,16 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { // remove labels on node mgr.removeLabelsFromNode(ImmutableMap.of(toNodeId("n1"), toSet("p1"), toNodeId("n2"), toSet("p3"), toNodeId("n3"), toSet("p3"))); - Assert.assertEquals(0, mgr.getNodeLabels().size()); + assertEquals(0, mgr.getNodeLabels().size()); assertMapEquals(mgr.lastNodeToLabels, ImmutableMap.of(toNodeId("n1"), CommonNodeLabelsManager.EMPTY_STRING_SET, toNodeId("n2"), CommonNodeLabelsManager.EMPTY_STRING_SET, toNodeId("n3"), CommonNodeLabelsManager.EMPTY_STRING_SET)); } - @Test(timeout = 5000) - public void testRemovelabelWithNodes() throws Exception { + @Test + @Timeout(5000) + void testRemovelabelWithNodes() throws Exception { mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p1", "p2", "p3")); mgr.replaceLabelsOnNode(ImmutableMap.of(toNodeId("n1"), toSet("p1"))); mgr.replaceLabelsOnNode(ImmutableMap.of(toNodeId("n2"), toSet("p2"))); @@ -283,21 +294,23 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { assertCollectionEquals(Arrays.asList("p1"), mgr.lastRemovedlabels); mgr.removeFromClusterNodeLabels(ImmutableSet.of("p2", "p3")); - Assert.assertTrue(mgr.getNodeLabels().isEmpty()); - Assert.assertTrue(mgr.getClusterNodeLabelNames().isEmpty()); + 
assertTrue(mgr.getNodeLabels().isEmpty()); + assertTrue(mgr.getClusterNodeLabelNames().isEmpty()); assertCollectionEquals(Arrays.asList("p2", "p3"), mgr.lastRemovedlabels); } - - @Test(timeout = 5000) - public void testTrimLabelsWhenAddRemoveNodeLabels() throws IOException { + + @Test + @Timeout(5000) + void testTrimLabelsWhenAddRemoveNodeLabels() throws IOException { mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet(" p1")); assertCollectionEquals(toSet("p1"), mgr.getClusterNodeLabelNames()); mgr.removeFromClusterNodeLabels(toSet("p1 ")); - Assert.assertTrue(mgr.getClusterNodeLabelNames().isEmpty()); + assertTrue(mgr.getClusterNodeLabelNames().isEmpty()); } - - @Test(timeout = 5000) - public void testTrimLabelsWhenModifyLabelsOnNodes() throws IOException { + + @Test + @Timeout(5000) + void testTrimLabelsWhenModifyLabelsOnNodes() throws IOException { mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet(" p1", "p2")); mgr.addLabelsToNode(ImmutableMap.of(toNodeId("n1"), toSet("p1 "))); assertMapEquals( @@ -308,49 +321,51 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { mgr.getNodeLabels(), ImmutableMap.of(toNodeId("n1"), toSet("p2"))); mgr.removeLabelsFromNode(ImmutableMap.of(toNodeId("n1"), toSet(" p2 "))); - Assert.assertTrue(mgr.getNodeLabels().isEmpty()); + assertTrue(mgr.getNodeLabels().isEmpty()); } - - @Test(timeout = 5000) - public void testReplaceLabelsOnHostsShouldUpdateNodesBelongTo() + + @Test + @Timeout(5000) + void testReplaceLabelsOnHostsShouldUpdateNodesBelongTo() throws IOException { mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p1", "p2", "p3")); mgr.addLabelsToNode(ImmutableMap.of(toNodeId("n1"), toSet("p1"))); assertMapEquals( mgr.getNodeLabels(), ImmutableMap.of(toNodeId("n1"), toSet("p1"))); - + // Replace labels on n1:1 to P2 mgr.replaceLabelsOnNode(ImmutableMap.of(toNodeId("n1:1"), toSet("p2"), toNodeId("n1:2"), toSet("p2"))); assertMapEquals(mgr.getNodeLabels(), ImmutableMap.of(toNodeId("n1"), toSet("p1"), toNodeId("n1:1"), toSet("p2"), toNodeId("n1:2"), toSet("p2"))); - + // Replace labels on n1 to P1, both n1:1/n1 will be P1 now mgr.replaceLabelsOnNode(ImmutableMap.of(toNodeId("n1"), toSet("p1"))); assertMapEquals(mgr.getNodeLabels(), ImmutableMap.of(toNodeId("n1"), toSet("p1"), toNodeId("n1:1"), toSet("p1"), toNodeId("n1:2"), toSet("p1"))); - + // Set labels on n1:1 to P2 again to verify if add/remove works mgr.replaceLabelsOnNode(ImmutableMap.of(toNodeId("n1:1"), toSet("p2"))); } private void assertNodeLabelsDisabledErrorMessage(IOException e) { - Assert.assertEquals(CommonNodeLabelsManager.NODE_LABELS_NOT_ENABLED_ERR, + assertEquals(CommonNodeLabelsManager.NODE_LABELS_NOT_ENABLED_ERR, e.getMessage()); } - - @Test(timeout = 5000) - public void testNodeLabelsDisabled() throws IOException { + + @Test + @Timeout(5000) + void testNodeLabelsDisabled() throws IOException { DummyCommonNodeLabelsManager mgr = new DummyCommonNodeLabelsManager(); Configuration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.NODE_LABELS_ENABLED, false); mgr.init(conf); mgr.start(); boolean caught = false; - + // add labels try { mgr.addToCluserNodeLabelsWithDefaultExclusivity(ImmutableSet.of("x")); @@ -359,9 +374,9 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { caught = true; } // check exception caught - Assert.assertTrue(caught); + assertTrue(caught); caught = false; - + // remove labels try { mgr.removeFromClusterNodeLabels(ImmutableSet.of("x")); @@ -370,9 +385,9 @@ public class TestCommonNodeLabelsManager 
extends NodeLabelTestBase { caught = true; } // check exception caught - Assert.assertTrue(caught); + assertTrue(caught); caught = false; - + // add labels to node try { mgr.addLabelsToNode(ImmutableMap.of(NodeId.newInstance("host", 0), @@ -382,9 +397,9 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { caught = true; } // check exception caught - Assert.assertTrue(caught); + assertTrue(caught); caught = false; - + // remove labels from node try { mgr.removeLabelsFromNode(ImmutableMap.of(NodeId.newInstance("host", 0), @@ -394,9 +409,9 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { caught = true; } // check exception caught - Assert.assertTrue(caught); + assertTrue(caught); caught = false; - + // replace labels on node try { mgr.replaceLabelsOnNode(ImmutableMap.of(NodeId.newInstance("host", 0), @@ -406,14 +421,15 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { caught = true; } // check exception caught - Assert.assertTrue(caught); + assertTrue(caught); caught = false; - - mgr.close(); - } - @Test(timeout = 5000) - public void testLabelsToNodes() + mgr.close(); + } + + @Test + @Timeout(5000) + void testLabelsToNodes() throws IOException { mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p1", "p2", "p3")); mgr.addLabelsToNode(ImmutableMap.of(toNodeId("n1"), toSet("p1"))); @@ -421,7 +437,7 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { assertLabelsToNodesEquals( labelsToNodes, ImmutableMap.of( - "p1", toSet(toNodeId("n1")))); + "p1", toSet(toNodeId("n1")))); assertLabelsToNodesEquals( labelsToNodes, transposeNodeToLabels(mgr.getNodeLabels())); @@ -432,8 +448,8 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { assertLabelsToNodesEquals( labelsToNodes, ImmutableMap.of( - "p1", toSet(toNodeId("n1")), - "p2", toSet(toNodeId("n1:1"),toNodeId("n1:2")))); + "p1", toSet(toNodeId("n1")), + "p2", toSet(toNodeId("n1:1"), toNodeId("n1:2")))); assertLabelsToNodesEquals( labelsToNodes, transposeNodeToLabels(mgr.getNodeLabels())); @@ -443,7 +459,7 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { assertLabelsToNodesEquals( labelsToNodes, ImmutableMap.of( - "p1", toSet(toNodeId("n1"),toNodeId("n1:1"),toNodeId("n1:2")))); + "p1", toSet(toNodeId("n1"), toNodeId("n1:1"), toNodeId("n1:2")))); assertLabelsToNodesEquals( labelsToNodes, transposeNodeToLabels(mgr.getNodeLabels())); @@ -455,9 +471,9 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { assertLabelsToNodesEquals( labelsToNodes, ImmutableMap.of( - "p1", toSet(toNodeId("n1"),toNodeId("n1:2")), - "p2", toSet(toNodeId("n1:1")), - "p3", toSet(toNodeId("n2")))); + "p1", toSet(toNodeId("n1"), toNodeId("n1:2")), + "p2", toSet(toNodeId("n1:1")), + "p3", toSet(toNodeId("n2")))); assertLabelsToNodesEquals( labelsToNodes, transposeNodeToLabels(mgr.getNodeLabels())); @@ -467,20 +483,21 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { assertLabelsToNodesEquals( labelsToNodes, ImmutableMap.of( - "p1", toSet(toNodeId("n1"),toNodeId("n1:2")), - "p2", toSet(toNodeId("n1:1")))); + "p1", toSet(toNodeId("n1"), toNodeId("n1:2")), + "p2", toSet(toNodeId("n1:1")))); assertLabelsToNodesEquals( labelsToNodes, transposeNodeToLabels(mgr.getNodeLabels())); } - @Test(timeout = 5000) - public void testLabelsToNodesForSelectedLabels() + @Test + @Timeout(5000) + void testLabelsToNodesForSelectedLabels() throws IOException { mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p1", "p2", "p3")); 
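The @Test(timeout = 5000) to @Test @Timeout(5000) change repeated throughout TestCommonNodeLabelsManager is the standard Jupiter replacement, but the unit changes with it: JUnit 4's timeout is in milliseconds while @Timeout defaults to seconds, so @Timeout(5000) is a much looser bound than the original five seconds. A small illustrative sketch (not from the patch) of both the form used here and the strictly equivalent millisecond form:

import java.util.concurrent.TimeUnit;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;

class ExampleTimeoutTest {

  // Form used in this patch: 5000 SECONDS, since @Timeout's default unit is seconds.
  @Test
  @Timeout(5000)
  void veryLooseUpperBound() throws InterruptedException {
    Thread.sleep(10);
  }

  // Strict equivalent of JUnit 4's @Test(timeout = 5000), i.e. 5000 milliseconds.
  @Test
  @Timeout(value = 5000, unit = TimeUnit.MILLISECONDS)
  void fiveSecondUpperBound() throws InterruptedException {
    Thread.sleep(10);
  }
}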
mgr.addLabelsToNode( ImmutableMap.of( - toNodeId("n1:1"), toSet("p1"), - toNodeId("n1:2"), toSet("p2"))); + toNodeId("n1:1"), toSet("p1"), + toNodeId("n1:2"), toSet("p2"))); Set setlabels = new HashSet(Arrays.asList(new String[]{"p1"})); assertLabelsToNodesEquals(mgr.getLabelsToNodes(setlabels), @@ -493,14 +510,14 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { assertLabelsToNodesEquals( mgr.getLabelsToNodes(setlabels), ImmutableMap.of( - "p3", toSet(toNodeId("n1"), toNodeId("n1:1"),toNodeId("n1:2")))); + "p3", toSet(toNodeId("n1"), toNodeId("n1:1"), toNodeId("n1:2")))); mgr.addLabelsToNode(ImmutableMap.of(toNodeId("n2"), toSet("p2"))); assertLabelsToNodesEquals( mgr.getLabelsToNodes(setlabels), ImmutableMap.of( - "p2", toSet(toNodeId("n2")), - "p3", toSet(toNodeId("n1"), toNodeId("n1:1"),toNodeId("n1:2")))); + "p2", toSet(toNodeId("n2")), + "p3", toSet(toNodeId("n1"), toNodeId("n1:1"), toNodeId("n1:2")))); mgr.removeLabelsFromNode(ImmutableMap.of(toNodeId("n1"), toSet("p3"))); setlabels = @@ -508,29 +525,30 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { assertLabelsToNodesEquals( mgr.getLabelsToNodes(setlabels), ImmutableMap.of( - "p2", toSet(toNodeId("n2")))); + "p2", toSet(toNodeId("n2")))); mgr.addLabelsToNode(ImmutableMap.of(toNodeId("n3"), toSet("p1"))); assertLabelsToNodesEquals( mgr.getLabelsToNodes(setlabels), ImmutableMap.of( - "p1", toSet(toNodeId("n3")), - "p2", toSet(toNodeId("n2")))); + "p1", toSet(toNodeId("n3")), + "p2", toSet(toNodeId("n2")))); mgr.replaceLabelsOnNode(ImmutableMap.of(toNodeId("n2:2"), toSet("p3"))); assertLabelsToNodesEquals( mgr.getLabelsToNodes(setlabels), ImmutableMap.of( - "p1", toSet(toNodeId("n3")), - "p2", toSet(toNodeId("n2")), - "p3", toSet(toNodeId("n2:2")))); + "p1", toSet(toNodeId("n3")), + "p2", toSet(toNodeId("n2")), + "p3", toSet(toNodeId("n2:2")))); setlabels = new HashSet(Arrays.asList(new String[]{"p1"})); assertLabelsToNodesEquals(mgr.getLabelsToNodes(setlabels), ImmutableMap.of("p1", toSet(toNodeId("n3")))); } - @Test(timeout = 5000) - public void testNoMoreThanOneLabelExistedInOneHost() throws IOException { + @Test + @Timeout(5000) + void testNoMoreThanOneLabelExistedInOneHost() throws IOException { boolean failed = false; // As in YARN-2694, we temporarily disable no more than one label existed in // one host @@ -540,14 +558,14 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { } catch (IOException e) { failed = true; } - Assert.assertTrue("Should failed when set > 1 labels on a host", failed); + assertTrue(failed, "Should failed when set > 1 labels on a host"); try { mgr.addLabelsToNode(ImmutableMap.of(toNodeId("n1"), toSet("p1", "p2"))); } catch (IOException e) { failed = true; } - Assert.assertTrue("Should failed when add > 1 labels on a host", failed); + assertTrue(failed, "Should failed when add > 1 labels on a host"); mgr.addLabelsToNode(ImmutableMap.of(toNodeId("n1"), toSet("p1"))); // add a same label to a node, #labels in this node is still 1, shouldn't @@ -558,20 +576,21 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { } catch (IOException e) { failed = true; } - Assert.assertTrue("Should failed when #labels > 1 on a host after add", - failed); + assertTrue(failed, + "Should failed when #labels > 1 on a host after add"); } private void verifyNodeLabelAdded(Set expectedAddedLabelNames, Collection addedNodeLabels) { - Assert.assertEquals(expectedAddedLabelNames.size(), addedNodeLabels.size()); + 
assertEquals(expectedAddedLabelNames.size(), addedNodeLabels.size()); for (NodeLabel label : addedNodeLabels) { - Assert.assertTrue(expectedAddedLabelNames.contains(label.getName())); + assertTrue(expectedAddedLabelNames.contains(label.getName())); } } - @Test(timeout = 5000) - public void testReplaceLabelsOnNodeInDistributedMode() throws Exception { + @Test + @Timeout(5000) + void testReplaceLabelsOnNodeInDistributedMode() throws Exception { //create new DummyCommonNodeLabelsManager than the one got from @before mgr.stop(); mgr = new DummyCommonNodeLabelsManager(); @@ -587,16 +606,17 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { mgr.replaceLabelsOnNode(ImmutableMap.of(toNodeId("n1"), toSet("p1"))); Set labelsByNode = mgr.getLabelsByNode(toNodeId("n1")); - Assert.assertNull( - "Labels are not expected to be written to the NodeLabelStore", - mgr.lastNodeToLabels); - Assert.assertNotNull("Updated labels should be available from the Mgr", - labelsByNode); - Assert.assertTrue(labelsByNode.contains("p1")); + assertNull( + mgr.lastNodeToLabels, + "Labels are not expected to be written to the NodeLabelStore"); + assertNotNull(labelsByNode, + "Updated labels should be available from the Mgr"); + assertTrue(labelsByNode.contains("p1")); } - @Test(timeout = 5000) - public void testLabelsInfoToNodes() throws IOException { + @Test + @Timeout(5000) + void testLabelsInfoToNodes() throws IOException { mgr.addToCluserNodeLabels(Arrays.asList(NodeLabel.newInstance("p1", false), NodeLabel.newInstance("p2", true), NodeLabel.newInstance("p3", true))); mgr.addLabelsToNode(ImmutableMap.of(toNodeId("n1"), toSet("p1"))); @@ -605,8 +625,9 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { NodeLabel.newInstance("p1", false), toSet(toNodeId("n1")))); } - @Test(timeout = 5000) - public void testGetNodeLabelsInfo() throws IOException { + @Test + @Timeout(5000) + void testGetNodeLabelsInfo() throws IOException { mgr.addToCluserNodeLabels(Arrays.asList(NodeLabel.newInstance("p1", false), NodeLabel.newInstance("p2", true), NodeLabel.newInstance("p3", false))); mgr.addLabelsToNode(ImmutableMap.of(toNodeId("n1"), toSet("p2"))); @@ -617,8 +638,9 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { toNodeId("n2"), toSet(NodeLabel.newInstance("p3", false)))); } - @Test(timeout = 5000) - public void testRemoveNodeLabelsInfo() throws IOException { + @Test + @Timeout(5000) + void testRemoveNodeLabelsInfo() throws IOException { mgr.addToCluserNodeLabels(Arrays.asList(NodeLabel.newInstance("p1", true))); mgr.addToCluserNodeLabels(Arrays.asList(NodeLabel.newInstance("p2", true))); mgr.addLabelsToNode(ImmutableMap.of(toNodeId("n1:1"), toSet("p1"))); @@ -628,10 +650,10 @@ public class TestCommonNodeLabelsManager extends NodeLabelTestBase { assertLabelsToNodesEquals( labelsToNodes, ImmutableMap.of( - "p2", toSet(toNodeId("n1:1"), toNodeId("n1:0")))); + "p2", toSet(toNodeId("n1:1"), toNodeId("n1:0")))); mgr.replaceLabelsOnNode(ImmutableMap.of(toNodeId("n1"), new HashSet())); Map> labelsToNodes2 = mgr.getLabelsToNodes(); - Assert.assertEquals(labelsToNodes2.get("p2"), null); + assertNull(labelsToNodes2.get("p2")); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java index f0885cd50f2..e769a21a750 100644 --- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java @@ -24,23 +24,24 @@ import java.util.Arrays; import java.util.Collection; import java.util.Map; +import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Timeout; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.MethodSource; +import org.mockito.Mockito; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.yarn.api.records.NodeLabel; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.event.InlineDispatcher; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; -import org.mockito.Mockito; -import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; -@RunWith(Parameterized.class) public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { MockNodeLabelManager mgr = null; Configuration conf = null; @@ -57,26 +58,15 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { protected void startDispatcher() { // do nothing } - + @Override protected void stopDispatcher() { // do nothing } } - - public TestFileSystemNodeLabelsStore(String className) { - this.storeClassName = className; - } - - @Parameterized.Parameters - public static Collection getParameters() { - return Arrays.asList( - new String[][] { { FileSystemNodeLabelsStore.class.getCanonicalName() }, - { NonAppendableFSNodeLabelStore.class.getCanonicalName() } }); - } - @Before - public void before() throws IOException { + public void initTestFileSystemNodeLabelsStore(String className) throws IOException { + this.storeClassName = className; mgr = new MockNodeLabelManager(); conf = new Configuration(); conf.setBoolean(YarnConfiguration.NODE_LABELS_ENABLED, true); @@ -91,7 +81,13 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { mgr.start(); } - @After + public static Collection getParameters() { + return Arrays.asList( + new String[][]{{FileSystemNodeLabelsStore.class.getCanonicalName()}, + {NonAppendableFSNodeLabelStore.class.getCanonicalName()}}); + } + + @AfterEach public void after() throws IOException { if (mgr.store instanceof FileSystemNodeLabelsStore) { FileSystemNodeLabelsStore fsStore = @@ -101,9 +97,12 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { mgr.stop(); } - @SuppressWarnings({ "unchecked", "rawtypes" }) - @Test(timeout = 10000) - public void testRecoverWithMirror() throws Exception { + @MethodSource("getParameters") + @SuppressWarnings({"unchecked", "rawtypes"}) + @ParameterizedTest + @Timeout(10000) + void testRecoverWithMirror(String className) throws Exception { + initTestFileSystemNodeLabelsStore(className); mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p1", "p2", "p3")); mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p4")); 
mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p5", "p6")); @@ -131,15 +130,15 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { mgr.start(); // check variables - Assert.assertEquals(3, mgr.getClusterNodeLabelNames().size()); - Assert.assertTrue(mgr.getClusterNodeLabelNames().containsAll( + assertEquals(3, mgr.getClusterNodeLabelNames().size()); + assertTrue(mgr.getClusterNodeLabelNames().containsAll( Arrays.asList("p2", "p4", "p6"))); assertMapContains(mgr.getNodeLabels(), ImmutableMap.of(toNodeId("n2"), toSet("p2"), toNodeId("n4"), toSet("p4"), toNodeId("n6"), toSet("p6"), toNodeId("n7"), toSet("p6"))); assertLabelsToNodesEquals(mgr.getLabelsToNodes(), - ImmutableMap.of( + ImmutableMap.of( "p6", toSet(toNodeId("n6"), toNodeId("n7")), "p4", toSet(toNodeId("n4")), "p2", toSet(toNodeId("n2")))); @@ -151,24 +150,27 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { mgr.start(); // check variables - Assert.assertEquals(3, mgr.getClusterNodeLabelNames().size()); - Assert.assertTrue(mgr.getClusterNodeLabelNames().containsAll( + assertEquals(3, mgr.getClusterNodeLabelNames().size()); + assertTrue(mgr.getClusterNodeLabelNames().containsAll( Arrays.asList("p2", "p4", "p6"))); assertMapContains(mgr.getNodeLabels(), ImmutableMap.of(toNodeId("n2"), toSet("p2"), toNodeId("n4"), toSet("p4"), toNodeId("n6"), toSet("p6"), toNodeId("n7"), toSet("p6"))); assertLabelsToNodesEquals(mgr.getLabelsToNodes(), - ImmutableMap.of( + ImmutableMap.of( "p6", toSet(toNodeId("n6"), toNodeId("n7")), "p4", toSet(toNodeId("n4")), "p2", toSet(toNodeId("n2")))); mgr.stop(); } - @SuppressWarnings({ "unchecked", "rawtypes" }) - @Test(timeout = 10000) - public void testRecoverWithDistributedNodeLabels() throws Exception { + @MethodSource("getParameters") + @SuppressWarnings({"unchecked", "rawtypes"}) + @ParameterizedTest + @Timeout(10000) + void testRecoverWithDistributedNodeLabels(String className) throws Exception { + initTestFileSystemNodeLabelsStore(className); mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p1", "p2", "p3")); mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p4")); mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p5", "p6")); @@ -190,20 +192,23 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { mgr.start(); // check variables - Assert.assertEquals(3, mgr.getClusterNodeLabels().size()); - Assert.assertTrue(mgr.getClusterNodeLabelNames().containsAll( + assertEquals(3, mgr.getClusterNodeLabels().size()); + assertTrue(mgr.getClusterNodeLabelNames().containsAll( Arrays.asList("p2", "p4", "p6"))); - Assert.assertTrue("During recovery in distributed node-labels setup, " - + "node to labels mapping should not be recovered ", mgr - .getNodeLabels().size() == 0); + assertTrue(mgr + .getNodeLabels().size() == 0, "During recovery in distributed node-labels setup, " + + "node to labels mapping should not be recovered "); mgr.stop(); } - @SuppressWarnings({ "unchecked", "rawtypes" }) - @Test(timeout = 10000) - public void testEditlogRecover() throws Exception { + @MethodSource("getParameters") + @SuppressWarnings({"unchecked", "rawtypes"}) + @ParameterizedTest + @Timeout(10000) + void testEditlogRecover(String className) throws Exception { + initTestFileSystemNodeLabelsStore(className); mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p1", "p2", "p3")); mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p4")); mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p5", "p6")); @@ -231,24 +236,27 @@ public class 
TestFileSystemNodeLabelsStore extends NodeLabelTestBase { mgr.start(); // check variables - Assert.assertEquals(3, mgr.getClusterNodeLabelNames().size()); - Assert.assertTrue(mgr.getClusterNodeLabelNames().containsAll( + assertEquals(3, mgr.getClusterNodeLabelNames().size()); + assertTrue(mgr.getClusterNodeLabelNames().containsAll( Arrays.asList("p2", "p4", "p6"))); assertMapContains(mgr.getNodeLabels(), ImmutableMap.of(toNodeId("n2"), toSet("p2"), toNodeId("n4"), toSet("p4"), toNodeId("n6"), toSet("p6"), toNodeId("n7"), toSet("p6"))); assertLabelsToNodesEquals(mgr.getLabelsToNodes(), - ImmutableMap.of( + ImmutableMap.of( "p6", toSet(toNodeId("n6"), toNodeId("n7")), "p4", toSet(toNodeId("n4")), "p2", toSet(toNodeId("n2")))); mgr.stop(); } - - @SuppressWarnings({ "unchecked", "rawtypes" }) - @Test (timeout = 10000) - public void testSerilizationAfterRecovery() throws Exception { + + @MethodSource("getParameters") + @SuppressWarnings({"unchecked", "rawtypes"}) + @ParameterizedTest + @Timeout(10000) + void testSerilizationAfterRecovery(String className) throws Exception { + initTestFileSystemNodeLabelsStore(className); // Add to cluster node labels, p2/p6 are non-exclusive. mgr.addToCluserNodeLabels(Arrays.asList(NodeLabel.newInstance("p1", true), NodeLabel.newInstance("p2", false), NodeLabel.newInstance("p3", true), @@ -289,8 +297,8 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { mgr.start(); // check variables - Assert.assertEquals(3, mgr.getClusterNodeLabelNames().size()); - Assert.assertTrue(mgr.getClusterNodeLabelNames().containsAll( + assertEquals(3, mgr.getClusterNodeLabelNames().size()); + assertTrue(mgr.getClusterNodeLabelNames().containsAll( Arrays.asList("p2", "p4", "p6"))); assertMapContains(mgr.getNodeLabels(), ImmutableMap.of(toNodeId("n2"), @@ -298,13 +306,13 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { toNodeId("n7"), toSet("p6"))); assertLabelsToNodesEquals(mgr.getLabelsToNodes(), ImmutableMap.of( - "p6", toSet(toNodeId("n6"), toNodeId("n7")), - "p4", toSet(toNodeId("n4")), - "p2", toSet(toNodeId("n2")))); + "p6", toSet(toNodeId("n6"), toNodeId("n7")), + "p4", toSet(toNodeId("n4")), + "p2", toSet(toNodeId("n2")))); - Assert.assertFalse(mgr.isExclusiveNodeLabel("p2")); - Assert.assertTrue(mgr.isExclusiveNodeLabel("p4")); - Assert.assertFalse(mgr.isExclusiveNodeLabel("p6")); + assertFalse(mgr.isExclusiveNodeLabel("p2")); + assertTrue(mgr.isExclusiveNodeLabel("p4")); + assertFalse(mgr.isExclusiveNodeLabel("p6")); /* * Add label p7,p8 then shutdown @@ -314,7 +322,7 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { mgr.start(); mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p7", "p8")); mgr.stop(); - + /* * Restart, add label p9 and shutdown */ @@ -323,7 +331,7 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { mgr.start(); mgr.addToCluserNodeLabelsWithDefaultExclusivity(toSet("p9")); mgr.stop(); - + /* * Recovery, and see if p9 added */ @@ -332,14 +340,16 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { mgr.start(); // check variables - Assert.assertEquals(6, mgr.getClusterNodeLabelNames().size()); - Assert.assertTrue(mgr.getClusterNodeLabelNames().containsAll( + assertEquals(6, mgr.getClusterNodeLabelNames().size()); + assertTrue(mgr.getClusterNodeLabelNames().containsAll( Arrays.asList("p2", "p4", "p6", "p7", "p8", "p9"))); mgr.stop(); } - @Test - public void testRootMkdirOnInitStore() throws Exception { + @MethodSource("getParameters") + 
@ParameterizedTest + void testRootMkdirOnInitStore(String className) throws Exception { + initTestFileSystemNodeLabelsStore(className); final FileSystem mockFs = Mockito.mock(FileSystem.class); FileSystemNodeLabelsStore mockStore = new FileSystemNodeLabelsStore() { public void initFileSystem(Configuration config) throws IOException { @@ -355,7 +365,7 @@ public class TestFileSystemNodeLabelsStore extends NodeLabelTestBase { } private void verifyMkdirsCount(FileSystemNodeLabelsStore store, - boolean existsRetVal, int expectedNumOfCalls) + boolean existsRetVal, int expectedNumOfCalls) throws Exception { Mockito.when(store.getFs().exists(Mockito.any( Path.class))).thenReturn(existsRetVal); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestNodeLabelUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestNodeLabelUtil.java index 73e0bda91f6..52646b65964 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestNodeLabelUtil.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestNodeLabelUtil.java @@ -17,13 +17,15 @@ */ package org.apache.hadoop.yarn.nodelabels; -import static org.junit.Assert.fail; - import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.api.records.NodeAttribute; import org.apache.hadoop.yarn.api.records.NodeAttributeType; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; /** * Test class to verify node label util ops. 
@@ -31,7 +33,7 @@ import org.junit.Test; public class TestNodeLabelUtil { @Test - public void testAttributeValueAddition() { + void testAttributeValueAddition() { String[] values = new String[]{"1_8", "1.8", "ABZ", "ABZ", "az", "a-z", "a_z", "123456789"}; @@ -55,7 +57,7 @@ public class TestNodeLabelUtil { } @Test - public void testIsNodeAttributesEquals() { + void testIsNodeAttributesEquals() { NodeAttribute nodeAttributeCK1V1 = NodeAttribute .newInstance(NodeAttribute.PREFIX_CENTRALIZED, "K1", NodeAttributeType.STRING, "V1"); @@ -77,45 +79,45 @@ public class TestNodeLabelUtil { /* * equals if set size equals and items are all the same */ - Assert.assertTrue(NodeLabelUtil.isNodeAttributesEquals(null, null)); - Assert.assertTrue(NodeLabelUtil + assertTrue(NodeLabelUtil.isNodeAttributesEquals(null, null)); + assertTrue(NodeLabelUtil .isNodeAttributesEquals(ImmutableSet.of(), ImmutableSet.of())); - Assert.assertTrue(NodeLabelUtil + assertTrue(NodeLabelUtil .isNodeAttributesEquals(ImmutableSet.of(nodeAttributeCK1V1), ImmutableSet.of(nodeAttributeCK1V1Copy))); - Assert.assertTrue(NodeLabelUtil + assertTrue(NodeLabelUtil .isNodeAttributesEquals(ImmutableSet.of(nodeAttributeDK1V1), ImmutableSet.of(nodeAttributeDK1V1Copy))); - Assert.assertTrue(NodeLabelUtil.isNodeAttributesEquals( + assertTrue(NodeLabelUtil.isNodeAttributesEquals( ImmutableSet.of(nodeAttributeCK1V1, nodeAttributeDK1V1), ImmutableSet.of(nodeAttributeCK1V1Copy, nodeAttributeDK1V1Copy))); /* * not equals if set size not equals or items are different */ - Assert.assertFalse( + assertFalse( NodeLabelUtil.isNodeAttributesEquals(null, ImmutableSet.of())); - Assert.assertFalse( + assertFalse( NodeLabelUtil.isNodeAttributesEquals(ImmutableSet.of(), null)); // different attribute prefix - Assert.assertFalse(NodeLabelUtil + assertFalse(NodeLabelUtil .isNodeAttributesEquals(ImmutableSet.of(nodeAttributeCK1V1), ImmutableSet.of(nodeAttributeDK1V1))); // different attribute name - Assert.assertFalse(NodeLabelUtil + assertFalse(NodeLabelUtil .isNodeAttributesEquals(ImmutableSet.of(nodeAttributeDK1V1), ImmutableSet.of(nodeAttributeDK2V1))); // different attribute value - Assert.assertFalse(NodeLabelUtil + assertFalse(NodeLabelUtil .isNodeAttributesEquals(ImmutableSet.of(nodeAttributeDK2V1), ImmutableSet.of(nodeAttributeDK2V2))); // different set - Assert.assertFalse(NodeLabelUtil + assertFalse(NodeLabelUtil .isNodeAttributesEquals(ImmutableSet.of(nodeAttributeCK1V1), ImmutableSet.of())); - Assert.assertFalse(NodeLabelUtil + assertFalse(NodeLabelUtil .isNodeAttributesEquals(ImmutableSet.of(nodeAttributeCK1V1), ImmutableSet.of(nodeAttributeCK1V1, nodeAttributeDK1V1))); - Assert.assertFalse(NodeLabelUtil.isNodeAttributesEquals( + assertFalse(NodeLabelUtil.isNodeAttributesEquals( ImmutableSet.of(nodeAttributeCK1V1, nodeAttributeDK1V1), ImmutableSet.of(nodeAttributeDK1V1))); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/resourcetypes/ResourceTypesTestHelper.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/resourcetypes/ResourceTypesTestHelper.java index f035da831d9..95b02c49660 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/resourcetypes/ResourceTypesTestHelper.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/resourcetypes/ResourceTypesTestHelper.java @@ -16,17 +16,18 @@ package org.apache.hadoop.yarn.resourcetypes; -import 
org.apache.hadoop.thirdparty.com.google.common.collect.Maps; -import org.apache.hadoop.yarn.api.records.Resource; -import org.apache.hadoop.yarn.api.records.ResourceInformation; -import org.apache.hadoop.yarn.factories.RecordFactory; -import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; - import java.util.Map; import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; +import org.apache.hadoop.thirdparty.com.google.common.collect.Maps; + +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.api.records.ResourceInformation; +import org.apache.hadoop.yarn.factories.RecordFactory; +import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; + /** * Contains helper methods to create Resource and ResourceInformation objects. * ResourceInformation can be created from a resource name diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java index cfe5a455693..8613a8b9dc1 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestDockerClientConfigHandler.java @@ -16,6 +16,14 @@ */ package org.apache.hadoop.yarn.security; +import java.io.BufferedWriter; +import java.io.File; +import java.io.FileWriter; +import java.nio.ByteBuffer; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + import org.apache.commons.io.FileUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; @@ -25,16 +33,9 @@ import org.apache.hadoop.security.Credentials; import org.apache.hadoop.security.token.Token; import org.apache.hadoop.security.token.TokenIdentifier; import org.apache.hadoop.yarn.util.DockerClientConfigHandler; -import org.junit.Before; -import org.junit.Test; -import java.io.BufferedWriter; -import java.io.File; -import java.io.FileWriter; -import java.nio.ByteBuffer; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; /** * Test the functionality of the DockerClientConfigHandler. 
@@ -51,7 +52,7 @@ public class TestDockerClientConfigHandler { private File file; private Configuration conf = new Configuration(); - @Before + @BeforeEach public void setUp() throws Exception { file = File.createTempFile("docker-client-config", "test"); file.deleteOnExit(); @@ -61,7 +62,7 @@ public class TestDockerClientConfigHandler { } @Test - public void testReadCredentialsFromConfigFile() throws Exception { + void testReadCredentialsFromConfigFile() throws Exception { Credentials credentials = DockerClientConfigHandler.readCredentialsFromConfigFile( new Path(file.toURI()), conf, APPLICATION_ID); @@ -85,7 +86,7 @@ public class TestDockerClientConfigHandler { } @Test - public void testGetCredentialsFromTokensByteBuffer() throws Exception { + void testGetCredentialsFromTokensByteBuffer() throws Exception { Credentials credentials = DockerClientConfigHandler.readCredentialsFromConfigFile( new Path(file.toURI()), conf, APPLICATION_ID); @@ -110,7 +111,7 @@ public class TestDockerClientConfigHandler { } @Test - public void testWriteDockerCredentialsToPath() throws Exception { + void testWriteDockerCredentialsToPath() throws Exception { File outFile = File.createTempFile("docker-client-config", "out"); outFile.deleteOnExit(); Credentials credentials = diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestYARNTokenIdentifier.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestYARNTokenIdentifier.java index 8109b5ead4a..69f58839c92 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestYARNTokenIdentifier.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/security/TestYARNTokenIdentifier.java @@ -21,6 +21,8 @@ import java.io.ByteArrayOutputStream; import java.io.DataOutputStream; import java.io.IOException; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.io.DataInputBuffer; @@ -43,31 +45,33 @@ import org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier; import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.api.ContainerType; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNull; public class TestYARNTokenIdentifier { @Test - public void testNMTokenIdentifier() throws IOException { + void testNMTokenIdentifier() throws IOException { testNMTokenIdentifier(false); } @Test - public void testNMTokenIdentifierOldFormat() throws IOException { + void testNMTokenIdentifierOldFormat() throws IOException { testNMTokenIdentifier(true); } public void testNMTokenIdentifier(boolean oldFormat) throws IOException { - ApplicationAttemptId appAttemptId = ApplicationAttemptId.newInstance( - ApplicationId.newInstance(1, 1), 1); + ApplicationAttemptId appAttemptId = + ApplicationAttemptId.newInstance(ApplicationId.newInstance(1, 1), 1); NodeId nodeId = NodeId.newInstance("host0", 0); String applicationSubmitter = "usr0"; int masterKeyId = 1; - - NMTokenIdentifier token = new NMTokenIdentifier( - appAttemptId, nodeId, applicationSubmitter, masterKeyId); - + 
+ NMTokenIdentifier token = + new NMTokenIdentifier(appAttemptId, nodeId, applicationSubmitter, masterKeyId); + NMTokenIdentifier anotherToken = new NMTokenIdentifier(); byte[] tokenContent; @@ -79,36 +83,32 @@ public class TestYARNTokenIdentifier { DataInputBuffer dib = new DataInputBuffer(); dib.reset(tokenContent, tokenContent.length); anotherToken.readFields(dib); - + // verify the whole record equals with original record - Assert.assertEquals("Token is not the same after serialization " + - "and deserialization.", token, anotherToken); - + assertEquals(token, anotherToken, + "Token is not the same after serialization " + "and deserialization."); + // verify all properties are the same as original - Assert.assertEquals( - "appAttemptId from proto is not the same with original token", - anotherToken.getApplicationAttemptId(), appAttemptId); - - Assert.assertEquals( - "NodeId from proto is not the same with original token", - anotherToken.getNodeId(), nodeId); - - Assert.assertEquals( - "applicationSubmitter from proto is not the same with original token", - anotherToken.getApplicationSubmitter(), applicationSubmitter); - - Assert.assertEquals( - "masterKeyId from proto is not the same with original token", - anotherToken.getKeyId(), masterKeyId); + assertEquals(anotherToken.getApplicationAttemptId(), appAttemptId, + "appAttemptId from proto is not the same with original token"); + + assertEquals(anotherToken.getNodeId(), nodeId, + "NodeId from proto is not the same with original token"); + + assertEquals(anotherToken.getApplicationSubmitter(), applicationSubmitter, + "applicationSubmitter from proto is not the same with original token"); + + assertEquals(anotherToken.getKeyId(), masterKeyId, + "masterKeyId from proto is not the same with original token"); } @Test - public void testAMRMTokenIdentifier() throws IOException { + void testAMRMTokenIdentifier() throws IOException { testAMRMTokenIdentifier(false); } @Test - public void testAMRMTokenIdentifierOldFormat() throws IOException { + void testAMRMTokenIdentifierOldFormat() throws IOException { testAMRMTokenIdentifier(true); } @@ -130,55 +130,55 @@ public class TestYARNTokenIdentifier { DataInputBuffer dib = new DataInputBuffer(); dib.reset(tokenContent, tokenContent.length); anotherToken.readFields(dib); - + // verify the whole record equals with original record - Assert.assertEquals("Token is not the same after serialization " + - "and deserialization.", token, anotherToken); - - Assert.assertEquals("ApplicationAttemptId from proto is not the same with original token", - anotherToken.getApplicationAttemptId(), appAttemptId); - - Assert.assertEquals("masterKeyId from proto is not the same with original token", - anotherToken.getKeyId(), masterKeyId); + assertEquals(token, anotherToken, + "Token is not the same after serialization " + "and deserialization."); + + assertEquals(anotherToken.getApplicationAttemptId(), appAttemptId, + "ApplicationAttemptId from proto is not the same with original token"); + + assertEquals(anotherToken.getKeyId(), masterKeyId, + "masterKeyId from proto is not the same with original token"); } - + @Test - public void testClientToAMTokenIdentifier() throws IOException { + void testClientToAMTokenIdentifier() throws IOException { ApplicationAttemptId appAttemptId = ApplicationAttemptId.newInstance( ApplicationId.newInstance(1, 1), 1); - + String clientName = "user"; - + ClientToAMTokenIdentifier token = new ClientToAMTokenIdentifier( appAttemptId, clientName); - + ClientToAMTokenIdentifier anotherToken = new 
ClientToAMTokenIdentifier(); - + byte[] tokenContent = token.getBytes(); DataInputBuffer dib = new DataInputBuffer(); dib.reset(tokenContent, tokenContent.length); anotherToken.readFields(dib); - + // verify the whole record equals with original record - Assert.assertEquals("Token is not the same after serialization " + - "and deserialization.", token, anotherToken); - - Assert.assertEquals("ApplicationAttemptId from proto is not the same with original token", - anotherToken.getApplicationAttemptID(), appAttemptId); - - Assert.assertEquals("clientName from proto is not the same with original token", - anotherToken.getClientName(), clientName); + assertEquals(token, anotherToken, + "Token is not the same after serialization " + "and deserialization."); + + assertEquals(anotherToken.getApplicationAttemptID(), appAttemptId, + "ApplicationAttemptId from proto is not the same with original token"); + + assertEquals(anotherToken.getClientName(), clientName, + "clientName from proto is not the same with original token"); } @Test - public void testContainerTokenIdentifierProtoMissingFields() + void testContainerTokenIdentifierProtoMissingFields() throws IOException { ContainerTokenIdentifierProto.Builder builder = ContainerTokenIdentifierProto.newBuilder(); ContainerTokenIdentifierProto proto = builder.build(); - Assert.assertFalse(proto.hasContainerType()); - Assert.assertFalse(proto.hasExecutionType()); - Assert.assertFalse(proto.hasNodeLabelExpression()); + assertFalse(proto.hasContainerType()); + assertFalse(proto.hasExecutionType()); + assertFalse(proto.hasNodeLabelExpression()); byte[] tokenData = proto.toByteArray(); DataInputBuffer dib = new DataInputBuffer(); @@ -186,21 +186,19 @@ public class TestYARNTokenIdentifier { ContainerTokenIdentifier tid = new ContainerTokenIdentifier(); tid.readFields(dib); - Assert.assertEquals("container type", - ContainerType.TASK, tid.getContainerType()); - Assert.assertEquals("execution type", - ExecutionType.GUARANTEED, tid.getExecutionType()); - Assert.assertEquals("node label expression", - CommonNodeLabelsManager.NO_LABEL, tid.getNodeLabelExpression()); + assertEquals(ContainerType.TASK, tid.getContainerType(), "container type"); + assertEquals(ExecutionType.GUARANTEED, tid.getExecutionType(), "execution type"); + assertEquals(CommonNodeLabelsManager.NO_LABEL, tid.getNodeLabelExpression(), + "node label expression"); } @Test - public void testContainerTokenIdentifier() throws IOException { + void testContainerTokenIdentifier() throws IOException { testContainerTokenIdentifier(false, false); } @Test - public void testContainerTokenIdentifierOldFormat() throws IOException { + void testContainerTokenIdentifierOldFormat() throws IOException { testContainerTokenIdentifier(true, true); testContainerTokenIdentifier(true, false); } @@ -236,64 +234,55 @@ public class TestYARNTokenIdentifier { anotherToken.readFields(dib); // verify the whole record equals with original record - Assert.assertEquals("Token is not the same after serialization " + - "and deserialization.", token, anotherToken); - - Assert.assertEquals( - "ContainerID from proto is not the same with original token", - anotherToken.getContainerID(), containerID); - - Assert.assertEquals( - "Hostname from proto is not the same with original token", - anotherToken.getNmHostAddress(), hostName); - - Assert.assertEquals( - "ApplicationSubmitter from proto is not the same with original token", - anotherToken.getApplicationSubmitter(), appSubmitter); - - Assert.assertEquals( - "Resource from proto is not 
the same with original token", - anotherToken.getResource(), r); - - Assert.assertEquals( - "expiryTimeStamp from proto is not the same with original token", - anotherToken.getExpiryTimeStamp(), expiryTimeStamp); - - Assert.assertEquals( - "KeyId from proto is not the same with original token", - anotherToken.getMasterKeyId(), masterKeyId); - - Assert.assertEquals( - "RMIdentifier from proto is not the same with original token", - anotherToken.getRMIdentifier(), rmIdentifier); - - Assert.assertEquals( - "Priority from proto is not the same with original token", - anotherToken.getPriority(), priority); - - Assert.assertEquals( - "CreationTime from proto is not the same with original token", - anotherToken.getCreationTime(), creationTime); - - Assert.assertNull(anotherToken.getLogAggregationContext()); + assertEquals(token, anotherToken, + "Token is not the same after serialization " + "and deserialization."); - Assert.assertEquals(CommonNodeLabelsManager.NO_LABEL, + assertEquals(anotherToken.getContainerID(), containerID, + "ContainerID from proto is not the same with original token"); + + assertEquals(anotherToken.getNmHostAddress(), hostName, + "Hostname from proto is not the same with original token"); + + assertEquals(anotherToken.getApplicationSubmitter(), appSubmitter, + "ApplicationSubmitter from proto is not the same with original token"); + + assertEquals(anotherToken.getResource(), r, + "Resource from proto is not the same with original token"); + + assertEquals(anotherToken.getExpiryTimeStamp(), expiryTimeStamp, + "expiryTimeStamp from proto is not the same with original token"); + + assertEquals(anotherToken.getMasterKeyId(), masterKeyId, + "KeyId from proto is not the same with original token"); + + assertEquals(anotherToken.getRMIdentifier(), rmIdentifier, + "RMIdentifier from proto is not the same with original token"); + + assertEquals(anotherToken.getPriority(), priority, + "Priority from proto is not the same with original token"); + + assertEquals(anotherToken.getCreationTime(), creationTime, + "CreationTime from proto is not the same with original token"); + + assertNull(anotherToken.getLogAggregationContext()); + + assertEquals(CommonNodeLabelsManager.NO_LABEL, anotherToken.getNodeLabelExpression()); - Assert.assertEquals(ContainerType.TASK, + assertEquals(ContainerType.TASK, anotherToken.getContainerType()); - Assert.assertEquals(ExecutionType.GUARANTEED, + assertEquals(ExecutionType.GUARANTEED, anotherToken.getExecutionType()); } @Test - public void testRMDelegationTokenIdentifier() throws IOException { + void testRMDelegationTokenIdentifier() throws IOException { testRMDelegationTokenIdentifier(false); } @Test - public void testRMDelegationTokenIdentifierOldFormat() throws IOException { + void testRMDelegationTokenIdentifierOldFormat() throws IOException { testRMDelegationTokenIdentifier(true); } @@ -333,30 +322,22 @@ public class TestYARNTokenIdentifier { dib.close(); } // verify the whole record equals with original record - Assert.assertEquals( - "Token is not the same after serialization and deserialization.", - originalToken, anotherToken); - Assert.assertEquals( - "owner from proto is not the same with original token", - owner, anotherToken.getOwner()); - Assert.assertEquals( - "renewer from proto is not the same with original token", - renewer, anotherToken.getRenewer()); - Assert.assertEquals( - "realUser from proto is not the same with original token", - realUser, anotherToken.getRealUser()); - Assert.assertEquals( - "issueDate from proto is not the same 
with original token", - issueDate, anotherToken.getIssueDate()); - Assert.assertEquals( - "maxDate from proto is not the same with original token", - maxDate, anotherToken.getMaxDate()); - Assert.assertEquals( - "sequenceNumber from proto is not the same with original token", - sequenceNumber, anotherToken.getSequenceNumber()); - Assert.assertEquals( - "masterKeyId from proto is not the same with original token", - masterKeyId, anotherToken.getMasterKeyId()); + assertEquals(originalToken, anotherToken, + "Token is not the same after serialization and deserialization."); + assertEquals(owner, anotherToken.getOwner(), + "owner from proto is not the same with original token"); + assertEquals(renewer, anotherToken.getRenewer(), + "renewer from proto is not the same with original token"); + assertEquals(realUser, anotherToken.getRealUser(), + "realUser from proto is not the same with original token"); + assertEquals(issueDate, anotherToken.getIssueDate(), + "issueDate from proto is not the same with original token"); + assertEquals(maxDate, anotherToken.getMaxDate(), + "maxDate from proto is not the same with original token"); + assertEquals(sequenceNumber, anotherToken.getSequenceNumber(), + "sequenceNumber from proto is not the same with original token"); + assertEquals(masterKeyId, anotherToken.getMasterKeyId(), + "masterKeyId from proto is not the same with original token"); // Test getProto YARNDelegationTokenIdentifierProto tokenProto = originalToken.getProto(); @@ -372,15 +353,15 @@ public class TestYARNTokenIdentifier { readToken.readFields(db); // Verify if read token equals with original token - Assert.assertEquals("Token from getProto is not the same after " + - "serialization and deserialization.", originalToken, readToken); + assertEquals(originalToken, readToken, "Token from getProto is not the same after " + + "serialization and deserialization."); db.close(); out.close(); } - + @Test - public void testTimelineDelegationTokenIdentifier() throws IOException { - + void testTimelineDelegationTokenIdentifier() throws IOException { + Text owner = new Text("user1"); Text renewer = new Text("user2"); Text realUser = new Text("user3"); @@ -388,50 +369,50 @@ public class TestYARNTokenIdentifier { long maxDate = 2; int sequenceNumber = 3; int masterKeyId = 4; - - TimelineDelegationTokenIdentifier token = + + TimelineDelegationTokenIdentifier token = new TimelineDelegationTokenIdentifier(owner, renewer, realUser); token.setIssueDate(issueDate); token.setMaxDate(maxDate); token.setSequenceNumber(sequenceNumber); token.setMasterKeyId(masterKeyId); - - TimelineDelegationTokenIdentifier anotherToken = + + TimelineDelegationTokenIdentifier anotherToken = new TimelineDelegationTokenIdentifier(); - + byte[] tokenContent = token.getBytes(); DataInputBuffer dib = new DataInputBuffer(); dib.reset(tokenContent, tokenContent.length); anotherToken.readFields(dib); - + // verify the whole record equals with original record - Assert.assertEquals("Token is not the same after serialization " + - "and deserialization.", token, anotherToken); - - Assert.assertEquals("owner from proto is not the same with original token", - anotherToken.getOwner(), owner); - - Assert.assertEquals("renewer from proto is not the same with original token", - anotherToken.getRenewer(), renewer); - - Assert.assertEquals("realUser from proto is not the same with original token", - anotherToken.getRealUser(), realUser); - - Assert.assertEquals("issueDate from proto is not the same with original token", - anotherToken.getIssueDate(), 
issueDate); - - Assert.assertEquals("maxDate from proto is not the same with original token", - anotherToken.getMaxDate(), maxDate); - - Assert.assertEquals("sequenceNumber from proto is not the same with original token", - anotherToken.getSequenceNumber(), sequenceNumber); - - Assert.assertEquals("masterKeyId from proto is not the same with original token", - anotherToken.getMasterKeyId(), masterKeyId); + assertEquals(token, anotherToken, + "Token is not the same after serialization " + "and deserialization."); + + assertEquals(anotherToken.getOwner(), owner, + "owner from proto is not the same with original token"); + + assertEquals(anotherToken.getRenewer(), renewer, + "renewer from proto is not the same with original token"); + + assertEquals(anotherToken.getRealUser(), realUser, + "realUser from proto is not the same with original token"); + + assertEquals(anotherToken.getIssueDate(), issueDate, + "issueDate from proto is not the same with original token"); + + assertEquals(anotherToken.getMaxDate(), maxDate, + "maxDate from proto is not the same with original token"); + + assertEquals(anotherToken.getSequenceNumber(), sequenceNumber, + "sequenceNumber from proto is not the same with original token"); + + assertEquals(anotherToken.getMasterKeyId(), masterKeyId, + "masterKeyId from proto is not the same with original token"); } @Test - public void testParseTimelineDelegationTokenIdentifierRenewer() throws IOException { + void testParseTimelineDelegationTokenIdentifierRenewer() throws IOException { // Server side when generation a timeline DT Configuration conf = new YarnConfiguration(); conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTH_TO_LOCAL, @@ -442,11 +423,11 @@ public class TestYARNTokenIdentifier { Text realUser = new Text("realUser"); TimelineDelegationTokenIdentifier token = new TimelineDelegationTokenIdentifier(owner, renewer, realUser); - Assert.assertEquals(new Text("yarn"), token.getRenewer()); + assertEquals(new Text("yarn"), token.getRenewer()); } @Test - public void testAMContainerTokenIdentifier() throws IOException { + void testAMContainerTokenIdentifier() throws IOException { ContainerId containerID = ContainerId.newContainerId( ApplicationAttemptId.newInstance(ApplicationId.newInstance( 1, 1), 1), 1); @@ -471,10 +452,10 @@ public class TestYARNTokenIdentifier { dib.reset(tokenContent, tokenContent.length); anotherToken.readFields(dib); - Assert.assertEquals(ContainerType.APPLICATION_MASTER, + assertEquals(ContainerType.APPLICATION_MASTER, anotherToken.getContainerType()); - Assert.assertEquals(ExecutionType.GUARANTEED, + assertEquals(ExecutionType.GUARANTEED, anotherToken.getExecutionType()); token = @@ -490,10 +471,10 @@ public class TestYARNTokenIdentifier { dib.reset(tokenContent, tokenContent.length); anotherToken.readFields(dib); - Assert.assertEquals(ContainerType.TASK, + assertEquals(ContainerType.TASK, anotherToken.getContainerType()); - Assert.assertEquals(ExecutionType.OPPORTUNISTIC, + assertEquals(ExecutionType.OPPORTUNISTIC, anotherToken.getExecutionType()); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/server/security/TestApplicationACLsManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/server/security/TestApplicationACLsManager.java index 2db1da90416..0a32bb51459 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/server/security/TestApplicationACLsManager.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/server/security/TestApplicationACLsManager.java @@ -17,18 +17,19 @@ */ package org.apache.hadoop.yarn.server.security; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; - import java.util.HashMap; import java.util.Map; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.yarn.api.records.ApplicationAccessType; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestApplicationACLsManager { @@ -39,14 +40,14 @@ public class TestApplicationACLsManager { private static final String TESTUSER3 = "testuser3"; @Test - public void testCheckAccess() { + void testCheckAccess() { Configuration conf = new Configuration(); conf.setBoolean(YarnConfiguration.YARN_ACL_ENABLE, true); conf.set(YarnConfiguration.YARN_ADMIN_ACL, ADMIN_USER); ApplicationACLsManager aclManager = new ApplicationACLsManager(conf); - Map aclMap = + Map aclMap = new HashMap(); aclMap.put(ApplicationAccessType.VIEW_APP, TESTUSER1 + "," + TESTUSER3); aclMap.put(ApplicationAccessType.MODIFY_APP, TESTUSER1); @@ -56,46 +57,46 @@ public class TestApplicationACLsManager { //User in ACL, should be allowed access UserGroupInformation testUser1 = UserGroupInformation .createRemoteUser(TESTUSER1); - assertTrue(aclManager.checkAccess(testUser1, ApplicationAccessType.VIEW_APP, + assertTrue(aclManager.checkAccess(testUser1, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); - assertTrue(aclManager.checkAccess(testUser1, ApplicationAccessType.MODIFY_APP, + assertTrue(aclManager.checkAccess(testUser1, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); //User NOT in ACL, should not be allowed access UserGroupInformation testUser2 = UserGroupInformation .createRemoteUser(TESTUSER2); - assertFalse(aclManager.checkAccess(testUser2, ApplicationAccessType.VIEW_APP, + assertFalse(aclManager.checkAccess(testUser2, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); - assertFalse(aclManager.checkAccess(testUser2, ApplicationAccessType.MODIFY_APP, + assertFalse(aclManager.checkAccess(testUser2, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); //User has View access, but not modify access UserGroupInformation testUser3 = UserGroupInformation .createRemoteUser(TESTUSER3); - assertTrue(aclManager.checkAccess(testUser3, ApplicationAccessType.VIEW_APP, + assertTrue(aclManager.checkAccess(testUser3, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); - assertFalse(aclManager.checkAccess(testUser3, ApplicationAccessType.MODIFY_APP, + assertFalse(aclManager.checkAccess(testUser3, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); //Application Owner should have all access UserGroupInformation appOwner = UserGroupInformation .createRemoteUser(APP_OWNER); - assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.VIEW_APP, + assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); - assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.MODIFY_APP, + assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); //Admin should have all access UserGroupInformation adminUser = UserGroupInformation 
.createRemoteUser(ADMIN_USER); - assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.VIEW_APP, + assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); - assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.MODIFY_APP, + assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); } @Test - public void testCheckAccessWithNullACLS() { + void testCheckAccessWithNullACLS() { Configuration conf = new Configuration(); conf.setBoolean(YarnConfiguration.YARN_ACL_ENABLE, true); @@ -108,30 +109,30 @@ public class TestApplicationACLsManager { //Application ACL is not added //Application Owner should have all access even if Application ACL is not added - assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.MODIFY_APP, + assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); - assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.VIEW_APP, + assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); //Admin should have all access UserGroupInformation adminUser = UserGroupInformation .createRemoteUser(ADMIN_USER); - assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.VIEW_APP, + assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); - assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.MODIFY_APP, + assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); // A regular user should Not have access UserGroupInformation testUser1 = UserGroupInformation .createRemoteUser(TESTUSER1); - assertFalse(aclManager.checkAccess(testUser1, ApplicationAccessType.VIEW_APP, + assertFalse(aclManager.checkAccess(testUser1, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); - assertFalse(aclManager.checkAccess(testUser1, ApplicationAccessType.MODIFY_APP, + assertFalse(aclManager.checkAccess(testUser1, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); } - + @Test - public void testCheckAccessWithPartialACLS() { + void testCheckAccessWithPartialACLS() { Configuration conf = new Configuration(); conf.setBoolean(YarnConfiguration.YARN_ACL_ENABLE, true); @@ -141,40 +142,40 @@ public class TestApplicationACLsManager { UserGroupInformation appOwner = UserGroupInformation .createRemoteUser(APP_OWNER); // Add only the VIEW ACLS - Map aclMap = + Map aclMap = new HashMap(); - aclMap.put(ApplicationAccessType.VIEW_APP, TESTUSER1 ); + aclMap.put(ApplicationAccessType.VIEW_APP, TESTUSER1); ApplicationId appId = ApplicationId.newInstance(1, 1); aclManager.addApplication(appId, aclMap); //Application Owner should have all access even if Application ACL is not added - assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.MODIFY_APP, + assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); - assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.VIEW_APP, + assertTrue(aclManager.checkAccess(appOwner, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); //Admin should have all access UserGroupInformation adminUser = UserGroupInformation .createRemoteUser(ADMIN_USER); - assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.VIEW_APP, + assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); - assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.MODIFY_APP, + 
assertTrue(aclManager.checkAccess(adminUser, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); // testuser1 should have view access only UserGroupInformation testUser1 = UserGroupInformation .createRemoteUser(TESTUSER1); - assertTrue(aclManager.checkAccess(testUser1, ApplicationAccessType.VIEW_APP, + assertTrue(aclManager.checkAccess(testUser1, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); - assertFalse(aclManager.checkAccess(testUser1, ApplicationAccessType.MODIFY_APP, + assertFalse(aclManager.checkAccess(testUser1, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); - + // A testuser2 should Not have access UserGroupInformation testUser2 = UserGroupInformation .createRemoteUser(TESTUSER2); - assertFalse(aclManager.checkAccess(testUser2, ApplicationAccessType.VIEW_APP, + assertFalse(aclManager.checkAccess(testUser2, ApplicationAccessType.VIEW_APP, APP_OWNER, appId)); - assertFalse(aclManager.checkAccess(testUser2, ApplicationAccessType.MODIFY_APP, + assertFalse(aclManager.checkAccess(testUser2, ApplicationAccessType.MODIFY_APP, APP_OWNER, appId)); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestAdHocLogDumper.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestAdHocLogDumper.java index 4b2545e812c..2b7be0be284 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestAdHocLogDumper.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestAdHocLogDumper.java @@ -18,23 +18,24 @@ package org.apache.hadoop.yarn.util; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.apache.hadoop.util.Time; -import org.apache.log4j.Appender; -import org.apache.log4j.AppenderSkeleton; -import org.apache.log4j.Priority; -import org.apache.log4j.LogManager; -import org.junit.Assert; -import org.junit.Test; - import java.io.File; import java.util.Enumeration; import java.util.HashMap; import java.util.Map; +import org.junit.jupiter.api.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.util.Time; +import org.apache.log4j.Appender; +import org.apache.log4j.AppenderSkeleton; +import org.apache.log4j.LogManager; +import org.apache.log4j.Priority; + import static org.apache.hadoop.util.GenericsUtil.isLog4jLogger; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestAdHocLogDumper { @@ -42,14 +43,14 @@ public class TestAdHocLogDumper { LoggerFactory.getLogger(TestAdHocLogDumper.class); @Test - public void testDumpingSchedulerLogs() throws Exception { + void testDumpingSchedulerLogs() throws Exception { Map levels = new HashMap<>(); String logFilename = "test.log"; Logger logger = LoggerFactory.getLogger(TestAdHocLogDumper.class); if (isLog4jLogger(this.getClass())) { - for (Enumeration appenders = LogManager.getRootLogger(). 
- getAllAppenders(); appenders.hasMoreElements();) { + for (Enumeration appenders = + LogManager.getRootLogger().getAllAppenders(); appenders.hasMoreElements();) { Object obj = appenders.nextElement(); if (obj instanceof AppenderSkeleton) { AppenderSkeleton appender = (AppenderSkeleton) obj; @@ -64,11 +65,11 @@ public class TestAdHocLogDumper { LOG.debug("test message 1"); LOG.info("test message 2"); File logFile = new File(logFilename); - Assert.assertTrue(logFile.exists()); + assertTrue(logFile.exists()); Thread.sleep(2000); long lastWrite = logFile.lastModified(); - Assert.assertTrue(lastWrite < Time.now()); - Assert.assertTrue(logFile.length() != 0); + assertTrue(lastWrite < Time.now()); + assertTrue(logFile.length() != 0); // make sure levels are set back to their original values if (isLog4jLogger(this.getClass())) { @@ -77,12 +78,12 @@ public class TestAdHocLogDumper { Object obj = appenders.nextElement(); if (obj instanceof AppenderSkeleton) { AppenderSkeleton appender = (AppenderSkeleton) obj; - Assert.assertEquals(levels.get(appender), appender.getThreshold()); + assertEquals(levels.get(appender), appender.getThreshold()); } } } boolean del = logFile.delete(); - if(!del) { + if (!del) { LOG.info("Couldn't clean up after test"); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestApps.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestApps.java index 5db8046c15c..5866a3e6946 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestApps.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestApps.java @@ -17,22 +17,23 @@ */ package org.apache.hadoop.yarn.util; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.util.Shell; -import org.junit.Test; - import java.io.File; import java.util.HashMap; import java.util.Map; +import org.junit.jupiter.api.Test; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.util.Shell; + import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestApps { @Test - public void testSetEnvFromInputString() { + void testSetEnvFromInputString() { Map environment = new HashMap(); environment.put("JAVA_HOME", "/path/jdk"); String goodEnv = "a1=1,b_2=2,_c=3,d=4,e=,f_win=%JAVA_HOME%" @@ -64,7 +65,7 @@ public class TestApps { } @Test - public void testSetEnvFromInputProperty() { + void testSetEnvFromInputProperty() { Configuration conf = new Configuration(false); Map env = new HashMap<>(); String propName = "mapreduce.map.env"; @@ -91,7 +92,7 @@ public class TestApps { } @Test - public void testSetEnvFromInputPropertyDefault() { + void testSetEnvFromInputPropertyDefault() { Configuration conf = new Configuration(false); Map env = new HashMap<>(); String propName = "mapreduce.map.env"; @@ -122,7 +123,7 @@ public class TestApps { } @Test - public void testSetEnvFromInputPropertyOverrideDefault() { + void testSetEnvFromInputPropertyOverrideDefault() { Configuration conf = new Configuration(false); Map env = new HashMap<>(); @@ -152,7 +153,7 @@ public class TestApps { } @Test - public void 
testSetEnvFromInputPropertyCommas() { + void testSetEnvFromInputPropertyCommas() { Configuration conf = new Configuration(false); Map env = new HashMap<>(); String propName = "mapreduce.reduce.env"; @@ -176,7 +177,7 @@ public class TestApps { } @Test - public void testSetEnvFromInputPropertyNull() { + void testSetEnvFromInputPropertyNull() { Configuration conf = new Configuration(false); Map env = new HashMap<>(); String propName = "mapreduce.map.env"; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestBoundedAppender.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestBoundedAppender.java index 2b9cfce25fe..411be311974 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestBoundedAppender.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestBoundedAppender.java @@ -18,48 +18,49 @@ package org.apache.hadoop.yarn.util; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; +import org.junit.jupiter.api.Test; -import static org.junit.Assert.assertEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; /** * Test class for {@link BoundedAppender}. */ public class TestBoundedAppender { - @Rule - public ExpectedException expected = ExpectedException.none(); @Test - public void initWithZeroLimitThrowsException() { - expected.expect(IllegalArgumentException.class); - expected.expectMessage("limit should be positive"); + void initWithZeroLimitThrowsException() { + Throwable exception = assertThrows(IllegalArgumentException.class, () -> { - new BoundedAppender(0); + new BoundedAppender(0); + }); + assertTrue(exception.getMessage().contains("limit should be positive")); } @Test - public void nullAppendedNullStringRead() { + void nullAppendedNullStringRead() { final BoundedAppender boundedAppender = new BoundedAppender(4); boundedAppender.append(null); - assertEquals("null appended, \"null\" read", "null", - boundedAppender.toString()); + assertEquals("null", + boundedAppender.toString(), + "null appended, \"null\" read"); } @Test - public void appendBelowLimitOnceValueIsReadCorrectly() { + void appendBelowLimitOnceValueIsReadCorrectly() { final BoundedAppender boundedAppender = new BoundedAppender(2); boundedAppender.append("ab"); - assertEquals("value appended is read correctly", "ab", - boundedAppender.toString()); + assertEquals("ab", + boundedAppender.toString(), + "value appended is read correctly"); } @Test - public void appendValuesBelowLimitAreReadCorrectlyInFifoOrder() { + void appendValuesBelowLimitAreReadCorrectlyInFifoOrder() { final BoundedAppender boundedAppender = new BoundedAppender(3); boundedAppender.append("ab"); @@ -67,13 +68,13 @@ public class TestBoundedAppender { boundedAppender.append("e"); boundedAppender.append("fg"); - assertEquals("last values appended fitting limit are read correctly", - String.format(BoundedAppender.TRUNCATED_MESSAGES_TEMPLATE, 3, 7, "efg"), - boundedAppender.toString()); + assertEquals(String.format(BoundedAppender.TRUNCATED_MESSAGES_TEMPLATE, 3, 7, "efg"), + boundedAppender.toString(), + "last values appended fitting limit are read correctly"); } @Test - public void appendLastAboveLimitPreservesLastMessagePostfix() { + void 
appendLastAboveLimitPreservesLastMessagePostfix() { final BoundedAppender boundedAppender = new BoundedAppender(3); boundedAppender.append("ab"); @@ -81,35 +82,37 @@ public class TestBoundedAppender { boundedAppender.append("fghij"); assertEquals( - "last value appended above limit postfix is read correctly", String + String .format(BoundedAppender.TRUNCATED_MESSAGES_TEMPLATE, 3, 10, "hij"), - boundedAppender.toString()); + boundedAppender.toString(), + "last value appended above limit postfix is read correctly"); } @Test - public void appendMiddleAboveLimitPreservesLastMessageAndMiddlePostfix() { + void appendMiddleAboveLimitPreservesLastMessageAndMiddlePostfix() { final BoundedAppender boundedAppender = new BoundedAppender(3); boundedAppender.append("ab"); boundedAppender.append("cde"); - assertEquals("last value appended above limit postfix is read correctly", - String.format(BoundedAppender.TRUNCATED_MESSAGES_TEMPLATE, 3, 5, "cde"), - boundedAppender.toString()); + assertEquals(String.format(BoundedAppender.TRUNCATED_MESSAGES_TEMPLATE, 3, 5, "cde"), + boundedAppender.toString(), + "last value appended above limit postfix is read correctly"); boundedAppender.append("fg"); assertEquals( - "middle value appended above limit postfix and last value are " - + "read correctly", String.format(BoundedAppender.TRUNCATED_MESSAGES_TEMPLATE, 3, 7, "efg"), - boundedAppender.toString()); + boundedAppender.toString(), + "middle value appended above limit postfix and last value are " + + "read correctly"); boundedAppender.append("hijkl"); assertEquals( - "last value appended above limit postfix is read correctly", String + String .format(BoundedAppender.TRUNCATED_MESSAGES_TEMPLATE, 3, 12, "jkl"), - boundedAppender.toString()); + boundedAppender.toString(), + "last value appended above limit postfix is read correctly"); } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java index 57542218aa2..954efda5264 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java @@ -17,23 +17,25 @@ */ package org.apache.hadoop.yarn.util; -import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNull; - import java.net.URISyntaxException; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.fs.Path; import org.apache.hadoop.yarn.api.TestContainerId; import org.apache.hadoop.yarn.api.records.ContainerId; -import org.apache.hadoop.yarn.api.records.URL; import org.apache.hadoop.yarn.api.records.NodeId; -import org.junit.Test; +import org.apache.hadoop.yarn.api.records.URL; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.assertThrows; public class TestConverterUtils { - + @Test - public void testConvertUrlWithNoPort() throws URISyntaxException { + void testConvertUrlWithNoPort() throws URISyntaxException { Path expectedPath = new Path("hdfs://foo.com"); URL url = URL.fromPath(expectedPath); Path actualPath = url.toPath(); @@ -41,15 +43,15 
@@ public class TestConverterUtils { } @Test - public void testConvertUrlWithUserinfo() throws URISyntaxException { + void testConvertUrlWithUserinfo() throws URISyntaxException { Path expectedPath = new Path("foo://username:password@example.com:8042"); URL url = URL.fromPath(expectedPath); Path actualPath = url.toPath(); assertEquals(expectedPath, actualPath); } - + @Test - public void testContainerId() throws URISyntaxException { + void testContainerId() throws URISyntaxException { ContainerId id = TestContainerId.newContainerId(0, 0, 0, 0); String cid = id.toString(); assertEquals("container_0_0000_00_000000", cid); @@ -58,7 +60,7 @@ public class TestConverterUtils { } @Test - public void testContainerIdWithEpoch() throws URISyntaxException { + void testContainerIdWithEpoch() throws URISyntaxException { ContainerId id = TestContainerId.newContainerId(0, 0, 0, 25645811); String cid = id.toString(); assertEquals("container_0_0000_00_25645811", cid); @@ -85,38 +87,44 @@ public class TestConverterUtils { @Test @SuppressWarnings("deprecation") - public void testContainerIdNull() throws URISyntaxException { - assertNull(ConverterUtils.toString((ContainerId)null)); - } - + void testContainerIdNull() throws URISyntaxException { + assertNull(ConverterUtils.toString((ContainerId) null)); + } + @Test - public void testNodeIdWithDefaultPort() throws URISyntaxException { + void testNodeIdWithDefaultPort() throws URISyntaxException { NodeId nid; - + nid = ConverterUtils.toNodeIdWithDefaultPort("node:10"); assertThat(nid.getPort()).isEqualTo(10); assertThat(nid.getHost()).isEqualTo("node"); - + nid = ConverterUtils.toNodeIdWithDefaultPort("node"); assertThat(nid.getPort()).isEqualTo(0); assertThat(nid.getHost()).isEqualTo("node"); } - @Test(expected = IllegalArgumentException.class) + @Test @SuppressWarnings("deprecation") - public void testInvalidContainerId() { - ContainerId.fromString("container_e20_1423221031460_0003_01"); + void testInvalidContainerId() { + assertThrows(IllegalArgumentException.class, () -> { + ContainerId.fromString("container_e20_1423221031460_0003_01"); + }); } - @Test(expected = IllegalArgumentException.class) + @Test @SuppressWarnings("deprecation") - public void testInvalidAppattemptId() { - ConverterUtils.toApplicationAttemptId("appattempt_1423221031460"); + void testInvalidAppattemptId() { + assertThrows(IllegalArgumentException.class, () -> { + ConverterUtils.toApplicationAttemptId("appattempt_1423221031460"); + }); } - @Test(expected = IllegalArgumentException.class) + @Test @SuppressWarnings("deprecation") - public void testApplicationId() { - ConverterUtils.toApplicationId("application_1423221031460"); + void testApplicationId() { + assertThrows(IllegalArgumentException.class, () -> { + ConverterUtils.toApplicationId("application_1423221031460"); + }); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java index 59b779c071d..eb3e67bd572 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java @@ -18,13 +18,6 @@ package org.apache.hadoop.yarn.util; -import static org.apache.hadoop.fs.CreateFlag.CREATE; -import static org.apache.hadoop.fs.CreateFlag.OVERWRITE; -import static 
org.junit.Assert.assertEquals; -import static org.junit.Assert.assertSame; -import static org.junit.Assert.assertTrue; -import static org.junit.Assume.assumeTrue; - import java.io.File; import java.io.FileOutputStream; import java.io.IOException; @@ -52,14 +45,17 @@ import java.util.zip.GZIPOutputStream; import java.util.zip.ZipEntry; import java.util.zip.ZipOutputStream; -import org.apache.hadoop.util.concurrent.HadoopExecutors; -import org.apache.hadoop.yarn.api.records.URL; -import org.junit.Assert; +import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder; +import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader; +import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; import org.apache.commons.compress.archivers.tar.TarArchiveEntry; import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeys; import org.apache.hadoop.fs.FSDataOutputStream; @@ -70,17 +66,21 @@ import org.apache.hadoop.fs.LocalDirAllocator; import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.util.concurrent.HadoopExecutors; import org.apache.hadoop.yarn.api.records.LocalResource; import org.apache.hadoop.yarn.api.records.LocalResourceType; import org.apache.hadoop.yarn.api.records.LocalResourceVisibility; +import org.apache.hadoop.yarn.api.records.URL; import org.apache.hadoop.yarn.factories.RecordFactory; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; -import org.junit.AfterClass; -import org.junit.Test; -import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder; -import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader; -import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache; +import static org.apache.hadoop.fs.CreateFlag.CREATE; +import static org.apache.hadoop.fs.CreateFlag.OVERWRITE; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertSame; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; +import static org.junit.jupiter.api.Assumptions.assumeTrue; /** * Unit test for the FSDownload class. 
@@ -96,7 +96,7 @@ public class TestFSDownload { }; private Configuration conf = new Configuration(); - @AfterClass + @AfterAll public static void deleteTestDir() throws IOException { FileContext fs = FileContext.getLocalFSFileContext(); fs.delete(new Path("target", TestFSDownload.class.getSimpleName()), true); @@ -270,16 +270,17 @@ public class TestFSDownload { return ret; } - @Test (timeout=10000) - public void testDownloadBadPublic() throws IOException, URISyntaxException, + @Test + @Timeout(10000) + void testDownloadBadPublic() throws IOException, URISyntaxException, InterruptedException { conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, "077"); FileContext files = FileContext.getLocalFSFileContext(conf); final Path basedir = files.makeQualified(new Path("target", - TestFSDownload.class.getSimpleName())); + TestFSDownload.class.getSimpleName())); files.mkdir(basedir, null, true); conf.setStrings(TestFSDownload.class.getName(), basedir.toString()); - + Map rsrcVis = new HashMap(); @@ -288,11 +289,11 @@ public class TestFSDownload { rand.setSeed(sharedSeed); System.out.println("SEED: " + sharedSeed); - Map> pending = - new HashMap>(); + Map> pending = + new HashMap>(); ExecutorService exec = HadoopExecutors.newSingleThreadExecutor(); LocalDirAllocator dirs = - new LocalDirAllocator(TestFSDownload.class.getName()); + new LocalDirAllocator(TestFSDownload.class.getName()); int size = 512; LocalResourceVisibility vis = LocalResourceVisibility.PUBLIC; Path path = new Path(basedir, "test-file"); @@ -300,32 +301,33 @@ public class TestFSDownload { rsrcVis.put(rsrc, vis); Path destPath = dirs.getLocalPathForWrite( basedir.toString(), size, conf); - destPath = new Path (destPath, - Long.toString(uniqueNumberGenerator.incrementAndGet())); + destPath = new Path(destPath, + Long.toString(uniqueNumberGenerator.incrementAndGet())); FSDownload fsd = - new FSDownload(files, UserGroupInformation.getCurrentUser(), conf, - destPath, rsrc); + new FSDownload(files, UserGroupInformation.getCurrentUser(), conf, + destPath, rsrc); pending.put(rsrc, exec.submit(fsd)); exec.shutdown(); - while (!exec.awaitTermination(1000, TimeUnit.MILLISECONDS)); - Assert.assertTrue(pending.get(rsrc).isDone()); + while (!exec.awaitTermination(1000, TimeUnit.MILLISECONDS)) ; + assertTrue(pending.get(rsrc).isDone()); try { - for (Map.Entry> p : pending.entrySet()) { + for (Map.Entry> p : pending.entrySet()) { p.getValue().get(); - Assert.fail("We localized a file that is not public."); + fail("We localized a file that is not public."); } } catch (ExecutionException e) { - Assert.assertTrue(e.getCause() instanceof IOException); + assertTrue(e.getCause() instanceof IOException); } } - @Test (timeout=60000) - public void testDownloadPublicWithStatCache() throws IOException, + @Test + @Timeout(60000) + void testDownloadPublicWithStatCache() throws IOException, URISyntaxException, InterruptedException, ExecutionException { FileContext files = FileContext.getLocalFSFileContext(conf); Path basedir = files.makeQualified(new Path("target", - TestFSDownload.class.getSimpleName())); + TestFSDownload.class.getSimpleName())); // if test directory doesn't have ancestor permission, skip this test FileSystem f = basedir.getFileSystem(conf); @@ -336,28 +338,28 @@ public class TestFSDownload { int size = 512; - final ConcurrentMap counts = - new ConcurrentHashMap(); - final CacheLoader> loader = + final ConcurrentMap counts = + new ConcurrentHashMap(); + final CacheLoader> loader = FSDownload.createStatusCacheLoader(conf); - final 
LoadingCache> statCache = - CacheBuilder.newBuilder().build(new CacheLoader>() { - public Future load(Path path) throws Exception { - // increment the count - AtomicInteger count = counts.get(path); - if (count == null) { - count = new AtomicInteger(0); - AtomicInteger existing = counts.putIfAbsent(path, count); - if (existing != null) { - count = existing; - } - } - count.incrementAndGet(); + final LoadingCache> statCache = + CacheBuilder.newBuilder().build(new CacheLoader>() { + public Future load(Path path) throws Exception { + // increment the count + AtomicInteger count = counts.get(path); + if (count == null) { + count = new AtomicInteger(0); + AtomicInteger existing = counts.putIfAbsent(path, count); + if (existing != null) { + count = existing; + } + } + count.incrementAndGet(); - // use the default loader - return loader.load(path); - } - }); + // use the default loader + return loader.load(path); + } + }); // test FSDownload.isPublic() concurrently final int fileCount = 3; @@ -382,11 +384,11 @@ public class TestFSDownload { try { List> futures = exec.invokeAll(tasks); // files should be public - for (Future future: futures) { + for (Future future : futures) { assertTrue(future.get()); } // for each path exactly one file status call should be made - for (AtomicInteger count: counts.values()) { + for (AtomicInteger count : counts.values()) { assertSame(count.get(), 1); } } finally { @@ -394,16 +396,17 @@ public class TestFSDownload { } } - @Test (timeout=10000) - public void testDownload() throws IOException, URISyntaxException, + @Test + @Timeout(10000) + void testDownload() throws IOException, URISyntaxException, InterruptedException { conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, "077"); FileContext files = FileContext.getLocalFSFileContext(conf); final Path basedir = files.makeQualified(new Path("target", - TestFSDownload.class.getSimpleName())); + TestFSDownload.class.getSimpleName())); files.mkdir(basedir, null, true); conf.setStrings(TestFSDownload.class.getName(), basedir.toString()); - + Map rsrcVis = new HashMap(); @@ -412,16 +415,16 @@ public class TestFSDownload { rand.setSeed(sharedSeed); System.out.println("SEED: " + sharedSeed); - Map> pending = - new HashMap>(); + Map> pending = + new HashMap>(); ExecutorService exec = HadoopExecutors.newSingleThreadExecutor(); LocalDirAllocator dirs = - new LocalDirAllocator(TestFSDownload.class.getName()); + new LocalDirAllocator(TestFSDownload.class.getName()); int[] sizes = new int[10]; for (int i = 0; i < 10; ++i) { sizes[i] = rand.nextInt(512) + 512; LocalResourceVisibility vis = LocalResourceVisibility.PRIVATE; - if (i%2 == 1) { + if (i % 2 == 1) { vis = LocalResourceVisibility.APPLICATION; } Path p = new Path(basedir, "" + i); @@ -429,7 +432,7 @@ public class TestFSDownload { rsrcVis.put(rsrc, vis); Path destPath = dirs.getLocalPathForWrite( basedir.toString(), sizes[i], conf); - destPath = new Path (destPath, + destPath = new Path(destPath, Long.toString(uniqueNumberGenerator.incrementAndGet())); FSDownload fsd = new FSDownload(files, UserGroupInformation.getCurrentUser(), conf, @@ -438,29 +441,29 @@ public class TestFSDownload { } exec.shutdown(); - while (!exec.awaitTermination(1000, TimeUnit.MILLISECONDS)); - for (Future path: pending.values()) { - Assert.assertTrue(path.isDone()); + while (!exec.awaitTermination(1000, TimeUnit.MILLISECONDS)) ; + for (Future path : pending.values()) { + assertTrue(path.isDone()); } try { - for (Map.Entry> p : pending.entrySet()) { + for (Map.Entry> p : pending.entrySet()) { 
Path localized = p.getValue().get(); assertEquals(sizes[Integer.parseInt(localized.getName())], p.getKey() .getSize()); FileStatus status = files.getFileStatus(localized.getParent()); FsPermission perm = status.getPermission(); - assertEquals("Cache directory permissions are incorrect", - new FsPermission((short)0755), perm); + assertEquals(new FsPermission((short) 0755), perm, + "Cache directory permissions are incorrect"); status = files.getFileStatus(localized); perm = status.getPermission(); - System.out.println("File permission " + perm + + System.out.println("File permission " + perm + " for rsrc vis " + p.getKey().getVisibility().name()); assert(rsrcVis.containsKey(p.getKey())); - Assert.assertTrue("Private file should be 500", - perm.toShort() == FSDownload.PRIVATE_FILE_PERMS.toShort()); + assertTrue(perm.toShort() == FSDownload.PRIVATE_FILE_PERMS.toShort(), + "Private file should be 500"); } } catch (ExecutionException e) { throw new IOException("Failed exec", e); @@ -524,15 +527,14 @@ public class TestFSDownload { FileStatus[] childFiles = files.getDefaultFileSystem().listStatus( filestatus.getPath()); for (FileStatus childfile : childFiles) { - if(strFileName.endsWith(".ZIP") && - childfile.getPath().getName().equals(strFileName) && - !childfile.isDirectory()) { - Assert.fail("Failure...After unzip, there should have been a" + - " directory formed with zip file name but found a file. " - + childfile.getPath()); + if (strFileName.endsWith(".ZIP") && childfile.getPath().getName().equals(strFileName) + && !childfile.isDirectory()) { + fail("Failure...After unzip, there should have been a" + + " directory formed with zip file name but found a file. " + + childfile.getPath()); } if (childfile.getPath().getName().startsWith("tmp")) { - Assert.fail("Tmp File should not have been there " + fail("Tmp File should not have been there " + childfile.getPath()); } } @@ -543,20 +545,23 @@ public class TestFSDownload { } } - @Test (timeout=10000) - public void testDownloadArchive() throws IOException, URISyntaxException, + @Test + @Timeout(10000) + void testDownloadArchive() throws IOException, URISyntaxException, InterruptedException { downloadWithFileType(TEST_FILE_TYPE.TAR); } - @Test (timeout=10000) - public void testDownloadPatternJar() throws IOException, URISyntaxException, + @Test + @Timeout(10000) + void testDownloadPatternJar() throws IOException, URISyntaxException, InterruptedException { downloadWithFileType(TEST_FILE_TYPE.JAR); } - @Test (timeout=10000) - public void testDownloadArchiveZip() throws IOException, URISyntaxException, + @Test + @Timeout(10000) + void testDownloadArchiveZip() throws IOException, URISyntaxException, InterruptedException { downloadWithFileType(TEST_FILE_TYPE.ZIP); } @@ -564,8 +569,9 @@ public class TestFSDownload { /* * To test fix for YARN-3029 */ - @Test (timeout=10000) - public void testDownloadArchiveZipWithTurkishLocale() throws IOException, + @Test + @Timeout(10000) + void testDownloadArchiveZipWithTurkishLocale() throws IOException, URISyntaxException, InterruptedException { Locale defaultLocale = Locale.getDefault(); // Set to Turkish @@ -576,8 +582,9 @@ public class TestFSDownload { Locale.setDefault(defaultLocale); } - @Test (timeout=10000) - public void testDownloadArchiveTgz() throws IOException, URISyntaxException, + @Test + @Timeout(10000) + void testDownloadArchiveTgz() throws IOException, URISyntaxException, InterruptedException { downloadWithFileType(TEST_FILE_TYPE.TGZ); } @@ -588,11 +595,11 @@ public class TestFSDownload { FileStatus 
status = files.getFileStatus(p); if (status.isDirectory()) { if (vis == LocalResourceVisibility.PUBLIC) { - Assert.assertTrue(status.getPermission().toShort() == + assertTrue(status.getPermission().toShort() == FSDownload.PUBLIC_DIR_PERMS.toShort()); } else { - Assert.assertTrue(status.getPermission().toShort() == + assertTrue(status.getPermission().toShort() == FSDownload.PRIVATE_DIR_PERMS.toShort()); } if (!status.isSymlink()) { @@ -604,40 +611,41 @@ public class TestFSDownload { } else { if (vis == LocalResourceVisibility.PUBLIC) { - Assert.assertTrue(status.getPermission().toShort() == + assertTrue(status.getPermission().toShort() == FSDownload.PUBLIC_FILE_PERMS.toShort()); } else { - Assert.assertTrue(status.getPermission().toShort() == + assertTrue(status.getPermission().toShort() == FSDownload.PRIVATE_FILE_PERMS.toShort()); } } } - @Test (timeout=10000) - public void testDirDownload() throws IOException, InterruptedException { + @Test + @Timeout(10000) + void testDirDownload() throws IOException, InterruptedException { FileContext files = FileContext.getLocalFSFileContext(conf); final Path basedir = files.makeQualified(new Path("target", - TestFSDownload.class.getSimpleName())); + TestFSDownload.class.getSimpleName())); files.mkdir(basedir, null, true); conf.setStrings(TestFSDownload.class.getName(), basedir.toString()); - + Map rsrcVis = new HashMap(); - + Random rand = new Random(); long sharedSeed = rand.nextLong(); rand.setSeed(sharedSeed); System.out.println("SEED: " + sharedSeed); - Map> pending = - new HashMap>(); + Map> pending = + new HashMap>(); ExecutorService exec = HadoopExecutors.newSingleThreadExecutor(); LocalDirAllocator dirs = - new LocalDirAllocator(TestFSDownload.class.getName()); + new LocalDirAllocator(TestFSDownload.class.getName()); for (int i = 0; i < 5; ++i) { LocalResourceVisibility vis = LocalResourceVisibility.PRIVATE; - if (i%2 == 1) { + if (i % 2 == 1) { vis = LocalResourceVisibility.APPLICATION; } @@ -646,7 +654,7 @@ public class TestFSDownload { rsrcVis.put(rsrc, vis); Path destPath = dirs.getLocalPathForWrite( basedir.toString(), conf); - destPath = new Path (destPath, + destPath = new Path(destPath, Long.toString(uniqueNumberGenerator.incrementAndGet())); FSDownload fsd = new FSDownload(files, UserGroupInformation.getCurrentUser(), conf, @@ -655,21 +663,21 @@ public class TestFSDownload { } exec.shutdown(); - while (!exec.awaitTermination(1000, TimeUnit.MILLISECONDS)); - for (Future path: pending.values()) { - Assert.assertTrue(path.isDone()); + while (!exec.awaitTermination(1000, TimeUnit.MILLISECONDS)) ; + for (Future path : pending.values()) { + assertTrue(path.isDone()); } try { - - for (Map.Entry> p : pending.entrySet()) { + + for (Map.Entry> p : pending.entrySet()) { Path localized = p.getValue().get(); FileStatus status = files.getFileStatus(localized); System.out.println("Testing path " + localized); assert(status.isDirectory()); assert(rsrcVis.containsKey(p.getKey())); - + verifyPermsRecursively(localized.getFileSystem(conf), files, localized, rsrcVis.get(p.getKey())); } @@ -678,8 +686,9 @@ public class TestFSDownload { } } - @Test (timeout=10000) - public void testUniqueDestinationPath() throws Exception { + @Test + @Timeout(10000) + void testUniqueDestinationPath() throws Exception { FileContext files = FileContext.getLocalFSFileContext(conf); final Path basedir = files.makeQualified(new Path("target", TestFSDownload.class.getSimpleName())); @@ -704,12 +713,12 @@ public class TestFSDownload { destPath, rsrc); Future rPath = 
singleThreadedExec.submit(fsd); singleThreadedExec.shutdown(); - while (!singleThreadedExec.awaitTermination(1000, TimeUnit.MILLISECONDS)); - Assert.assertTrue(rPath.isDone()); + while (!singleThreadedExec.awaitTermination(1000, TimeUnit.MILLISECONDS)) ; + assertTrue(rPath.isDone()); // Now FSDownload will not create a random directory to localize the // resource. Therefore the final localizedPath for the resource should be // destination directory (passed as an argument) + file name. - Assert.assertEquals(destPath, rPath.get().getParent()); + assertEquals(destPath, rPath.get().getParent()); } /** @@ -717,8 +726,9 @@ public class TestFSDownload { * from modification to the local resource's timestamp on the source FS just * before the download of this local resource has started. */ - @Test(timeout=10000) - public void testResourceTimestampChangeDuringDownload() + @Test + @Timeout(10000) + void testResourceTimestampChangeDuringDownload() throws IOException, InterruptedException { conf = new Configuration(); FileContext files = FileContext.getLocalFSFileContext(conf); @@ -759,7 +769,7 @@ public class TestFSDownload { FileSystem sourceFs = sourceFsPath.getFileSystem(conf); sourceFs.setTimes(sourceFsPath, modifiedFSTimestamp, modifiedFSTimestamp); } catch (URISyntaxException use) { - Assert.fail("No exception expected."); + fail("No exception expected."); } // Execute the FSDownload operation. @@ -770,19 +780,19 @@ public class TestFSDownload { exec.shutdown(); exec.awaitTermination(1000, TimeUnit.MILLISECONDS); - Assert.assertTrue(pending.get(localResource).isDone()); + assertTrue(pending.get(localResource).isDone()); try { for (Map.Entry> p : pending.entrySet()) { p.getValue().get(); } - Assert.fail("Exception expected from timestamp update during download"); + fail("Exception expected from timestamp update during download"); } catch (ExecutionException ee) { - Assert.assertTrue(ee.getCause() instanceof IOException); - Assert.assertTrue("Exception contains original timestamp", - ee.getMessage().contains(Times.formatISO8601(origLRTimestamp))); - Assert.assertTrue("Exception contains modified timestamp", - ee.getMessage().contains(Times.formatISO8601(modifiedFSTimestamp))); + assertTrue(ee.getCause() instanceof IOException); + assertTrue(ee.getMessage().contains(Times.formatISO8601(origLRTimestamp)), + "Exception contains original timestamp"); + assertTrue(ee.getMessage().contains(Times.formatISO8601(modifiedFSTimestamp)), + "Exception contains modified timestamp"); } } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestLRUCacheHashMap.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestLRUCacheHashMap.java index 9d3ec32975a..5cf9a61dcad 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestLRUCacheHashMap.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestLRUCacheHashMap.java @@ -19,9 +19,12 @@ package org.apache.hadoop.yarn.util; import java.io.IOException; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.exceptions.YarnException; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; /** * Test class to validate the correctness of the {@code LRUCacheHashMap}. @@ -34,7 +37,7 @@ public class TestLRUCacheHashMap { * expected. 
*/ @Test - public void testLRUCache() + void testLRUCache() throws YarnException, IOException, InterruptedException { int mapSize = 5; @@ -48,11 +51,11 @@ public class TestLRUCacheHashMap { map.put("4", 4); map.put("5", 5); - Assert.assertEquals(mapSize, map.size()); + assertEquals(mapSize, map.size()); // Check if all the elements in the map are from 1 to 5 for (int i = 1; i < mapSize; i++) { - Assert.assertTrue(map.containsKey(Integer.toString(i))); + assertTrue(map.containsKey(Integer.toString(i))); } map.put("6", 6); @@ -60,14 +63,14 @@ public class TestLRUCacheHashMap { map.put("7", 7); map.put("8", 8); - Assert.assertEquals(mapSize, map.size()); + assertEquals(mapSize, map.size()); // Check if all the elements in the map are from 5 to 8 and the 3 for (int i = 5; i < mapSize; i++) { - Assert.assertTrue(map.containsKey(Integer.toString(i))); + assertTrue(map.containsKey(Integer.toString(i))); } - Assert.assertTrue(map.containsKey("3")); + assertTrue(map.containsKey("3")); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestLog4jWarningErrorMetricsAppender.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestLog4jWarningErrorMetricsAppender.java index 46c891a55bd..346239f8e1b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestLog4jWarningErrorMetricsAppender.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestLog4jWarningErrorMetricsAppender.java @@ -18,20 +18,22 @@ package org.apache.hadoop.yarn.util; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; + +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.slf4j.Marker; import org.slf4j.MarkerFactory; -import org.apache.log4j.LogManager; -import org.apache.log4j.Level; + import org.apache.hadoop.util.Time; -import org.junit.Assert; -import org.junit.Test; +import org.apache.log4j.Level; +import org.apache.log4j.LogManager; - -import java.util.ArrayList; -import java.util.List; -import java.util.Map; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestLog4jWarningErrorMetricsAppender { @@ -81,186 +83,179 @@ public class TestLog4jWarningErrorMetricsAppender { } @Test - public void testPurge() throws Exception { + void testPurge() throws Exception { setupAppender(2, 1, 1); logMessages(Level.ERROR, "test message 1", 1); cutoff.clear(); cutoff.add(0L); - Assert.assertEquals(1, appender.getErrorCounts(cutoff).size()); - Assert.assertEquals(1, appender.getErrorCounts(cutoff).get(0).longValue()); - Assert.assertEquals(1, appender.getErrorMessagesAndCounts(cutoff).get(0) - .size()); + assertEquals(1, appender.getErrorCounts(cutoff).size()); + assertEquals(1, appender.getErrorCounts(cutoff).get(0).longValue()); + assertEquals(1, appender.getErrorMessagesAndCounts(cutoff).get(0) + .size()); Thread.sleep(3000); - Assert.assertEquals(1, appender.getErrorCounts(cutoff).size()); - Assert.assertEquals(0, appender.getErrorCounts(cutoff).get(0).longValue()); - Assert.assertEquals(0, appender.getErrorMessagesAndCounts(cutoff).get(0) - .size()); + assertEquals(1, appender.getErrorCounts(cutoff).size()); + assertEquals(0, appender.getErrorCounts(cutoff).get(0).longValue()); + assertEquals(0, appender.getErrorMessagesAndCounts(cutoff).get(0) + .size()); setupAppender(2, 1000, 
2); logMessages(Level.ERROR, "test message 1", 3); logMessages(Level.ERROR, "test message 2", 2); - Assert.assertEquals(1, appender.getErrorCounts(cutoff).size()); - Assert.assertEquals(5, appender.getErrorCounts(cutoff).get(0).longValue()); - Assert.assertEquals(2, appender.getErrorMessagesAndCounts(cutoff).get(0) - .size()); + assertEquals(1, appender.getErrorCounts(cutoff).size()); + assertEquals(5, appender.getErrorCounts(cutoff).get(0).longValue()); + assertEquals(2, appender.getErrorMessagesAndCounts(cutoff).get(0) + .size()); logMessages(Level.ERROR, "test message 3", 3); Thread.sleep(2000); - Assert.assertEquals(8, appender.getErrorCounts(cutoff).get(0).longValue()); - Assert.assertEquals(2, appender.getErrorMessagesAndCounts(cutoff).get(0) - .size()); + assertEquals(8, appender.getErrorCounts(cutoff).get(0).longValue()); + assertEquals(2, appender.getErrorMessagesAndCounts(cutoff).get(0) + .size()); } @Test - public void testErrorCounts() throws Exception { + void testErrorCounts() throws Exception { cutoff.clear(); setupAppender(100, 100, 100); cutoff.add(0L); logMessages(Level.ERROR, "test message 1", 2); logMessages(Level.ERROR, "test message 2", 3); - Assert.assertEquals(1, appender.getErrorCounts(cutoff).size()); - Assert.assertEquals(1, appender.getWarningCounts(cutoff).size()); - Assert.assertEquals(5, appender.getErrorCounts(cutoff).get(0).longValue()); - Assert - .assertEquals(0, appender.getWarningCounts(cutoff).get(0).longValue()); + assertEquals(1, appender.getErrorCounts(cutoff).size()); + assertEquals(1, appender.getWarningCounts(cutoff).size()); + assertEquals(5, appender.getErrorCounts(cutoff).get(0).longValue()); + assertEquals(0, appender.getWarningCounts(cutoff).get(0).longValue()); Thread.sleep(1000); cutoff.add(Time.now() / 1000); logMessages(Level.ERROR, "test message 3", 2); - Assert.assertEquals(2, appender.getErrorCounts(cutoff).size()); - Assert.assertEquals(2, appender.getWarningCounts(cutoff).size()); - Assert.assertEquals(7, appender.getErrorCounts(cutoff).get(0).longValue()); - Assert.assertEquals(2, appender.getErrorCounts(cutoff).get(1).longValue()); - Assert - .assertEquals(0, appender.getWarningCounts(cutoff).get(0).longValue()); - Assert - .assertEquals(0, appender.getWarningCounts(cutoff).get(1).longValue()); + assertEquals(2, appender.getErrorCounts(cutoff).size()); + assertEquals(2, appender.getWarningCounts(cutoff).size()); + assertEquals(7, appender.getErrorCounts(cutoff).get(0).longValue()); + assertEquals(2, appender.getErrorCounts(cutoff).get(1).longValue()); + assertEquals(0, appender.getWarningCounts(cutoff).get(0).longValue()); + assertEquals(0, appender.getWarningCounts(cutoff).get(1).longValue()); } @Test - public void testWarningCounts() throws Exception { + void testWarningCounts() throws Exception { cutoff.clear(); setupAppender(100, 100, 100); cutoff.add(0L); logMessages(Level.WARN, "test message 1", 2); logMessages(Level.WARN, "test message 2", 3); - Assert.assertEquals(1, appender.getErrorCounts(cutoff).size()); - Assert.assertEquals(1, appender.getWarningCounts(cutoff).size()); - Assert.assertEquals(0, appender.getErrorCounts(cutoff).get(0).longValue()); - Assert - .assertEquals(5, appender.getWarningCounts(cutoff).get(0).longValue()); + assertEquals(1, appender.getErrorCounts(cutoff).size()); + assertEquals(1, appender.getWarningCounts(cutoff).size()); + assertEquals(0, appender.getErrorCounts(cutoff).get(0).longValue()); + assertEquals(5, appender.getWarningCounts(cutoff).get(0).longValue()); Thread.sleep(1000); 
cutoff.add(Time.now() / 1000); logMessages(Level.WARN, "test message 3", 2); - Assert.assertEquals(2, appender.getErrorCounts(cutoff).size()); - Assert.assertEquals(2, appender.getWarningCounts(cutoff).size()); - Assert.assertEquals(0, appender.getErrorCounts(cutoff).get(0).longValue()); - Assert.assertEquals(0, appender.getErrorCounts(cutoff).get(1).longValue()); - Assert - .assertEquals(7, appender.getWarningCounts(cutoff).get(0).longValue()); - Assert - .assertEquals(2, appender.getWarningCounts(cutoff).get(1).longValue()); + assertEquals(2, appender.getErrorCounts(cutoff).size()); + assertEquals(2, appender.getWarningCounts(cutoff).size()); + assertEquals(0, appender.getErrorCounts(cutoff).get(0).longValue()); + assertEquals(0, appender.getErrorCounts(cutoff).get(1).longValue()); + assertEquals(7, appender.getWarningCounts(cutoff).get(0).longValue()); + assertEquals(2, appender.getWarningCounts(cutoff).get(1).longValue()); } @Test - public void testWarningMessages() throws Exception { + void testWarningMessages() throws Exception { cutoff.clear(); setupAppender(100, 100, 100); cutoff.add(0L); logMessages(Level.WARN, "test message 1", 2); logMessages(Level.WARN, "test message 2", 3); - Assert.assertEquals(1, appender.getErrorMessagesAndCounts(cutoff).size()); - Assert.assertEquals(1, appender.getWarningMessagesAndCounts(cutoff).size()); + assertEquals(1, appender.getErrorMessagesAndCounts(cutoff).size()); + assertEquals(1, appender.getWarningMessagesAndCounts(cutoff).size()); Map errorsMap = appender.getErrorMessagesAndCounts(cutoff).get(0); Map warningsMap = appender.getWarningMessagesAndCounts(cutoff).get(0); - Assert.assertEquals(0, errorsMap.size()); - Assert.assertEquals(2, warningsMap.size()); - Assert.assertTrue(warningsMap.containsKey("test message 1")); - Assert.assertTrue(warningsMap.containsKey("test message 2")); + assertEquals(0, errorsMap.size()); + assertEquals(2, warningsMap.size()); + assertTrue(warningsMap.containsKey("test message 1")); + assertTrue(warningsMap.containsKey("test message 2")); Log4jWarningErrorMetricsAppender.Element msg1Info = warningsMap.get("test message 1"); Log4jWarningErrorMetricsAppender.Element msg2Info = warningsMap.get("test message 2"); - Assert.assertEquals(2, msg1Info.count.intValue()); - Assert.assertEquals(3, msg2Info.count.intValue()); + assertEquals(2, msg1Info.count.intValue()); + assertEquals(3, msg2Info.count.intValue()); Thread.sleep(1000); cutoff.add(Time.now() / 1000); logMessages(Level.WARN, "test message 3", 2); - Assert.assertEquals(2, appender.getErrorMessagesAndCounts(cutoff).size()); - Assert.assertEquals(2, appender.getWarningMessagesAndCounts(cutoff).size()); + assertEquals(2, appender.getErrorMessagesAndCounts(cutoff).size()); + assertEquals(2, appender.getWarningMessagesAndCounts(cutoff).size()); errorsMap = appender.getErrorMessagesAndCounts(cutoff).get(0); warningsMap = appender.getWarningMessagesAndCounts(cutoff).get(0); - Assert.assertEquals(0, errorsMap.size()); - Assert.assertEquals(3, warningsMap.size()); - Assert.assertTrue(warningsMap.containsKey("test message 3")); + assertEquals(0, errorsMap.size()); + assertEquals(3, warningsMap.size()); + assertTrue(warningsMap.containsKey("test message 3")); errorsMap = appender.getErrorMessagesAndCounts(cutoff).get(1); warningsMap = appender.getWarningMessagesAndCounts(cutoff).get(1); - Assert.assertEquals(0, errorsMap.size()); - Assert.assertEquals(1, warningsMap.size()); - Assert.assertTrue(warningsMap.containsKey("test message 3")); + assertEquals(0, errorsMap.size()); 
+ assertEquals(1, warningsMap.size()); + assertTrue(warningsMap.containsKey("test message 3")); Log4jWarningErrorMetricsAppender.Element msg3Info = warningsMap.get("test message 3"); - Assert.assertEquals(2, msg3Info.count.intValue()); + assertEquals(2, msg3Info.count.intValue()); } @Test - public void testErrorMessages() throws Exception { + void testErrorMessages() throws Exception { cutoff.clear(); setupAppender(100, 100, 100); cutoff.add(0L); logMessages(Level.ERROR, "test message 1", 2); logMessages(Level.ERROR, "test message 2", 3); - Assert.assertEquals(1, appender.getErrorMessagesAndCounts(cutoff).size()); - Assert.assertEquals(1, appender.getWarningMessagesAndCounts(cutoff).size()); + assertEquals(1, appender.getErrorMessagesAndCounts(cutoff).size()); + assertEquals(1, appender.getWarningMessagesAndCounts(cutoff).size()); Map errorsMap = appender.getErrorMessagesAndCounts(cutoff).get(0); Map warningsMap = appender.getWarningMessagesAndCounts(cutoff).get(0); - Assert.assertEquals(2, errorsMap.size()); - Assert.assertEquals(0, warningsMap.size()); - Assert.assertTrue(errorsMap.containsKey("test message 1")); - Assert.assertTrue(errorsMap.containsKey("test message 2")); + assertEquals(2, errorsMap.size()); + assertEquals(0, warningsMap.size()); + assertTrue(errorsMap.containsKey("test message 1")); + assertTrue(errorsMap.containsKey("test message 2")); Log4jWarningErrorMetricsAppender.Element msg1Info = errorsMap.get("test message 1"); Log4jWarningErrorMetricsAppender.Element msg2Info = errorsMap.get("test message 2"); - Assert.assertEquals(2, msg1Info.count.intValue()); - Assert.assertEquals(3, msg2Info.count.intValue()); + assertEquals(2, msg1Info.count.intValue()); + assertEquals(3, msg2Info.count.intValue()); Thread.sleep(1000); cutoff.add(Time.now() / 1000); logMessages(Level.ERROR, "test message 3", 2); - Assert.assertEquals(2, appender.getErrorMessagesAndCounts(cutoff).size()); - Assert.assertEquals(2, appender.getWarningMessagesAndCounts(cutoff).size()); + assertEquals(2, appender.getErrorMessagesAndCounts(cutoff).size()); + assertEquals(2, appender.getWarningMessagesAndCounts(cutoff).size()); errorsMap = appender.getErrorMessagesAndCounts(cutoff).get(0); warningsMap = appender.getWarningMessagesAndCounts(cutoff).get(0); - Assert.assertEquals(3, errorsMap.size()); - Assert.assertEquals(0, warningsMap.size()); - Assert.assertTrue(errorsMap.containsKey("test message 3")); + assertEquals(3, errorsMap.size()); + assertEquals(0, warningsMap.size()); + assertTrue(errorsMap.containsKey("test message 3")); errorsMap = appender.getErrorMessagesAndCounts(cutoff).get(1); warningsMap = appender.getWarningMessagesAndCounts(cutoff).get(1); - Assert.assertEquals(1, errorsMap.size()); - Assert.assertEquals(0, warningsMap.size()); - Assert.assertTrue(errorsMap.containsKey("test message 3")); + assertEquals(1, errorsMap.size()); + assertEquals(0, warningsMap.size()); + assertTrue(errorsMap.containsKey("test message 3")); Log4jWarningErrorMetricsAppender.Element msg3Info = errorsMap.get("test message 3"); - Assert.assertEquals(2, msg3Info.count.intValue()); + assertEquals(2, msg3Info.count.intValue()); } @Test - public void testInfoDebugTrace() { + void testInfoDebugTrace() { cutoff.clear(); setupAppender(100, 100, 100); cutoff.add(0L); logMessages(Level.INFO, "test message 1", 2); logMessages(Level.DEBUG, "test message 2", 2); logMessages(Level.TRACE, "test message 3", 2); - Assert.assertEquals(1, appender.getErrorMessagesAndCounts(cutoff).size()); - Assert.assertEquals(1, 
appender.getWarningMessagesAndCounts(cutoff).size()); - Assert.assertEquals(1, appender.getErrorCounts(cutoff).size()); - Assert.assertEquals(1, appender.getWarningCounts(cutoff).size()); - Assert.assertEquals(0, appender.getErrorCounts(cutoff).get(0).longValue()); - Assert - .assertEquals(0, appender.getWarningCounts(cutoff).get(0).longValue()); - Assert.assertEquals(0, appender.getErrorMessagesAndCounts(cutoff).get(0) - .size()); - Assert.assertEquals(0, appender.getWarningMessagesAndCounts(cutoff).get(0) - .size()); + assertEquals(1, appender.getErrorMessagesAndCounts(cutoff).size()); + assertEquals(1, appender.getWarningMessagesAndCounts(cutoff).size()); + assertEquals(1, appender.getErrorCounts(cutoff).size()); + assertEquals(1, appender.getWarningCounts(cutoff).size()); + assertEquals(0, appender.getErrorCounts(cutoff).get(0).longValue()); + assertEquals(0, appender.getWarningCounts(cutoff).get(0).longValue()); + assertEquals(0, appender.getErrorMessagesAndCounts(cutoff).get(0) + .size()); + assertEquals(0, appender.getWarningMessagesAndCounts(cutoff).get(0) + .size()); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java index 8065f40eb43..09dfb92f1d0 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java @@ -18,11 +18,6 @@ package org.apache.hadoop.yarn.util; -import static org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.KB_TO_BYTES; -import static org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree.UNAVAILABLE; -import static org.junit.Assert.fail; -import static org.junit.Assume.assumeTrue; - import java.io.BufferedReader; import java.io.BufferedWriter; import java.io.File; @@ -37,9 +32,13 @@ import java.util.Vector; import java.util.regex.Matcher; import java.util.regex.Pattern; -import org.apache.commons.io.FileUtils; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + +import org.apache.commons.io.FileUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileContext; import org.apache.hadoop.fs.FileUtil; @@ -52,19 +51,23 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.MemInfo; import org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.ProcessSmapMemoryInfo; import org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.ProcessTreeSmapMemInfo; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; + +import static org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.KB_TO_BYTES; +import static org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree.UNAVAILABLE; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; +import static org.junit.jupiter.api.Assumptions.assumeTrue; /** * A JUnit test to test ProcfsBasedProcessTree. 
*/ public class TestProcfsBasedProcessTree { - private static final Logger LOG = LoggerFactory - .getLogger(TestProcfsBasedProcessTree.class); - protected static File TEST_ROOT_DIR = new File("target", - TestProcfsBasedProcessTree.class.getName() + "-localDir"); + private static final Logger LOG = LoggerFactory.getLogger(TestProcfsBasedProcessTree.class); + protected static File TEST_ROOT_DIR = + new File("target", TestProcfsBasedProcessTree.class.getName() + "-localDir"); private ShellCommandExecutor shexec = null; private String pidFile, lowestDescendant, lostDescendant; @@ -111,21 +114,22 @@ public class TestProcfsBasedProcessTree { return getPidFromPidFile(pidFile); } - @Before + @BeforeEach public void setup() throws IOException { assumeTrue(Shell.LINUX); FileContext.getLocalFSFileContext().delete( new Path(TEST_ROOT_DIR.getAbsolutePath()), true); } - @Test(timeout = 30000) - public void testProcessTree() throws Exception { + @Test + @Timeout(30000) + void testProcessTree() throws Exception { try { - Assert.assertTrue(ProcfsBasedProcessTree.isAvailable()); + assertTrue(ProcfsBasedProcessTree.isAvailable()); } catch (Exception e) { LOG.info(StringUtils.stringifyException(e)); - Assert.assertTrue("ProcfsBaseProcessTree should be available on Linux", - false); + assertTrue(false, + "ProcfsBaseProcessTree should be available on Linux"); return; } // create shell script @@ -156,7 +160,7 @@ public class TestProcfsBasedProcessTree { + "(sleep 300&\n" + "echo $! > " + lostDescendant + ")\n" + " while true\n do\n" + " sleep 5\n" + " done\n" + "fi", - StandardCharsets.UTF_8); + StandardCharsets.UTF_8); Thread t = new RogueTaskThread(); t.start(); @@ -182,8 +186,8 @@ public class TestProcfsBasedProcessTree { // Verify the orphaned pid is In process tree String lostpid = getPidFromPidFile(lostDescendant); LOG.info("Orphaned pid: " + lostpid); - Assert.assertTrue("Child process owned by init escaped process tree.", - p.contains(lostpid)); + assertTrue(p.contains(lostpid), + "Child process owned by init escaped process tree."); // Get the process-tree dump String processTreeDump = p.getProcessTreeDump(); @@ -208,18 +212,18 @@ public class TestProcfsBasedProcessTree { } LOG.info("Process-tree dump follows: \n" + processTreeDump); - Assert.assertTrue("Process-tree dump doesn't start with a proper header", - processTreeDump.startsWith("\t|- PID PPID PGRPID SESSID CMD_NAME " - + "USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) " - + "RSSMEM_USAGE(PAGES) FULL_CMD_LINE\n")); + assertTrue(processTreeDump.startsWith("\t|- PID PPID PGRPID SESSID CMD_NAME " + + "USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) " + + "RSSMEM_USAGE(PAGES) FULL_CMD_LINE\n"), + "Process-tree dump doesn't start with a proper header"); for (int i = N; i >= 0; i--) { String cmdLineDump = "\\|- [0-9]+ [0-9]+ [0-9]+ [0-9]+ \\(sh\\)" + " [0-9]+ [0-9]+ [0-9]+ [0-9]+ sh " + shellScript + " " + i; Pattern pat = Pattern.compile(cmdLineDump); Matcher mat = pat.matcher(processTreeDump); - Assert.assertTrue("Process-tree dump doesn't contain the cmdLineDump of " - + i + "th process!", mat.find()); + assertTrue(mat.find(), "Process-tree dump doesn't contain the cmdLineDump of " + + i + "th process!"); } // Not able to join thread sometimes when forking with large N. @@ -232,12 +236,12 @@ public class TestProcfsBasedProcessTree { // ProcessTree is gone now. Any further calls should be sane. 
p.updateProcessTree(); - Assert.assertFalse("ProcessTree must have been gone", isAlive(pid)); - Assert.assertTrue( - "vmem for the gone-process is " + p.getVirtualMemorySize() - + " . It should be UNAVAILABLE(-1).", - p.getVirtualMemorySize() == UNAVAILABLE); - Assert.assertEquals("[ ]", p.toString()); + assertFalse(isAlive(pid), "ProcessTree must have been gone"); + assertTrue( + p.getVirtualMemorySize() == UNAVAILABLE, + "vmem for the gone-process is " + p.getVirtualMemorySize() + + " . It should be UNAVAILABLE(-1)."); + assertEquals("[ ]", p.toString()); } protected ProcfsBasedProcessTree createProcessTree(String pid) { @@ -395,11 +399,12 @@ public class TestProcfsBasedProcessTree { * if there was a problem setting up the fake procfs directories or * files. */ - @Test(timeout = 30000) - public void testCpuAndMemoryForProcessTree() throws IOException { + @Test + @Timeout(30000) + void testCpuAndMemoryForProcessTree() throws IOException { // test processes - String[] pids = { "100", "200", "300", "400" }; + String[] pids = {"100", "200", "300", "400"}; ControlledClock testClock = new ControlledClock(); testClock.setTime(0); // create the fake procfs root directory. @@ -442,34 +447,35 @@ public class TestProcfsBasedProcessTree { processTree.updateProcessTree(); // verify virtual memory - Assert.assertEquals("Virtual memory does not match", 600000L, - processTree.getVirtualMemorySize()); + assertEquals(600000L, processTree.getVirtualMemorySize(), "Virtual memory does not match"); // verify rss memory long cumuRssMem = ProcfsBasedProcessTree.PAGE_SIZE > 0 - ? 600L * ProcfsBasedProcessTree.PAGE_SIZE : - ResourceCalculatorProcessTree.UNAVAILABLE; - Assert.assertEquals("rss memory does not match", cumuRssMem, - processTree.getRssMemorySize()); + ? 600L * ProcfsBasedProcessTree.PAGE_SIZE : + ResourceCalculatorProcessTree.UNAVAILABLE; + assertEquals(cumuRssMem, + processTree.getRssMemorySize(), + "rss memory does not match"); // verify cumulative cpu time long cumuCpuTime = ProcfsBasedProcessTree.JIFFY_LENGTH_IN_MILLIS > 0 ? 7200L * ProcfsBasedProcessTree.JIFFY_LENGTH_IN_MILLIS : 0L; - Assert.assertEquals("Cumulative cpu time does not match", cumuCpuTime, - processTree.getCumulativeCpuTime()); + assertEquals(cumuCpuTime, + processTree.getCumulativeCpuTime(), + "Cumulative cpu time does not match"); // verify CPU usage - Assert.assertEquals("Percent CPU time should be set to -1 initially", - -1.0, processTree.getCpuUsagePercent(), - 0.01); + assertEquals(-1.0, processTree.getCpuUsagePercent(), + 0.01, + "Percent CPU time should be set to -1 initially"); // Check by enabling smaps setSmapsInProceTree(processTree, true); // anon (exclude r-xs,r--s) - Assert.assertEquals("rss memory does not match", - (20 * KB_TO_BYTES * 3), processTree.getRssMemorySize()); + assertEquals((20 * KB_TO_BYTES * 3), processTree.getRssMemorySize(), + "rss memory does not match"); // test the cpu time again to see if it cumulates procInfos[0] = @@ -490,8 +496,9 @@ public class TestProcfsBasedProcessTree { cumuCpuTime = ProcfsBasedProcessTree.JIFFY_LENGTH_IN_MILLIS > 0 ? 9400L * ProcfsBasedProcessTree.JIFFY_LENGTH_IN_MILLIS : 0L; - Assert.assertEquals("Cumulative cpu time does not match", cumuCpuTime, - processTree.getCumulativeCpuTime()); + assertEquals(cumuCpuTime, + processTree.getCumulativeCpuTime(), + "Cumulative cpu time does not match"); double expectedCpuUsagePercent = (ProcfsBasedProcessTree.JIFFY_LENGTH_IN_MILLIS > 0) ? 
@@ -500,11 +507,12 @@ public class TestProcfsBasedProcessTree { // expectedCpuUsagePercent is given by (94000L - 72000) * 100/ // 200000; // which in this case is 11. Lets verify that first - Assert.assertEquals(11, expectedCpuUsagePercent, 0.001); - Assert.assertEquals("Percent CPU time is not correct expected " + - expectedCpuUsagePercent, expectedCpuUsagePercent, + assertEquals(11, expectedCpuUsagePercent, 0.001); + assertEquals(expectedCpuUsagePercent, processTree.getCpuUsagePercent(), - 0.01); + 0.01, + "Percent CPU time is not correct expected " + + expectedCpuUsagePercent); } finally { FileUtil.fullyDelete(procfsRootDir); } @@ -529,8 +537,9 @@ public class TestProcfsBasedProcessTree { * if there was a problem setting up the fake procfs directories or * files. */ - @Test(timeout = 30000) - public void testMemForOlderProcesses() throws IOException { + @Test + @Timeout(30000) + void testMemForOlderProcesses() throws IOException { testMemForOlderProcesses(false); testMemForOlderProcesses(true); } @@ -576,8 +585,7 @@ public class TestProcfsBasedProcessTree { setSmapsInProceTree(processTree, smapEnabled); // verify virtual memory - Assert.assertEquals("Virtual memory does not match", 700000L, - processTree.getVirtualMemorySize()); + assertEquals(700000L, processTree.getVirtualMemorySize(), "Virtual memory does not match"); // write one more process as child of 100. String[] newPids = { "500" }; @@ -594,36 +602,31 @@ public class TestProcfsBasedProcessTree { // check memory includes the new process. processTree.updateProcessTree(); - Assert.assertEquals("vmem does not include new process", - 1200000L, processTree.getVirtualMemorySize()); + assertEquals(1200000L, processTree.getVirtualMemorySize(), + "vmem does not include new process"); if (!smapEnabled) { - long cumuRssMem = - ProcfsBasedProcessTree.PAGE_SIZE > 0 - ? 1200L * ProcfsBasedProcessTree.PAGE_SIZE : - ResourceCalculatorProcessTree.UNAVAILABLE; - Assert.assertEquals("rssmem does not include new process", - cumuRssMem, processTree.getRssMemorySize()); + long cumuRssMem = ProcfsBasedProcessTree.PAGE_SIZE > 0 ? + 1200L * ProcfsBasedProcessTree.PAGE_SIZE : + ResourceCalculatorProcessTree.UNAVAILABLE; + assertEquals(cumuRssMem, processTree.getRssMemorySize(), + "rssmem does not include new process"); } else { - Assert.assertEquals("rssmem does not include new process", - 20 * KB_TO_BYTES * 4, processTree.getRssMemorySize()); + assertEquals(20 * KB_TO_BYTES * 4, processTree.getRssMemorySize(), + "rssmem does not include new process"); } // however processes older than 1 iteration will retain the older value - Assert.assertEquals( - "vmem shouldn't have included new process", 700000L, - processTree.getVirtualMemorySize(1)); + assertEquals(700000L, processTree.getVirtualMemorySize(1), + "vmem shouldn't have included new process"); if (!smapEnabled) { - long cumuRssMem = - ProcfsBasedProcessTree.PAGE_SIZE > 0 - ? 700L * ProcfsBasedProcessTree.PAGE_SIZE : - ResourceCalculatorProcessTree.UNAVAILABLE; - Assert.assertEquals( - "rssmem shouldn't have included new process", cumuRssMem, - processTree.getRssMemorySize(1)); + long cumuRssMem = ProcfsBasedProcessTree.PAGE_SIZE > 0 ? 
+ 700L * ProcfsBasedProcessTree.PAGE_SIZE : + ResourceCalculatorProcessTree.UNAVAILABLE; + assertEquals(cumuRssMem, processTree.getRssMemorySize(1), + "rssmem shouldn't have included new process"); } else { - Assert.assertEquals( - "rssmem shouldn't have included new process", - 20 * KB_TO_BYTES * 3, processTree.getRssMemorySize(1)); + assertEquals(20 * KB_TO_BYTES * 3, processTree.getRssMemorySize(1), + "rssmem shouldn't have included new process"); } // one more process @@ -643,49 +646,41 @@ public class TestProcfsBasedProcessTree { processTree.updateProcessTree(); // processes older than 2 iterations should be same as before. - Assert.assertEquals( - "vmem shouldn't have included new processes", 700000L, - processTree.getVirtualMemorySize(2)); + assertEquals(700000L, processTree.getVirtualMemorySize(2), + "vmem shouldn't have included new processes"); if (!smapEnabled) { long cumuRssMem = ProcfsBasedProcessTree.PAGE_SIZE > 0 ? 700L * ProcfsBasedProcessTree.PAGE_SIZE : ResourceCalculatorProcessTree.UNAVAILABLE; - Assert.assertEquals( - "rssmem shouldn't have included new processes", - cumuRssMem, processTree.getRssMemorySize(2)); + assertEquals(cumuRssMem, processTree.getRssMemorySize(2), + "rssmem shouldn't have included new processes"); } else { - Assert.assertEquals( - "rssmem shouldn't have included new processes", - 20 * KB_TO_BYTES * 3, processTree.getRssMemorySize(2)); + assertEquals(20 * KB_TO_BYTES * 3, processTree.getRssMemorySize(2), + "rssmem shouldn't have included new processes"); } // processes older than 1 iteration should not include new process, // but include process 500 - Assert.assertEquals( - "vmem shouldn't have included new processes", 1200000L, - processTree.getVirtualMemorySize(1)); + assertEquals(1200000L, processTree.getVirtualMemorySize(1), + "vmem shouldn't have included new processes"); if (!smapEnabled) { long cumuRssMem = ProcfsBasedProcessTree.PAGE_SIZE > 0 ? 1200L * ProcfsBasedProcessTree.PAGE_SIZE : ResourceCalculatorProcessTree.UNAVAILABLE; - Assert.assertEquals( - "rssmem shouldn't have included new processes", - cumuRssMem, processTree.getRssMemorySize(1)); + assertEquals(cumuRssMem, processTree.getRssMemorySize(1), + "rssmem shouldn't have included new processes"); } else { - Assert.assertEquals( - "rssmem shouldn't have included new processes", - 20 * KB_TO_BYTES * 4, processTree.getRssMemorySize(1)); + assertEquals(20 * KB_TO_BYTES * 4, processTree.getRssMemorySize(1), + "rssmem shouldn't have included new processes"); } // no processes older than 3 iterations - Assert.assertEquals( - "Getting non-zero vmem for processes older than 3 iterations", - 0, processTree.getVirtualMemorySize(3)); - Assert.assertEquals( - "Getting non-zero rssmem for processes older than 3 iterations", - 0, processTree.getRssMemorySize(3)); + assertEquals(0, processTree.getVirtualMemorySize(3), + "Getting non-zero vmem for processes older than 3 iterations"); + assertEquals(0, processTree.getRssMemorySize(3), + "Getting non-zero rssmem for processes older than 3 iterations"); } finally { FileUtil.fullyDelete(procfsRootDir); } @@ -700,8 +695,9 @@ public class TestProcfsBasedProcessTree { * if there was a problem setting up the fake procfs directories or * files. */ - @Test(timeout = 30000) - public void testDestroyProcessTree() throws IOException { + @Test + @Timeout(30000) + void testDestroyProcessTree() throws IOException { // test process String pid = "100"; // create the fake procfs root directory. 
@@ -715,8 +711,8 @@ public class TestProcfsBasedProcessTree { SystemClock.getInstance()); // Let us not create stat file for pid 100. - Assert.assertTrue(ProcfsBasedProcessTree.checkPidPgrpidForMatch(pid, - procfsRootDir.getAbsolutePath())); + assertTrue(ProcfsBasedProcessTree.checkPidPgrpidForMatch(pid, + procfsRootDir.getAbsolutePath())); } finally { FileUtil.fullyDelete(procfsRootDir); } @@ -727,10 +723,11 @@ public class TestProcfsBasedProcessTree { * * @throws IOException */ - @Test(timeout = 30000) - public void testProcessTreeDump() throws IOException { + @Test + @Timeout(30000) + void testProcessTreeDump() throws IOException { - String[] pids = { "100", "200", "300", "400", "500", "600" }; + String[] pids = {"100", "200", "300", "400", "500", "600"}; File procfsRootDir = new File(TEST_ROOT_DIR, "proc"); @@ -790,29 +787,29 @@ public class TestProcfsBasedProcessTree { String processTreeDump = processTree.getProcessTreeDump(); LOG.info("Process-tree dump follows: \n" + processTreeDump); - Assert.assertTrue("Process-tree dump doesn't start with a proper header", - processTreeDump.startsWith("\t|- PID PPID PGRPID SESSID CMD_NAME " - + "USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) " - + "RSSMEM_USAGE(PAGES) FULL_CMD_LINE\n")); + assertTrue(processTreeDump.startsWith("\t|- PID PPID PGRPID SESSID CMD_NAME " + + "USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) " + + "RSSMEM_USAGE(PAGES) FULL_CMD_LINE\n"), + "Process-tree dump doesn't start with a proper header"); for (int i = 0; i < 5; i++) { ProcessStatInfo p = procInfos[i]; - Assert.assertTrue( - "Process-tree dump doesn't contain the cmdLineDump of process " - + p.pid, - processTreeDump.contains("\t|- " + p.pid + " " + p.ppid + " " - + p.pgrpId + " " + p.session + " (" + p.name + ") " + p.utime - + " " + p.stime + " " + p.vmem + " " + p.rssmemPage + " " - + cmdLines[i])); + assertTrue( + processTreeDump.contains("\t|- " + p.pid + " " + p.ppid + " " + + p.pgrpId + " " + p.session + " (" + p.name + ") " + p.utime + + " " + p.stime + " " + p.vmem + " " + p.rssmemPage + " " + + cmdLines[i]), + "Process-tree dump doesn't contain the cmdLineDump of process " + + p.pid); } // 600 should not be in the dump ProcessStatInfo p = procInfos[5]; - Assert.assertFalse( - "Process-tree dump shouldn't contain the cmdLineDump of process " - + p.pid, - processTreeDump.contains("\t|- " + p.pid + " " + p.ppid + " " - + p.pgrpId + " " + p.session + " (" + p.name + ") " + p.utime + " " - + p.stime + " " + p.vmem + " " + cmdLines[5])); + assertFalse( + processTreeDump.contains("\t|- " + p.pid + " " + p.ppid + " " + + p.pgrpId + " " + p.session + " (" + p.name + ") " + p.utime + " " + + p.stime + " " + p.vmem + " " + cmdLines[5]), + "Process-tree dump shouldn't contain the cmdLineDump of process " + + p.pid); } finally { FileUtil.fullyDelete(procfsRootDir); } @@ -887,11 +884,11 @@ public class TestProcfsBasedProcessTree { public static void setupProcfsRootDir(File procfsRootDir) throws IOException { // cleanup any existing process root dir. 
if (procfsRootDir.exists()) { - Assert.assertTrue(FileUtil.fullyDelete(procfsRootDir)); + assertTrue(FileUtil.fullyDelete(procfsRootDir)); } // create afresh - Assert.assertTrue(procfsRootDir.mkdirs()); + assertTrue(procfsRootDir.mkdirs()); } /** diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestRackResolver.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestRackResolver.java index 4b1d7921fdb..9079fa00573 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestRackResolver.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestRackResolver.java @@ -24,16 +24,19 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.List; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.net.DNSToSwitchMapping; import org.apache.hadoop.net.NetworkTopology; import org.apache.hadoop.net.Node; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestRackResolver { @@ -41,7 +44,7 @@ public class TestRackResolver { LoggerFactory.getLogger(TestRackResolver.class); private static final String invalidHost = "invalidHost"; - @Before + @BeforeEach public void setUp() { RackResolver.reset(); } @@ -54,8 +57,7 @@ public class TestRackResolver { @Override public List resolve(List hostList) { // Only one host at a time - Assert.assertTrue("hostList size is " + hostList.size(), - hostList.size() <= 1); + assertTrue(hostList.size() <= 1, "hostList size is " + hostList.size()); List returnList = new ArrayList(); if (hostList.isEmpty()) { return returnList; @@ -74,7 +76,7 @@ public class TestRackResolver { } // I should not be reached again as RackResolver is supposed to do // caching. - Assert.assertTrue(numHost1 <= 1); + assertTrue(numHost1 <= 1); return returnList; } @@ -112,7 +114,7 @@ public class TestRackResolver { // I should not be reached again as RackResolver is supposed to do // caching. 
} - Assert.assertEquals(returnList.size(), hostList.size()); + assertEquals(returnList.size(), hostList.size()); return returnList; } @@ -127,11 +129,11 @@ public class TestRackResolver { } @Test - public void testCaching() { + void testCaching() { Configuration conf = new Configuration(); conf.setClass( - CommonConfigurationKeysPublic.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY, - MyResolver.class, DNSToSwitchMapping.class); + CommonConfigurationKeysPublic.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY, + MyResolver.class, DNSToSwitchMapping.class); RackResolver.init(conf); try { InetAddress iaddr = InetAddress.getByName("host1"); @@ -140,15 +142,15 @@ public class TestRackResolver { // Ignore if not found } Node node = RackResolver.resolve("host1"); - Assert.assertEquals("/rack1", node.getNetworkLocation()); + assertEquals("/rack1", node.getNetworkLocation()); node = RackResolver.resolve("host1"); - Assert.assertEquals("/rack1", node.getNetworkLocation()); + assertEquals("/rack1", node.getNetworkLocation()); node = RackResolver.resolve(invalidHost); - Assert.assertEquals(NetworkTopology.DEFAULT_RACK, node.getNetworkLocation()); + assertEquals(NetworkTopology.DEFAULT_RACK, node.getNetworkLocation()); } @Test - public void testMultipleHosts() { + void testMultipleHosts() { Configuration conf = new Configuration(); conf.setClass( CommonConfigurationKeysPublic @@ -158,9 +160,9 @@ public class TestRackResolver { RackResolver.init(conf); List nodes = RackResolver.resolve( Arrays.asList("host1", invalidHost, "host2")); - Assert.assertEquals("/rack1", nodes.get(0).getNetworkLocation()); - Assert.assertEquals(NetworkTopology.DEFAULT_RACK, + assertEquals("/rack1", nodes.get(0).getNetworkLocation()); + assertEquals(NetworkTopology.DEFAULT_RACK, nodes.get(1).getNetworkLocation()); - Assert.assertEquals("/rack2", nodes.get(2).getNetworkLocation()); + assertEquals("/rack2", nodes.get(2).getNetworkLocation()); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestRackResolverScriptBasedMapping.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestRackResolverScriptBasedMapping.java index e8e875978b5..41c6d439f65 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestRackResolverScriptBasedMapping.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestRackResolverScriptBasedMapping.java @@ -18,17 +18,19 @@ package org.apache.hadoop.yarn.util; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.net.DNSToSwitchMapping; import org.apache.hadoop.net.ScriptBasedMapping; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; public class TestRackResolverScriptBasedMapping { @Test - public void testScriptName() { + void testScriptName() { Configuration conf = new Configuration(); conf .setClass( @@ -38,7 +40,7 @@ public class TestRackResolverScriptBasedMapping { conf.set(CommonConfigurationKeysPublic.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY, "testScript"); RackResolver.init(conf); - Assert.assertEquals(RackResolver.getDnsToSwitchMapping().toString(), + assertEquals(RackResolver.getDnsToSwitchMapping().toString(), "script-based mapping with script testScript"); } } diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestResourceCalculatorProcessTree.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestResourceCalculatorProcessTree.java index 28cee7f1028..15ac9a54ea0 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestResourceCalculatorProcessTree.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestResourceCalculatorProcessTree.java @@ -17,11 +17,14 @@ */ package org.apache.hadoop.yarn.util; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.conf.Configuration; -import org.junit.Test; -import static org.junit.Assert.*; -import static org.hamcrest.core.IsInstanceOf.*; -import static org.hamcrest.core.IsSame.*; + +import static org.hamcrest.MatcherAssert.assertThat; +import static org.hamcrest.core.IsInstanceOf.instanceOf; +import static org.hamcrest.core.IsSame.sameInstance; +import static org.junit.jupiter.api.Assertions.assertNotNull; /** * A JUnit test to test {@link ResourceCalculatorPlugin} @@ -73,7 +76,7 @@ public class TestResourceCalculatorProcessTree { } @Test - public void testCreateInstance() { + void testCreateInstance() { ResourceCalculatorProcessTree tree; tree = ResourceCalculatorProcessTree.getResourceCalculatorProcessTree("1", EmptyProcessTree.class, new Configuration()); assertNotNull(tree); @@ -81,7 +84,7 @@ public class TestResourceCalculatorProcessTree { } @Test - public void testCreatedInstanceConfigured() { + void testCreatedInstanceConfigured() { ResourceCalculatorProcessTree tree; Configuration conf = new Configuration(); tree = ResourceCalculatorProcessTree.getResourceCalculatorProcessTree("1", EmptyProcessTree.class, conf); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestTimelineServiceHelper.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestTimelineServiceHelper.java index 21a27baccd4..480d2635fb7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestTimelineServiceHelper.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestTimelineServiceHelper.java @@ -17,34 +17,35 @@ */ package org.apache.hadoop.yarn.util; -import static org.assertj.core.api.Assertions.assertThat; - import java.util.HashMap; import java.util.HashSet; import java.util.Map; import java.util.Set; import java.util.TreeMap; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Test; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNull; public class TestTimelineServiceHelper { @Test - public void testMapCastToHashMap() { + void testMapCastToHashMap() { // Test null map be casted to null Map nullMap = null; - Assert.assertNull(TimelineServiceHelper.mapCastToHashMap(nullMap)); + assertNull(TimelineServiceHelper.mapCastToHashMap(nullMap)); // Test empty hashmap be casted to a empty hashmap Map emptyHashMap = new HashMap(); - Assert.assertEquals( + assertEquals( TimelineServiceHelper.mapCastToHashMap(emptyHashMap).size(), 0); // Test empty non-hashmap be casted to a empty hashmap Map emptyTreeMap = new TreeMap(); - Assert.assertEquals( + assertEquals( 
TimelineServiceHelper.mapCastToHashMap(emptyTreeMap).size(), 0); // Test non-empty hashmap be casted to hashmap correctly @@ -52,7 +53,7 @@ public class TestTimelineServiceHelper { String key = "KEY"; String value = "VALUE"; firstHashMap.put(key, value); - Assert.assertEquals( + assertEquals( TimelineServiceHelper.mapCastToHashMap(firstHashMap), firstHashMap); // Test non-empty non-hashmap is casted correctly. @@ -60,7 +61,7 @@ public class TestTimelineServiceHelper { firstTreeMap.put(key, value); HashMap alternateHashMap = TimelineServiceHelper.mapCastToHashMap(firstTreeMap); - Assert.assertEquals(firstTreeMap.size(), alternateHashMap.size()); + assertEquals(firstTreeMap.size(), alternateHashMap.size()); assertThat(alternateHashMap.get(key)).isEqualTo(value); // Test complicated hashmap be casted correctly @@ -69,7 +70,7 @@ public class TestTimelineServiceHelper { Set hashSet = new HashSet(); hashSet.add(value); complicatedHashMap.put(key, hashSet); - Assert.assertEquals( + assertEquals( TimelineServiceHelper.mapCastToHashMap(complicatedHashMap), complicatedHashMap); @@ -77,7 +78,7 @@ public class TestTimelineServiceHelper { Map> complicatedTreeMap = new TreeMap>(); complicatedTreeMap.put(key, hashSet); - Assert.assertEquals( + assertEquals( TimelineServiceHelper.mapCastToHashMap(complicatedTreeMap).get(key), hashSet); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestTimes.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestTimes.java index 36b94bad554..4c438b1bbf9 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestTimes.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestTimes.java @@ -18,64 +18,64 @@ package org.apache.hadoop.yarn.util; -import org.junit.Assert; -import org.junit.Test; - import java.io.IOException; import java.text.SimpleDateFormat; import java.util.Date; +import org.junit.jupiter.api.Test; + import static org.apache.hadoop.yarn.util.Times.ISO8601_DATE_FORMAT; +import static org.junit.jupiter.api.Assertions.assertEquals; public class TestTimes { @Test - public void testNegativeStartTimes() { + void testNegativeStartTimes() { long elapsed = Times.elapsed(-5, 10, true); - Assert.assertEquals("Elapsed time is not 0", 0, elapsed); + assertEquals(0, elapsed, "Elapsed time is not 0"); elapsed = Times.elapsed(-5, 10, false); - Assert.assertEquals("Elapsed time is not -1", -1, elapsed); + assertEquals(-1, elapsed, "Elapsed time is not -1"); } @Test - public void testNegativeFinishTimes() { + void testNegativeFinishTimes() { long elapsed = Times.elapsed(5, -10, false); - Assert.assertEquals("Elapsed time is not -1", -1, elapsed); + assertEquals(-1, elapsed, "Elapsed time is not -1"); } @Test - public void testNegativeStartandFinishTimes() { + void testNegativeStartandFinishTimes() { long elapsed = Times.elapsed(-5, -10, false); - Assert.assertEquals("Elapsed time is not -1", -1, elapsed); + assertEquals(-1, elapsed, "Elapsed time is not -1"); } @Test - public void testPositiveStartandFinishTimes() { + void testPositiveStartandFinishTimes() { long elapsed = Times.elapsed(5, 10, true); - Assert.assertEquals("Elapsed time is not 5", 5, elapsed); + assertEquals(5, elapsed, "Elapsed time is not 5"); elapsed = Times.elapsed(5, 10, false); - Assert.assertEquals("Elapsed time is not 5", 5, elapsed); + assertEquals(5, elapsed, "Elapsed time is not 5"); } @Test - 
public void testFinishTimesAheadOfStartTimes() { + void testFinishTimesAheadOfStartTimes() { long elapsed = Times.elapsed(10, 5, true); - Assert.assertEquals("Elapsed time is not -1", -1, elapsed); + assertEquals(-1, elapsed, "Elapsed time is not -1"); elapsed = Times.elapsed(10, 5, false); - Assert.assertEquals("Elapsed time is not -1", -1, elapsed); + assertEquals(-1, elapsed, "Elapsed time is not -1"); // use Long.MAX_VALUE to ensure started time is after the current one elapsed = Times.elapsed(Long.MAX_VALUE, 0, true); - Assert.assertEquals("Elapsed time is not -1", -1, elapsed); + assertEquals(-1, elapsed, "Elapsed time is not -1"); } @Test - public void validateISO() throws IOException { + void validateISO() throws IOException { SimpleDateFormat isoFormat = new SimpleDateFormat(ISO8601_DATE_FORMAT); for (int i = 0; i < 1000; i++) { long now = System.currentTimeMillis(); String instant = Times.formatISO8601(now); String date = isoFormat.format(new Date(now)); - Assert.assertEquals(date, instant); + assertEquals(date, instant); } } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestWindowsBasedProcessTree.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestWindowsBasedProcessTree.java index db5d4bea940..a83bb19c11f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestWindowsBasedProcessTree.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestWindowsBasedProcessTree.java @@ -18,13 +18,14 @@ package org.apache.hadoop.yarn.util; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import org.junit.Assert; -import org.junit.Test; import static org.apache.hadoop.test.PlatformAssumptions.assumeWindows; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestWindowsBasedProcessTree { private static final Logger LOG = LoggerFactory @@ -42,12 +43,13 @@ public class TestWindowsBasedProcessTree { } } - @Test (timeout = 30000) + @Test + @Timeout(30000) @SuppressWarnings("deprecation") - public void tree() { + void tree() { assumeWindows(); - assertTrue("WindowsBasedProcessTree should be available on Windows", - WindowsBasedProcessTree.isAvailable()); + assertTrue(WindowsBasedProcessTree.isAvailable(), + "WindowsBasedProcessTree should be available on Windows"); ControlledClock testClock = new ControlledClock(); long elapsedTimeBetweenUpdatesMsec = 0; testClock.setTime(elapsedTimeBetweenUpdatesMsec); @@ -72,8 +74,7 @@ public class TestWindowsBasedProcessTree { assertTrue(pTree.getRssMemorySize(1) == 2048); assertTrue(pTree.getCumulativeCpuTime() == 3000); assertTrue(pTree.getCpuUsagePercent() == 200); - Assert.assertEquals("Percent CPU time is not correct", - pTree.getCpuUsagePercent(), 200, 0.01); + assertEquals(pTree.getCpuUsagePercent(), 200, 0.01, "Percent CPU time is not correct"); pTree.infoStr = "3524,1024,1024,1500\r\n2844,1024,1024,1500\r\n"; elapsedTimeBetweenUpdatesMsec = 2000; @@ -84,7 +85,6 @@ public class TestWindowsBasedProcessTree { assertTrue(pTree.getRssMemorySize() == 2048); assertTrue(pTree.getRssMemorySize(2) == 2048); assertTrue(pTree.getCumulativeCpuTime() == 4000); - Assert.assertEquals("Percent CPU time is not correct", - 
pTree.getCpuUsagePercent(), 0, 0.01); + assertEquals(pTree.getCpuUsagePercent(), 0, 0.01, "Percent CPU time is not correct"); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestYarnVersionInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestYarnVersionInfo.java index 7e41501aaf2..5f38e9ce573 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestYarnVersionInfo.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestYarnVersionInfo.java @@ -20,39 +20,38 @@ package org.apache.hadoop.yarn.util; import java.io.IOException; -import org.junit.Test; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.assertNotEquals; -import static org.junit.Assert.assertNotNull; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; + /** * A JUnit test to test {@link YarnVersionInfo} */ public class TestYarnVersionInfo { - + /** * Test the yarn version info routines. * @throws IOException */ @Test - public void versionInfoGenerated() throws IOException { + void versionInfoGenerated() throws IOException { // can't easily know what the correct values are going to be so just // make sure they aren't Unknown - assertNotEquals("getVersion returned Unknown", - "Unknown", YarnVersionInfo.getVersion()); - assertNotEquals("getUser returned Unknown", - "Unknown", YarnVersionInfo.getUser()); - assertNotEquals("getSrcChecksum returned Unknown", - "Unknown", YarnVersionInfo.getSrcChecksum()); + assertNotEquals("Unknown", YarnVersionInfo.getVersion(), "getVersion returned Unknown"); + assertNotEquals("Unknown", YarnVersionInfo.getUser(), "getUser returned Unknown"); + assertNotEquals("Unknown", YarnVersionInfo.getSrcChecksum(), "getSrcChecksum returned Unknown"); // these could be Unknown if the VersionInfo generated from code not in svn or git // so just check that they return something - assertNotNull("getUrl returned null", YarnVersionInfo.getUrl()); - assertNotNull("getRevision returned null", YarnVersionInfo.getRevision()); - assertNotNull("getBranch returned null", YarnVersionInfo.getBranch()); + assertNotNull(YarnVersionInfo.getUrl(), "getUrl returned null"); + assertNotNull(YarnVersionInfo.getRevision(), "getRevision returned null"); + assertNotNull(YarnVersionInfo.getBranch(), "getBranch returned null"); - assertTrue("getBuildVersion check doesn't contain: source checksum", - YarnVersionInfo.getBuildVersion().contains("source checksum")); + assertTrue(YarnVersionInfo.getBuildVersion().contains("source checksum"), + "getBuildVersion check doesn't contain: source checksum"); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/CustomResourceTypesConfigurationProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/CustomResourceTypesConfigurationProvider.java index 2b26151ccfd..16cb8d1e9d8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/CustomResourceTypesConfigurationProvider.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/CustomResourceTypesConfigurationProvider.java @@ -16,14 +16,6 @@ package org.apache.hadoop.yarn.util.resource; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.util.Lists; -import org.apache.hadoop.yarn.LocalConfigurationProvider; -import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes; -import org.apache.hadoop.yarn.api.records.ResourceInformation; -import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.apache.hadoop.yarn.exceptions.YarnException; - import java.io.ByteArrayInputStream; import java.io.IOException; import java.io.InputStream; @@ -33,6 +25,14 @@ import java.util.Map; import java.util.stream.Collectors; import java.util.stream.IntStream; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.util.Lists; +import org.apache.hadoop.yarn.LocalConfigurationProvider; +import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes; +import org.apache.hadoop.yarn.api.records.ResourceInformation; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; + import static java.util.stream.Collectors.toList; /** diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java index 39f33990270..561aaed97b7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java @@ -22,32 +22,32 @@ import java.util.Arrays; import java.util.Collection; import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Timeout; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.MethodSource; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.api.records.ResourceInformation; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; -import static org.junit.Assert.assertEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; -@RunWith(Parameterized.class) public class TestResourceCalculator { private static final String EXTRA_RESOURCE_NAME = "test"; - private final ResourceCalculator resourceCalculator; + private ResourceCalculator resourceCalculator; - @Parameterized.Parameters(name = "{0}") public static Collection getParameters() { - return Arrays.asList(new Object[][] { - { "DefaultResourceCalculator", new DefaultResourceCalculator() }, - { "DominantResourceCalculator", new DominantResourceCalculator() } }); + return Arrays.asList(new Object[][]{ + {"DefaultResourceCalculator", new DefaultResourceCalculator()}, + {"DominantResourceCalculator", new DominantResourceCalculator()}}); } - @Before + @BeforeEach public void setupNoExtraResource() { // This has to run before each test because we 
don't know when // setupExtraResource() might be called @@ -61,34 +61,38 @@ public class TestResourceCalculator { ResourceUtils.resetResourceTypes(conf); } - public TestResourceCalculator(String name, ResourceCalculator rs) { + public void initTestResourceCalculator(String name, ResourceCalculator rs) { this.resourceCalculator = rs; } - - @Test(timeout = 10000) - public void testFitsIn() { + + @MethodSource("getParameters") + @ParameterizedTest(name = "{0}") + @Timeout(10000) + void testFitsIn(String name, ResourceCalculator rs) { + + initTestResourceCalculator(name, rs); if (resourceCalculator instanceof DefaultResourceCalculator) { - Assert.assertTrue(resourceCalculator.fitsIn( + assertTrue(resourceCalculator.fitsIn( Resource.newInstance(1, 2), Resource.newInstance(2, 1))); - Assert.assertTrue(resourceCalculator.fitsIn( + assertTrue(resourceCalculator.fitsIn( Resource.newInstance(1, 2), Resource.newInstance(2, 2))); - Assert.assertTrue(resourceCalculator.fitsIn( + assertTrue(resourceCalculator.fitsIn( Resource.newInstance(1, 2), Resource.newInstance(1, 2))); - Assert.assertTrue(resourceCalculator.fitsIn( + assertTrue(resourceCalculator.fitsIn( Resource.newInstance(1, 2), Resource.newInstance(1, 1))); - Assert.assertFalse(resourceCalculator.fitsIn( + assertFalse(resourceCalculator.fitsIn( Resource.newInstance(2, 1), Resource.newInstance(1, 2))); } else if (resourceCalculator instanceof DominantResourceCalculator) { - Assert.assertFalse(resourceCalculator.fitsIn( + assertFalse(resourceCalculator.fitsIn( Resource.newInstance(1, 2), Resource.newInstance(2, 1))); - Assert.assertTrue(resourceCalculator.fitsIn( + assertTrue(resourceCalculator.fitsIn( Resource.newInstance(1, 2), Resource.newInstance(2, 2))); - Assert.assertTrue(resourceCalculator.fitsIn( + assertTrue(resourceCalculator.fitsIn( Resource.newInstance(1, 2), Resource.newInstance(1, 2))); - Assert.assertFalse(resourceCalculator.fitsIn( + assertFalse(resourceCalculator.fitsIn( Resource.newInstance(1, 2), Resource.newInstance(1, 1))); - Assert.assertFalse(resourceCalculator.fitsIn( + assertFalse(resourceCalculator.fitsIn( Resource.newInstance(2, 1), Resource.newInstance(1, 2))); } } @@ -121,22 +125,22 @@ public class TestResourceCalculator { int expected) { int actual = resourceCalculator.compare(cluster, res1, res2); - assertEquals(String.format("Resource comparison did not give the expected " - + "result for %s v/s %s", res1.toString(), res2.toString()), - expected, actual); + assertEquals(expected, actual, String.format("Resource comparison did not give the expected " + + "result for %s v/s %s", res1.toString(), res2.toString())); if (expected != 0) { // Try again with args in the opposite order and the negative of the // expected result. actual = resourceCalculator.compare(cluster, res2, res1); - assertEquals(String.format("Resource comparison did not give the " - + "expected result for %s v/s %s", res2.toString(), res1.toString()), - expected * -1, actual); + assertEquals(expected * -1, actual, String.format("Resource comparison did not give the " + + "expected result for %s v/s %s", res2.toString(), res1.toString())); } } - @Test - public void testCompareWithOnlyMandatory() { + @MethodSource("getParameters") + @ParameterizedTest(name = "{0}") + void testCompareWithOnlyMandatory(String name, ResourceCalculator rs) { + initTestResourceCalculator(name, rs); // This test is necessary because there are optimizations that are only // triggered when only the mandatory resources are configured. 
@@ -173,8 +177,10 @@ public class TestResourceCalculator { assertComparison(cluster, newResource(3, 1), newResource(3, 0), 1); } - @Test - public void testCompare() { + @MethodSource("getParameters") + @ParameterizedTest(name = "{0}") + void testCompare(String name, ResourceCalculator rs) { + initTestResourceCalculator(name, rs); // Test with 3 resources setupExtraResource(); @@ -209,7 +215,7 @@ public class TestResourceCalculator { /** * Verify compare when one or all the resource are zero. */ - private void testCompareDominantZeroValueResource(){ + private void testCompareDominantZeroValueResource() { Resource cluster = newResource(4L, 4, 0); assertComparison(cluster, newResource(2, 1, 1), newResource(1, 1, 2), 1); assertComparison(cluster, newResource(2, 2, 1), newResource(1, 2, 2), 1); @@ -261,8 +267,11 @@ public class TestResourceCalculator { assertComparison(cluster, newResource(3, 1, 1), newResource(3, 0, 0), 1); } - @Test(timeout = 10000) - public void testCompareWithEmptyCluster() { + @MethodSource("getParameters") + @ParameterizedTest(name = "{0}") + @Timeout(10000) + void testCompareWithEmptyCluster(String name, ResourceCalculator rs) { + initTestResourceCalculator(name, rs); Resource clusterResource = Resource.newInstance(0, 0); // For lhs == rhs @@ -316,35 +325,39 @@ public class TestResourceCalculator { boolean greaterThan, boolean greaterThanOrEqual, Resource max, Resource min) { - assertEquals("Less Than operation is wrongly calculated.", lessThan, - Resources.lessThan(resourceCalculator, clusterResource, lhs, rhs)); + assertEquals(lessThan, + Resources.lessThan(resourceCalculator, clusterResource, lhs, rhs), + "Less Than operation is wrongly calculated."); assertEquals( - "Less Than Or Equal To operation is wrongly calculated.", lessThanOrEqual, Resources.lessThanOrEqual(resourceCalculator, - clusterResource, lhs, rhs)); + clusterResource, lhs, rhs), "Less Than Or Equal To operation is wrongly calculated."); - assertEquals("Greater Than operation is wrongly calculated.", - greaterThan, - Resources.greaterThan(resourceCalculator, clusterResource, lhs, rhs)); + assertEquals(greaterThan, + Resources.greaterThan(resourceCalculator, clusterResource, lhs, rhs), + "Greater Than operation is wrongly calculated."); - assertEquals( - "Greater Than Or Equal To operation is wrongly calculated.", - greaterThanOrEqual, Resources.greaterThanOrEqual(resourceCalculator, - clusterResource, lhs, rhs)); + assertEquals(greaterThanOrEqual, + Resources.greaterThanOrEqual(resourceCalculator, clusterResource, lhs, rhs), + "Greater Than Or Equal To operation is wrongly calculated."); - assertEquals("Max(value) Operation wrongly calculated.", max, - Resources.max(resourceCalculator, clusterResource, lhs, rhs)); + assertEquals(max, + Resources.max(resourceCalculator, clusterResource, lhs, rhs), + "Max(value) Operation wrongly calculated."); - assertEquals("Min(value) operation is wrongly calculated.", min, - Resources.min(resourceCalculator, clusterResource, lhs, rhs)); + assertEquals(min, + Resources.min(resourceCalculator, clusterResource, lhs, rhs), + "Min(value) operation is wrongly calculated."); } /** * Test resource normalization. */ - @Test(timeout = 10000) - public void testNormalize() { + @MethodSource("getParameters") + @ParameterizedTest(name = "{0}") + @Timeout(10000) + void testNormalize(String name, ResourceCalculator rs) { + initTestResourceCalculator(name, rs); // requested resources value cannot be an arbitrary number. 
Resource ask = Resource.newInstance(1111, 2); Resource min = Resource.newInstance(1024, 1); @@ -420,22 +433,28 @@ public class TestResourceCalculator { } } - @Test - public void testDivisionByZeroRatioDenominatorIsZero() { + @MethodSource("getParameters") + @ParameterizedTest(name = "{0}") + void testDivisionByZeroRatioDenominatorIsZero(String name, ResourceCalculator rs) { + initTestResourceCalculator(name, rs); float ratio = resourceCalculator.ratio(newResource(1, 1), newResource(0, 0)); assertEquals(Float.POSITIVE_INFINITY, ratio, 0.00001); } - @Test - public void testDivisionByZeroRatioNumeratorAndDenominatorIsZero() { + @MethodSource("getParameters") + @ParameterizedTest(name = "{0}") + void testDivisionByZeroRatioNumeratorAndDenominatorIsZero(String name, ResourceCalculator rs) { + initTestResourceCalculator(name, rs); float ratio = resourceCalculator.ratio(newResource(0, 0), newResource(0, 0)); assertEquals(0.0, ratio, 0.00001); } - @Test - public void testFitsInDiagnosticsCollector() { + @MethodSource("getParameters") + @ParameterizedTest(name = "{0}") + void testFitsInDiagnosticsCollector(String name, ResourceCalculator rs) { + initTestResourceCalculator(name, rs); if (resourceCalculator instanceof DefaultResourceCalculator) { // required-resource = (0, 0) assertEquals(ImmutableSet.of(), @@ -551,8 +570,10 @@ public class TestResourceCalculator { } } - @Test - public void testRatioWithNoExtraResource() { + @MethodSource("getParameters") + @ParameterizedTest(name = "{0}") + void testRatioWithNoExtraResource(String name, ResourceCalculator rs) { + initTestResourceCalculator(name, rs); //setup Resource resource1 = newResource(1, 1); Resource resource2 = newResource(2, 1); @@ -570,8 +591,10 @@ public class TestResourceCalculator { } } - @Test - public void testRatioWithExtraResource() { + @MethodSource("getParameters") + @ParameterizedTest(name = "{0}") + void testRatioWithExtraResource(String name, ResourceCalculator rs) { + initTestResourceCalculator(name, rs); //setup setupExtraResource(); Resource resource1 = newResource(1, 1, 2); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java index 8f742fa902f..cdd99d1f79f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java @@ -18,25 +18,6 @@ package org.apache.hadoop.yarn.util.resource; -import static org.assertj.core.api.Assertions.assertThat; - -import org.apache.commons.io.FileUtils; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes; -import org.apache.hadoop.yarn.api.records.Resource; -import org.apache.hadoop.yarn.api.records.ResourceInformation; -import org.apache.hadoop.yarn.api.records.ResourceTypeInfo; -import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.ExpectedException; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - import java.io.File; import java.io.IOException; import java.net.URL; @@ -48,6 +29,27 @@ import 
java.util.List; import java.util.Map; import java.util.Set; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.commons.io.FileUtils; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes; +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.api.records.ResourceInformation; +import org.apache.hadoop.yarn.api.records.ResourceTypeInfo; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; + /** * Test class to verify all resource utility methods. */ @@ -70,15 +72,12 @@ public class TestResourceUtils { } } - @Rule - public ExpectedException expexted = ExpectedException.none(); - - @Before + @BeforeEach public void setup() { ResourceUtils.resetResourceTypes(); } - @After + @AfterEach public void teardown() { if (nodeResourcesFile != null && nodeResourcesFile.exists()) { nodeResourcesFile.delete(); @@ -137,29 +136,28 @@ public class TestResourceUtils { private void testMemoryAndVcores(Map res) { String memory = ResourceInformation.MEMORY_MB.getName(); String vcores = ResourceInformation.VCORES.getName(); - Assert.assertTrue("Resource 'memory' missing", res.containsKey(memory)); - Assert.assertEquals("'memory' units incorrect", - ResourceInformation.MEMORY_MB.getUnits(), res.get(memory).getUnits()); - Assert.assertEquals("'memory' types incorrect", - ResourceInformation.MEMORY_MB.getResourceType(), - res.get(memory).getResourceType()); - Assert.assertTrue("Resource 'vcores' missing", res.containsKey(vcores)); - Assert.assertEquals("'vcores' units incorrect", - ResourceInformation.VCORES.getUnits(), res.get(vcores).getUnits()); - Assert.assertEquals("'vcores' type incorrect", - ResourceInformation.VCORES.getResourceType(), - res.get(vcores).getResourceType()); + assertTrue(res.containsKey(memory), "Resource 'memory' missing"); + assertEquals(ResourceInformation.MEMORY_MB.getUnits(), res.get(memory).getUnits(), + "'memory' units incorrect"); + assertEquals(ResourceInformation.MEMORY_MB.getResourceType(), res.get(memory).getResourceType(), + "'memory' types incorrect"); + assertTrue(res.containsKey(vcores), "Resource 'vcores' missing"); + assertEquals(ResourceInformation.VCORES.getUnits(), res.get(vcores).getUnits(), + "'vcores' units incorrect"); + assertEquals(ResourceInformation.VCORES.getResourceType(), + res.get(vcores).getResourceType(), + "'vcores' type incorrect"); } @Test - public void testGetResourceTypes() { + void testGetResourceTypes() { Map res = ResourceUtils.getResourceTypes(); - Assert.assertEquals(2, res.size()); + assertEquals(2, res.size()); testMemoryAndVcores(res); } @Test - public void testGetResourceTypesConfigs() throws Exception { + void testGetResourceTypesConfigs() throws Exception { Configuration conf = new YarnConfiguration(); ResourceFileInformation testFile1 = @@ -183,19 +181,19 @@ public class TestResourceUtils { ResourceUtils.resetResourceTypes(); res = setupResourceTypesInternal(conf, testInformation.filename); testMemoryAndVcores(res); - 
Assert.assertEquals(testInformation.resourceCount, res.size()); + assertEquals(testInformation.resourceCount, res.size()); for (Map.Entry entry : testInformation.resourceNameUnitsMap.entrySet()) { String resourceName = entry.getKey(); - Assert.assertTrue("Missing key " + resourceName, - res.containsKey(resourceName)); - Assert.assertEquals(entry.getValue(), res.get(resourceName).getUnits()); + assertTrue(res.containsKey(resourceName), + "Missing key " + resourceName); + assertEquals(entry.getValue(), res.get(resourceName).getUnits()); } } } @Test - public void testGetRequestedResourcesFromConfig() { + void testGetRequestedResourcesFromConfig() { Configuration conf = new Configuration(); //these resource type configurations should be recognised @@ -229,14 +227,14 @@ public class TestResourceUtils { Set expectedSet = new HashSet<>(Arrays.asList(expectedKeys)); - Assert.assertEquals(properList.size(), expectedKeys.length); + assertEquals(properList.size(), expectedKeys.length); properList.forEach( - item -> Assert.assertTrue(expectedSet.contains(item.getName()))); + item -> assertTrue(expectedSet.contains(item.getName()))); } @Test - public void testGetResourceTypesConfigErrors() throws IOException { + void testGetResourceTypesConfigErrors() throws IOException { Configuration conf = new YarnConfiguration(); String[] resourceFiles = {"resource-types-error-1.xml", @@ -246,7 +244,7 @@ public class TestResourceUtils { ResourceUtils.resetResourceTypes(); try { setupResourceTypesInternal(conf, resourceFile); - Assert.fail("Expected error with file " + resourceFile); + fail("Expected error with file " + resourceFile); } catch (YarnRuntimeException | IllegalArgumentException e) { //Test passed } @@ -254,7 +252,7 @@ public class TestResourceUtils { } @Test - public void testInitializeResourcesMap() { + void testInitializeResourcesMap() { String[] empty = {"", ""}; String[] res1 = {"resource1", "m"}; String[] res2 = {"resource2", "G"}; @@ -291,31 +289,30 @@ public class TestResourceUtils { len = 4; } - Assert.assertEquals(len, ret.size()); + assertEquals(len, ret.size()); for (String[] resources : test) { if (resources[0].length() == 0) { continue; } - Assert.assertTrue(ret.containsKey(resources[0])); + assertTrue(ret.containsKey(resources[0])); ResourceInformation resInfo = ret.get(resources[0]); - Assert.assertEquals(resources[1], resInfo.getUnits()); - Assert.assertEquals(ResourceTypes.COUNTABLE, resInfo.getResourceType()); + assertEquals(resources[1], resInfo.getUnits()); + assertEquals(ResourceTypes.COUNTABLE, resInfo.getResourceType()); } // we must always have memory and vcores with their fixed units - Assert.assertTrue(ret.containsKey("memory-mb")); + assertTrue(ret.containsKey("memory-mb")); ResourceInformation memInfo = ret.get("memory-mb"); - Assert.assertEquals("Mi", memInfo.getUnits()); - Assert.assertEquals(ResourceTypes.COUNTABLE, memInfo.getResourceType()); - Assert.assertTrue(ret.containsKey("vcores")); + assertEquals("Mi", memInfo.getUnits()); + assertEquals(ResourceTypes.COUNTABLE, memInfo.getResourceType()); + assertTrue(ret.containsKey("vcores")); ResourceInformation vcoresInfo = ret.get("vcores"); - Assert.assertEquals("", vcoresInfo.getUnits()); - Assert - .assertEquals(ResourceTypes.COUNTABLE, vcoresInfo.getResourceType()); + assertEquals("", vcoresInfo.getUnits()); + assertEquals(ResourceTypes.COUNTABLE, vcoresInfo.getResourceType()); } } @Test - public void testInitializeResourcesMapErrors() { + void testInitializeResourcesMapErrors() { String[] mem1 = {"memory-mb", ""}; 
String[] vcores1 = {"vcores", "M"}; @@ -346,7 +343,7 @@ public class TestResourceUtils { } try { ResourceUtils.initializeResourcesMap(conf); - Assert.fail("resource map initialization should fail"); + fail("resource map initialization should fail"); } catch (Exception e) { //Test passed } @@ -354,7 +351,7 @@ public class TestResourceUtils { } @Test - public void testGetResourceInformation() throws Exception { + void testGetResourceInformation() throws Exception { Configuration conf = new YarnConfiguration(); Map testRun = new HashMap<>(); setupResourceTypesInternal(conf, "resource-types-4.xml"); @@ -372,16 +369,16 @@ public class TestResourceUtils { ResourceUtils.resetNodeResources(); Map actual = setupNodeResources(conf, resourceFile); - Assert.assertEquals(actual.size(), + assertEquals(actual.size(), entry.getValue().getResources().length); for (ResourceInformation resInfo : entry.getValue().getResources()) { - Assert.assertEquals(resInfo, actual.get(resInfo.getName())); + assertEquals(resInfo, actual.get(resInfo.getName())); } } } @Test - public void testGetNodeResourcesConfigErrors() throws Exception { + void testGetNodeResourcesConfigErrors() throws Exception { Configuration conf = new YarnConfiguration(); setupResourceTypesInternal(conf, "resource-types-4.xml"); String[] invalidNodeResFiles = {"node-resources-error-1.xml"}; @@ -390,7 +387,7 @@ public class TestResourceUtils { ResourceUtils.resetNodeResources(); try { setupNodeResources(conf, resourceFile); - Assert.fail("Expected error with file " + resourceFile); + fail("Expected error with file " + resourceFile); } catch (YarnRuntimeException e) { //Test passed } @@ -398,26 +395,28 @@ public class TestResourceUtils { } @Test - public void testGetNodeResourcesRedefineFpgaErrors() throws Exception { - Configuration conf = new YarnConfiguration(); - expexted.expect(YarnRuntimeException.class); - expexted.expectMessage("Defined mandatory resource type=yarn.io/fpga"); - setupResourceTypesInternal(conf, - "resource-types-error-redefine-fpga-unit.xml"); + void testGetNodeResourcesRedefineFpgaErrors() throws Exception { + Throwable exception = assertThrows(YarnRuntimeException.class, () -> { + Configuration conf = new YarnConfiguration(); + setupResourceTypesInternal(conf, + "resource-types-error-redefine-fpga-unit.xml"); + }); + assertTrue(exception.getMessage().contains("Defined mandatory resource type=yarn.io/fpga")); } @Test - public void testGetNodeResourcesRedefineGpuErrors() throws Exception { - Configuration conf = new YarnConfiguration(); - expexted.expect(YarnRuntimeException.class); - expexted.expectMessage("Defined mandatory resource type=yarn.io/gpu"); - setupResourceTypesInternal(conf, - "resource-types-error-redefine-gpu-unit.xml"); + void testGetNodeResourcesRedefineGpuErrors() throws Exception { + Throwable exception = assertThrows(YarnRuntimeException.class, () -> { + Configuration conf = new YarnConfiguration(); + setupResourceTypesInternal(conf, + "resource-types-error-redefine-gpu-unit.xml"); + }); + assertTrue(exception.getMessage().contains("Defined mandatory resource type=yarn.io/gpu")); } @Test - public void testResourceNameFormatValidation() { - String[] validNames = new String[] { + void testResourceNameFormatValidation() { + String[] validNames = new String[]{ "yarn.io/gpu", "gpu", "g_1_2", @@ -427,7 +426,7 @@ public class TestResourceUtils { "a....b", }; - String[] invalidNames = new String[] { + String[] invalidNames = new String[]{ "asd/resource/-name", "prefix/-resource_1", "prefix/0123resource", @@ -443,7 
+442,7 @@ public class TestResourceUtils { for (String invalidName : invalidNames) { try { ResourceUtils.validateNameOfResourceNameAndThrowException(invalidName); - Assert.fail("Expected to fail name check, the name=" + invalidName + fail("Expected to fail name check, the name=" + invalidName + " is illegal."); } catch (YarnRuntimeException e) { // Expected @@ -452,7 +451,7 @@ public class TestResourceUtils { } @Test - public void testGetResourceInformationWithDiffUnits() throws Exception { + void testGetResourceInformationWithDiffUnits() throws Exception { Configuration conf = new YarnConfiguration(); Map testRun = new HashMap<>(); setupResourceTypesInternal(conf, "resource-types-4.xml"); @@ -476,82 +475,82 @@ public class TestResourceUtils { ResourceUtils.resetNodeResources(); Map actual = setupNodeResources(conf, resourceFile); - Assert.assertEquals(actual.size(), + assertEquals(actual.size(), entry.getValue().getResources().length); for (ResourceInformation resInfo : entry.getValue().getResources()) { - Assert.assertEquals(resInfo, actual.get(resInfo.getName())); + assertEquals(resInfo, actual.get(resInfo.getName())); } } } @Test - public void testResourceUnitParsing() throws Exception { + void testResourceUnitParsing() throws Exception { Resource res = ResourceUtils.createResourceFromString("memory=20g,vcores=3", - ResourceUtils.getResourcesTypeInfo()); - Assert.assertEquals(Resources.createResource(20 * 1024, 3), res); + ResourceUtils.getResourcesTypeInfo()); + assertEquals(Resources.createResource(20 * 1024, 3), res); res = ResourceUtils.createResourceFromString("memory=20G,vcores=3", - ResourceUtils.getResourcesTypeInfo()); - Assert.assertEquals(Resources.createResource(20 * 1024, 3), res); + ResourceUtils.getResourcesTypeInfo()); + assertEquals(Resources.createResource(20 * 1024, 3), res); res = ResourceUtils.createResourceFromString("memory=20M,vcores=3", - ResourceUtils.getResourcesTypeInfo()); - Assert.assertEquals(Resources.createResource(20, 3), res); + ResourceUtils.getResourcesTypeInfo()); + assertEquals(Resources.createResource(20, 3), res); res = ResourceUtils.createResourceFromString("memory=20m,vcores=3", - ResourceUtils.getResourcesTypeInfo()); - Assert.assertEquals(Resources.createResource(20, 3), res); + ResourceUtils.getResourcesTypeInfo()); + assertEquals(Resources.createResource(20, 3), res); res = ResourceUtils.createResourceFromString("memory-mb=20,vcores=3", - ResourceUtils.getResourcesTypeInfo()); - Assert.assertEquals(Resources.createResource(20, 3), res); + ResourceUtils.getResourcesTypeInfo()); + assertEquals(Resources.createResource(20, 3), res); res = ResourceUtils.createResourceFromString("memory-mb=20m,vcores=3", - ResourceUtils.getResourcesTypeInfo()); - Assert.assertEquals(Resources.createResource(20, 3), res); + ResourceUtils.getResourcesTypeInfo()); + assertEquals(Resources.createResource(20, 3), res); res = ResourceUtils.createResourceFromString("memory-mb=20G,vcores=3", - ResourceUtils.getResourcesTypeInfo()); - Assert.assertEquals(Resources.createResource(20 * 1024, 3), res); + ResourceUtils.getResourcesTypeInfo()); + assertEquals(Resources.createResource(20 * 1024, 3), res); // W/o unit for memory means bits, and 20 bits will be rounded to 0 res = ResourceUtils.createResourceFromString("memory=20,vcores=3", - ResourceUtils.getResourcesTypeInfo()); - Assert.assertEquals(Resources.createResource(0, 3), res); + ResourceUtils.getResourcesTypeInfo()); + assertEquals(Resources.createResource(0, 3), res); // Test multiple resources List resTypes = new 
ArrayList<>( - ResourceUtils.getResourcesTypeInfo()); + ResourceUtils.getResourcesTypeInfo()); resTypes.add(ResourceTypeInfo.newInstance(ResourceInformation.GPU_URI, "")); ResourceUtils.reinitializeResources(resTypes); res = ResourceUtils.createResourceFromString("memory=2G,vcores=3,gpu=0", - resTypes); - Assert.assertEquals(2 * 1024, res.getMemorySize()); - Assert.assertEquals(0, res.getResourceValue(ResourceInformation.GPU_URI)); + resTypes); + assertEquals(2 * 1024, res.getMemorySize()); + assertEquals(0, res.getResourceValue(ResourceInformation.GPU_URI)); res = ResourceUtils.createResourceFromString("memory=2G,vcores=3,gpu=3", - resTypes); - Assert.assertEquals(2 * 1024, res.getMemorySize()); - Assert.assertEquals(3, res.getResourceValue(ResourceInformation.GPU_URI)); + resTypes); + assertEquals(2 * 1024, res.getMemorySize()); + assertEquals(3, res.getResourceValue(ResourceInformation.GPU_URI)); res = ResourceUtils.createResourceFromString("memory=2G,vcores=3", - resTypes); - Assert.assertEquals(2 * 1024, res.getMemorySize()); - Assert.assertEquals(0, res.getResourceValue(ResourceInformation.GPU_URI)); + resTypes); + assertEquals(2 * 1024, res.getMemorySize()); + assertEquals(0, res.getResourceValue(ResourceInformation.GPU_URI)); res = ResourceUtils.createResourceFromString( - "memory=2G,vcores=3,yarn.io/gpu=0", resTypes); - Assert.assertEquals(2 * 1024, res.getMemorySize()); - Assert.assertEquals(0, res.getResourceValue(ResourceInformation.GPU_URI)); + "memory=2G,vcores=3,yarn.io/gpu=0", resTypes); + assertEquals(2 * 1024, res.getMemorySize()); + assertEquals(0, res.getResourceValue(ResourceInformation.GPU_URI)); res = ResourceUtils.createResourceFromString( - "memory=2G,vcores=3,yarn.io/gpu=3", resTypes); - Assert.assertEquals(2 * 1024, res.getMemorySize()); - Assert.assertEquals(3, res.getResourceValue(ResourceInformation.GPU_URI)); + "memory=2G,vcores=3,yarn.io/gpu=3", resTypes); + assertEquals(2 * 1024, res.getMemorySize()); + assertEquals(3, res.getResourceValue(ResourceInformation.GPU_URI)); } @Test - public void testMultipleOpsForResourcesWithTags() throws Exception { + void testMultipleOpsForResourcesWithTags() throws Exception { Configuration conf = new YarnConfiguration(); setupResourceTypes(conf, "resource-types-6.xml"); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResources.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResources.java index 307e0d8e07d..3f724096277 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResources.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResources.java @@ -18,28 +18,30 @@ package org.apache.hadoop.yarn.util.resource; +import java.io.File; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.api.records.ResourceInformation; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; -import java.io.File; - -import static org.apache.hadoop.yarn.util.resource.Resources.componentwiseMin; -import static org.apache.hadoop.yarn.util.resource.Resources.componentwiseMax; 
import static org.apache.hadoop.yarn.util.resource.Resources.add; -import static org.apache.hadoop.yarn.util.resource.Resources.multiplyAndRoundUp; -import static org.apache.hadoop.yarn.util.resource.Resources.subtract; +import static org.apache.hadoop.yarn.util.resource.Resources.componentwiseMax; +import static org.apache.hadoop.yarn.util.resource.Resources.componentwiseMin; +import static org.apache.hadoop.yarn.util.resource.Resources.fitsIn; import static org.apache.hadoop.yarn.util.resource.Resources.multiply; import static org.apache.hadoop.yarn.util.resource.Resources.multiplyAndAddTo; import static org.apache.hadoop.yarn.util.resource.Resources.multiplyAndRoundDown; -import static org.apache.hadoop.yarn.util.resource.Resources.fitsIn; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; +import static org.apache.hadoop.yarn.util.resource.Resources.multiplyAndRoundUp; +import static org.apache.hadoop.yarn.util.resource.Resources.subtract; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestResources { private static final String INVALID_RESOURCE_MSG = "Invalid resource value"; @@ -75,12 +77,12 @@ public class TestResources { } } - @Before + @BeforeEach public void setup() throws Exception { setupExtraResourceType(); } - @After + @AfterEach public void teardown() { deleteResourceTypesFile(); } @@ -96,8 +98,9 @@ public class TestResources { return ret; } - @Test(timeout = 10000) - public void testCompareToWithUnboundedResource() { + @Test + @Timeout(10000) + void testCompareToWithUnboundedResource() { unsetExtraResourceType(); Resource unboundedClone = Resources.clone(ExtendedResources.unbounded()); assertTrue(unboundedClone @@ -107,8 +110,9 @@ public class TestResources { unboundedClone.compareTo(createResource(0, Integer.MAX_VALUE)) > 0); } - @Test(timeout = 10000) - public void testCompareToWithNoneResource() { + @Test + @Timeout(10000) + void testCompareToWithNoneResource() { assertTrue(Resources.none().compareTo(createResource(0, 0)) == 0); assertTrue(Resources.none().compareTo(createResource(1, 0)) < 0); assertTrue(Resources.none().compareTo(createResource(0, 1)) < 0); @@ -118,8 +122,9 @@ public class TestResources { assertTrue(Resources.none().compareTo(createResource(0, 0, 1)) < 0); } - @Test(timeout = 1000) - public void testFitsIn() { + @Test + @Timeout(1000) + void testFitsIn() { assertTrue(fitsIn(createResource(1, 1), createResource(2, 2))); assertTrue(fitsIn(createResource(2, 2), createResource(2, 2))); assertFalse(fitsIn(createResource(2, 2), createResource(1, 1))); @@ -130,8 +135,9 @@ public class TestResources { assertTrue(fitsIn(createResource(1, 1, 1), createResource(2, 2, 2))); } - @Test(timeout = 1000) - public void testComponentwiseMin() { + @Test + @Timeout(1000) + void testComponentwiseMin() { assertEquals(createResource(1, 1), componentwiseMin(createResource(1, 1), createResource(2, 2))); assertEquals(createResource(1, 1), @@ -147,7 +153,7 @@ public class TestResources { } @Test - public void testComponentwiseMax() { + void testComponentwiseMax() { assertEquals(createResource(2, 2), componentwiseMax(createResource(1, 1), createResource(2, 2))); assertEquals(createResource(2, 2), @@ -165,7 +171,7 @@ public class TestResources { } @Test - public void testAdd() { + void testAdd() { assertEquals(createResource(2, 3), add(createResource(1, 
1), createResource(1, 2))); assertEquals(createResource(3, 2), @@ -177,7 +183,7 @@ public class TestResources { } @Test - public void testSubtract() { + void testSubtract() { assertEquals(createResource(1, 0), subtract(createResource(2, 1), createResource(1, 1))); assertEquals(createResource(0, 1), @@ -189,7 +195,7 @@ public class TestResources { } @Test - public void testClone() { + void testClone() { assertEquals(createResource(1, 1), Resources.clone(createResource(1, 1))); assertEquals(createResource(1, 1, 0), Resources.clone(createResource(1, 1))); @@ -200,7 +206,7 @@ public class TestResources { } @Test - public void testMultiply() { + void testMultiply() { assertEquals(createResource(4, 2), multiply(createResource(2, 1), 2)); assertEquals(createResource(4, 2, 0), multiply(createResource(2, 1), 2)); assertEquals(createResource(2, 4), multiply(createResource(1, 2), 2)); @@ -209,61 +215,74 @@ public class TestResources { assertEquals(createResource(4, 4, 6), multiply(createResource(2, 2, 3), 2)); } - @Test(timeout=10000) - public void testMultiplyRoundUp() { + @Test + @Timeout(10000) + void testMultiplyRoundUp() { final double by = 0.5; final String memoryErrorMsg = "Invalid memory size."; final String vcoreErrorMsg = "Invalid virtual core number."; Resource resource = Resources.createResource(1, 1); Resource result = Resources.multiplyAndRoundUp(resource, by); - assertEquals(memoryErrorMsg, result.getMemorySize(), 1); - assertEquals(vcoreErrorMsg, result.getVirtualCores(), 1); + assertEquals(result.getMemorySize(), 1, memoryErrorMsg); + assertEquals(result.getVirtualCores(), 1, vcoreErrorMsg); resource = Resources.createResource(2, 2); result = Resources.multiplyAndRoundUp(resource, by); - assertEquals(memoryErrorMsg, result.getMemorySize(), 1); - assertEquals(vcoreErrorMsg, result.getVirtualCores(), 1); + assertEquals(result.getMemorySize(), 1, memoryErrorMsg); + assertEquals(result.getVirtualCores(), 1, vcoreErrorMsg); resource = Resources.createResource(0, 0); result = Resources.multiplyAndRoundUp(resource, by); - assertEquals(memoryErrorMsg, result.getMemorySize(), 0); - assertEquals(vcoreErrorMsg, result.getVirtualCores(), 0); + assertEquals(result.getMemorySize(), 0, memoryErrorMsg); + assertEquals(result.getVirtualCores(), 0, vcoreErrorMsg); } @Test - public void testMultiplyAndRoundUpCustomResources() { - assertEquals(INVALID_RESOURCE_MSG, createResource(5, 2, 8), - multiplyAndRoundUp(createResource(3, 1, 5), 1.5)); - assertEquals(INVALID_RESOURCE_MSG, createResource(5, 2, 0), - multiplyAndRoundUp(createResource(3, 1, 0), 1.5)); - assertEquals(INVALID_RESOURCE_MSG, createResource(5, 5, 0), - multiplyAndRoundUp(createResource(3, 3, 0), 1.5)); - assertEquals(INVALID_RESOURCE_MSG, createResource(8, 3, 13), - multiplyAndRoundUp(createResource(3, 1, 5), 2.5)); - assertEquals(INVALID_RESOURCE_MSG, createResource(8, 3, 0), - multiplyAndRoundUp(createResource(3, 1, 0), 2.5)); - assertEquals(INVALID_RESOURCE_MSG, createResource(8, 8, 0), - multiplyAndRoundUp(createResource(3, 3, 0), 2.5)); + void testMultiplyAndRoundUpCustomResources() { + assertEquals(createResource(5, 2, 8), + multiplyAndRoundUp(createResource(3, 1, 5), 1.5), + INVALID_RESOURCE_MSG); + assertEquals(createResource(5, 2, 0), + multiplyAndRoundUp(createResource(3, 1, 0), 1.5), + INVALID_RESOURCE_MSG); + assertEquals(createResource(5, 5, 0), + multiplyAndRoundUp(createResource(3, 3, 0), 1.5), + INVALID_RESOURCE_MSG); + assertEquals(createResource(8, 3, 13), + multiplyAndRoundUp(createResource(3, 1, 5), 2.5), + 
INVALID_RESOURCE_MSG); + assertEquals(createResource(8, 3, 0), + multiplyAndRoundUp(createResource(3, 1, 0), 2.5), + INVALID_RESOURCE_MSG); + assertEquals(createResource(8, 8, 0), + multiplyAndRoundUp(createResource(3, 3, 0), 2.5), + INVALID_RESOURCE_MSG); } @Test - public void testMultiplyAndRoundDown() { - assertEquals(INVALID_RESOURCE_MSG, createResource(4, 1), - multiplyAndRoundDown(createResource(3, 1), 1.5)); - assertEquals(INVALID_RESOURCE_MSG, createResource(4, 1, 0), - multiplyAndRoundDown(createResource(3, 1), 1.5)); - assertEquals(INVALID_RESOURCE_MSG, createResource(1, 4), - multiplyAndRoundDown(createResource(1, 3), 1.5)); - assertEquals(INVALID_RESOURCE_MSG, createResource(1, 4, 0), - multiplyAndRoundDown(createResource(1, 3), 1.5)); - assertEquals(INVALID_RESOURCE_MSG, createResource(7, 7, 0), - multiplyAndRoundDown(createResource(3, 3, 0), 2.5)); - assertEquals(INVALID_RESOURCE_MSG, createResource(2, 2, 7), - multiplyAndRoundDown(createResource(1, 1, 3), 2.5)); + void testMultiplyAndRoundDown() { + assertEquals(createResource(4, 1), + multiplyAndRoundDown(createResource(3, 1), 1.5), + INVALID_RESOURCE_MSG); + assertEquals(createResource(4, 1, 0), + multiplyAndRoundDown(createResource(3, 1), 1.5), + INVALID_RESOURCE_MSG); + assertEquals(createResource(1, 4), + multiplyAndRoundDown(createResource(1, 3), 1.5), + INVALID_RESOURCE_MSG); + assertEquals(createResource(1, 4, 0), + multiplyAndRoundDown(createResource(1, 3), 1.5), + INVALID_RESOURCE_MSG); + assertEquals(createResource(7, 7, 0), + multiplyAndRoundDown(createResource(3, 3, 0), 2.5), + INVALID_RESOURCE_MSG); + assertEquals(createResource(2, 2, 7), + multiplyAndRoundDown(createResource(1, 1, 3), 2.5), + INVALID_RESOURCE_MSG); } @Test - public void testMultiplyAndAddTo() throws Exception { + void testMultiplyAndAddTo() throws Exception { unsetExtraResourceType(); setupExtraResourceType(); assertEquals(createResource(6, 4), @@ -283,7 +302,7 @@ public class TestResources { } @Test - public void testCreateResourceWithSameLongValue() throws Exception { + void testCreateResourceWithSameLongValue() throws Exception { unsetExtraResourceType(); setupExtraResourceType(); @@ -294,7 +313,7 @@ public class TestResources { } @Test - public void testCreateResourceWithSameIntValue() throws Exception { + void testCreateResourceWithSameIntValue() throws Exception { unsetExtraResourceType(); setupExtraResourceType(); @@ -305,14 +324,14 @@ public class TestResources { } @Test - public void testCreateSimpleResourceWithSameLongValue() { + void testCreateSimpleResourceWithSameLongValue() { Resource res = ResourceUtils.createResourceWithSameValue(11L); assertEquals(11L, res.getMemorySize()); assertEquals(11, res.getVirtualCores()); } @Test - public void testCreateSimpleResourceWithSameIntValue() { + void testCreateSimpleResourceWithSameIntValue() { Resource res = ResourceUtils.createResourceWithSameValue(11); assertEquals(11, res.getMemorySize()); assertEquals(11, res.getVirtualCores()); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/timeline/TestShortenedFlowName.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/timeline/TestShortenedFlowName.java index f9295579ea2..e69734199b6 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/timeline/TestShortenedFlowName.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/timeline/TestShortenedFlowName.java @@ -17,12 +17,14 @@ */ package org.apache.hadoop.yarn.util.timeline; -import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.Assert; -import org.junit.Test; - import java.util.UUID; +import org.junit.jupiter.api.Test; + +import org.apache.hadoop.yarn.conf.YarnConfiguration; + +import static org.junit.jupiter.api.Assertions.assertEquals; + /** * Test case for limiting flow name size. */ @@ -31,22 +33,22 @@ public class TestShortenedFlowName { private static final String TEST_FLOW_NAME = "TestFlowName"; @Test - public void testRemovingUUID() { + void testRemovingUUID() { String flowName = TEST_FLOW_NAME + "-" + UUID.randomUUID(); flowName = TimelineUtils.removeUUID(flowName); - Assert.assertEquals(TEST_FLOW_NAME, flowName); + assertEquals(TEST_FLOW_NAME, flowName); } @Test - public void testShortenedFlowName() { + void testShortenedFlowName() { YarnConfiguration conf = new YarnConfiguration(); String flowName = TEST_FLOW_NAME + UUID.randomUUID(); conf.setInt(YarnConfiguration.FLOW_NAME_MAX_SIZE, 8); String shortenedFlowName = TimelineUtils.shortenFlowName(flowName, conf); - Assert.assertEquals("TestFlow", shortenedFlowName); + assertEquals("TestFlow", shortenedFlowName); conf.setInt(YarnConfiguration.FLOW_NAME_MAX_SIZE, YarnConfiguration.FLOW_NAME_DEFAULT_MAX_SIZE); shortenedFlowName = TimelineUtils.shortenFlowName(flowName, conf); - Assert.assertEquals(TEST_FLOW_NAME, shortenedFlowName); + assertEquals(TEST_FLOW_NAME, shortenedFlowName); } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/JerseyTestBase.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/JerseyTestBase.java index d537fa748f9..6578248cae0 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/JerseyTestBase.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/JerseyTestBase.java @@ -21,11 +21,11 @@ package org.apache.hadoop.yarn.webapp; import java.io.IOException; import java.util.Random; -import org.apache.hadoop.net.ServerSocketUtil; - import com.sun.jersey.test.framework.JerseyTest; import com.sun.jersey.test.framework.WebAppDescriptor; +import org.apache.hadoop.net.ServerSocketUtil; + public abstract class JerseyTestBase extends JerseyTest { public JerseyTestBase(WebAppDescriptor appDescriptor) { super(appDescriptor); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestJAXBContextResolver.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestJAXBContextResolver.java index 6f6ee5d4b53..242bf047805 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestJAXBContextResolver.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestJAXBContextResolver.java @@ -21,17 +21,16 @@ package org.apache.hadoop.yarn.webapp; import java.util.Arrays; import java.util.HashSet; import java.util.Set; - import javax.ws.rs.ext.ContextResolver; import javax.ws.rs.ext.Provider; import javax.xml.bind.JAXBContext; -import org.apache.hadoop.yarn.webapp.MyTestWebService.MyInfo; - import com.google.inject.Singleton; import 
com.sun.jersey.api.json.JSONConfiguration; import com.sun.jersey.api.json.JSONJAXBContext; +import org.apache.hadoop.yarn.webapp.MyTestWebService.MyInfo; + @Singleton @Provider public class MyTestJAXBContextResolver implements ContextResolver { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java index 74623a4fed9..1d0a01ea53d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java @@ -27,6 +27,7 @@ import javax.xml.bind.annotation.XmlAccessorType; import javax.xml.bind.annotation.XmlRootElement; import com.google.inject.Singleton; + import org.apache.hadoop.http.JettyUtils; @Singleton diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestParseRoute.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestParseRoute.java index 87d62cc7e77..7d22aaea9d2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestParseRoute.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestParseRoute.java @@ -20,64 +20,76 @@ package org.apache.hadoop.yarn.webapp; import java.util.Arrays; -import org.apache.hadoop.yarn.webapp.WebApp; -import org.apache.hadoop.yarn.webapp.WebAppException; -import org.junit.Test; -import static org.junit.Assert.*; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertThrows; public class TestParseRoute { - @Test public void testNormalAction() { + @Test + void testNormalAction() { assertEquals(Arrays.asList("/foo/action", "foo", "action", ":a1", ":a2"), - WebApp.parseRoute("/foo/action/:a1/:a2")); + WebApp.parseRoute("/foo/action/:a1/:a2")); } - @Test public void testDefaultController() { + @Test + void testDefaultController() { assertEquals(Arrays.asList("/", "default", "index"), - WebApp.parseRoute("/")); + WebApp.parseRoute("/")); } - @Test public void testDefaultAction() { + @Test + void testDefaultAction() { assertEquals(Arrays.asList("/foo", "foo", "index"), - WebApp.parseRoute("/foo")); + WebApp.parseRoute("/foo")); assertEquals(Arrays.asList("/foo", "foo", "index"), - WebApp.parseRoute("/foo/")); + WebApp.parseRoute("/foo/")); } - @Test public void testMissingAction() { + @Test + void testMissingAction() { assertEquals(Arrays.asList("/foo", "foo", "index", ":a1"), - WebApp.parseRoute("/foo/:a1")); + WebApp.parseRoute("/foo/:a1")); } - @Test public void testDefaultCapture() { + @Test + void testDefaultCapture() { assertEquals(Arrays.asList("/", "default", "index", ":a"), - WebApp.parseRoute("/:a")); + WebApp.parseRoute("/:a")); } - @Test public void testPartialCapture1() { + @Test + void testPartialCapture1() { assertEquals(Arrays.asList("/foo/action/bar", "foo", "action", "bar", ":a"), - WebApp.parseRoute("/foo/action/bar/:a")); + WebApp.parseRoute("/foo/action/bar/:a")); } - @Test public void testPartialCapture2() { + @Test + void testPartialCapture2() { assertEquals(Arrays.asList("/foo/action", "foo", "action", ":a1", "bar", - ":a2", ":a3"), - 
WebApp.parseRoute("/foo/action/:a1/bar/:a2/:a3")); + ":a2", ":a3"), + WebApp.parseRoute("/foo/action/:a1/bar/:a2/:a3")); } - @Test public void testLeadingPaddings() { + @Test + void testLeadingPaddings() { assertEquals(Arrays.asList("/foo/action", "foo", "action", ":a"), - WebApp.parseRoute(" /foo/action/ :a")); + WebApp.parseRoute(" /foo/action/ :a")); } - @Test public void testTrailingPaddings() { + @Test + void testTrailingPaddings() { assertEquals(Arrays.asList("/foo/action", "foo", "action", ":a"), - WebApp.parseRoute("/foo/action//:a / ")); + WebApp.parseRoute("/foo/action//:a / ")); assertEquals(Arrays.asList("/foo/action", "foo", "action"), - WebApp.parseRoute("/foo/action / ")); + WebApp.parseRoute("/foo/action / ")); } - @Test(expected=WebAppException.class) public void testMissingLeadingSlash() { - WebApp.parseRoute("foo/bar"); + @Test + void testMissingLeadingSlash() { + assertThrows(WebAppException.class, () -> { + WebApp.parseRoute("foo/bar"); + }); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestSubViews.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestSubViews.java index 075bed216dc..fd7525167f8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestSubViews.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestSubViews.java @@ -18,15 +18,18 @@ package org.apache.hadoop.yarn.webapp; +import java.io.PrintWriter; +import javax.servlet.http.HttpServletResponse; + +import com.google.inject.Injector; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.webapp.test.WebAppTests; import org.apache.hadoop.yarn.webapp.view.HtmlBlock; import org.apache.hadoop.yarn.webapp.view.HtmlPage; -import java.io.PrintWriter; -import javax.servlet.http.HttpServletResponse; -import com.google.inject.Injector; -import org.junit.Test; -import static org.mockito.Mockito.*; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; public class TestSubViews { @@ -61,7 +64,8 @@ public class TestSubViews { } } - @Test public void testSubView() throws Exception { + @Test + void testSubView() throws Exception { Injector injector = WebAppTests.createMockInjector(this); injector.getInstance(MainView.class).render(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java index 98b75054268..7d7a1575b47 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java @@ -18,22 +18,16 @@ package org.apache.hadoop.yarn.webapp; -import static org.apache.hadoop.yarn.util.StringHelper.join; -import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_TABLE; -import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES; -import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES_ID; -import static org.apache.hadoop.yarn.webapp.view.JQueryUI._INFO_WRAP; -import static org.apache.hadoop.yarn.webapp.view.JQueryUI._TH; -import static org.apache.hadoop.yarn.webapp.view.JQueryUI.initID; -import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit; -import static 
org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; - import java.io.InputStream; import java.net.HttpURLConnection; import java.net.URL; import java.net.URLEncoder; +import com.google.inject.Inject; +import org.junit.jupiter.api.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + import org.apache.commons.lang3.ArrayUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.net.ServerSocketUtil; @@ -42,11 +36,18 @@ import org.apache.hadoop.yarn.webapp.view.HtmlPage; import org.apache.hadoop.yarn.webapp.view.JQueryUI; import org.apache.hadoop.yarn.webapp.view.RobotsTextPage; import org.apache.hadoop.yarn.webapp.view.TextPage; -import org.junit.Test; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import com.google.inject.Inject; +import static org.apache.hadoop.yarn.util.StringHelper.join; +import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_TABLE; +import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES; +import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES_ID; +import static org.apache.hadoop.yarn.webapp.view.JQueryUI._INFO_WRAP; +import static org.apache.hadoop.yarn.webapp.view.JQueryUI._TH; +import static org.apache.hadoop.yarn.webapp.view.JQueryUI.initID; +import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestWebApp { static final Logger LOG = LoggerFactory.getLogger(TestWebApp.class); @@ -150,12 +151,14 @@ public class TestWebApp { String echo(String s) { return s; } - @Test public void testCreate() { + @Test + void testCreate() { WebApp app = WebApps.$for(this).start(); app.stop(); } - @Test public void testCreateWithPort() { + @Test + void testCreateWithPort() { // see if the ephemeral port is updated WebApp app = WebApps.$for(this).at(0).start(); int port = app.getListenerAddress().getPort(); @@ -167,72 +170,80 @@ public class TestWebApp { app.stop(); } - @Test(expected=org.apache.hadoop.yarn.webapp.WebAppException.class) - public void testCreateWithBindAddressNonZeroPort() { - WebApp app = WebApps.$for(this).at("0.0.0.0:50000").start(); - int port = app.getListenerAddress().getPort(); - assertEquals(50000, port); - // start another WebApp with same NonZero port - WebApp app2 = WebApps.$for(this).at("0.0.0.0:50000").start(); - // An exception occurs (findPort disabled) - app.stop(); - app2.stop(); + @Test + void testCreateWithBindAddressNonZeroPort() { + assertThrows(org.apache.hadoop.yarn.webapp.WebAppException.class, () -> { + WebApp app = WebApps.$for(this).at("0.0.0.0:50000").start(); + int port = app.getListenerAddress().getPort(); + assertEquals(50000, port); + // start another WebApp with same NonZero port + WebApp app2 = WebApps.$for(this).at("0.0.0.0:50000").start(); + // An exception occurs (findPort disabled) + app.stop(); + app2.stop(); + }); } - @Test(expected=org.apache.hadoop.yarn.webapp.WebAppException.class) - public void testCreateWithNonZeroPort() { - WebApp app = WebApps.$for(this).at(50000).start(); - int port = app.getListenerAddress().getPort(); - assertEquals(50000, port); - // start another WebApp with same NonZero port - WebApp app2 = WebApps.$for(this).at(50000).start(); - // An exception occurs (findPort disabled) - app.stop(); - app2.stop(); + @Test + void testCreateWithNonZeroPort() { + 
assertThrows(org.apache.hadoop.yarn.webapp.WebAppException.class, () -> { + WebApp app = WebApps.$for(this).at(50000).start(); + int port = app.getListenerAddress().getPort(); + assertEquals(50000, port); + // start another WebApp with same NonZero port + WebApp app2 = WebApps.$for(this).at(50000).start(); + // An exception occurs (findPort disabled) + app.stop(); + app2.stop(); + }); } - @Test public void testServePaths() { + @Test + void testServePaths() { WebApp app = WebApps.$for("test", this).start(); assertEquals("/test", app.getRedirectPath()); - String[] expectedPaths = { "/test", "/test/*" }; + String[] expectedPaths = {"/test", "/test/*"}; String[] pathSpecs = app.getServePathSpecs(); - + assertEquals(2, pathSpecs.length); - for(int i = 0; i < expectedPaths.length; i++) { + for (int i = 0; i < expectedPaths.length; i++) { assertTrue(ArrayUtils.contains(pathSpecs, expectedPaths[i])); } app.stop(); } - @Test public void testServePathsNoName() { + @Test + void testServePathsNoName() { WebApp app = WebApps.$for("", this).start(); assertEquals("/", app.getRedirectPath()); - String[] expectedPaths = { "/*" }; + String[] expectedPaths = {"/*"}; String[] pathSpecs = app.getServePathSpecs(); - + assertEquals(1, pathSpecs.length); - for(int i = 0; i < expectedPaths.length; i++) { + for (int i = 0; i < expectedPaths.length; i++) { assertTrue(ArrayUtils.contains(pathSpecs, expectedPaths[i])); } app.stop(); } - @Test public void testDefaultRoutes() throws Exception { + @Test + void testDefaultRoutes() throws Exception { WebApp app = WebApps.$for("test", this).start(); String baseUrl = baseUrl(app); try { - assertEquals("foo", getContent(baseUrl +"test/foo").trim()); - assertEquals("foo", getContent(baseUrl +"test/foo/index").trim()); - assertEquals("bar", getContent(baseUrl +"test/foo/bar").trim()); - assertEquals("default", getContent(baseUrl +"test").trim()); - assertEquals("default", getContent(baseUrl +"test/").trim()); + assertEquals("foo", getContent(baseUrl + "test/foo").trim()); + assertEquals("foo", getContent(baseUrl + "test/foo/index").trim()); + assertEquals("bar", getContent(baseUrl + "test/foo/bar").trim()); + assertEquals("default", getContent(baseUrl + "test").trim()); + assertEquals("default", getContent(baseUrl + "test/").trim()); assertEquals("default", getContent(baseUrl).trim()); } finally { app.stop(); } } - @Test public void testCustomRoutes() throws Exception { + @Test + void testCustomRoutes() throws Exception { WebApp app = WebApps.$for("test", TestWebApp.class, this, "ws").start(new WebApp() { @Override @@ -249,21 +260,22 @@ public class TestWebApp { String baseUrl = baseUrl(app); try { assertEquals("foo", getContent(baseUrl).trim()); - assertEquals("foo", getContent(baseUrl +"test").trim()); - assertEquals("foo1", getContent(baseUrl +"test/1").trim()); - assertEquals("bar", getContent(baseUrl +"test/bar/foo").trim()); - assertEquals("default", getContent(baseUrl +"test/foo/bar").trim()); - assertEquals("default1", getContent(baseUrl +"test/foo/1").trim()); - assertEquals("default2", getContent(baseUrl +"test/foo/bar/2").trim()); - assertEquals(404, getResponseCode(baseUrl +"test/goo")); - assertEquals(200, getResponseCode(baseUrl +"ws/v1/test")); - assertTrue(getContent(baseUrl +"ws/v1/test").contains("myInfo")); + assertEquals("foo", getContent(baseUrl + "test").trim()); + assertEquals("foo1", getContent(baseUrl + "test/1").trim()); + assertEquals("bar", getContent(baseUrl + "test/bar/foo").trim()); + assertEquals("default", getContent(baseUrl + 
"test/foo/bar").trim()); + assertEquals("default1", getContent(baseUrl + "test/foo/1").trim()); + assertEquals("default2", getContent(baseUrl + "test/foo/bar/2").trim()); + assertEquals(404, getResponseCode(baseUrl + "test/goo")); + assertEquals(200, getResponseCode(baseUrl + "ws/v1/test")); + assertTrue(getContent(baseUrl + "ws/v1/test").contains("myInfo")); } finally { app.stop(); } } - @Test public void testEncodedUrl() throws Exception { + @Test + void testEncodedUrl() throws Exception { WebApp app = WebApps.$for("test", TestWebApp.class, this, "ws").start(new WebApp() { @Override @@ -292,7 +304,8 @@ public class TestWebApp { } } - @Test public void testRobotsText() throws Exception { + @Test + void testRobotsText() throws Exception { WebApp app = WebApps.$for("test", TestWebApp.class, this, "ws").start(new WebApp() { @Override @@ -319,18 +332,20 @@ public class TestWebApp { // This is to test the GuiceFilter should only be applied to webAppContext, // not to logContext; - @Test public void testYARNWebAppContext() throws Exception { + @Test + void testYARNWebAppContext() throws Exception { // setting up the log context System.setProperty("hadoop.log.dir", "/Not/Existing/dir"); WebApp app = WebApps.$for("test", this).start(new WebApp() { - @Override public void setup() { + @Override + public void setup() { route("/", FooController.class); } }); String baseUrl = baseUrl(app); try { // Not able to access a non-existing dir, should not redirect to foo. - assertEquals(404, getResponseCode(baseUrl +"logs")); + assertEquals(404, getResponseCode(baseUrl + "logs")); // should be able to redirect to foo. assertEquals("foo", getContent(baseUrl).trim()); } finally { @@ -345,7 +360,7 @@ public class TestWebApp { } @Test - public void testPortRanges() throws Exception { + void testPortRanges() throws Exception { WebApp app = WebApps.$for("test", this).start(); String baseUrl = baseUrl(app); WebApp app1 = null; @@ -355,7 +370,7 @@ public class TestWebApp { WebApp app5 = null; try { int port = ServerSocketUtil.waitForPort(48000, 60); - assertEquals("foo", getContent(baseUrl +"test/foo").trim()); + assertEquals("foo", getContent(baseUrl + "test/foo").trim()); app1 = WebApps.$for("test", this).at(port).start(); assertEquals(port, app1.getListenerAddress().getPort()); app2 = WebApps.$for("test", this).at("0.0.0.0", port, true).start(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/WebServicesTestUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/WebServicesTestUtils.java index b4832eff420..ce93b06e70a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/WebServicesTestUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/WebServicesTestUtils.java @@ -18,9 +18,6 @@ package org.apache.hadoop.yarn.webapp; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; - import java.util.ArrayList; import java.util.List; import javax.ws.rs.core.Response.StatusType; @@ -30,6 +27,8 @@ import org.w3c.dom.Element; import org.w3c.dom.Node; import org.w3c.dom.NodeList; +import static org.assertj.core.api.Assertions.assertThat; + public class WebServicesTestUtils { public static long getXmlLong(Element element, String name) { String val = getXmlString(element, name); @@ -121,30 +120,24 @@ public class WebServicesTestUtils { } public static void 
checkStringMatch(String print, String expected, String got) { - assertTrue( - print + " doesn't match, got: " + got + " expected: " + expected, - got.matches(expected)); + assertThat(got).as(print).matches(expected); } public static void checkStringContains(String print, String expected, String got) { - assertTrue( - print + " doesn't contain expected string, got: " + got + " expected: " + expected, - got.contains(expected)); + assertThat(got).as(print).contains(expected); } public static void checkStringEqual(String print, String expected, String got) { - assertTrue( - print + " is not equal, got: " + got + " expected: " + expected, - got.equals(expected)); + assertThat(got).as(print).isEqualTo(expected); } public static void assertResponseStatusCode(StatusType expected, StatusType actual) { - assertResponseStatusCode(null, expected, actual); + assertThat(expected.getStatusCode()).isEqualTo(actual.getStatusCode()); } public static void assertResponseStatusCode(String errmsg, StatusType expected, StatusType actual) { - assertEquals(errmsg, expected.getStatusCode(), actual.getStatusCode()); + assertThat(expected.getStatusCode()).withFailMessage(errmsg).isEqualTo(actual.getStatusCode()); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestHamlet.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestHamlet.java index 275b64cb79d..6aa499fad4f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestHamlet.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/hamlet2/TestHamlet.java @@ -17,19 +17,27 @@ */ package org.apache.hadoop.yarn.webapp.hamlet2; -import java.util.EnumSet; import java.io.PrintWriter; -import org.junit.Test; +import java.util.EnumSet; + +import org.junit.jupiter.api.Test; import org.apache.hadoop.yarn.webapp.SubView; -import static org.apache.hadoop.yarn.webapp.hamlet2.HamletSpec.*; -import static org.junit.Assert.assertEquals; -import static org.mockito.Mockito.*; +import static org.apache.hadoop.yarn.webapp.hamlet2.HamletSpec.LinkType; +import static org.apache.hadoop.yarn.webapp.hamlet2.HamletSpec.Media; +import static org.apache.hadoop.yarn.webapp.hamlet2.HamletSpec.TABLE; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.mockito.Mockito.atLeast; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; public class TestHamlet { - @Test public void testHamlet() { + @Test + void testHamlet() { Hamlet h = newHamlet(). title("test"). h1("heading 1"). @@ -69,7 +77,8 @@ public class TestHamlet { verify(out, never()).print("

    "); } - @Test public void testTable() { + @Test + void testTable() { Hamlet h = newHamlet(). title("test table"). link("style.css"); @@ -90,7 +99,8 @@ public class TestHamlet { verify(out, atLeast(1)).print("
    %n Multiple_line_value%n %n This is one line.%n %n Multiple_line_value%n %n This is one line.%n
    %n Multiple_line_value%n %n
    %n" - + " This is first line.%n
    %n
    %n" - + " This is second line.%n
    %n"); + + "
    %n Multiple_line_value%n %n
    %n" + + " This is first line.%n
    %n
    %n" + + " This is second line.%n
    %n"); assertTrue(output.contains(expectedMultilineData1) && output.contains(expectedMultilineData2)); } - - @Test(timeout=60000L) - public void testJavaScriptInfoBlock() throws Exception{ + + @Test + @Timeout(60000L) + void testJavaScriptInfoBlock() throws Exception { WebAppTests.testBlock(JavaScriptInfoBlock.class); TestInfoBlock.pw.flush(); String output = TestInfoBlock.sw.toString(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestTwoColumnCssPage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestTwoColumnCssPage.java index 20df4093ad3..a6ec759e893 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestTwoColumnCssPage.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestTwoColumnCssPage.java @@ -18,11 +18,12 @@ package org.apache.hadoop.yarn.webapp.view; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.MockApps; import org.apache.hadoop.yarn.webapp.Controller; import org.apache.hadoop.yarn.webapp.WebApps; import org.apache.hadoop.yarn.webapp.test.WebAppTests; -import org.junit.Test; public class TestTwoColumnCssPage { @@ -57,7 +58,8 @@ public class TestTwoColumnCssPage { } } - @Test public void shouldNotThrow() { + @Test + void shouldNotThrow() { WebAppTests.testPage(TwoColumnCssLayout.class); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestTwoColumnLayout.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestTwoColumnLayout.java index 52ae6ae9bd5..d2e4176d772 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestTwoColumnLayout.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/TestTwoColumnLayout.java @@ -18,10 +18,11 @@ package org.apache.hadoop.yarn.webapp.view; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.webapp.Controller; import org.apache.hadoop.yarn.webapp.WebApps; import org.apache.hadoop.yarn.webapp.test.WebAppTests; -import org.junit.Test; public class TestTwoColumnLayout { @@ -34,7 +35,8 @@ public class TestTwoColumnLayout { } } - @Test public void shouldNotThrow() { + @Test + void shouldNotThrow() { WebAppTests.testPage(TwoColumnLayout.class); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml index fdc36667bfe..6837de80014 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml @@ -144,6 +144,7 @@ org.hsqldb hsqldb test + jdk8 com.microsoft.sqlserver diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/ZKClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/ZKClient.java index ba130c61ba0..21cbe20ab48 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/ZKClient.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/ZKClient.java @@ -40,18 +40,19 @@ public class ZKClient { * the zookeeper client library to * talk to zookeeper * @param string the host - * @throws IOException + * @throws IOException if there are I/O errors. */ public ZKClient(String string) throws IOException { zkClient = new ZooKeeper(string, 30000, new ZKWatcher()); } /** - * register the service to a specific path + * register the service to a specific path. + * * @param path the path in zookeeper namespace to register to * @param data the data that is part of this registration - * @throws IOException - * @throws InterruptedException + * @throws IOException if there are I/O errors. + * @throws InterruptedException if any thread has interrupted. */ public void registerService(String path, String data) throws IOException, InterruptedException { @@ -64,13 +65,14 @@ public class ZKClient { } /** - * unregister the service. + * unregister the service. + * * @param path the path at which the service was registered - * @throws IOException - * @throws InterruptedException + * @throws IOException if there are I/O errors. + * @throws InterruptedException if any thread has interrupted. */ public void unregisterService(String path) throws IOException, - InterruptedException { + InterruptedException { try { zkClient.delete(path, -1); } catch(KeeperException ke) { @@ -79,15 +81,16 @@ public class ZKClient { } /** - * list the services registered under a path + * list the services registered under a path. + * * @param path the path under which services are * registered * @return the list of names of services registered - * @throws IOException - * @throws InterruptedException + * @throws IOException if there are I/O errors. + * @throws InterruptedException if any thread has interrupted. */ public List listServices(String path) throws IOException, - InterruptedException { + InterruptedException { List children = null; try { children = zkClient.getChildren(path, false); @@ -98,14 +101,15 @@ public class ZKClient { } /** - * get data published by the service at the registration address + * get data published by the service at the registration address. + * * @param path the path where the service is registered * @return the data of the registered service - * @throws IOException - * @throws InterruptedException + * @throws IOException if there are I/O errors. + * @throws InterruptedException if any thread has interrupted. */ public String getServiceData(String path) throws IOException, - InterruptedException { + InterruptedException { String data; try { Stat stat = new Stat(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/package-info.java index d4fa452c3ae..ba287fdfe43 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/package-info.java @@ -15,7 +15,11 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -@InterfaceAudience.Private -package org.apache.hadoop.yarn.lib; -import org.apache.hadoop.classification.InterfaceAudience; + +/** + * This package contains zkClient related classes. + */ +@Private +package org.apache.hadoop.yarn.lib; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMHeartbeatRequestHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMHeartbeatRequestHandler.java index 9a73fb308ce..cb59d41505d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMHeartbeatRequestHandler.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMHeartbeatRequestHandler.java @@ -153,6 +153,7 @@ public class AMHeartbeatRequestHandler extends Thread { /** * Set the UGI for RM connection. + * @param ugi UserGroupInformation. */ public void setUGI(UserGroupInformation ugi) { this.userUgi = ugi; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/SCMUploaderProtocol.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/SCMUploaderProtocol.java index 937f648510c..b73a02af6c5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/SCMUploaderProtocol.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/SCMUploaderProtocol.java @@ -53,8 +53,8 @@ public interface SCMUploaderProtocol { * to the shared cache * @return response indicating if the newly uploaded resource should be * deleted - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException if there are I/O errors. */ public SCMUploaderNotifyResponse notify(SCMUploaderNotifyRequest request) @@ -73,8 +73,8 @@ public interface SCMUploaderProtocol { * * @param request whether the resource can be uploaded to the shared cache * @return response indicating if resource can be uploaded to the shared cache - * @throws YarnException - * @throws IOException + * @throws YarnException exceptions from yarn servers. + * @throws IOException if there are I/O errors. */ public SCMUploaderCanUploadResponse canUpload(SCMUploaderCanUploadRequest request) diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/ServerRMProxy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/ServerRMProxy.java index 50eed3a75d5..fa7e390a947 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/ServerRMProxy.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/ServerRMProxy.java @@ -49,7 +49,7 @@ public class ServerRMProxy extends RMProxy { * @param protocol Server protocol for which proxy is being requested. * @param Type of proxy. 
* @return Proxy to the ResourceManager for the specified server protocol. - * @throws IOException + * @throws IOException if there are I/O errors. */ public static T createRMProxy(final Configuration configuration, final Class protocol) throws IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NMContainerStatus.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NMContainerStatus.java index 065918d5eb9..6027ce7452e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NMContainerStatus.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NMContainerStatus.java @@ -120,14 +120,18 @@ public abstract class NMContainerStatus { public abstract void setPriority(Priority priority); /** - * Get the time when the container is created + * Get the time when the container is created. + * + * @return CreationTime. */ public abstract long getCreationTime(); public abstract void setCreationTime(long creationTime); /** - * Get the node-label-expression in the original ResourceRequest + * Get the node-label-expression in the original ResourceRequest. + * + * @return NodeLabelExpression. */ public abstract String getNodeLabelExpression(); @@ -167,6 +171,7 @@ public abstract class NMContainerStatus { /** * Get and set the Allocation tags associated with the container. + * @return Allocation tags. */ public Set getAllocationTags() { return Collections.emptySet(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RemoteNode.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RemoteNode.java index 67ad5bac294..72dcb6e9914 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RemoteNode.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RemoteNode.java @@ -148,7 +148,7 @@ public abstract class RemoteNode implements Comparable { /** * Set Node Partition. - * @param nodePartition + * @param nodePartition node Partition. 
*/ @Private @Unstable diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/amrmproxy/LocalityMulticastAMRMProxyPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/amrmproxy/LocalityMulticastAMRMProxyPolicy.java index c8f945896e4..17cc8390a11 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/amrmproxy/LocalityMulticastAMRMProxyPolicy.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/amrmproxy/LocalityMulticastAMRMProxyPolicy.java @@ -290,6 +290,10 @@ public class LocalityMulticastAMRMProxyPolicy extends AbstractAMRMProxyPolicy { /** * For unit test to override. + * + * @param bookKeeper bookKeeper + * @param allocationId allocationId. + * @return SubClusterId. */ protected SubClusterId getSubClusterForUnResolvedRequest( AllocationBookkeeper bookKeeper, long allocationId) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/retry/FederationActionRetry.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/retry/FederationActionRetry.java new file mode 100644 index 00000000000..3068526c1ea --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/retry/FederationActionRetry.java @@ -0,0 +1,45 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. + */ + +package org.apache.hadoop.yarn.server.federation.retry; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public interface FederationActionRetry { + + Logger LOG = LoggerFactory.getLogger(FederationActionRetry.class); + + T run(int retry) throws Exception; + + default T runWithRetries(int retryCount, long retrySleepTime) throws Exception { + int retry = 0; + while (true) { + try { + return run(retry); + } catch (Exception e) { + LOG.info("Exception while executing an Federation operation.", e); + if (++retry > retryCount) { + LOG.info("Maxed out Federation retries. Giving up!"); + throw e; + } + LOG.info("Retrying operation on Federation. Retry no. 
{}", retry); + Thread.sleep(retrySleepTime); + } + } + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/retry/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/retry/package-info.java new file mode 100644 index 00000000000..5d8477cfe59 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/retry/package-info.java @@ -0,0 +1,19 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +/** Federation Retry Policies. **/ +package org.apache.hadoop.yarn.server.federation.retry; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationDelegationTokenStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationDelegationTokenStateStore.java index 294c0726795..452bcf9d4ad 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationDelegationTokenStateStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/FederationDelegationTokenStateStore.java @@ -21,6 +21,8 @@ import org.apache.hadoop.classification.InterfaceStability.Unstable; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyRequest; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyResponse; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; import java.io.IOException; @@ -66,4 +68,83 @@ public interface FederationDelegationTokenStateStore { */ RouterMasterKeyResponse getMasterKeyByDelegationKey(RouterMasterKeyRequest request) throws YarnException, IOException; + + /** + * The Router Supports Store RMDelegationTokenIdentifier. + * + * @param request The request contains RouterRMToken (RMDelegationTokenIdentifier and renewDate) + * @return routerRMTokenResponse. + * @throws YarnException if the call to the state store is unsuccessful + * @throws IOException An IO Error occurred + */ + RouterRMTokenResponse storeNewToken(RouterRMTokenRequest request) + throws YarnException, IOException; + + /** + * The Router Supports Update RMDelegationTokenIdentifier. 
+ * + * @param request The request contains RouterRMToken (RMDelegationTokenIdentifier and renewDate) + * @return RouterRMTokenResponse. + * @throws YarnException if the call to the state store is unsuccessful + * @throws IOException An IO Error occurred + */ + RouterRMTokenResponse updateStoredToken(RouterRMTokenRequest request) + throws YarnException, IOException; + + /** + * The Router Supports Remove RMDelegationTokenIdentifier. + * + * @param request The request contains RouterRMToken (RMDelegationTokenIdentifier and renewDate) + * @return RouterRMTokenResponse. + * @throws YarnException if the call to the state store is unsuccessful + * @throws IOException An IO Error occurred + */ + RouterRMTokenResponse removeStoredToken(RouterRMTokenRequest request) + throws YarnException, IOException; + + /** + * The Router Supports GetTokenByRouterStoreToken. + * + * @param request The request contains RouterRMToken (RMDelegationTokenIdentifier and renewDate) + * @return RouterRMTokenResponse. + * @throws YarnException if the call to the state store is unsuccessful + * @throws IOException An IO Error occurred + */ + RouterRMTokenResponse getTokenByRouterStoreToken(RouterRMTokenRequest request) + throws YarnException, IOException; + + /** + * The Router Supports incrementDelegationTokenSeqNum. + * + * @return DelegationTokenSeqNum. + */ + int incrementDelegationTokenSeqNum(); + + /** + * The Router Supports getDelegationTokenSeqNum. + * + * @return DelegationTokenSeqNum. + */ + int getDelegationTokenSeqNum(); + + /** + * The Router Supports setDelegationTokenSeqNum. + * + * @param seqNum DelegationTokenSeqNum. + */ + void setDelegationTokenSeqNum(int seqNum); + + /** + * The Router Supports getCurrentKeyId. + * + * @return CurrentKeyId. + */ + int getCurrentKeyId(); + + /** + * The Router Supports incrementCurrentKeyId. + * + * @return CurrentKeyId. 
+ */ + int incrementCurrentKeyId(); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java index 98ddb93730a..41ade680be2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java @@ -27,12 +27,14 @@ import java.util.Map.Entry; import java.util.Set; import java.util.TimeZone; import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.atomic.AtomicInteger; import java.util.stream.Collectors; import java.util.Comparator; import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.security.token.delegation.DelegationKey; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ReservationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; @@ -83,6 +85,9 @@ import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKey; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyRequest; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyResponse; import org.apache.hadoop.yarn.server.federation.store.records.RouterRMDTSecretManagerState; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; import org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator; import org.apache.hadoop.yarn.server.federation.store.utils.FederationReservationHomeSubClusterStoreInputValidator; import org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator; @@ -106,6 +111,8 @@ public class MemoryFederationStateStore implements FederationStateStore { private Map policies; private RouterRMDTSecretManagerState routerRMSecretManagerState; private int maxAppsInStateStore; + private AtomicInteger sequenceNum; + private AtomicInteger masterKeyId; private final MonotonicClock clock = new MonotonicClock(); @@ -122,6 +129,8 @@ public class MemoryFederationStateStore implements FederationStateStore { maxAppsInStateStore = conf.getInt( YarnConfiguration.FEDERATION_STATESTORE_MAX_APPLICATIONS, YarnConfiguration.DEFAULT_FEDERATION_STATESTORE_MAX_APPLICATIONS); + sequenceNum = new AtomicInteger(); + masterKeyId = new AtomicInteger(); } @Override @@ -479,6 +488,96 @@ public class MemoryFederationStateStore implements FederationStateStore { return RouterMasterKeyResponse.newInstance(resultRouterMasterKey); } + @Override + public RouterRMTokenResponse storeNewToken(RouterRMTokenRequest request) + throws YarnException, IOException { + RouterStoreToken storeToken = request.getRouterStoreToken(); + RMDelegationTokenIdentifier tokenIdentifier = + (RMDelegationTokenIdentifier) 
storeToken.getTokenIdentifier(); + Long renewDate = storeToken.getRenewDate(); + storeOrUpdateRouterRMDT(tokenIdentifier, renewDate, false); + return RouterRMTokenResponse.newInstance(storeToken); + } + + @Override + public RouterRMTokenResponse updateStoredToken(RouterRMTokenRequest request) + throws YarnException, IOException { + RouterStoreToken storeToken = request.getRouterStoreToken(); + RMDelegationTokenIdentifier tokenIdentifier = + (RMDelegationTokenIdentifier) storeToken.getTokenIdentifier(); + Long renewDate = storeToken.getRenewDate(); + Map rmDTState = routerRMSecretManagerState.getTokenState(); + rmDTState.remove(tokenIdentifier); + storeOrUpdateRouterRMDT(tokenIdentifier, renewDate, true); + return RouterRMTokenResponse.newInstance(storeToken); + } + + @Override + public RouterRMTokenResponse removeStoredToken(RouterRMTokenRequest request) + throws YarnException, IOException { + RouterStoreToken storeToken = request.getRouterStoreToken(); + RMDelegationTokenIdentifier tokenIdentifier = + (RMDelegationTokenIdentifier) storeToken.getTokenIdentifier(); + Map rmDTState = routerRMSecretManagerState.getTokenState(); + rmDTState.remove(tokenIdentifier); + return RouterRMTokenResponse.newInstance(storeToken); + } + + @Override + public RouterRMTokenResponse getTokenByRouterStoreToken(RouterRMTokenRequest request) + throws YarnException, IOException { + RouterStoreToken storeToken = request.getRouterStoreToken(); + RMDelegationTokenIdentifier tokenIdentifier = + (RMDelegationTokenIdentifier) storeToken.getTokenIdentifier(); + Map rmDTState = routerRMSecretManagerState.getTokenState(); + if (!rmDTState.containsKey(tokenIdentifier)) { + LOG.info("Router RMDelegationToken: {} does not exist.", tokenIdentifier); + throw new IOException("Router RMDelegationToken: " + tokenIdentifier + " does not exist."); + } + RouterStoreToken resultToken = + RouterStoreToken.newInstance(tokenIdentifier, rmDTState.get(tokenIdentifier)); + return RouterRMTokenResponse.newInstance(resultToken); + } + + @Override + public int incrementDelegationTokenSeqNum() { + return sequenceNum.incrementAndGet(); + } + + @Override + public int getDelegationTokenSeqNum() { + return sequenceNum.get(); + } + + @Override + public void setDelegationTokenSeqNum(int seqNum) { + sequenceNum.set(seqNum); + } + + @Override + public int getCurrentKeyId() { + return masterKeyId.get(); + } + + @Override + public int incrementCurrentKeyId() { + return masterKeyId.incrementAndGet(); + } + + private void storeOrUpdateRouterRMDT(RMDelegationTokenIdentifier rmDTIdentifier, + Long renewDate, boolean isUpdate) throws IOException { + Map rmDTState = routerRMSecretManagerState.getTokenState(); + if (rmDTState.containsKey(rmDTIdentifier)) { + LOG.info("Error storing info for RMDelegationToken: {}.", rmDTIdentifier); + throw new IOException("Router RMDelegationToken: " + rmDTIdentifier + "is already stored."); + } + rmDTState.put(rmDTIdentifier, renewDate); + if (!isUpdate) { + routerRMSecretManagerState.setDtSequenceNumber(rmDTIdentifier.getSequenceNumber()); + } + LOG.info("Store Router RM-RMDT with sequence number {}.", rmDTIdentifier.getSequenceNumber()); + } + /** * Get DelegationKey By based on MasterKey. 
* diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java index 889c1e06413..1e3f3a12f3d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java @@ -84,6 +84,8 @@ import org.apache.hadoop.yarn.server.federation.store.records.UpdateReservationH import org.apache.hadoop.yarn.server.federation.store.records.ReservationHomeSubCluster; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyRequest; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyResponse; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; import org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator; import org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator; import org.apache.hadoop.yarn.server.federation.store.utils.FederationPolicyStoreInputValidator; @@ -175,7 +177,7 @@ public class SQLFederationStateStore implements FederationStateStore { private HikariDataSource dataSource = null; private final Clock clock = new MonotonicClock(); @VisibleForTesting - Connection conn = null; + private Connection conn = null; private int maxAppsInStateStore; @Override @@ -195,8 +197,7 @@ public class SQLFederationStateStore implements FederationStateStore { try { Class.forName(driverClass); } catch (ClassNotFoundException e) { - FederationStateStoreUtils.logAndThrowException(LOG, - "Driver class not found.", e); + FederationStateStoreUtils.logAndThrowException(LOG, "Driver class not found.", e); } // Create the data source to pool connections in a thread-safe manner @@ -207,14 +208,14 @@ public class SQLFederationStateStore implements FederationStateStore { FederationStateStoreUtils.setProperty(dataSource, FederationStateStoreUtils.FEDERATION_STORE_URL, url); dataSource.setMaximumPoolSize(maximumPoolSize); - LOG.info("Initialized connection pool to the Federation StateStore " - + "database at address: " + url); + LOG.info("Initialized connection pool to the Federation StateStore database at address: {}.", + url); + try { conn = getConnection(); LOG.debug("Connection created"); } catch (SQLException e) { - FederationStateStoreUtils.logAndThrowRetriableException(LOG, - "Not able to get Connection", e); + FederationStateStoreUtils.logAndThrowRetriableException(LOG, "Not able to get Connection", e); } maxAppsInStateStore = conf.getInt( @@ -224,32 +225,29 @@ public class SQLFederationStateStore implements FederationStateStore { @Override public SubClusterRegisterResponse registerSubCluster( - SubClusterRegisterRequest registerSubClusterRequest) - throws YarnException { + SubClusterRegisterRequest registerSubClusterRequest) throws YarnException { // Input validator - FederationMembershipStateStoreInputValidator - .validate(registerSubClusterRequest); + 
FederationMembershipStateStoreInputValidator.validate(registerSubClusterRequest); CallableStatement cstmt = null; - SubClusterInfo subClusterInfo = - registerSubClusterRequest.getSubClusterInfo(); + SubClusterInfo subClusterInfo = registerSubClusterRequest.getSubClusterInfo(); SubClusterId subClusterId = subClusterInfo.getSubClusterId(); try { cstmt = getCallableStatement(CALL_SP_REGISTER_SUBCLUSTER); // Set the parameters for the stored procedure - cstmt.setString(1, subClusterId.getId()); - cstmt.setString(2, subClusterInfo.getAMRMServiceAddress()); - cstmt.setString(3, subClusterInfo.getClientRMServiceAddress()); - cstmt.setString(4, subClusterInfo.getRMAdminServiceAddress()); - cstmt.setString(5, subClusterInfo.getRMWebServiceAddress()); - cstmt.setString(6, subClusterInfo.getState().toString()); - cstmt.setLong(7, subClusterInfo.getLastStartTime()); - cstmt.setString(8, subClusterInfo.getCapability()); - cstmt.registerOutParameter(9, java.sql.Types.INTEGER); + cstmt.setString("subClusterId_IN", subClusterId.getId()); + cstmt.setString("amRMServiceAddress_IN", subClusterInfo.getAMRMServiceAddress()); + cstmt.setString("clientRMServiceAddress_IN", subClusterInfo.getClientRMServiceAddress()); + cstmt.setString("rmAdminServiceAddress_IN", subClusterInfo.getRMAdminServiceAddress()); + cstmt.setString("rmWebServiceAddress_IN", subClusterInfo.getRMWebServiceAddress()); + cstmt.setString("state_IN", subClusterInfo.getState().toString()); + cstmt.setLong("lastStartTime_IN", subClusterInfo.getLastStartTime()); + cstmt.setString("capability_IN", subClusterInfo.getCapability()); + cstmt.registerOutParameter("rowCount_OUT", java.sql.Types.INTEGER); // Execute the query long startTime = clock.getTime(); @@ -258,30 +256,26 @@ public class SQLFederationStateStore implements FederationStateStore { // Check the ROWCOUNT value, if it is equal to 0 it means the call // did not add a new subcluster into FederationStateStore - if (cstmt.getInt(9) == 0) { - String errMsg = "SubCluster " + subClusterId - + " was not registered into the StateStore"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + int rowCount = cstmt.getInt("rowCount_OUT"); + if (rowCount == 0) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "SubCluster %s was not registered into the StateStore.", subClusterId); } // Check the ROWCOUNT value, if it is different from 1 it means the call // had a wrong behavior. Maybe the database is not set correctly. 
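The registerSubCluster hunk above moves every CallableStatement binding from positional indexes to named parameters (subClusterId_IN, rowCount_OUT, and so on), which keeps the Java code aligned with the stored-procedure signature even if the parameter order changes. Below is a self-contained JDBC sketch of the same pattern; the connection URL is illustrative and the procedure signature is the three-parameter one used by the deregister call later in this file. Binding by name requires a JDBC driver that supports named parameters on CallableStatement.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public final class NamedParameterSketch {
  public static void main(String[] args) throws Exception {
    String url = "jdbc:sqlserver://localhost;databaseName=FederationStateStore"; // illustrative
    try (Connection conn = DriverManager.getConnection(url);
         CallableStatement cstmt = conn.prepareCall("{call sp_deregisterSubCluster(?, ?, ?)}")) {
      // Bind IN parameters by name instead of by 1-based index.
      cstmt.setString("subClusterId_IN", "SC-1");
      cstmt.setString("state_IN", "SC_UNREGISTERED");
      cstmt.registerOutParameter("rowCount_OUT", Types.INTEGER);

      cstmt.executeUpdate();

      // Read the OUT parameter by the same name.
      int rowCount = cstmt.getInt("rowCount_OUT");
      System.out.println("Rows affected: " + rowCount);
    }
  }
}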
- if (cstmt.getInt(9) != 1) { - String errMsg = "Wrong behavior during registration of SubCluster " - + subClusterId + " into the StateStore"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + if (rowCount != 1) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "Wrong behavior during registration of SubCluster %s into the StateStore", + subClusterId); } - LOG.info( - "Registered the SubCluster " + subClusterId + " into the StateStore"); - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + LOG.info("Registered the SubCluster {} into the StateStore.", subClusterId); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); - FederationStateStoreUtils.logAndThrowRetriableException(LOG, - "Unable to register the SubCluster " + subClusterId - + " into the StateStore", - e); + FederationStateStoreUtils.logAndThrowRetriableException(e, + LOG, "Unable to register the SubCluster %s into the StateStore.", subClusterId); } finally { // Return to the pool the CallableStatement FederationStateStoreUtils.returnToPool(LOG, cstmt); @@ -292,12 +286,10 @@ public class SQLFederationStateStore implements FederationStateStore { @Override public SubClusterDeregisterResponse deregisterSubCluster( - SubClusterDeregisterRequest subClusterDeregisterRequest) - throws YarnException { + SubClusterDeregisterRequest subClusterDeregisterRequest) throws YarnException { // Input validator - FederationMembershipStateStoreInputValidator - .validate(subClusterDeregisterRequest); + FederationMembershipStateStoreInputValidator.validate(subClusterDeregisterRequest); CallableStatement cstmt = null; @@ -308,9 +300,9 @@ public class SQLFederationStateStore implements FederationStateStore { cstmt = getCallableStatement(CALL_SP_DEREGISTER_SUBCLUSTER); // Set the parameters for the stored procedure - cstmt.setString(1, subClusterId.getId()); - cstmt.setString(2, state.toString()); - cstmt.registerOutParameter(3, java.sql.Types.INTEGER); + cstmt.setString("subClusterId_IN", subClusterId.getId()); + cstmt.setString("state_IN", state.toString()); + cstmt.registerOutParameter("rowCount_OUT", java.sql.Types.INTEGER); // Execute the query long startTime = clock.getTime(); @@ -319,29 +311,25 @@ public class SQLFederationStateStore implements FederationStateStore { // Check the ROWCOUNT value, if it is equal to 0 it means the call // did not deregister the subcluster into FederationStateStore - if (cstmt.getInt(3) == 0) { - String errMsg = "SubCluster " + subClusterId + " not found"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + int rowCount = cstmt.getInt("rowCount_OUT"); + if (rowCount == 0) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "SubCluster %s not found.", subClusterId); } // Check the ROWCOUNT value, if it is different from 1 it means the call // had a wrong behavior. Maybe the database is not set correctly. 
- if (cstmt.getInt(3) != 1) { - String errMsg = "Wrong behavior during deregistration of SubCluster " - + subClusterId + " from the StateStore"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + if (rowCount != 1) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "Wrong behavior during deregistration of SubCluster %s from the StateStore.", + subClusterId); } - - LOG.info("Deregistered the SubCluster " + subClusterId + " state to " - + state.toString()); - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + LOG.info("Deregistered the SubCluster {} state to {}.", subClusterId, state.toString()); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); - FederationStateStoreUtils.logAndThrowRetriableException(LOG, - "Unable to deregister the sub-cluster " + subClusterId + " state to " - + state.toString(), - e); + FederationStateStoreUtils.logAndThrowRetriableException(e, LOG, + "Unable to deregister the sub-cluster %s state to %s.", subClusterId, state.toString()); } finally { // Return to the pool the CallableStatement FederationStateStoreUtils.returnToPool(LOG, cstmt); @@ -351,12 +339,10 @@ public class SQLFederationStateStore implements FederationStateStore { @Override public SubClusterHeartbeatResponse subClusterHeartbeat( - SubClusterHeartbeatRequest subClusterHeartbeatRequest) - throws YarnException { + SubClusterHeartbeatRequest subClusterHeartbeatRequest) throws YarnException { // Input validator - FederationMembershipStateStoreInputValidator - .validate(subClusterHeartbeatRequest); + FederationMembershipStateStoreInputValidator.validate(subClusterHeartbeatRequest); CallableStatement cstmt = null; @@ -367,10 +353,10 @@ public class SQLFederationStateStore implements FederationStateStore { cstmt = getCallableStatement(CALL_SP_SUBCLUSTER_HEARTBEAT); // Set the parameters for the stored procedure - cstmt.setString(1, subClusterId.getId()); - cstmt.setString(2, state.toString()); - cstmt.setString(3, subClusterHeartbeatRequest.getCapability()); - cstmt.registerOutParameter(4, java.sql.Types.INTEGER); + cstmt.setString("subClusterId_IN", subClusterId.getId()); + cstmt.setString("state_IN", state.toString()); + cstmt.setString("capability_IN", subClusterHeartbeatRequest.getCapability()); + cstmt.registerOutParameter("rowCount_OUT", java.sql.Types.INTEGER); // Execute the query long startTime = clock.getTime(); @@ -379,30 +365,25 @@ public class SQLFederationStateStore implements FederationStateStore { // Check the ROWCOUNT value, if it is equal to 0 it means the call // did not update the subcluster into FederationStateStore - if (cstmt.getInt(4) == 0) { - String errMsg = "SubCluster " + subClusterId.toString() - + " does not exist; cannot heartbeat"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + int rowCount = cstmt.getInt("rowCount_OUT"); + if (rowCount == 0) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "SubCluster %s does not exist; cannot heartbeat.", subClusterId); } // Check the ROWCOUNT value, if it is different from 1 it means the call // had a wrong behavior. Maybe the database is not set correctly. 
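Several hunks above replace string concatenation with %s-style format strings passed to FederationStateStoreUtils.logAndThrowStoreException and logAndThrowRetriableException, so the message is formatted in one place and the call sites stay short. A hedged sketch of what such a varargs helper can look like follows; the real utility methods in FederationStateStoreUtils are not shown here and may use a more specific exception type and additional overloads.

import org.apache.hadoop.yarn.exceptions.YarnException;
import org.slf4j.Logger;

final class LogAndThrowSketch {
  private LogAndThrowSketch() {
  }

  // Formats the message once, logs it, and rethrows it as a YarnException.
  static void logAndThrowStoreException(Logger log, String format, Object... args)
      throws YarnException {
    String message = String.format(format, args);
    log.error(message);
    throw new YarnException(message);
  }
}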
- if (cstmt.getInt(4) != 1) { - String errMsg = - "Wrong behavior during the heartbeat of SubCluster " + subClusterId; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + if (rowCount != 1) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "Wrong behavior during the heartbeat of SubCluster %s.", subClusterId); } - LOG.info("Heartbeated the StateStore for the specified SubCluster " - + subClusterId); - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + LOG.info("Heartbeated the StateStore for the specified SubCluster {}.", subClusterId); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); - FederationStateStoreUtils.logAndThrowRetriableException(LOG, - "Unable to heartbeat the StateStore for the specified SubCluster " - + subClusterId, - e); + FederationStateStoreUtils.logAndThrowRetriableException(e, LOG, + "Unable to heartbeat the StateStore for the specified SubCluster %s.", subClusterId); } finally { // Return to the pool the CallableStatement FederationStateStoreUtils.returnToPool(LOG, cstmt); @@ -424,27 +405,27 @@ public class SQLFederationStateStore implements FederationStateStore { try { cstmt = getCallableStatement(CALL_SP_GET_SUBCLUSTER); - cstmt.setString(1, subClusterId.getId()); + cstmt.setString("subClusterId_IN", subClusterId.getId()); // Set the parameters for the stored procedure - cstmt.registerOutParameter(2, java.sql.Types.VARCHAR); - cstmt.registerOutParameter(3, java.sql.Types.VARCHAR); - cstmt.registerOutParameter(4, java.sql.Types.VARCHAR); - cstmt.registerOutParameter(5, java.sql.Types.VARCHAR); - cstmt.registerOutParameter(6, java.sql.Types.TIMESTAMP); - cstmt.registerOutParameter(7, java.sql.Types.VARCHAR); - cstmt.registerOutParameter(8, java.sql.Types.BIGINT); - cstmt.registerOutParameter(9, java.sql.Types.VARCHAR); + cstmt.registerOutParameter("amRMServiceAddress_OUT", java.sql.Types.VARCHAR); + cstmt.registerOutParameter("clientRMServiceAddress_OUT", java.sql.Types.VARCHAR); + cstmt.registerOutParameter("rmAdminServiceAddress_OUT", java.sql.Types.VARCHAR); + cstmt.registerOutParameter("rmWebServiceAddress_OUT", java.sql.Types.VARCHAR); + cstmt.registerOutParameter("lastHeartBeat_OUT", java.sql.Types.TIMESTAMP); + cstmt.registerOutParameter("state_OUT", java.sql.Types.VARCHAR); + cstmt.registerOutParameter("lastStartTime_OUT", java.sql.Types.BIGINT); + cstmt.registerOutParameter("capability_OUT", java.sql.Types.VARCHAR); // Execute the query long startTime = clock.getTime(); cstmt.execute(); long stopTime = clock.getTime(); - String amRMAddress = cstmt.getString(2); - String clientRMAddress = cstmt.getString(3); - String rmAdminAddress = cstmt.getString(4); - String webAppAddress = cstmt.getString(5); + String amRMAddress = cstmt.getString("amRMServiceAddress_OUT"); + String clientRMAddress = cstmt.getString("clientRMServiceAddress_OUT"); + String rmAdminAddress = cstmt.getString("rmAdminServiceAddress_OUT"); + String webAppAddress = cstmt.getString("rmWebServiceAddress_OUT"); // first check if the subCluster exists if((amRMAddress == null) || (clientRMAddress == null)) { @@ -452,36 +433,31 @@ public class SQLFederationStateStore implements FederationStateStore { return null; } - Timestamp heartBeatTimeStamp = cstmt.getTimestamp(6, utcCalendar); - long lastHeartBeat = - heartBeatTimeStamp != null ? 
heartBeatTimeStamp.getTime() : 0; + Timestamp heartBeatTimeStamp = cstmt.getTimestamp("lastHeartBeat_OUT", utcCalendar); + long lastHeartBeat = heartBeatTimeStamp != null ? heartBeatTimeStamp.getTime() : 0; - SubClusterState state = SubClusterState.fromString(cstmt.getString(7)); - long lastStartTime = cstmt.getLong(8); - String capability = cstmt.getString(9); + SubClusterState state = SubClusterState.fromString(cstmt.getString("state_OUT")); + long lastStartTime = cstmt.getLong("lastStartTime_OUT"); + String capability = cstmt.getString("capability_OUT"); subClusterInfo = SubClusterInfo.newInstance(subClusterId, amRMAddress, clientRMAddress, rmAdminAddress, webAppAddress, lastHeartBeat, state, lastStartTime, capability); - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); // Check if the output it is a valid subcluster try { - FederationMembershipStateStoreInputValidator - .checkSubClusterInfo(subClusterInfo); + FederationMembershipStateStoreInputValidator.checkSubClusterInfo(subClusterInfo); } catch (FederationStateStoreInvalidInputException e) { - String errMsg = - "SubCluster " + subClusterId.toString() + " does not exist"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + FederationStateStoreUtils.logAndThrowStoreException(e, LOG, + "SubCluster %s does not exist.", subClusterId); } - LOG.debug("Got the information about the specified SubCluster {}", - subClusterInfo); + LOG.debug("Got the information about the specified SubCluster {}", subClusterInfo); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); - FederationStateStoreUtils.logAndThrowRetriableException(LOG, - "Unable to obtain the SubCluster information for " + subClusterId, e); + FederationStateStoreUtils.logAndThrowRetriableException(e, LOG, + "Unable to obtain the SubCluster information for %s.", subClusterId); } finally { // Return to the pool the CallableStatement FederationStateStoreUtils.returnToPool(LOG, cstmt); @@ -494,7 +470,7 @@ public class SQLFederationStateStore implements FederationStateStore { GetSubClustersInfoRequest subClustersRequest) throws YarnException { CallableStatement cstmt = null; ResultSet rs = null; - List subClusters = new ArrayList(); + List subClusters = new ArrayList<>(); try { cstmt = getCallableStatement(CALL_SP_GET_SUBCLUSTERS); @@ -507,15 +483,15 @@ public class SQLFederationStateStore implements FederationStateStore { while (rs.next()) { // Extract the output for each tuple - String subClusterName = rs.getString(1); - String amRMAddress = rs.getString(2); - String clientRMAddress = rs.getString(3); - String rmAdminAddress = rs.getString(4); - String webAppAddress = rs.getString(5); - long lastHeartBeat = rs.getTimestamp(6, utcCalendar).getTime(); - SubClusterState state = SubClusterState.fromString(rs.getString(7)); - long lastStartTime = rs.getLong(8); - String capability = rs.getString(9); + String subClusterName = rs.getString("subClusterId"); + String amRMAddress = rs.getString("amRMServiceAddress"); + String clientRMAddress = rs.getString("clientRMServiceAddress"); + String rmAdminAddress = rs.getString("rmAdminServiceAddress"); + String webAppAddress = rs.getString("rmWebServiceAddress"); + long lastHeartBeat = rs.getTimestamp("lastHeartBeat", utcCalendar).getTime(); + SubClusterState state = SubClusterState.fromString(rs.getString("state")); + long lastStartTime = rs.getLong("lastStartTime"); + String capability = 
rs.getString("capability"); SubClusterId subClusterId = SubClusterId.newInstance(subClusterName); SubClusterInfo subClusterInfo = SubClusterInfo.newInstance(subClusterId, @@ -525,15 +501,12 @@ public class SQLFederationStateStore implements FederationStateStore { FederationStateStoreClientMetrics .succeededStateStoreCall(stopTime - startTime); - // Check if the output it is a valid subcluster try { - FederationMembershipStateStoreInputValidator - .checkSubClusterInfo(subClusterInfo); + FederationMembershipStateStoreInputValidator.checkSubClusterInfo(subClusterInfo); } catch (FederationStateStoreInvalidInputException e) { - String errMsg = - "SubCluster " + subClusterId.toString() + " is not valid"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + FederationStateStoreUtils.logAndThrowStoreException(e, LOG, + "SubCluster %s is not valid.", subClusterId); } // Filter the inactive @@ -573,68 +546,61 @@ public class SQLFederationStateStore implements FederationStateStore { cstmt = getCallableStatement(CALL_SP_ADD_APPLICATION_HOME_SUBCLUSTER); // Set the parameters for the stored procedure - cstmt.setString(1, appId.toString()); - cstmt.setString(2, subClusterId.getId()); - cstmt.registerOutParameter(3, java.sql.Types.VARCHAR); - cstmt.registerOutParameter(4, java.sql.Types.INTEGER); + cstmt.setString("applicationId_IN", appId.toString()); + cstmt.setString("homeSubCluster_IN", subClusterId.getId()); + cstmt.registerOutParameter("storedHomeSubCluster_OUT", java.sql.Types.VARCHAR); + cstmt.registerOutParameter("rowCount_OUT", java.sql.Types.INTEGER); // Execute the query long startTime = clock.getTime(); cstmt.executeUpdate(); long stopTime = clock.getTime(); - subClusterHome = cstmt.getString(3); + subClusterHome = cstmt.getString("storedHomeSubCluster_OUT"); SubClusterId subClusterIdHome = SubClusterId.newInstance(subClusterHome); - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); // For failover reason, we check the returned SubClusterId. // If it is equal to the subclusterId we sent, the call added the new // application into FederationStateStore. If the call returns a different // SubClusterId it means we already tried to insert this application but a // component (Router/StateStore/RM) failed during the submission. + int rowCount = cstmt.getInt("rowCount_OUT"); if (subClusterId.equals(subClusterIdHome)) { // Check the ROWCOUNT value, if it is equal to 0 it means the call // did not add a new application into FederationStateStore - if (cstmt.getInt(4) == 0) { - LOG.info( - "The application {} was not inserted in the StateStore because it" - + " was already present in SubCluster {}", - appId, subClusterHome); - } else if (cstmt.getInt(4) != 1) { + if (rowCount == 0) { + LOG.info("The application {} was not inserted in the StateStore because it" + + " was already present in SubCluster {}", appId, subClusterHome); + } else if (cstmt.getInt("rowCount_OUT") != 1) { // Check the ROWCOUNT value, if it is different from 1 it means the // call had a wrong behavior. Maybe the database is not set correctly. 
- String errMsg = "Wrong behavior during the insertion of SubCluster " - + subClusterId; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "Wrong behavior during the insertion of SubCluster %s.", subClusterId); } - LOG.info("Insert into the StateStore the application: " + appId - + " in SubCluster: " + subClusterHome); + LOG.info("Insert into the StateStore the application: {} in SubCluster: {}.", + appId, subClusterHome); } else { // Check the ROWCOUNT value, if it is different from 0 it means the call // did edited the table - if (cstmt.getInt(4) != 0) { - String errMsg = - "The application " + appId + " does exist but was overwritten"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + if (rowCount != 0) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "The application %s does exist but was overwritten.", appId); } - LOG.info("Application: " + appId + " already present with SubCluster: " - + subClusterHome); + LOG.info("Application: {} already present with SubCluster: {}.", appId, subClusterHome); } } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); - FederationStateStoreUtils - .logAndThrowRetriableException(LOG, - "Unable to insert the newly generated application " - + request.getApplicationHomeSubCluster().getApplicationId(), - e); + FederationStateStoreUtils.logAndThrowRetriableException(e, LOG, + "Unable to insert the newly generated application %s.", appId); } finally { // Return to the pool the CallableStatement FederationStateStoreUtils.returnToPool(LOG, cstmt); } + return AddApplicationHomeSubClusterResponse .newInstance(SubClusterId.newInstance(subClusterHome)); } @@ -657,9 +623,9 @@ public class SQLFederationStateStore implements FederationStateStore { cstmt = getCallableStatement(CALL_SP_UPDATE_APPLICATION_HOME_SUBCLUSTER); // Set the parameters for the stored procedure - cstmt.setString(1, appId.toString()); - cstmt.setString(2, subClusterId.getId()); - cstmt.registerOutParameter(3, java.sql.Types.INTEGER); + cstmt.setString("applicationId_IN", appId.toString()); + cstmt.setString("homeSubCluster_IN", subClusterId.getId()); + cstmt.registerOutParameter("rowCount_OUT", java.sql.Types.INTEGER); // Execute the query long startTime = clock.getTime(); @@ -668,31 +634,25 @@ public class SQLFederationStateStore implements FederationStateStore { // Check the ROWCOUNT value, if it is equal to 0 it means the call // did not update the application into FederationStateStore - if (cstmt.getInt(3) == 0) { - String errMsg = "Application " + appId + " does not exist"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + int rowCount = cstmt.getInt("rowCount_OUT"); + if (rowCount == 0) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "Application %s does not exist.", appId); } // Check the ROWCOUNT value, if it is different from 1 it means the call // had a wrong behavior. Maybe the database is not set correctly. 
- if (cstmt.getInt(3) != 1) { - String errMsg = - "Wrong behavior during the update of SubCluster " + subClusterId; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + if (cstmt.getInt("rowCount_OUT") != 1) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "Wrong behavior during the update of SubCluster %s.", subClusterId); } - LOG.info( - "Update the SubCluster to {} for application {} in the StateStore", + LOG.info("Update the SubCluster to {} for application {} in the StateStore", subClusterId, appId); - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); - + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); - FederationStateStoreUtils - .logAndThrowRetriableException(LOG, - "Unable to update the application " - + request.getApplicationHomeSubCluster().getApplicationId(), - e); + FederationStateStoreUtils.logAndThrowRetriableException(e, LOG, + "Unable to update the application %s.", appId); } finally { // Return to the pool the CallableStatement FederationStateStoreUtils.returnToPool(LOG, cstmt); @@ -710,44 +670,43 @@ public class SQLFederationStateStore implements FederationStateStore { SubClusterId homeRM = null; + ApplicationId applicationId = request.getApplicationId(); + try { cstmt = getCallableStatement(CALL_SP_GET_APPLICATION_HOME_SUBCLUSTER); // Set the parameters for the stored procedure - cstmt.setString(1, request.getApplicationId().toString()); - cstmt.registerOutParameter(2, java.sql.Types.VARCHAR); + cstmt.setString("applicationId_IN", applicationId.toString()); + cstmt.registerOutParameter("homeSubCluster_OUT", java.sql.Types.VARCHAR); // Execute the query long startTime = clock.getTime(); cstmt.execute(); long stopTime = clock.getTime(); - if (cstmt.getString(2) != null) { - homeRM = SubClusterId.newInstance(cstmt.getString(2)); + String homeSubCluster = cstmt.getString("homeSubCluster_OUT"); + if (homeSubCluster != null) { + homeRM = SubClusterId.newInstance(homeSubCluster); } else { - String errMsg = - "Application " + request.getApplicationId() + " does not exist"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "Application %s does not exist.", applicationId); } LOG.debug("Got the information about the specified application {}." 
- + " The AM is running in {}", request.getApplicationId(), homeRM); + + " The AM is running in {}", applicationId, homeRM); - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); - FederationStateStoreUtils.logAndThrowRetriableException(LOG, - "Unable to obtain the application information " - + "for the specified application " + request.getApplicationId(), - e); + FederationStateStoreUtils.logAndThrowRetriableException(e, LOG, + "Unable to obtain the application information for the specified application %s.", + applicationId); } finally { // Return to the pool the CallableStatement FederationStateStoreUtils.returnToPool(LOG, cstmt); } - return GetApplicationHomeSubClusterResponse - .newInstance(request.getApplicationId(), homeRM); + return GetApplicationHomeSubClusterResponse.newInstance(request.getApplicationId(), homeRM); } @Override @@ -788,8 +747,7 @@ public class SQLFederationStateStore implements FederationStateStore { SubClusterId.newInstance(homeSubCluster))); } - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); @@ -811,13 +769,13 @@ public class SQLFederationStateStore implements FederationStateStore { FederationApplicationHomeSubClusterStoreInputValidator.validate(request); CallableStatement cstmt = null; - + ApplicationId applicationId = request.getApplicationId(); try { cstmt = getCallableStatement(CALL_SP_DELETE_APPLICATION_HOME_SUBCLUSTER); // Set the parameters for the stored procedure - cstmt.setString(1, request.getApplicationId().toString()); - cstmt.registerOutParameter(2, java.sql.Types.INTEGER); + cstmt.setString("applicationId_IN", applicationId.toString()); + cstmt.registerOutParameter("rowCount_OUT", java.sql.Types.INTEGER); // Execute the query long startTime = clock.getTime(); @@ -826,28 +784,25 @@ public class SQLFederationStateStore implements FederationStateStore { // Check the ROWCOUNT value, if it is equal to 0 it means the call // did not delete the application from FederationStateStore - if (cstmt.getInt(2) == 0) { - String errMsg = - "Application " + request.getApplicationId() + " does not exist"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + int rowCount = cstmt.getInt("rowCount_OUT"); + if (rowCount == 0) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "Application %s does not exist.", applicationId); } // Check the ROWCOUNT value, if it is different from 1 it means the call // had a wrong behavior. Maybe the database is not set correctly. 
- if (cstmt.getInt(2) != 1) { - String errMsg = "Wrong behavior during deleting the application " - + request.getApplicationId(); - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + if (cstmt.getInt("rowCount_OUT") != 1) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "Wrong behavior during deleting the application %s.", applicationId); } - LOG.info("Delete from the StateStore the application: {}", - request.getApplicationId()); - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + LOG.info("Delete from the StateStore the application: {}", request.getApplicationId()); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); - FederationStateStoreUtils.logAndThrowRetriableException(LOG, - "Unable to delete the application " + request.getApplicationId(), e); + FederationStateStoreUtils.logAndThrowRetriableException(e, LOG, + "Unable to delete the application %s.", applicationId); } finally { // Return to the pool the CallableStatement FederationStateStoreUtils.returnToPool(LOG, cstmt); @@ -869,9 +824,9 @@ public class SQLFederationStateStore implements FederationStateStore { cstmt = getCallableStatement(CALL_SP_GET_POLICY_CONFIGURATION); // Set the parameters for the stored procedure - cstmt.setString(1, request.getQueue()); - cstmt.registerOutParameter(2, java.sql.Types.VARCHAR); - cstmt.registerOutParameter(3, java.sql.Types.VARBINARY); + cstmt.setString("queue_IN", request.getQueue()); + cstmt.registerOutParameter("policyType_OUT", java.sql.Types.VARCHAR); + cstmt.registerOutParameter("params_OUT", java.sql.Types.VARBINARY); // Execute the query long startTime = clock.getTime(); @@ -879,10 +834,11 @@ public class SQLFederationStateStore implements FederationStateStore { long stopTime = clock.getTime(); // Check if the output it is a valid policy - if (cstmt.getString(2) != null && cstmt.getBytes(3) != null) { - subClusterPolicyConfiguration = - SubClusterPolicyConfiguration.newInstance(request.getQueue(), - cstmt.getString(2), ByteBuffer.wrap(cstmt.getBytes(3))); + String policyType = cstmt.getString("policyType_OUT"); + byte[] param = cstmt.getBytes("params_OUT"); + if (policyType != null && param != null) { + subClusterPolicyConfiguration = SubClusterPolicyConfiguration.newInstance( + request.getQueue(), policyType, ByteBuffer.wrap(param)); LOG.debug("Selected from StateStore the policy for the queue: {}", subClusterPolicyConfiguration); } else { @@ -890,20 +846,17 @@ public class SQLFederationStateStore implements FederationStateStore { return null; } - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); - FederationStateStoreUtils.logAndThrowRetriableException(LOG, - "Unable to select the policy for the queue :" + request.getQueue(), - e); + FederationStateStoreUtils.logAndThrowRetriableException(e, LOG, + "Unable to select the policy for the queue : %s." 
+ request.getQueue()); } finally { // Return to the pool the CallableStatement FederationStateStoreUtils.returnToPool(LOG, cstmt); } - return GetSubClusterPolicyConfigurationResponse - .newInstance(subClusterPolicyConfiguration); + return GetSubClusterPolicyConfigurationResponse.newInstance(subClusterPolicyConfiguration); } @Override @@ -921,10 +874,10 @@ public class SQLFederationStateStore implements FederationStateStore { cstmt = getCallableStatement(CALL_SP_SET_POLICY_CONFIGURATION); // Set the parameters for the stored procedure - cstmt.setString(1, policyConf.getQueue()); - cstmt.setString(2, policyConf.getType()); - cstmt.setBytes(3, getByteArray(policyConf.getParams())); - cstmt.registerOutParameter(4, java.sql.Types.INTEGER); + cstmt.setString("queue_IN", policyConf.getQueue()); + cstmt.setString("policyType_IN", policyConf.getType()); + cstmt.setBytes("params_IN", getByteArray(policyConf.getParams())); + cstmt.registerOutParameter("rowCount_OUT", java.sql.Types.INTEGER); // Execute the query long startTime = clock.getTime(); @@ -933,30 +886,25 @@ public class SQLFederationStateStore implements FederationStateStore { // Check the ROWCOUNT value, if it is equal to 0 it means the call // did not add a new policy into FederationStateStore - if (cstmt.getInt(4) == 0) { - String errMsg = "The policy " + policyConf.getQueue() - + " was not insert into the StateStore"; - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + int rowCount = cstmt.getInt("rowCount_OUT"); + if (rowCount == 0) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "The policy %s was not insert into the StateStore.", policyConf.getQueue()); } // Check the ROWCOUNT value, if it is different from 1 it means the call // had a wrong behavior. Maybe the database is not set correctly. 
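The policy hunks above move the VARBINARY handling to named parameters as well: setPolicyConfiguration writes the policy params with setBytes("params_IN", ...) and getPolicyConfiguration rebuilds the ByteBuffer from getBytes("params_OUT"). The small sketch below shows the byte[]/ByteBuffer conversion at that boundary; it assumes the private getByteArray helper (whose body is not in this patch) simply copies the buffer contents.

import java.nio.ByteBuffer;

final class PolicyParamsSketch {
  private PolicyParamsSketch() {
  }

  // ByteBuffer -> byte[] for setBytes("params_IN", ...); works on a duplicate so the
  // caller's buffer position is left untouched.
  static byte[] toBytes(ByteBuffer params) {
    ByteBuffer copy = params.duplicate();
    byte[] raw = new byte[copy.remaining()];
    copy.get(raw);
    return raw;
  }

  // byte[] -> ByteBuffer for rebuilding SubClusterPolicyConfiguration from params_OUT.
  static ByteBuffer toBuffer(byte[] raw) {
    return ByteBuffer.wrap(raw);
  }
}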
- if (cstmt.getInt(4) != 1) { - String errMsg = - "Wrong behavior during insert the policy " + policyConf.getQueue(); - FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); + if (rowCount != 1) { + FederationStateStoreUtils.logAndThrowStoreException(LOG, + "Wrong behavior during insert the policy %s.", policyConf.getQueue()); } - LOG.info("Insert into the state store the policy for the queue: " - + policyConf.getQueue()); - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + LOG.info("Insert into the state store the policy for the queue: {}.", policyConf.getQueue()); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); - FederationStateStoreUtils.logAndThrowRetriableException(LOG, - "Unable to insert the newly generated policy for the queue :" - + policyConf.getQueue(), - e); + FederationStateStoreUtils.logAndThrowRetriableException(e, LOG, + "Unable to insert the newly generated policy for the queue : %s.", policyConf.getQueue()); } finally { // Return to the pool the CallableStatement FederationStateStoreUtils.returnToPool(LOG, cstmt); @@ -970,8 +918,7 @@ public class SQLFederationStateStore implements FederationStateStore { CallableStatement cstmt = null; ResultSet rs = null; - List policyConfigurations = - new ArrayList(); + List policyConfigurations = new ArrayList<>(); try { cstmt = getCallableStatement(CALL_SP_GET_POLICIES_CONFIGURATIONS); @@ -982,20 +929,17 @@ public class SQLFederationStateStore implements FederationStateStore { long stopTime = clock.getTime(); while (rs.next()) { - // Extract the output for each tuple - String queue = rs.getString(1); - String type = rs.getString(2); - byte[] policyInfo = rs.getBytes(3); + String queue = rs.getString("queue"); + String type = rs.getString("policyType"); + byte[] policyInfo = rs.getBytes("params"); SubClusterPolicyConfiguration subClusterPolicyConfiguration = - SubClusterPolicyConfiguration.newInstance(queue, type, - ByteBuffer.wrap(policyInfo)); + SubClusterPolicyConfiguration.newInstance(queue, type, ByteBuffer.wrap(policyInfo)); policyConfigurations.add(subClusterPolicyConfiguration); } - FederationStateStoreClientMetrics - .succeededStateStoreCall(stopTime - startTime); + FederationStateStoreClientMetrics.succeededStateStoreCall(stopTime - startTime); } catch (SQLException e) { FederationStateStoreClientMetrics.failedStateStoreCall(); @@ -1006,8 +950,7 @@ public class SQLFederationStateStore implements FederationStateStore { FederationStateStoreUtils.returnToPool(LOG, cstmt, null, rs); } - return GetSubClusterPoliciesConfigurationsResponse - .newInstance(policyConfigurations); + return GetSubClusterPoliciesConfigurationsResponse.newInstance(policyConfigurations); } @Override @@ -1427,4 +1370,53 @@ public class SQLFederationStateStore implements FederationStateStore { throws YarnException, IOException { throw new NotImplementedException("Code is not implemented"); } + + @Override + public RouterRMTokenResponse storeNewToken(RouterRMTokenRequest request) + throws YarnException, IOException { + throw new NotImplementedException("Code is not implemented"); + } + + @Override + public RouterRMTokenResponse updateStoredToken(RouterRMTokenRequest request) + throws YarnException, IOException { + throw new NotImplementedException("Code is not implemented"); + } + + @Override + public RouterRMTokenResponse removeStoredToken(RouterRMTokenRequest request) + throws YarnException, 
IOException { + throw new NotImplementedException("Code is not implemented"); + } + + @Override + public RouterRMTokenResponse getTokenByRouterStoreToken(RouterRMTokenRequest request) + throws YarnException, IOException { + throw new NotImplementedException("Code is not implemented"); + } + + @Override + public int incrementDelegationTokenSeqNum() { + return 0; + } + + @Override + public int getDelegationTokenSeqNum() { + return 0; + } + + @Override + public void setDelegationTokenSeqNum(int seqNum) { + return; + } + + @Override + public int getCurrentKeyId() { + return 0; + } + + @Override + public int incrementCurrentKeyId() { + return 0; + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZKFederationStateStoreOpDurations.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZKFederationStateStoreOpDurations.java index 113e4850a57..54b8b5f4dda 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZKFederationStateStoreOpDurations.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZKFederationStateStoreOpDurations.java @@ -89,6 +89,27 @@ public final class ZKFederationStateStoreOpDurations implements MetricsSource { @Metric("Duration for a update reservation homeSubCluster call") private MutableRate updateReservationHomeSubCluster; + @Metric("Duration for a store new master key call") + private MutableRate storeNewMasterKey; + + @Metric("Duration for a remove new master key call") + private MutableRate removeStoredMasterKey; + + @Metric("Duration for a get master key by delegation key call") + private MutableRate getMasterKeyByDelegationKey; + + @Metric("Duration for a store new token call") + private MutableRate storeNewToken; + + @Metric("Duration for a update stored token call") + private MutableRate updateStoredToken; + + @Metric("Duration for a remove stored token call") + private MutableRate removeStoredToken; + + @Metric("Duration for a get token by router store token call") + private MutableRate getTokenByRouterStoreToken; + protected static final MetricsInfo RECORD_INFO = info("ZKFederationStateStoreOpDurations", "Durations of ZKFederationStateStore calls"); @@ -187,4 +208,32 @@ public final class ZKFederationStateStoreOpDurations implements MetricsSource { public void addUpdateReservationHomeSubClusterDuration(long startTime, long endTime) { updateReservationHomeSubCluster.add(endTime - startTime); } + + public void addStoreNewMasterKeyDuration(long startTime, long endTime) { + storeNewMasterKey.add(endTime - startTime); + } + + public void removeStoredMasterKeyDuration(long startTime, long endTime) { + removeStoredMasterKey.add(endTime - startTime); + } + + public void getMasterKeyByDelegationKeyDuration(long startTime, long endTime) { + getMasterKeyByDelegationKey.add(endTime - startTime); + } + + public void getStoreNewTokenDuration(long startTime, long endTime) { + storeNewToken.add(endTime - startTime); + } + + public void updateStoredTokenDuration(long startTime, long endTime) { + updateStoredToken.add(endTime - startTime); + } + + public void removeStoredTokenDuration(long startTime, long endTime) { + removeStoredToken.add(endTime - 
startTime); + } + + public void getTokenByRouterStoreTokenDuration(long startTime, long endTime) { + getTokenByRouterStoreToken.add(endTime - startTime); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZookeeperFederationStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZookeeperFederationStateStore.java index 3215333cbb9..95903b81d18 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZookeeperFederationStateStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/ZookeeperFederationStateStore.java @@ -17,9 +17,12 @@ package org.apache.hadoop.yarn.server.federation.store.impl; -import static org.apache.hadoop.util.curator.ZKCuratorManager.getNodePath; - import java.io.IOException; +import java.io.ByteArrayOutputStream; +import java.io.DataOutputStream; +import java.io.ByteArrayInputStream; +import java.io.DataInputStream; +import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Calendar; import java.util.List; @@ -27,9 +30,11 @@ import java.util.TimeZone; import java.util.Comparator; import java.util.stream.Collectors; -import org.apache.commons.lang3.NotImplementedException; +import org.apache.curator.framework.recipes.shared.SharedCount; +import org.apache.curator.framework.recipes.shared.VersionedValue; import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.security.token.delegation.DelegationKey; import org.apache.hadoop.util.curator.ZKCuratorManager; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; @@ -37,6 +42,7 @@ import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.SubClusterIdProto; import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.SubClusterInfoProto; import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.SubClusterPolicyConfigurationProto; +import org.apache.hadoop.yarn.security.client.YARNDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.store.FederationStateStore; import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterRequest; import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterResponse; @@ -82,17 +88,23 @@ import org.apache.hadoop.yarn.server.federation.store.records.UpdateReservationH import org.apache.hadoop.yarn.server.federation.store.records.UpdateReservationHomeSubClusterResponse; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyResponse; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.SubClusterIdPBImpl; import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.SubClusterInfoPBImpl; import 
org.apache.hadoop.yarn.server.federation.store.records.impl.pb.SubClusterPolicyConfigurationPBImpl; +import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKey; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; import org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator; import org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator; import org.apache.hadoop.yarn.server.federation.store.utils.FederationPolicyStoreInputValidator; import org.apache.hadoop.yarn.server.federation.store.utils.FederationStateStoreUtils; import org.apache.hadoop.yarn.server.federation.store.utils.FederationReservationHomeSubClusterStoreInputValidator; +import org.apache.hadoop.yarn.server.federation.store.utils.FederationRouterRMTokenInputValidator; import org.apache.hadoop.yarn.server.records.Version; import org.apache.hadoop.yarn.api.records.ReservationId; import org.apache.hadoop.yarn.util.Clock; +import org.apache.hadoop.yarn.util.Records; import org.apache.hadoop.yarn.util.SystemClock; import org.apache.zookeeper.data.ACL; import org.slf4j.Logger; @@ -101,11 +113,14 @@ import org.slf4j.LoggerFactory; import org.apache.hadoop.thirdparty.protobuf.InvalidProtocolBufferException; import static org.apache.hadoop.yarn.server.federation.store.utils.FederationStateStoreUtils.filterHomeSubCluster; +import static org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.ZK_DTSM_TOKEN_SEQNUM_BATCH_SIZE; +import static org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.ZK_DTSM_TOKEN_SEQNUM_BATCH_SIZE_DEFAULT; +import static org.apache.hadoop.util.curator.ZKCuratorManager.getNodePath; /** * ZooKeeper implementation of {@link FederationStateStore}. - * * The znode structure is as follows: + * * ROOT_DIR_PATH * |--- MEMBERSHIP * | |----- SC1 @@ -119,6 +134,14 @@ import static org.apache.hadoop.yarn.server.federation.store.utils.FederationSta * |--- RESERVATION * | |----- RESERVATION1 * | |----- RESERVATION2 + * |--- ROUTER_RM_DT_SECRET_MANAGER_ROOT + * | |----- ROUTER_RM_DELEGATION_TOKENS_ROOT + * | | |----- RM_DELEGATION_TOKEN_1 + * | | |----- RM_DELEGATION_TOKEN_2 + * | | |----- RM_DELEGATION_TOKEN_3 + * | |----- ROUTER_RM_DT_MASTER_KEYS_ROOT + * | | |----- DELEGATION_KEY_1 + * | |----- ROUTER_RM_DT_SEQUENTIAL_NUMBER */ public class ZookeeperFederationStateStore implements FederationStateStore { @@ -130,9 +153,29 @@ public class ZookeeperFederationStateStore implements FederationStateStore { private final static String ROOT_ZNODE_NAME_POLICY = "policies"; private final static String ROOT_ZNODE_NAME_RESERVATION = "reservation"; + /** Store Delegation Token Node. */ + private final static String ROUTER_RM_DT_SECRET_MANAGER_ROOT = "router_rm_dt_secret_manager_root"; + private static final String ROUTER_RM_DT_MASTER_KEYS_ROOT_ZNODE_NAME = + "router_rm_dt_master_keys_root"; + private static final String ROUTER_RM_DELEGATION_TOKENS_ROOT_ZNODE_NAME = + "router_rm_delegation_tokens_root"; + private static final String ROUTER_RM_DT_SEQUENTIAL_NUMBER_ZNODE_NAME = + "router_rm_dt_sequential_number"; + private static final String ROUTER_RM_DT_MASTER_KEY_ID_ZNODE_NAME = + "router_rm_dt_master_key_id"; + private static final String ROUTER_RM_DELEGATION_KEY_PREFIX = "delegation_key_"; + private static final String ROUTER_RM_DELEGATION_TOKEN_PREFIX = "rm_delegation_token_"; + /** Interface to Zookeeper. 
*/ private ZKCuratorManager zkManager; + /** Store sequenceNum. **/ + private int seqNumBatchSize; + private int currentSeqNum; + private int currentMaxSeqNum; + private SharedCount delTokSeqCounter; + private SharedCount keyIdSeqCounter; + /** Directory to store the state store data. */ private String baseZNode; @@ -142,6 +185,13 @@ public class ZookeeperFederationStateStore implements FederationStateStore { private String reservationsZNode; private int maxAppsInStateStore; + /** Directory to store the delegation token data. **/ + private String routerRMDTSecretManagerRoot; + private String routerRMDTMasterKeysRootPath; + private String routerRMDelegationTokensRootPath; + private String routerRMSequenceNumberPath; + private String routerRMMasterKeyIdPath; + private volatile Clock clock = SystemClock.getInstance(); @VisibleForTesting @@ -150,6 +200,7 @@ public class ZookeeperFederationStateStore implements FederationStateStore { @Override public void init(Configuration conf) throws YarnException { + LOG.info("Initializing ZooKeeper connection"); maxAppsInStateStore = conf.getInt( @@ -172,6 +223,17 @@ public class ZookeeperFederationStateStore implements FederationStateStore { policiesZNode = getNodePath(baseZNode, ROOT_ZNODE_NAME_POLICY); reservationsZNode = getNodePath(baseZNode, ROOT_ZNODE_NAME_RESERVATION); + // delegation token znodes + routerRMDTSecretManagerRoot = getNodePath(baseZNode, ROUTER_RM_DT_SECRET_MANAGER_ROOT); + routerRMDTMasterKeysRootPath = getNodePath(routerRMDTSecretManagerRoot, + ROUTER_RM_DT_MASTER_KEYS_ROOT_ZNODE_NAME); + routerRMDelegationTokensRootPath = getNodePath(routerRMDTSecretManagerRoot, + ROUTER_RM_DELEGATION_TOKENS_ROOT_ZNODE_NAME); + routerRMSequenceNumberPath = getNodePath(routerRMDTSecretManagerRoot, + ROUTER_RM_DT_SEQUENTIAL_NUMBER_ZNODE_NAME); + routerRMMasterKeyIdPath = getNodePath(routerRMDTSecretManagerRoot, + ROUTER_RM_DT_MASTER_KEY_ID_ZNODE_NAME); + // Create base znode for each entity try { List zkAcl = ZKCuratorManager.getZKAcls(conf); @@ -179,14 +241,68 @@ public class ZookeeperFederationStateStore implements FederationStateStore { zkManager.createRootDirRecursively(appsZNode, zkAcl); zkManager.createRootDirRecursively(policiesZNode, zkAcl); zkManager.createRootDirRecursively(reservationsZNode, zkAcl); + zkManager.createRootDirRecursively(routerRMDTSecretManagerRoot, zkAcl); + zkManager.createRootDirRecursively(routerRMDTMasterKeysRootPath, zkAcl); + zkManager.createRootDirRecursively(routerRMDelegationTokensRootPath, zkAcl); } catch (Exception e) { String errMsg = "Cannot create base directories: " + e.getMessage(); FederationStateStoreUtils.logAndThrowStoreException(LOG, errMsg); } + + // Distributed sequenceNum. + try { + seqNumBatchSize = conf.getInt(ZK_DTSM_TOKEN_SEQNUM_BATCH_SIZE, + ZK_DTSM_TOKEN_SEQNUM_BATCH_SIZE_DEFAULT); + + delTokSeqCounter = new SharedCount(zkManager.getCurator(), routerRMSequenceNumberPath, 0); + + if (delTokSeqCounter != null) { + delTokSeqCounter.start(); + } + + // the first batch range should be allocated during this starting window + // by calling the incrSharedCount + currentSeqNum = incrSharedCount(delTokSeqCounter, seqNumBatchSize); + currentMaxSeqNum = currentSeqNum + seqNumBatchSize; + + LOG.info("Fetched initial range of seq num, from {} to {} ", + currentSeqNum + 1, currentMaxSeqNum); + } catch (Exception e) { + throw new YarnException("Could not start Sequence Counter.", e); + } + + // Distributed masterKeyId. 
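The initialization above claims a batch of delegation-token sequence numbers through a Curator SharedCount, so that several Routers can allocate numbers without colliding. The incrSharedCount method and the later use of currentSeqNum/currentMaxSeqNum are not part of this hunk; the sketch below shows the usual compare-and-set batching pattern (the same one ZKDelegationTokenSecretManager uses), relying on the SharedCount and VersionedValue imports and the fields declared above. The actual methods in this class may differ in detail.

// Hedged sketch: claim a batch of sequence numbers from the shared ZooKeeper counter
// and hand them out locally until the batch is exhausted.
private int incrSharedCount(SharedCount sharedCount, int batchSize) throws Exception {
  while (true) {
    VersionedValue<Integer> current = sharedCount.getVersionedValue();
    // trySetCount only succeeds if nobody else changed the counter in the meantime.
    if (sharedCount.trySetCount(current, current.getValue() + batchSize)) {
      return current.getValue();
    }
  }
}

public synchronized int incrementDelegationTokenSeqNumSketch() {
  if (currentSeqNum >= currentMaxSeqNum) {
    try {
      // Batch is used up: claim the next range from ZooKeeper.
      currentSeqNum = incrSharedCount(delTokSeqCounter, seqNumBatchSize);
      currentMaxSeqNum = currentSeqNum + seqNumBatchSize;
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new RuntimeException("Could not claim a new sequence number batch.", e);
    } catch (Exception e) {
      throw new RuntimeException("Could not claim a new sequence number batch.", e);
    }
  }
  return ++currentSeqNum;
}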
+ try { + keyIdSeqCounter = new SharedCount(zkManager.getCurator(), routerRMMasterKeyIdPath, 0); + if (keyIdSeqCounter != null) { + keyIdSeqCounter.start(); + } + } catch (Exception e) { + throw new YarnException("Could not start Master KeyId Counter", e); + } } @Override public void close() throws Exception { + + try { + if (delTokSeqCounter != null) { + delTokSeqCounter.close(); + delTokSeqCounter = null; + } + } catch (Exception e) { + LOG.error("Could not Stop Delegation Token Counter.", e); + } + + try { + if (keyIdSeqCounter != null) { + keyIdSeqCounter.close(); + keyIdSeqCounter = null; + } + } catch (Exception e) { + LOG.error("Could not stop Master KeyId Counter.", e); + } + if (zkManager != null) { zkManager.close(); } @@ -884,21 +1000,599 @@ public class ZookeeperFederationStateStore implements FederationStateStore { return UpdateReservationHomeSubClusterResponse.newInstance(); } + /** + * ZookeeperFederationStateStore Supports Store NewMasterKey. + * + * @param request The request contains RouterMasterKey, which is an abstraction for DelegationKey + * @return routerMasterKeyResponse, the response contains the RouterMasterKey. + * @throws YarnException if the call to the state store is unsuccessful. + * @throws IOException An IO Error occurred. + */ @Override public RouterMasterKeyResponse storeNewMasterKey(RouterMasterKeyRequest request) throws YarnException, IOException { - throw new NotImplementedException("Code is not implemented"); + + long start = clock.getTime(); + // For the verification of the request, after passing the verification, + // the request and the internal objects will not be empty and can be used directly. + FederationRouterRMTokenInputValidator.validate(request); + + // Parse the delegationKey from the request and get the ZK storage path. + DelegationKey delegationKey = convertMasterKeyToDelegationKey(request); + String nodeCreatePath = getMasterKeyZNodePathByDelegationKey(delegationKey); + LOG.debug("Storing RMDelegationKey_{}, ZkNodePath = {}.", delegationKey.getKeyId(), + nodeCreatePath); + + // Write master key data to zk. + try (ByteArrayOutputStream os = new ByteArrayOutputStream(); + DataOutputStream fsOut = new DataOutputStream(os)) { + delegationKey.write(fsOut); + put(nodeCreatePath, os.toByteArray(), false); + } + + // Get the stored masterKey from zk. + RouterMasterKey masterKeyFromZK = getRouterMasterKeyFromZK(nodeCreatePath); + long end = clock.getTime(); + opDurations.addStoreNewMasterKeyDuration(start, end); + return RouterMasterKeyResponse.newInstance(masterKeyFromZK); } + /** + * ZookeeperFederationStateStore Supports Remove MasterKey. + * + * @param request The request contains RouterMasterKey, which is an abstraction for DelegationKey + * @return routerMasterKeyResponse, the response contains the RouterMasterKey. + * @throws YarnException if the call to the state store is unsuccessful. + * @throws IOException An IO Error occurred. + */ @Override public RouterMasterKeyResponse removeStoredMasterKey(RouterMasterKeyRequest request) throws YarnException, IOException { - throw new NotImplementedException("Code is not implemented"); + + long start = clock.getTime(); + // For the verification of the request, after passing the verification, + // the request and the internal objects will not be empty and can be used directly. + FederationRouterRMTokenInputValidator.validate(request); + + try { + // Parse the delegationKey from the request and get the ZK storage path. 
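storeNewMasterKey above persists the master key by writing the DelegationKey through a DataOutputStream into a byte[] before handing it to ZooKeeper, and getRouterMasterKeyFromZK later reads it back the same way. The standalone round trip below reproduces that Writable-style serialization outside the store; the key id, expiry, and secret bytes are arbitrary values used purely for illustration.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.security.token.delegation.DelegationKey;
import org.apache.hadoop.util.Time;

public final class DelegationKeyRoundTrip {
  public static void main(String[] args) throws Exception {
    DelegationKey key = new DelegationKey(42, Time.now() + 86_400_000L,
        "example-secret".getBytes(StandardCharsets.UTF_8));

    // Serialize exactly as storeNewMasterKey does before calling put(nodeCreatePath, ...).
    byte[] data;
    try (ByteArrayOutputStream os = new ByteArrayOutputStream();
         DataOutputStream out = new DataOutputStream(os)) {
      key.write(out);
      out.flush();
      data = os.toByteArray();
    }

    // Deserialize exactly as getRouterMasterKeyFromZK does with the znode payload.
    DelegationKey restored = new DelegationKey();
    try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
      restored.readFields(in);
    }
    System.out.println("Restored key id: " + restored.getKeyId());
  }
}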
+ RouterMasterKey masterKey = request.getRouterMasterKey(); + DelegationKey delegationKey = convertMasterKeyToDelegationKey(request); + String nodeRemovePath = getMasterKeyZNodePathByDelegationKey(delegationKey); + LOG.debug("Removing RMDelegationKey_{}, ZkNodePath = {}.", delegationKey.getKeyId(), + nodeRemovePath); + + // Check if the path exists; throw an exception if it does not exist. + if (!exists(nodeRemovePath)) { + throw new YarnException("ZkNodePath = " + nodeRemovePath + " does not exist!"); + } + + // Try to remove the masterKey. + zkManager.delete(nodeRemovePath); + long end = clock.getTime(); + opDurations.removeStoredMasterKeyDuration(start, end); + return RouterMasterKeyResponse.newInstance(masterKey); + } catch (Exception e) { + throw new YarnException(e); + } } + /** + * ZookeeperFederationStateStore Supports Get MasterKey By DelegationKey. + * + * @param request The request contains RouterMasterKey, which is an abstraction for DelegationKey. + * @return routerMasterKeyResponse, the response contains the RouterMasterKey. + * @throws YarnException if the call to the state store is unsuccessful. + * @throws IOException An IO Error occurred. + */ @Override public RouterMasterKeyResponse getMasterKeyByDelegationKey(RouterMasterKeyRequest request) throws YarnException, IOException { - throw new NotImplementedException("Code is not implemented"); + + long start = clock.getTime(); + // For the verification of the request, after passing the verification, + // the request and the internal objects will not be empty and can be used directly. + FederationRouterRMTokenInputValidator.validate(request); + + try { + + // Parse the delegationKey from the request and get the ZK storage path. + DelegationKey delegationKey = convertMasterKeyToDelegationKey(request); + String nodePath = getMasterKeyZNodePathByDelegationKey(delegationKey); + + // Check if the path exists; throw an exception if it does not exist. + if (!exists(nodePath)) { + throw new YarnException("ZkNodePath = " + nodePath + " does not exist!"); + } + + // Get the stored masterKey from ZK. + RouterMasterKey routerMasterKey = getRouterMasterKeyFromZK(nodePath); + long end = clock.getTime(); + opDurations.getMasterKeyByDelegationKeyDuration(start, end); + return RouterMasterKeyResponse.newInstance(routerMasterKey); + } catch (Exception e) { + throw new YarnException(e); + } + } + + /** + * Get MasterKeyZNodePath based on DelegationKey. + * + * @param delegationKey delegationKey. + * @return masterKey ZNodePath. + */ + private String getMasterKeyZNodePathByDelegationKey(DelegationKey delegationKey) { + return getMasterKeyZNodePathByKeyId(delegationKey.getKeyId()); + } + + /** + * Get MasterKeyZNodePath based on KeyId. + * + * @param keyId master key id. + * @return masterKey ZNodePath. + */ + private String getMasterKeyZNodePathByKeyId(int keyId) { + String nodeName = ROUTER_RM_DELEGATION_KEY_PREFIX + keyId; + return getNodePath(routerRMDTMasterKeysRootPath, nodeName); + } + + /** + * Get RouterMasterKey from ZK. + * + * @param nodePath The path where masterKey is stored in ZK. + * + * @return RouterMasterKey. + * @throws IOException An IO Error occurred.
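+ *
+ * Implementation note (added for clarity): the znode payload read here is the
+ * DelegationKey serialized through its Writable write() method (see storeNewMasterKey),
+ * so it is deserialized below with DelegationKey#readFields.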
+ */ + private RouterMasterKey getRouterMasterKeyFromZK(String nodePath) + throws IOException { + try { + byte[] data = get(nodePath); + if ((data == null) || (data.length == 0)) { + return null; + } + + ByteArrayInputStream bin = new ByteArrayInputStream(data); + DataInputStream din = new DataInputStream(bin); + DelegationKey key = new DelegationKey(); + key.readFields(din); + + return RouterMasterKey.newInstance(key.getKeyId(), + ByteBuffer.wrap(key.getEncodedKey()), key.getExpiryDate()); + } catch (Exception ex) { + LOG.error("No node in path {}.", nodePath); + throw new IOException(ex); + } + } + + /** + * ZookeeperFederationStateStore Supports Store RMDelegationTokenIdentifier. + * + * The stored token method is a synchronized method + * used to ensure that storeNewToken is a thread-safe method. + * + * @param request The request contains RouterRMToken (RMDelegationTokenIdentifier and renewDate) + * @return routerRMTokenResponse, the response contains the RouterStoreToken. + * @throws YarnException if the call to the state store is unsuccessful. + * @throws IOException An IO Error occurred. + */ + @Override + public RouterRMTokenResponse storeNewToken(RouterRMTokenRequest request) + throws YarnException, IOException { + + long start = clock.getTime(); + // We verify the RouterRMTokenRequest to ensure that the request is not empty, + // and that the internal RouterStoreToken is not empty. + FederationRouterRMTokenInputValidator.validate(request); + + try { + + // add delegationToken + storeOrUpdateRouterRMDT(request, false); + + // Get the stored delegationToken from ZK and return. + RouterStoreToken resultStoreToken = getStoreTokenFromZK(request); + long end = clock.getTime(); + opDurations.getStoreNewTokenDuration(start, end); + return RouterRMTokenResponse.newInstance(resultStoreToken); + } catch (YarnException | IOException e) { + throw e; + } catch (Exception e) { + throw new YarnException(e); + } + } + + /** + * ZookeeperFederationStateStore Supports Update RMDelegationTokenIdentifier. + * + * The update stored token method is a synchronized method + * used to ensure that storeNewToken is a thread-safe method. + * + * @param request The request contains RouterRMToken (RMDelegationTokenIdentifier and renewDate) + * @return routerRMTokenResponse, the response contains the RouterStoreToken. + * @throws YarnException if the call to the state store is unsuccessful. + * @throws IOException An IO Error occurred. + */ + @Override + public RouterRMTokenResponse updateStoredToken(RouterRMTokenRequest request) + throws YarnException, IOException { + + long start = clock.getTime(); + // We verify the RouterRMTokenRequest to ensure that the request is not empty, + // and that the internal RouterStoreToken is not empty. + FederationRouterRMTokenInputValidator.validate(request); + + try { + + // get the Token storage path + String nodePath = getStoreTokenZNodePathByTokenRequest(request); + + // updateStoredToken needs to determine whether the zkNode exists. + // If it exists, update the token data. + // If it does not exist, write the new token data directly. + boolean pathExists = true; + if (!exists(nodePath)) { + pathExists = false; + } + + if (pathExists) { + // update delegationToken + storeOrUpdateRouterRMDT(request, true); + } else { + // add new delegationToken + storeNewToken(request); + } + + // Get the stored delegationToken from ZK and return. 
+ RouterStoreToken resultStoreToken = getStoreTokenFromZK(request); + long end = clock.getTime(); + opDurations.updateStoredTokenDuration(start, end); + return RouterRMTokenResponse.newInstance(resultStoreToken); + } catch (YarnException | IOException e) { + throw e; + } catch (Exception e) { + throw new YarnException(e); + } + } + + /** + * ZookeeperFederationStateStore Supports Remove RMDelegationTokenIdentifier. + * + * The remove stored token method is a synchronized method + * used to ensure that storeNewToken is a thread-safe method. + * + * @param request The request contains RouterRMToken (RMDelegationTokenIdentifier and renewDate) + * @return routerRMTokenResponse, the response contains the RouterStoreToken. + * @throws YarnException if the call to the state store is unsuccessful. + * @throws IOException An IO Error occurred. + */ + @Override + public RouterRMTokenResponse removeStoredToken(RouterRMTokenRequest request) + throws YarnException, IOException { + + long start = clock.getTime(); + // We verify the RouterRMTokenRequest to ensure that the request is not empty, + // and that the internal RouterStoreToken is not empty. + FederationRouterRMTokenInputValidator.validate(request); + + try { + + // get the Token storage path + String nodePath = getStoreTokenZNodePathByTokenRequest(request); + + // If the path to be deleted does not exist, throw an exception directly. + if (!exists(nodePath)) { + throw new YarnException("ZkNodePath = " + nodePath + " not exists!"); + } + + // Check again, first get the data from ZK, + // if the data is not empty, then delete it + RouterStoreToken storeToken = getStoreTokenFromZK(request); + if (storeToken != null) { + zkManager.delete(nodePath); + } + + // return deleted token data. + long end = clock.getTime(); + opDurations.removeStoredTokenDuration(start, end); + return RouterRMTokenResponse.newInstance(storeToken); + } catch (YarnException | IOException e) { + throw e; + } catch (Exception e) { + throw new YarnException(e); + } + } + + /** + * The Router Supports GetTokenByRouterStoreToken. + * + * @param request The request contains RouterRMToken (RMDelegationTokenIdentifier and renewDate) + * @return RouterRMTokenResponse. + * @throws YarnException if the call to the state store is unsuccessful + * @throws IOException An IO Error occurred + */ + @Override + public RouterRMTokenResponse getTokenByRouterStoreToken(RouterRMTokenRequest request) + throws YarnException, IOException { + + long start = clock.getTime(); + // We verify the RouterRMTokenRequest to ensure that the request is not empty, + // and that the internal RouterStoreToken is not empty. + FederationRouterRMTokenInputValidator.validate(request); + + try { + + // Before get the token, + // we need to determine whether the path where the token is stored exists. + // If it doesn't exist, we will throw an exception. + String nodePath = getStoreTokenZNodePathByTokenRequest(request); + if (!exists(nodePath)) { + throw new YarnException("ZkNodePath = " + nodePath + " not exists!"); + } + + // Get the stored delegationToken from ZK and return. + RouterStoreToken resultStoreToken = getStoreTokenFromZK(request); + // return deleted token data. + long end = clock.getTime(); + opDurations.getTokenByRouterStoreTokenDuration(start, end); + return RouterRMTokenResponse.newInstance(resultStoreToken); + } catch (YarnException | IOException e) { + throw e; + } catch (Exception e) { + throw new YarnException(e); + } + } + + /** + * Convert MasterKey to DelegationKey. 
+ * + * Before using this function, + * please use FederationRouterRMTokenInputValidator to verify the request. + * After validation, the request and the internal object are guaranteed to be non-empty. + * + * @param request RouterMasterKeyRequest. + * @return DelegationKey. + */ + private DelegationKey convertMasterKeyToDelegationKey(RouterMasterKeyRequest request) { + RouterMasterKey masterKey = request.getRouterMasterKey(); + return convertMasterKeyToDelegationKey(masterKey); + } + + /** + * Convert MasterKey to DelegationKey. + * + * @param masterKey masterKey. + * @return DelegationKey. + */ + private DelegationKey convertMasterKeyToDelegationKey(RouterMasterKey masterKey) { + ByteBuffer keyByteBuf = masterKey.getKeyBytes(); + byte[] keyBytes = new byte[keyByteBuf.remaining()]; + keyByteBuf.get(keyBytes); + return new DelegationKey(masterKey.getKeyId(), masterKey.getExpiryDate(), keyBytes); + } + + /** + * Check if a path exists in ZK. + * + * @param path Path to be checked. + * @return Returns true if the path exists, false if the path does not exist. + * @throws Exception if accessing ZK fails. + */ + @VisibleForTesting + boolean exists(final String path) throws Exception { + return zkManager.exists(path); + } + + /** + * Add or update delegationToken. + * + * Before using this function, + * please use FederationRouterRMTokenInputValidator to verify the request. + * After validation, the request and the internal object are guaranteed to be non-empty. + * + * @param request the request containing the RouterStoreToken to write. + * @param isUpdate true, update the token; false, create a new token. + * @throws Exception exception occurs. + */ + private void storeOrUpdateRouterRMDT(RouterRMTokenRequest request, boolean isUpdate) + throws Exception { + + RouterStoreToken routerStoreToken = request.getRouterStoreToken(); + String nodeCreatePath = getStoreTokenZNodePathByTokenRequest(request); + LOG.debug("nodeCreatePath = {}, isUpdate = {}", nodeCreatePath, isUpdate); + put(nodeCreatePath, routerStoreToken.toByteArray(), isUpdate); + } + + /** + * Get ZNode Path of StoreToken. + * + * Before using this method, we should use FederationRouterRMTokenInputValidator + * to verify the request, ensure that the request is not empty, + * and ensure that the object in the request is not empty. + * + * @param request RouterRMTokenRequest. + * @return RouterRMToken ZNode Path. + * @throws IOException io exception occurs. + */ + private String getStoreTokenZNodePathByTokenRequest(RouterRMTokenRequest request) + throws IOException { + RouterStoreToken routerStoreToken = request.getRouterStoreToken(); + YARNDelegationTokenIdentifier identifier = routerStoreToken.getTokenIdentifier(); + return getStoreTokenZNodePathByIdentifier(identifier); + } + + /** + * Get ZNode Path of StoreToken. + * + * @param identifier YARNDelegationTokenIdentifier. + * @return RouterRMToken ZNode Path. + */ + private String getStoreTokenZNodePathByIdentifier(YARNDelegationTokenIdentifier identifier) { + String nodePath = getNodePath(routerRMDelegationTokensRootPath, + ROUTER_RM_DELEGATION_TOKEN_PREFIX + identifier.getSequenceNumber()); + return nodePath; + } + + /** + * Get RouterStoreToken from ZK. + * + * @param request RouterRMTokenRequest. + * @return RouterStoreToken. + * @throws IOException io exception occurs.
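+ *
+ * Implementation note (added for clarity): the token znode name is derived from the
+ * identifier's sequence number (ROUTER_RM_DELEGATION_TOKEN_PREFIX + sequenceNumber, see
+ * getStoreTokenZNodePathByIdentifier above), which is why the sequence numbers handed out
+ * by incrementDelegationTokenSeqNum must be unique across all Routers.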
+ */ + private RouterStoreToken getStoreTokenFromZK(RouterRMTokenRequest request) throws IOException { + RouterStoreToken routerStoreToken = request.getRouterStoreToken(); + YARNDelegationTokenIdentifier identifier = routerStoreToken.getTokenIdentifier(); + return getStoreTokenFromZK(identifier); + } + + /** + * Get RouterStoreToken from ZK. + * + * @param identifier YARN DelegationToken Identifier. + * @return RouterStoreToken. + * @throws IOException io exception occurs. + */ + private RouterStoreToken getStoreTokenFromZK(YARNDelegationTokenIdentifier identifier) + throws IOException { + // get the Token storage path + String nodePath = getStoreTokenZNodePathByIdentifier(identifier); + return getStoreTokenFromZK(nodePath); + } + + /** + * Get RouterStoreToken from ZK. + * + * @param nodePath Znode location where data is stored. + * @return RouterStoreToken. + * @throws IOException io exception occurs. + */ + private RouterStoreToken getStoreTokenFromZK(String nodePath) + throws IOException { + try { + byte[] data = get(nodePath); + if ((data == null) || (data.length == 0)) { + return null; + } + ByteArrayInputStream bin = new ByteArrayInputStream(data); + DataInputStream din = new DataInputStream(bin); + RouterStoreToken storeToken = Records.newRecord(RouterStoreToken.class); + storeToken.readFields(din); + return storeToken; + } catch (Exception ex) { + LOG.error("No node in path [{}]", nodePath, ex); + throw new IOException(ex); + } + } + + /** + * Increase SequenceNum. For ZK, this is a distributed value shared by all Routers, + * kept consistent through the ZooKeeper-backed SharedCount. + * + * For ZookeeperFederationStateStore, in order to reduce the interaction with ZK, + * we apply for SequenceNums from ZK in batches (a new batch is requested + * when currentSeqNum >= currentMaxSeqNum), + * and the end of the newly reserved range is assigned to currentMaxSeqNum. + * + * When calling the method incrementDelegationTokenSeqNum, + * if currentSeqNum < currentMaxSeqNum, we simply return ++currentSeqNum; + * when currentSeqNum >= currentMaxSeqNum, we first re-apply for SequenceNums from ZK. + * + * @return SequenceNum. + */ + @Override + public int incrementDelegationTokenSeqNum() { + // The secret manager will keep a local range of seq num which won't be + // seen by peers, so only when the range is exhausted it will ask zk for + // another range again + if (currentSeqNum >= currentMaxSeqNum) { + try { + // after a successful batch request, we can get the range starting point + currentSeqNum = incrSharedCount(delTokSeqCounter, seqNumBatchSize); + currentMaxSeqNum = currentSeqNum + seqNumBatchSize; + LOG.info("Fetched new range of seq num, from {} to {} ", + currentSeqNum + 1, currentMaxSeqNum); + } catch (InterruptedException e) { + // The ExpirationThread is just finishing.. so don't do anything.. + LOG.debug("Thread interrupted while performing token counter increment", e); + Thread.currentThread().interrupt(); + } catch (Exception e) { + throw new RuntimeException("Could not increment shared counter !!", e); + } + } + return ++currentSeqNum; + } + + /** + * Increment the value of the shared counter by batchSize. + * + * @param sharedCount zk SharedCount. + * @param batchSize batch size. + * @return the previous value of the counter, i.e. the start of the newly reserved range. + * @throws Exception if updating the shared counter in ZK fails.
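+ *
+ * Worked example (illustrative, assuming a batch size of 1000): with the shared counter at 0,
+ * trySetCount moves the value to 1000 and this method returns 0; the caller then sets
+ * currentSeqNum = 0 and currentMaxSeqNum = 1000, and sequence numbers 1..1000 are handed out
+ * locally without touching ZooKeeper again. If another Router updates the counter first,
+ * trySetCount fails on the stale version and the loop retries with the fresh value.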
+ */ + private int incrSharedCount(SharedCount sharedCount, int batchSize) + throws Exception { + while (true) { + // Loop until we successfully increment the counter + VersionedValue versionedValue = sharedCount.getVersionedValue(); + if (sharedCount.trySetCount(versionedValue, versionedValue.getValue() + batchSize)) { + return versionedValue.getValue(); + } + } + } + + /** + * Get DelegationToken SeqNum. + * + * @return delegationTokenSeqNum. + */ + @Override + public int getDelegationTokenSeqNum() { + return delTokSeqCounter.getCount(); + } + + /** + * Set DelegationToken SeqNum. + * + * @param seqNum sequenceNum. + */ + @Override + public void setDelegationTokenSeqNum(int seqNum) { + try { + delTokSeqCounter.setCount(seqNum); + } catch (Exception e) { + throw new RuntimeException("Could not set shared counter !!", e); + } + } + + /** + * Get Current KeyId. + * + * @return currentKeyId. + */ + @Override + public int getCurrentKeyId() { + return keyIdSeqCounter.getCount(); + } + + /** + * The Router Supports incrementCurrentKeyId. + * + * @return CurrentKeyId. + */ + @Override + public int incrementCurrentKeyId() { + try { + // It should be noted that the BatchSize of MasterKeyId defaults to 1. + incrSharedCount(keyIdSeqCounter, 1); + } catch (InterruptedException e) { + // The ExpirationThread is just finishing.. so don't do anything.. + LOG.debug("Thread interrupted while performing Master keyId increment", e); + Thread.currentThread().interrupt(); + } catch (Exception e) { + throw new RuntimeException("Could not increment shared Master keyId counter !!", e); + } + return keyIdSeqCounter.getCount(); } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/ApplicationHomeSubCluster.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/ApplicationHomeSubCluster.java index 898e11f1820..e1ea302380a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/ApplicationHomeSubCluster.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/ApplicationHomeSubCluster.java @@ -17,6 +17,8 @@ package org.apache.hadoop.yarn.server.federation.store.records; +import org.apache.commons.lang3.builder.EqualsBuilder; +import org.apache.commons.lang3.builder.HashCodeBuilder; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.classification.InterfaceAudience.Public; import org.apache.hadoop.classification.InterfaceStability.Unstable; @@ -123,32 +125,42 @@ public abstract class ApplicationHomeSubCluster { @Override public boolean equals(Object obj) { + if (this == obj) { return true; } + if (obj == null) { return false; } - if (getClass() != obj.getClass()) { - return false; + + if (obj instanceof ApplicationHomeSubCluster) { + ApplicationHomeSubCluster other = (ApplicationHomeSubCluster) obj; + return new EqualsBuilder() + .append(this.getApplicationId(), other.getApplicationId()) + .append(this.getHomeSubCluster(), other.getHomeSubCluster()) + .isEquals(); } - ApplicationHomeSubCluster other = (ApplicationHomeSubCluster) obj; - if (!this.getApplicationId().equals(other.getApplicationId())) { - return false; - } - return 
this.getHomeSubCluster().equals(other.getHomeSubCluster()); + + return false; } @Override public int hashCode() { - return getApplicationId().hashCode() * 31 + getHomeSubCluster().hashCode(); + return new HashCodeBuilder(). + append(this.getApplicationId()). + append(this.getHomeSubCluster()). + append(this.getCreateTime()).toHashCode(); } @Override public String toString() { - return "ApplicationHomeSubCluster [getApplicationId()=" - + getApplicationId() + ", getHomeSubCluster()=" + getHomeSubCluster() - + "]"; + StringBuilder sb = new StringBuilder(); + sb.append("ApplicationHomeSubCluster: [") + .append("ApplicationId: ").append(getApplicationId()).append(", ") + .append("HomeSubCluster: ").append(getHomeSubCluster()).append(", ") + .append("CreateTime: ").append(getCreateTime()).append(", ") + .append("]"); + return sb.toString(); } - } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/ReservationHomeSubCluster.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/ReservationHomeSubCluster.java index e080d115716..c1a15536d26 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/ReservationHomeSubCluster.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/ReservationHomeSubCluster.java @@ -94,21 +94,24 @@ public abstract class ReservationHomeSubCluster { @Override public boolean equals(Object obj) { + if (this == obj) { return true; } + if (obj == null) { return false; } - if (getClass() != obj.getClass()) { - return false; - } - ReservationHomeSubCluster other = (ReservationHomeSubCluster) obj; - return new EqualsBuilder() - .append(this.getReservationId(), other.getReservationId()) - .append(this.getHomeSubCluster(), other.getHomeSubCluster()) - .isEquals(); + if (obj instanceof ReservationHomeSubCluster) { + ReservationHomeSubCluster other = (ReservationHomeSubCluster) obj; + return new EqualsBuilder() + .append(this.getReservationId(), other.getReservationId()) + .append(this.getHomeSubCluster(), other.getHomeSubCluster()) + .isEquals(); + } + + return false; } @Override @@ -121,9 +124,11 @@ public abstract class ReservationHomeSubCluster { @Override public String toString() { - return "ReservationHomeSubCluster [getReservationId()=" - + getReservationId() + ", getApplicationHomeSubcluster()=" + getHomeSubCluster() - + "]"; + StringBuilder sb = new StringBuilder(); + sb.append("ReservationHomeSubCluster: [") + .append("ReservationId: ").append(getReservationId()).append(", ") + .append("HomeSubCluster: ").append(getHomeSubCluster()) + .append("]"); + return sb.toString(); } - } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterMasterKey.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterMasterKey.java index 0090723e517..8cd80328c3f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterMasterKey.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterMasterKey.java @@ -114,20 +114,36 @@ public abstract class RouterMasterKey { } @Override - public boolean equals(Object right) { - if (this == right) { + public boolean equals(Object obj) { + + if (this == obj) { return true; } - if (right == null || getClass() != right.getClass()) { + if (obj == null) { return false; } - RouterMasterKey r = (RouterMasterKey) right; - return new EqualsBuilder() - .append(this.getKeyId().intValue(), r.getKeyId().intValue()) - .append(this.getExpiryDate().longValue(), this.getExpiryDate().longValue()) - .append(getKeyBytes().array(), r.getKeyBytes()) - .isEquals(); + if (obj instanceof RouterMasterKey) { + RouterMasterKey other = (RouterMasterKey) obj; + return new EqualsBuilder() + .append(this.getKeyId().intValue(), other.getKeyId().intValue()) + .append(this.getExpiryDate().longValue(), other.getExpiryDate().longValue()) + .append(this.getKeyBytes().array(), other.getKeyBytes()) + .isEquals(); + } + + return false; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("RouterMasterKey: [") + .append("KeyId: ").append(getKeyId()).append(", ") + .append("ExpiryDate: ").append(getExpiryDate()).append(", ") + .append("KeyBytes: ").append(getKeyBytes()).append(", ") + .append("]"); + return sb.toString(); } } diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpBodyContent.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterRMTokenRequest.java similarity index 50% rename from hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpBodyContent.java rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterRMTokenRequest.java index b471f218e57..790ee513bbd 100644 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpBodyContent.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterRMTokenRequest.java @@ -1,4 +1,4 @@ -/* +/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,31 +15,30 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -package org.apache.hadoop.fs.swift.http; +package org.apache.hadoop.yarn.server.federation.store.records; -/** - * Response tuple from GET operations; combines the input stream with the content length - */ -public class HttpBodyContent { - private final long contentLength; - private final HttpInputStreamWithRelease inputStream; +import org.apache.hadoop.classification.InterfaceAudience.Private; +import org.apache.hadoop.classification.InterfaceAudience.Public; +import org.apache.hadoop.classification.InterfaceStability.Unstable; +import org.apache.hadoop.yarn.util.Records; - /** - * build a body response - * @param inputStream input stream from the operation - * @param contentLength length of content; may be -1 for "don't know" - */ - public HttpBodyContent(HttpInputStreamWithRelease inputStream, - long contentLength) { - this.contentLength = contentLength; - this.inputStream = inputStream; +@Private +@Unstable +public abstract class RouterRMTokenRequest { + + @Private + @Unstable + public static RouterRMTokenRequest newInstance(RouterStoreToken routerStoreToken) { + RouterRMTokenRequest request = Records.newRecord(RouterRMTokenRequest.class); + request.setRouterStoreToken(routerStoreToken); + return request; } - public long getContentLength() { - return contentLength; - } + @Public + @Unstable + public abstract RouterStoreToken getRouterStoreToken(); - public HttpInputStreamWithRelease getInputStream() { - return inputStream; - } + @Private + @Unstable + public abstract void setRouterStoreToken(RouterStoreToken routerStoreToken); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterRMTokenResponse.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterRMTokenResponse.java new file mode 100644 index 00000000000..c629e46a048 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterRMTokenResponse.java @@ -0,0 +1,44 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.yarn.server.federation.store.records; + +import org.apache.hadoop.classification.InterfaceAudience.Private; +import org.apache.hadoop.classification.InterfaceAudience.Public; +import org.apache.hadoop.classification.InterfaceStability.Unstable; +import org.apache.hadoop.yarn.util.Records; + +@Private +@Unstable +public abstract class RouterRMTokenResponse { + + @Private + @Unstable + public static RouterRMTokenResponse newInstance(RouterStoreToken routerStoreToken) { + RouterRMTokenResponse request = Records.newRecord(RouterRMTokenResponse.class); + request.setRouterStoreToken(routerStoreToken); + return request; + } + + @Public + @Unstable + public abstract RouterStoreToken getRouterStoreToken(); + + @Private + @Unstable + public abstract void setRouterStoreToken(RouterStoreToken routerStoreToken); +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterStoreToken.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterStoreToken.java new file mode 100644 index 00000000000..29f86903f91 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/RouterStoreToken.java @@ -0,0 +1,65 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.yarn.server.federation.store.records; + +import org.apache.hadoop.classification.InterfaceAudience.Private; +import org.apache.hadoop.classification.InterfaceStability.Unstable; +import org.apache.hadoop.yarn.security.client.YARNDelegationTokenIdentifier; +import org.apache.hadoop.yarn.util.Records; + +import java.io.DataInput; +import java.io.IOException; + +@Private +@Unstable +public abstract class RouterStoreToken { + + @Private + @Unstable + public static RouterStoreToken newInstance(YARNDelegationTokenIdentifier identifier, + Long renewdate) { + RouterStoreToken storeToken = Records.newRecord(RouterStoreToken.class); + storeToken.setIdentifier(identifier); + storeToken.setRenewDate(renewdate); + return storeToken; + } + + @Private + @Unstable + public abstract YARNDelegationTokenIdentifier getTokenIdentifier() throws IOException; + + @Private + @Unstable + public abstract void setIdentifier(YARNDelegationTokenIdentifier identifier); + + @Private + @Unstable + public abstract Long getRenewDate(); + + @Private + @Unstable + public abstract void setRenewDate(Long renewDate); + + @Private + @Unstable + public abstract byte[] toByteArray() throws IOException; + + @Private + @Unstable + public abstract void readFields(DataInput in) throws IOException; +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterId.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterId.java index 7eeb44bba55..db638c2fac5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterId.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterId.java @@ -17,6 +17,8 @@ package org.apache.hadoop.yarn.server.federation.store.records; +import org.apache.commons.lang3.builder.EqualsBuilder; +import org.apache.commons.lang3.builder.HashCodeBuilder; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.classification.InterfaceAudience.Public; import org.apache.hadoop.classification.InterfaceStability.Unstable; @@ -78,19 +80,26 @@ public abstract class SubClusterId implements Comparable { if (this == obj) { return true; } + if (obj == null) { return false; } - if (getClass() != obj.getClass()) { - return false; + + if (obj instanceof SubClusterId) { + SubClusterId other = (SubClusterId) obj; + return new EqualsBuilder() + .append(this.getId(), other.getId()) + .isEquals(); } - SubClusterId other = (SubClusterId) obj; - return this.getId().equals(other.getId()); + + return false; } @Override public int hashCode() { - return getId().hashCode(); + return new HashCodeBuilder(). + append(this.getId()). 
+ toHashCode(); } @Override @@ -104,5 +113,4 @@ public abstract class SubClusterId implements Comparable { sb.append(getId()); return sb.toString(); } - } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterIdInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterIdInfo.java index e2260a1f457..ad03fb09a4e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterIdInfo.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterIdInfo.java @@ -18,6 +18,8 @@ package org.apache.hadoop.yarn.server.federation.store.records; +import org.apache.commons.lang3.builder.EqualsBuilder; +import org.apache.commons.lang3.builder.HashCodeBuilder; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; @@ -58,18 +60,28 @@ public class SubClusterIdInfo { } @Override - public boolean equals(Object other) { - if (other instanceof SubClusterIdInfo) { - if (((SubClusterIdInfo) other).id.equals(this.id)) { - return true; - } + public boolean equals(Object obj) { + + if (this == obj) { + return true; } + + if (obj == null) { + return false; + } + + if (obj instanceof SubClusterIdInfo) { + SubClusterIdInfo other = (SubClusterIdInfo) obj; + return new EqualsBuilder() + .append(this.id, other.id) + .isEquals(); + } + return false; } @Override public int hashCode() { - return id.hashCode(); + return new HashCodeBuilder().append(this.id).toHashCode(); } - } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterInfo.java index cbf64e6126b..40b87c7eb09 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterInfo.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterInfo.java @@ -17,6 +17,8 @@ package org.apache.hadoop.yarn.server.federation.store.records; +import org.apache.commons.lang3.builder.EqualsBuilder; +import org.apache.commons.lang3.builder.HashCodeBuilder; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.classification.InterfaceAudience.Public; import org.apache.hadoop.classification.InterfaceStability.Unstable; @@ -43,6 +45,7 @@ public abstract class SubClusterInfo { @Private @Unstable + @SuppressWarnings("checkstyle:ParameterNumber") public static SubClusterInfo newInstance(SubClusterId subClusterId, String amRMServiceAddress, String clientRMServiceAddress, String rmAdminServiceAddress, String rmWebServiceAddress, @@ -54,6 +57,7 @@ public abstract class SubClusterInfo { @Private @Unstable + @SuppressWarnings("checkstyle:ParameterNumber") public static SubClusterInfo newInstance(SubClusterId subClusterId, String amRMServiceAddress, String clientRMServiceAddress, 
String rmAdminServiceAddress, String rmWebServiceAddress, @@ -252,48 +256,49 @@ public abstract class SubClusterInfo { @Override public String toString() { - return "SubClusterInfo [getSubClusterId() = " + getSubClusterId() - + ", getAMRMServiceAddress() = " + getAMRMServiceAddress() - + ", getClientRMServiceAddress() = " + getClientRMServiceAddress() - + ", getRMAdminServiceAddress() = " + getRMAdminServiceAddress() - + ", getRMWebServiceAddress() = " + getRMWebServiceAddress() - + ", getState() = " + getState() + ", getLastStartTime() = " - + getLastStartTime() + ", getCapability() = " + getCapability() + "]"; + StringBuilder sb = new StringBuilder(); + sb.append("SubClusterInfo: [") + .append("SubClusterId: ").append(getSubClusterId()).append(", ") + .append("AMRMServiceAddress: ").append(getAMRMServiceAddress()).append(", ") + .append("ClientRMServiceAddress: ").append(getClientRMServiceAddress()).append(", ") + .append("RMAdminServiceAddress: ").append(getRMAdminServiceAddress()).append(", ") + .append("RMWebServiceAddress: ").append(getRMWebServiceAddress()).append(", ") + .append("State: ").append(getState()).append(", ") + .append("LastStartTime: ").append(getLastStartTime()).append(", ") + .append("Capability: ").append(getCapability()) + .append("]"); + return sb.toString(); } @Override public boolean equals(Object obj) { + if (this == obj) { return true; } + if (obj == null) { return false; } + if (getClass() != obj.getClass()) { return false; } - SubClusterInfo other = (SubClusterInfo) obj; - if (!this.getSubClusterId().equals(other.getSubClusterId())) { - return false; + + if (obj instanceof SubClusterInfo) { + SubClusterInfo other = (SubClusterInfo) obj; + return new EqualsBuilder() + .append(this.getSubClusterId(), other.getSubClusterId()) + .append(this.getAMRMServiceAddress(), other.getAMRMServiceAddress()) + .append(this.getClientRMServiceAddress(), other.getClientRMServiceAddress()) + .append(this.getRMAdminServiceAddress(), other.getRMAdminServiceAddress()) + .append(this.getRMWebServiceAddress(), other.getRMWebServiceAddress()) + .append(this.getState(), other.getState()) + .append(this.getLastStartTime(), other.getLastStartTime()) + .isEquals(); } - if (!this.getAMRMServiceAddress().equals(other.getAMRMServiceAddress())) { - return false; - } - if (!this.getClientRMServiceAddress() - .equals(other.getClientRMServiceAddress())) { - return false; - } - if (!this.getRMAdminServiceAddress() - .equals(other.getRMAdminServiceAddress())) { - return false; - } - if (!this.getRMWebServiceAddress().equals(other.getRMWebServiceAddress())) { - return false; - } - if (!this.getState().equals(other.getState())) { - return false; - } - return this.getLastStartTime() == other.getLastStartTime(); + + return false; // Capability and HeartBeat fields are not included as they are temporal // (i.e. timestamps), so they change during the lifetime of the same // sub-cluster @@ -301,23 +306,16 @@ public abstract class SubClusterInfo { @Override public int hashCode() { - final int prime = 31; - int result = 1; - result = prime * result - + ((getSubClusterId() == null) ? 0 : getSubClusterId().hashCode()); - result = prime * result + ((getAMRMServiceAddress() == null) ? 0 - : getAMRMServiceAddress().hashCode()); - result = prime * result + ((getClientRMServiceAddress() == null) ? 0 - : getClientRMServiceAddress().hashCode()); - result = prime * result + ((getRMAdminServiceAddress() == null) ? 
0 - : getRMAdminServiceAddress().hashCode()); - result = prime * result + ((getRMWebServiceAddress() == null) ? 0 - : getRMWebServiceAddress().hashCode()); - result = - prime * result + ((getState() == null) ? 0 : getState().hashCode()); - result = prime * result - + (int) (getLastStartTime() ^ (getLastStartTime() >>> 32)); - return result; + + return new HashCodeBuilder() + .append(this.getSubClusterId()) + .append(this.getAMRMServiceAddress()) + .append(this.getClientRMServiceAddress()) + .append(this.getRMAdminServiceAddress()) + .append(this.getRMWebServiceAddress()) + .append(this.getState()) + .append(this.getLastStartTime()) + .toHashCode(); // Capability and HeartBeat fields are not included as they are temporal // (i.e. timestamps), so they change during the lifetime of the same // sub-cluster diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterPolicyConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterPolicyConfiguration.java index 817d270146f..822e40c384e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterPolicyConfiguration.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterPolicyConfiguration.java @@ -18,6 +18,8 @@ package org.apache.hadoop.yarn.server.federation.store.records; +import org.apache.commons.lang3.builder.EqualsBuilder; +import org.apache.commons.lang3.builder.HashCodeBuilder; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.classification.InterfaceAudience.Public; import org.apache.hadoop.classification.InterfaceStability.Unstable; @@ -127,36 +129,44 @@ public abstract class SubClusterPolicyConfiguration { @Override public int hashCode() { - return 31 * getParams().hashCode() + getType().hashCode(); + return new HashCodeBuilder() + .append(this.getType()) + .append(this.getQueue()) + .append(this.getParams()). 
+ toHashCode(); } @Override public boolean equals(Object obj) { + if (this == obj) { return true; } + if (obj == null) { return false; } - if (getClass() != obj.getClass()) { - return false; + + if (obj instanceof SubClusterPolicyConfiguration) { + SubClusterPolicyConfiguration other = (SubClusterPolicyConfiguration) obj; + return new EqualsBuilder() + .append(this.getType(), other.getType()) + .append(this.getQueue(), other.getQueue()) + .append(this.getParams(), other.getParams()) + .isEquals(); } - SubClusterPolicyConfiguration other = (SubClusterPolicyConfiguration) obj; - if (!this.getType().equals(other.getType())) { - return false; - } - if (!this.getParams().equals(other.getParams())) { - return false; - } - return true; + + return false; } @Override public String toString() { StringBuilder sb = new StringBuilder(); - sb.append(getType()) - .append(" : ") - .append(getParams()); + sb.append("SubClusterPolicyConfiguration: [") + .append("Type: ").append(getType()).append(", ") + .append("Queue: ").append(getQueue()).append(", ") + .append("Params: ").append(getParams()).append(", ") + .append("]"); return sb.toString(); } -} \ No newline at end of file +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterState.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterState.java index b30bd32fd02..e05f17ad2a5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterState.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/SubClusterState.java @@ -70,15 +70,15 @@ public enum SubClusterState { /** * Convert a string into {@code SubClusterState}. * - * @param x the string to convert in SubClusterState + * @param state the string to convert in SubClusterState * @return the respective {@code SubClusterState} */ - public static SubClusterState fromString(String x) { + public static SubClusterState fromString(String state) { try { - return SubClusterState.valueOf(x); + return SubClusterState.valueOf(state); } catch (Exception e) { - LOG.error("Invalid SubCluster State value in the StateStore does not" - + " match with the YARN Federation standard."); + LOG.error("Invalid SubCluster State value({}) in the StateStore does not" + + " match with the YARN Federation standard.", state); return null; } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/RouterRMTokenRequestPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/RouterRMTokenRequestPBImpl.java new file mode 100644 index 00000000000..1358a78326f --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/RouterRMTokenRequestPBImpl.java @@ -0,0 +1,129 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.federation.store.records.impl.pb; + +import org.apache.hadoop.thirdparty.protobuf.TextFormat; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterRMTokenRequestProto; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterRMTokenRequestProtoOrBuilder; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterStoreTokenProto; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; + +public class RouterRMTokenRequestPBImpl extends RouterRMTokenRequest { + + private RouterRMTokenRequestProto proto = RouterRMTokenRequestProto.getDefaultInstance(); + private RouterRMTokenRequestProto.Builder builder = null; + private boolean viaProto = false; + private RouterStoreToken routerStoreToken = null; + + public RouterRMTokenRequestPBImpl() { + builder = RouterRMTokenRequestProto.newBuilder(); + } + + public RouterRMTokenRequestPBImpl(RouterRMTokenRequestProto requestProto) { + this.proto = requestProto; + viaProto = true; + } + + public RouterRMTokenRequestProto getProto() { + mergeLocalToProto(); + proto = viaProto ? proto : builder.build(); + viaProto = true; + return proto; + } + + private void mergeLocalToProto() { + if (viaProto) { + maybeInitBuilder(); + } + mergeLocalToBuilder(); + proto = builder.build(); + viaProto = true; + } + + private void maybeInitBuilder() { + if (viaProto || builder == null) { + builder = RouterRMTokenRequestProto.newBuilder(proto); + } + viaProto = false; + } + + private void mergeLocalToBuilder() { + if (this.routerStoreToken != null) { + RouterStoreTokenPBImpl routerStoreTokenPBImpl = + (RouterStoreTokenPBImpl) this.routerStoreToken; + RouterStoreTokenProto storeTokenProto = routerStoreTokenPBImpl.getProto(); + if (!storeTokenProto.equals(builder.getRouterStoreToken())) { + builder.setRouterStoreToken(convertToProtoFormat(this.routerStoreToken)); + } + } + } + + @Override + public RouterStoreToken getRouterStoreToken() { + RouterRMTokenRequestProtoOrBuilder p = viaProto ? 
proto : builder; + if (this.routerStoreToken != null) { + return this.routerStoreToken; + } + if (!p.hasRouterStoreToken()) { + return null; + } + this.routerStoreToken = convertFromProtoFormat(p.getRouterStoreToken()); + return this.routerStoreToken; + } + + @Override + public void setRouterStoreToken(RouterStoreToken storeToken) { + maybeInitBuilder(); + if (storeToken == null) { + builder.clearRouterStoreToken(); + return; + } + this.routerStoreToken = storeToken; + this.builder.setRouterStoreToken(convertToProtoFormat(storeToken)); + } + + @Override + public int hashCode() { + return getProto().hashCode(); + } + + @Override + public boolean equals(Object other) { + if (other == null) { + return false; + } + if (other.getClass().isAssignableFrom(this.getClass())) { + return this.getProto().equals(this.getClass().cast(other).getProto()); + } + return false; + } + + @Override + public String toString() { + return TextFormat.shortDebugString(getProto()); + } + + private RouterStoreTokenProto convertToProtoFormat(RouterStoreToken storeToken) { + return ((RouterStoreTokenPBImpl) storeToken).getProto(); + } + + private RouterStoreToken convertFromProtoFormat(RouterStoreTokenProto storeTokenProto) { + return new RouterStoreTokenPBImpl(storeTokenProto); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/RouterRMTokenResponsePBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/RouterRMTokenResponsePBImpl.java new file mode 100644 index 00000000000..f50967d352b --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/RouterRMTokenResponsePBImpl.java @@ -0,0 +1,131 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.yarn.server.federation.store.records.impl.pb; + +import org.apache.hadoop.classification.InterfaceAudience.Private; +import org.apache.hadoop.classification.InterfaceStability.Unstable; +import org.apache.hadoop.thirdparty.protobuf.TextFormat; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterRMTokenResponseProto; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterRMTokenResponseProtoOrBuilder; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterStoreTokenProto; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; + +@Private +@Unstable +public class RouterRMTokenResponsePBImpl extends RouterRMTokenResponse { + + private RouterRMTokenResponseProto proto = RouterRMTokenResponseProto.getDefaultInstance(); + private RouterRMTokenResponseProto.Builder builder = null; + private boolean viaProto = false; + private RouterStoreToken routerStoreToken = null; + + public RouterRMTokenResponsePBImpl() { + builder = RouterRMTokenResponseProto.newBuilder(); + } + + public RouterRMTokenResponsePBImpl(RouterRMTokenResponseProto requestProto) { + this.proto = requestProto; + viaProto = true; + } + + public RouterRMTokenResponseProto getProto() { + mergeLocalToProto(); + proto = viaProto ? proto : builder.build(); + viaProto = true; + return proto; + } + + private void mergeLocalToProto() { + if (viaProto) { + maybeInitBuilder(); + } + mergeLocalToBuilder(); + proto = builder.build(); + viaProto = true; + } + + private void maybeInitBuilder() { + if (viaProto || builder == null) { + builder = RouterRMTokenResponseProto.newBuilder(proto); + } + viaProto = false; + } + + private void mergeLocalToBuilder() { + if (this.routerStoreToken != null) { + RouterStoreTokenPBImpl routerStoreTokenPBImpl = + (RouterStoreTokenPBImpl) this.routerStoreToken; + RouterStoreTokenProto storeTokenProto = routerStoreTokenPBImpl.getProto(); + if (!storeTokenProto.equals(builder.getRouterStoreToken())) { + builder.setRouterStoreToken(convertToProtoFormat(this.routerStoreToken)); + } + } + } + + @Override + public RouterStoreToken getRouterStoreToken() { + RouterRMTokenResponseProtoOrBuilder p = viaProto ? 
proto : builder; + if (this.routerStoreToken != null) { + return this.routerStoreToken; + } + if (!p.hasRouterStoreToken()) { + return null; + } + this.routerStoreToken = convertFromProtoFormat(p.getRouterStoreToken()); + return this.routerStoreToken; + } + + @Override + public void setRouterStoreToken(RouterStoreToken storeToken) { + maybeInitBuilder(); + if (storeToken == null) { + builder.clearRouterStoreToken(); + } + this.routerStoreToken = storeToken; + } + + private RouterStoreTokenProto convertToProtoFormat(RouterStoreToken storeToken) { + return ((RouterStoreTokenPBImpl) storeToken).getProto(); + } + + private RouterStoreToken convertFromProtoFormat(RouterStoreTokenProto storeTokenProto) { + return new RouterStoreTokenPBImpl(storeTokenProto); + } + + @Override + public int hashCode() { + return getProto().hashCode(); + } + + @Override + public boolean equals(Object other) { + if (other == null) { + return false; + } + if (other.getClass().isAssignableFrom(this.getClass())) { + return this.getProto().equals(this.getClass().cast(other).getProto()); + } + return false; + } + + @Override + public String toString() { + return TextFormat.shortDebugString(getProto()); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/RouterStoreTokenPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/RouterStoreTokenPBImpl.java new file mode 100644 index 00000000000..df6030a3f0d --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/records/impl/pb/RouterStoreTokenPBImpl.java @@ -0,0 +1,180 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.yarn.server.federation.store.records.impl.pb; + +import org.apache.hadoop.classification.InterfaceAudience.Private; +import org.apache.hadoop.classification.InterfaceStability.Unstable; +import org.apache.hadoop.thirdparty.protobuf.TextFormat; +import org.apache.hadoop.yarn.proto.YarnSecurityTokenProtos.YARNDelegationTokenIdentifierProto; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; +import org.apache.hadoop.yarn.security.client.YARNDelegationTokenIdentifier; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterStoreTokenProto; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterStoreTokenProtoOrBuilder; + +import java.io.ByteArrayInputStream; +import java.io.DataInput; +import java.io.DataInputStream; +import java.io.IOException; + +/** + * Protocol buffer based implementation of {@link RouterStoreToken}. + */ +@Private +@Unstable +public class RouterStoreTokenPBImpl extends RouterStoreToken { + + private RouterStoreTokenProto proto = RouterStoreTokenProto.getDefaultInstance(); + + private RouterStoreTokenProto.Builder builder = null; + + private boolean viaProto = false; + + private YARNDelegationTokenIdentifier rMDelegationTokenIdentifier = null; + private Long renewDate; + + public RouterStoreTokenPBImpl() { + builder = RouterStoreTokenProto.newBuilder(); + } + + public RouterStoreTokenPBImpl(RouterStoreTokenProto storeTokenProto) { + this.proto = storeTokenProto; + viaProto = true; + } + + public RouterStoreTokenProto getProto() { + mergeLocalToProto(); + proto = viaProto ? proto : builder.build(); + viaProto = true; + return proto; + } + + private void mergeLocalToProto() { + if (viaProto) { + maybeInitBuilder(); + } + mergeLocalToBuilder(); + proto = builder.build(); + viaProto = true; + } + + private void mergeLocalToBuilder() { + if (this.rMDelegationTokenIdentifier != null) { + YARNDelegationTokenIdentifierProto idProto = this.rMDelegationTokenIdentifier.getProto(); + if (!idProto.equals(builder.getTokenIdentifier())) { + builder.setTokenIdentifier(convertToProtoFormat(this.rMDelegationTokenIdentifier)); + } + } + + if (this.renewDate != null) { + builder.setRenewDate(this.renewDate); + } + } + + private void maybeInitBuilder() { + if (viaProto || builder == null) { + builder = RouterStoreTokenProto.newBuilder(proto); + } + viaProto = false; + } + + public int hashCode() { + return getProto().hashCode(); + } + + @Override + public boolean equals(Object other) { + if (other == null) { + return false; + } + if (other.getClass().isAssignableFrom(this.getClass())) { + return this.getProto().equals(this.getClass().cast(other).getProto()); + } + return false; + } + + @Override + public String toString() { + return TextFormat.shortDebugString(getProto()); + } + + @Override + public YARNDelegationTokenIdentifier getTokenIdentifier() throws IOException { + RouterStoreTokenProtoOrBuilder p = viaProto ? 
proto : builder; + if (rMDelegationTokenIdentifier != null) { + return rMDelegationTokenIdentifier; + } + if(!p.hasTokenIdentifier()){ + return null; + } + YARNDelegationTokenIdentifierProto identifierProto = p.getTokenIdentifier(); + ByteArrayInputStream in = new ByteArrayInputStream(identifierProto.toByteArray()); + RMDelegationTokenIdentifier identifier = new RMDelegationTokenIdentifier(); + identifier.readFields(new DataInputStream(in)); + this.rMDelegationTokenIdentifier = identifier; + return identifier; + } + + @Override + public Long getRenewDate() { + RouterStoreTokenProtoOrBuilder p = viaProto ? proto : builder; + if (this.renewDate != null) { + return this.renewDate; + } + if (!p.hasRenewDate()) { + return null; + } + this.renewDate = p.getRenewDate(); + return this.renewDate; + } + + @Override + public void setIdentifier(YARNDelegationTokenIdentifier identifier) { + maybeInitBuilder(); + if(identifier == null) { + builder.clearTokenIdentifier(); + return; + } + this.rMDelegationTokenIdentifier = identifier; + this.builder.setTokenIdentifier(identifier.getProto()); + } + + @Override + public void setRenewDate(Long renewDate) { + maybeInitBuilder(); + if(renewDate == null) { + builder.clearRenewDate(); + return; + } + this.renewDate = renewDate; + this.builder.setRenewDate(renewDate); + } + + private YARNDelegationTokenIdentifierProto convertToProtoFormat( + YARNDelegationTokenIdentifier delegationTokenIdentifier) { + return delegationTokenIdentifier.getProto(); + } + + public byte[] toByteArray() throws IOException { + return builder.build().toByteArray(); + } + + public void readFields(DataInput in) throws IOException { + builder.mergeFrom((DataInputStream) in); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/utils/FederationRouterRMTokenInputValidator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/utils/FederationRouterRMTokenInputValidator.java new file mode 100644 index 00000000000..40fe1f36cfb --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/utils/FederationRouterRMTokenInputValidator.java @@ -0,0 +1,105 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.server.federation.store.utils; + +import org.apache.hadoop.yarn.security.client.YARNDelegationTokenIdentifier; +import org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreInvalidInputException; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; +import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKey; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public final class FederationRouterRMTokenInputValidator { + + private static final Logger LOG = + LoggerFactory.getLogger(FederationRouterRMTokenInputValidator.class); + + private FederationRouterRMTokenInputValidator() { + } + + /** + * We will check with the RouterRMTokenRequest{@link RouterRMTokenRequest} + * to ensure that the request object is not empty and that the RouterStoreToken is not empty. + * + * @param request RouterRMTokenRequest Request. + * @throws FederationStateStoreInvalidInputException if the request is invalid. + */ + public static void validate(RouterRMTokenRequest request) + throws FederationStateStoreInvalidInputException { + + if (request == null) { + String message = "Missing RouterRMToken Request." + + " Please try again by specifying a router rm token information."; + LOG.warn(message); + throw new FederationStateStoreInvalidInputException(message); + } + + RouterStoreToken storeToken = request.getRouterStoreToken(); + if (storeToken == null) { + String message = "Missing RouterStoreToken." + + " Please try again by specifying a router rm token information."; + LOG.warn(message); + throw new FederationStateStoreInvalidInputException(message); + } + + try { + YARNDelegationTokenIdentifier identifier = storeToken.getTokenIdentifier(); + if (identifier == null) { + String message = "Missing YARNDelegationTokenIdentifier." + + " Please try again by specifying a router rm token information."; + LOG.warn(message); + throw new FederationStateStoreInvalidInputException(message); + } + } catch (Exception e) { + throw new FederationStateStoreInvalidInputException(e); + } + } + + /** + * We will check with the RouterMasterKeyRequest{@link RouterMasterKeyRequest} + * to ensure that the request object is not empty and that the RouterMasterKey is not empty. + * + * @param request RouterMasterKey Request. + * @throws FederationStateStoreInvalidInputException if the request is invalid. + */ + public static void validate(RouterMasterKeyRequest request) + throws FederationStateStoreInvalidInputException { + + // Verify the request to ensure that the request is not empty, + // if the request is found to be empty, an exception will be thrown. + if (request == null) { + String message = "Missing RouterMasterKey Request." + + " Please try again by specifying a router master key request information."; + LOG.warn(message); + throw new FederationStateStoreInvalidInputException(message); + } + + // Check whether the masterKey is empty, + // if the masterKey is empty, throw an exception message. + RouterMasterKey masterKey = request.getRouterMasterKey(); + if (masterKey == null) { + String message = "Missing RouterMasterKey." 
+ + " Please try again by specifying a router master key information."; + LOG.warn(message); + throw new FederationStateStoreInvalidInputException(message); + } + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/utils/FederationStateStoreUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/utils/FederationStateStoreUtils.java index 52ef725fb2b..f14867a0e65 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/utils/FederationStateStoreUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/utils/FederationStateStoreUtils.java @@ -162,6 +162,29 @@ public final class FederationStateStoreUtils { throw new FederationStateStoreException(errMsg); } + + /** + * Throws an FederationStateStoreException due to an error in + * FederationStateStore. + * + * @param t the throwable raised in the called class. + * @param log the logger interface. + * @param errMsgFormat the error message format string. + * @param args referenced by the format specifiers in the format string. + * @throws YarnException on failure + */ + public static void logAndThrowStoreException( + Throwable t, Logger log, String errMsgFormat, Object... args) throws YarnException { + String errMsg = String.format(errMsgFormat, args); + if (t != null) { + log.error(errMsg, t); + throw new FederationStateStoreException(errMsg, t); + } else { + log.error(errMsg); + throw new FederationStateStoreException(errMsg); + } + } + /** * Throws an FederationStateStoreInvalidInputException due to an * error in FederationStateStore. diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationMethodWrapper.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationMethodWrapper.java new file mode 100644 index 00000000000..9b8944049a6 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationMethodWrapper.java @@ -0,0 +1,76 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.server.federation.utils; + +import org.apache.hadoop.yarn.exceptions.YarnException; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Collection; + + +public abstract class FederationMethodWrapper { + + /** + * List of parameters: static and dynamic values, matchings types. + */ + private Object[] params; + + /** + * List of method parameters types, matches parameters. + */ + private Class[] types; + + /** + * String name of the method. + */ + private String methodName; + + public FederationMethodWrapper(Class[] pTypes, Object... pParams) + throws IOException { + if (pParams.length != pTypes.length) { + throw new IOException("Invalid parameters for method."); + } + this.params = pParams; + this.types = Arrays.copyOf(pTypes, pTypes.length); + } + + public Object[] getParams() { + return Arrays.copyOf(this.params, this.params.length); + } + + public String getMethodName() { + return methodName; + } + + public void setMethodName(String methodName) { + this.methodName = methodName; + } + + /** + * Get the calling types for this method. + * + * @return An array of calling types. + */ + public Class[] getTypes() { + return Arrays.copyOf(this.types, this.types.length); + } + + protected abstract Collection invokeConcurrent(Class clazz) throws YarnException; +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationRegistryClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationRegistryClient.java index 1eb120c4554..fa64188a608 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationRegistryClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationRegistryClient.java @@ -111,6 +111,7 @@ public class FederationRegistryClient { /** * Write/update the UAM token for an application and a sub-cluster. * + * @param appId ApplicationId. 
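+   *              (identifies the application whose UAM token is being written or updated)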
* @param subClusterId sub-cluster id of the token * @param token the UAM of the application * @return whether the amrmToken is added or updated to a new value diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java index 2044f290993..8c36fba1f1e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java @@ -22,8 +22,10 @@ import java.io.IOException; import java.nio.ByteBuffer; import java.util.HashMap; import java.util.List; +import java.util.ArrayList; import java.util.Map; import java.util.concurrent.TimeUnit; +import java.util.Random; import javax.cache.Cache; import javax.cache.CacheManager; @@ -38,6 +40,8 @@ import javax.cache.integration.CacheLoader; import javax.cache.integration.CacheLoaderException; import javax.cache.spi.CachingProvider; +import org.apache.commons.collections.CollectionUtils; +import org.apache.commons.collections.MapUtils; import org.apache.commons.lang3.NotImplementedException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.io.retry.RetryPolicies; @@ -50,6 +54,8 @@ import org.apache.hadoop.yarn.api.records.ReservationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; +import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils; +import org.apache.hadoop.yarn.server.federation.policies.exceptions.FederationPolicyException; import org.apache.hadoop.yarn.server.federation.resolver.SubClusterResolver; import org.apache.hadoop.yarn.server.federation.store.FederationStateStore; import org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreRetriableException; @@ -80,6 +86,10 @@ import org.apache.hadoop.yarn.server.federation.store.records.DeleteReservationH import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyRequest; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyResponse; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKey; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -106,6 +116,8 @@ public final class FederationStateStoreFacade { private static final FederationStateStoreFacade FACADE = new FederationStateStoreFacade(); + private static Random rand = new Random(System.currentTimeMillis()); + private FederationStateStore stateStore; private int cacheTimeToLive; private Configuration conf; @@ -492,6 +504,7 @@ public final class FederationStateStoreFacade { * @param defaultValue the default implementation for fallback * @param type the class for which a retry proxy is required * 
@param retryPolicy the policy for retrying method call failures + * @param The type of the instance. * @return a retry proxy for the specified interface */ public static Object createRetryInstance(Configuration conf, @@ -727,7 +740,7 @@ public final class FederationStateStoreFacade { return stateStore; } - /* + /** * The Router Supports Store NewMasterKey (RouterMasterKey{@link RouterMasterKey}). * * @param newKey Key used for generating and verifying delegation tokens @@ -778,4 +791,364 @@ public final class FederationStateStoreFacade { RouterMasterKeyRequest keyRequest = RouterMasterKeyRequest.newInstance(masterKey); return stateStore.getMasterKeyByDelegationKey(keyRequest); } + + /** + * The Router Supports Store RMDelegationTokenIdentifier{@link RMDelegationTokenIdentifier}. + * + * @param identifier delegation tokens from the RM + * @param renewDate renewDate + * @throws YarnException if the call to the state store is unsuccessful + * @throws IOException An IO Error occurred + */ + public void storeNewToken(RMDelegationTokenIdentifier identifier, + long renewDate) throws YarnException, IOException { + LOG.info("storing RMDelegation token with sequence number: {}.", + identifier.getSequenceNumber()); + RouterStoreToken storeToken = RouterStoreToken.newInstance(identifier, renewDate); + RouterRMTokenRequest request = RouterRMTokenRequest.newInstance(storeToken); + stateStore.storeNewToken(request); + } + + /** + * The Router Supports Update RMDelegationTokenIdentifier{@link RMDelegationTokenIdentifier}. + * + * @param identifier delegation tokens from the RM + * @param renewDate renewDate + * @throws YarnException if the call to the state store is unsuccessful + * @throws IOException An IO Error occurred + */ + public void updateStoredToken(RMDelegationTokenIdentifier identifier, + long renewDate) throws YarnException, IOException { + LOG.info("updating RMDelegation token with sequence number: {}.", + identifier.getSequenceNumber()); + RouterStoreToken storeToken = RouterStoreToken.newInstance(identifier, renewDate); + RouterRMTokenRequest request = RouterRMTokenRequest.newInstance(storeToken); + stateStore.updateStoredToken(request); + } + + /** + * The Router Supports Remove RMDelegationTokenIdentifier{@link RMDelegationTokenIdentifier}. + * + * @param identifier delegation tokens from the RM + * @throws YarnException if the call to the state store is unsuccessful + * @throws IOException An IO Error occurred + */ + public void removeStoredToken(RMDelegationTokenIdentifier identifier) + throws YarnException, IOException{ + LOG.info("removing RMDelegation token with sequence number: {}.", + identifier.getSequenceNumber()); + RouterStoreToken storeToken = RouterStoreToken.newInstance(identifier, 0L); + RouterRMTokenRequest request = RouterRMTokenRequest.newInstance(storeToken); + stateStore.removeStoredToken(request); + } + + /** + * The Router Supports GetTokenByRouterStoreToken{@link RMDelegationTokenIdentifier}. 
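+   * <p>
+   * A minimal, illustrative usage sketch (caller-side names are hypothetical; it assumes the
+   * facade singleton obtained via {@code getInstance()}):
+   * <pre>
+   *   FederationStateStoreFacade facade = FederationStateStoreFacade.getInstance();
+   *   RouterRMTokenResponse response = facade.getTokenByRouterStoreToken(identifier);
+   *   RouterStoreToken storeToken = response.getRouterStoreToken();
+   * </pre>
+   * The renew date is not used as part of the lookup, so the request is built internally with a
+   * placeholder value of 0L.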
+ * + * @param identifier delegation tokens from the RM + * @return RouterStoreToken + * @throws YarnException if the call to the state store is unsuccessful + * @throws IOException An IO Error occurred + */ + public RouterRMTokenResponse getTokenByRouterStoreToken(RMDelegationTokenIdentifier identifier) + throws YarnException, IOException { + LOG.info("get RouterStoreToken token with sequence number: {}.", + identifier.getSequenceNumber()); + RouterStoreToken storeToken = RouterStoreToken.newInstance(identifier, 0L); + RouterRMTokenRequest request = RouterRMTokenRequest.newInstance(storeToken); + return stateStore.getTokenByRouterStoreToken(request); + } + + /** + * stateStore provides DelegationTokenSeqNum increase. + * + * @return delegationTokenSequenceNumber. + */ + public int incrementDelegationTokenSeqNum() { + return stateStore.incrementDelegationTokenSeqNum(); + } + + /** + * Get SeqNum from stateStore. + * + * @return delegationTokenSequenceNumber. + */ + public int getDelegationTokenSeqNum() { + return stateStore.getDelegationTokenSeqNum(); + } + + /** + * Set SeqNum from stateStore. + * + * @param seqNum delegationTokenSequenceNumber. + */ + public void setDelegationTokenSeqNum(int seqNum) { + stateStore.setDelegationTokenSeqNum(seqNum); + } + + /** + * Get CurrentKeyId from stateStore. + * + * @return currentKeyId. + */ + public int getCurrentKeyId() { + return stateStore.getCurrentKeyId(); + } + + /** + * stateStore provides CurrentKeyId increase. + * + * @return currentKeyId. + */ + public int incrementCurrentKeyId() { + return stateStore.incrementCurrentKeyId(); + } + + /** + * Get the number of active cluster nodes. + * + * @return number of active cluster nodes. + * @throws YarnException if the call to the state store is unsuccessful. + */ + public int getActiveSubClustersCount() throws YarnException { + Map activeSubClusters = getSubClusters(true); + if (activeSubClusters == null || activeSubClusters.isEmpty()) { + return 0; + } else { + return activeSubClusters.size(); + } + } + + /** + * Randomly pick ActiveSubCluster. + * During the selection process, we will exclude SubClusters from the blacklist. + * + * @param activeSubClusters List of active subClusters. + * @param blackList blacklist. + * @return Active SubClusterId. + * @throws YarnException When there is no Active SubCluster, + * an exception will be thrown (No active SubCluster available to submit the request.) + */ + public static SubClusterId getRandomActiveSubCluster( + Map activeSubClusters, List blackList) + throws YarnException { + + // Check if activeSubClusters is empty, if it is empty, we need to throw an exception + if (MapUtils.isEmpty(activeSubClusters)) { + throw new FederationPolicyException( + FederationPolicyUtils.NO_ACTIVE_SUBCLUSTER_AVAILABLE); + } + + // Change activeSubClusters to List + List subClusterIds = new ArrayList<>(activeSubClusters.keySet()); + + // If the blacklist is not empty, we need to remove all the subClusters in the blacklist + if (CollectionUtils.isNotEmpty(blackList)) { + subClusterIds.removeAll(blackList); + } + + // Check there are still active subcluster after removing the blacklist + if (CollectionUtils.isEmpty(subClusterIds)) { + throw new FederationPolicyException( + FederationPolicyUtils.NO_ACTIVE_SUBCLUSTER_AVAILABLE); + } + + // Randomly choose a SubCluster + return subClusterIds.get(rand.nextInt(subClusterIds.size())); + } + + /** + * Get the number of retries. + * + * @param configRetries User-configured number of retries. + * @return number of retries. 
+ * @throws YarnException yarn exception. + */ + public int getRetryNumbers(int configRetries) throws YarnException { + int activeSubClustersCount = getActiveSubClustersCount(); + int actualRetryNums = Math.min(activeSubClustersCount, configRetries); + // Normally, we don't set a negative number for the number of retries, + // but if the user sets a negative number for the number of retries, + // we will return 0 + if (actualRetryNums < 0) { + return 0; + } + return actualRetryNums; + } + + /** + * Query SubClusterId By applicationId. + * + * If SubClusterId is not empty, it means it exists and returns true; + * if SubClusterId is empty, it means it does not exist and returns false. + * + * @param applicationId applicationId + * @return true, SubClusterId exists; false, SubClusterId not exists. + */ + public boolean existsApplicationHomeSubCluster(ApplicationId applicationId) { + try { + SubClusterId subClusterId = getApplicationHomeSubCluster(applicationId); + if (subClusterId != null) { + return true; + } + } catch (YarnException e) { + LOG.warn("get homeSubCluster by applicationId = {} error.", applicationId, e); + } + return false; + } + + /** + * Add ApplicationHomeSubCluster to FederationStateStore. + * + * @param applicationId applicationId. + * @param homeSubCluster homeSubCluster, homeSubCluster selected according to policy. + * @throws YarnException yarn exception. + */ + public void addApplicationHomeSubCluster(ApplicationId applicationId, + ApplicationHomeSubCluster homeSubCluster) throws YarnException { + try { + addApplicationHomeSubCluster(homeSubCluster); + } catch (YarnException e) { + String msg = String.format( + "Unable to insert the ApplicationId %s into the FederationStateStore.", applicationId); + throw new YarnException(msg, e); + } + } + + /** + * Update ApplicationHomeSubCluster to FederationStateStore. + * + * @param subClusterId homeSubClusterId + * @param applicationId applicationId. + * @param homeSubCluster homeSubCluster, homeSubCluster selected according to policy. + * @throws YarnException yarn exception. + */ + public void updateApplicationHomeSubCluster(SubClusterId subClusterId, + ApplicationId applicationId, ApplicationHomeSubCluster homeSubCluster) throws YarnException { + try { + updateApplicationHomeSubCluster(homeSubCluster); + } catch (YarnException e) { + SubClusterId subClusterIdInStateStore = getApplicationHomeSubCluster(applicationId); + if (subClusterId == subClusterIdInStateStore) { + LOG.info("Application {} already submitted on SubCluster {}.", applicationId, subClusterId); + } else { + String msg = String.format( + "Unable to update the ApplicationId %s into the FederationStateStore.", applicationId); + throw new YarnException(msg, e); + } + } + } + + /** + * Add or Update ApplicationHomeSubCluster. + * + * @param applicationId applicationId, is the id of the application. + * @param subClusterId homeSubClusterId, this is selected by strategy. + * @param retryCount number of retries. + * @throws YarnException yarn exception. + */ + public void addOrUpdateApplicationHomeSubCluster(ApplicationId applicationId, + SubClusterId subClusterId, int retryCount) throws YarnException { + Boolean exists = existsApplicationHomeSubCluster(applicationId); + ApplicationHomeSubCluster appHomeSubCluster = + ApplicationHomeSubCluster.newInstance(applicationId, subClusterId); + if (!exists || retryCount == 0) { + // persist the mapping of applicationId and the subClusterId which has + // been selected as its home. 
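+      // This branch runs on the first attempt (retryCount == 0) or when no home mapping
+      // exists yet for the application.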
+ addApplicationHomeSubCluster(applicationId, appHomeSubCluster); + } else { + // update the mapping of applicationId and the home subClusterId to + // the new subClusterId we have selected. + updateApplicationHomeSubCluster(subClusterId, applicationId, appHomeSubCluster); + } + } + + /** + * Exists ReservationHomeSubCluster Mapping. + * + * @param reservationId reservationId + * @return true - exist, false - not exist + */ + public boolean existsReservationHomeSubCluster(ReservationId reservationId) { + try { + SubClusterId subClusterId = getReservationHomeSubCluster(reservationId); + if (subClusterId != null) { + return true; + } + } catch (YarnException e) { + LOG.warn("get homeSubCluster by reservationId = {} error.", reservationId, e); + } + return false; + } + + /** + * Save Reservation And HomeSubCluster Mapping. + * + * @param reservationId reservationId + * @param homeSubCluster homeSubCluster + * @throws YarnException on failure + */ + public void addReservationHomeSubCluster(ReservationId reservationId, + ReservationHomeSubCluster homeSubCluster) throws YarnException { + try { + // persist the mapping of reservationId and the subClusterId which has + // been selected as its home + addReservationHomeSubCluster(homeSubCluster); + } catch (YarnException e) { + String msg = String.format( + "Unable to insert the ReservationId %s into the FederationStateStore.", reservationId); + throw new YarnException(msg, e); + } + } + + /** + * Update Reservation And HomeSubCluster Mapping. + * + * @param subClusterId subClusterId + * @param reservationId reservationId + * @param homeSubCluster homeSubCluster + * @throws YarnException on failure + */ + public void updateReservationHomeSubCluster(SubClusterId subClusterId, + ReservationId reservationId, ReservationHomeSubCluster homeSubCluster) throws YarnException { + try { + // update the mapping of reservationId and the home subClusterId to + // the new subClusterId we have selected + updateReservationHomeSubCluster(homeSubCluster); + } catch (YarnException e) { + SubClusterId subClusterIdInStateStore = getReservationHomeSubCluster(reservationId); + if (subClusterId == subClusterIdInStateStore) { + LOG.info("Reservation {} already submitted on SubCluster {}.", reservationId, subClusterId); + } else { + String msg = String.format( + "Unable to update the ReservationId %s into the FederationStateStore.", reservationId); + throw new YarnException(msg, e); + } + } + } + + /** + * Add or Update ReservationHomeSubCluster. + * + * @param reservationId reservationId. + * @param subClusterId homeSubClusterId, this is selected by strategy. + * @param retryCount number of retries. + * @throws YarnException yarn exception. + */ + public void addOrUpdateReservationHomeSubCluster(ReservationId reservationId, + SubClusterId subClusterId, int retryCount) throws YarnException { + Boolean exists = existsReservationHomeSubCluster(reservationId); + ReservationHomeSubCluster reservationHomeSubCluster = + ReservationHomeSubCluster.newInstance(reservationId, subClusterId); + if (!exists || retryCount == 0) { + // persist the mapping of reservationId and the subClusterId which has + // been selected as its home. + addReservationHomeSubCluster(reservationId, reservationHomeSubCluster); + } else { + // update the mapping of reservationId and the home subClusterId to + // the new subClusterId we have selected. 
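+      // This branch handles a retry (retryCount > 0) when a home mapping for the reservation
+      // has already been stored.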
+ updateReservationHomeSubCluster(subClusterId, reservationId, + reservationHomeSubCluster); + } + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/ResourceRequestSet.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/ResourceRequestSet.java index cf24bbf361f..ed615e85c9c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/ResourceRequestSet.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/ResourceRequestSet.java @@ -71,7 +71,7 @@ public class ResourceRequestSet { * with the same resource name, override it and update accordingly. * * @param ask the new {@link ResourceRequest} - * @throws YarnException + * @throws YarnException indicates exceptions from yarn servers. */ public void addAndOverrideRR(ResourceRequest ask) throws YarnException { if (!this.key.equals(new ResourceRequestSetKey(ask))) { @@ -102,7 +102,7 @@ public class ResourceRequestSet { * Merge a requestSet into this one. * * @param requestSet the requestSet to merge - * @throws YarnException + * @throws YarnException indicates exceptions from yarn servers. */ public void addAndOverrideRRSet(ResourceRequestSet requestSet) throws YarnException { @@ -149,7 +149,7 @@ public class ResourceRequestSet { * Force set the # of containers to ask for this requestSet to a given value. * * @param newValue the new # of containers value - * @throws YarnException + * @throws YarnException indicates exceptions from yarn servers. */ public void setNumContainers(int newValue) throws YarnException { if (this.numContainers == 0) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseNMTokenSecretManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseNMTokenSecretManager.java index 3cbd1dc36dc..b8b5073119c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseNMTokenSecretManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseNMTokenSecretManager.java @@ -111,6 +111,11 @@ public class BaseNMTokenSecretManager extends /** * Helper function for creating NMTokens. + * + * @param applicationAttemptId application AttemptId. + * @param nodeId node Id. + * @param applicationSubmitter application Submitter. + * @return NMToken. 
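+   *         (built from an NMTokenIdentifier using the secret manager's current master key)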
*/ public Token createNMToken(ApplicationAttemptId applicationAttemptId, NodeId nodeId, String applicationSubmitter) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/service/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/service/package-info.java index c448bab134d..c27269820ed 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/service/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/service/package-info.java @@ -19,9 +19,8 @@ * Package org.apache.hadoop.yarn.server.service contains service related * classes. */ -@InterfaceAudience.Private @InterfaceStability.Unstable - +@Private +@Unstable package org.apache.hadoop.yarn.server.service; - -import org.apache.hadoop.classification.InterfaceAudience; -import org.apache.hadoop.classification.InterfaceStability; \ No newline at end of file +import org.apache.hadoop.classification.InterfaceAudience.Private; +import org.apache.hadoop.classification.InterfaceStability.Unstable; \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/timeline/security/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/timeline/security/package-info.java index 14a52e342b3..76ea2064b2b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/timeline/security/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/timeline/security/package-info.java @@ -21,6 +21,6 @@ * to timeline authentication filters and abstract delegation token service for * ATSv1 and ATSv2. 
*/ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.server.timeline.security; -import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceAudience.Private; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java index c70a2db25f1..01f9bc7dbea 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java @@ -62,9 +62,9 @@ import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; import org.apache.hadoop.yarn.security.AMRMTokenIdentifier; import org.apache.hadoop.yarn.server.AMHeartbeatRequestHandler; import org.apache.hadoop.yarn.server.AMRMClientRelayer; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.AsyncCallback; import org.apache.hadoop.yarn.util.ConverterUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -426,7 +426,7 @@ public class UnmanagedApplicationManager { ContainerLaunchContext amContainer = this.recordFactory.newRecordInstance(ContainerLaunchContext.class); - Resource resource = BuilderUtils.newResource(1024, 1); + Resource resource = Resources.createResource(1024); context.setResource(resource); context.setAMContainerSpec(amContainer); submitRequest.setApplicationSubmissionContext(context); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/util/timeline/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/util/timeline/package-info.java index 75c69738c50..0c61b6246cc 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/util/timeline/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/util/timeline/package-info.java @@ -20,6 +20,6 @@ * Package org.apache.hadoop.server.util.timeline contains utility classes used * by ATSv1 and ATSv2 on the server side. 
*/ -@InterfaceAudience.Private +@Private package org.apache.hadoop.yarn.server.util.timeline; -import org.apache.hadoop.classification.InterfaceAudience; \ No newline at end of file +import org.apache.hadoop.classification.InterfaceAudience.Private; \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/LeveldbIterator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/LeveldbIterator.java index f33cb5f1d89..463bee7ebab 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/LeveldbIterator.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/LeveldbIterator.java @@ -41,21 +41,28 @@ public class LeveldbIterator implements Iterator>, private DBIterator iter; /** - * Create an iterator for the specified database + * Create an iterator for the specified database. + * + * @param db database. */ public LeveldbIterator(DB db) { iter = db.iterator(); } /** - * Create an iterator for the specified database + * Create an iterator for the specified database. + * + * @param db db. + * @param options ReadOptions. */ public LeveldbIterator(DB db, ReadOptions options) { iter = db.iterator(options); } /** - * Create an iterator using the specified underlying DBIterator + * Create an iterator using the specified underlying DBIterator. + * + * @param iter DB Iterator. */ public LeveldbIterator(DBIterator iter) { this.iter = iter; @@ -64,6 +71,9 @@ public class LeveldbIterator implements Iterator>, /** * Repositions the iterator so the key of the next BlockElement * returned greater than or equal to the specified targetKey. + * + * @param key key of the next BlockElement. + * @throws DBException db Exception. */ public void seek(byte[] key) throws DBException { try { @@ -116,6 +126,9 @@ public class LeveldbIterator implements Iterator>, /** * Returns the next element in the iteration. + * + * @return the next element in the iteration. + * @throws DBException DB Exception. */ @Override public Map.Entry next() throws DBException { @@ -131,6 +144,9 @@ public class LeveldbIterator implements Iterator>, /** * Returns the next element in the iteration, without advancing the * iteration. + * + * @return the next element in the iteration. + * @throws DBException db Exception. */ public Map.Entry peekNext() throws DBException { try { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/YarnServerSecurityUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/YarnServerSecurityUtils.java index c5ae56f3d10..4ad6a94ab11 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/YarnServerSecurityUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/YarnServerSecurityUtils.java @@ -56,7 +56,7 @@ public final class YarnServerSecurityUtils { * current application. * * @return the AMRMTokenIdentifier instance for the current user - * @throws YarnException + * @throws YarnException exceptions from yarn servers. 
*/ public static AMRMTokenIdentifier authorizeRequest() throws YarnException { @@ -137,9 +137,9 @@ public final class YarnServerSecurityUtils { * Parses the container launch context and returns a Credential instance that * contains all the tokens from the launch context. * - * @param launchContext + * @param launchContext ContainerLaunchContext. * @return the credential instance - * @throws IOException + * @throws IOException if there are I/O errors. */ public static Credentials parseCredentials( ContainerLaunchContext launchContext) throws IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/volume/csi/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/volume/csi/package-info.java index ef4ffef5646..64b42c7c435 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/volume/csi/package-info.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/volume/csi/package-info.java @@ -19,9 +19,8 @@ /** * This package contains common volume related classes. */ -@InterfaceAudience.Private -@InterfaceStability.Unstable +@Private +@Unstable package org.apache.hadoop.yarn.server.volume.csi; - -import org.apache.hadoop.classification.InterfaceAudience; -import org.apache.hadoop.classification.InterfaceStability; \ No newline at end of file +import org.apache.hadoop.classification.InterfaceAudience.Private; +import org.apache.hadoop.classification.InterfaceStability.Unstable; \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogServlet.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogServlet.java index 86b8d55adc6..16fac7ac439 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogServlet.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogServlet.java @@ -187,7 +187,10 @@ public class LogServlet extends Configured { * Returns the user qualified path name of the remote log directory for * each pre-configured log aggregation file controller. * + * @param user remoteUser. + * @param applicationId applicationId. * @return {@link Response} object containing remote log dir path names + * @throws IOException if there are I/O errors. 
*/ public Response getRemoteLogDirPath(String user, String applicationId) throws IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogWebService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogWebService.java index 1edfd5287ac..565d4fd8c9e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogWebService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogWebService.java @@ -138,6 +138,9 @@ public class LogWebService implements AppInfoProvider { * @param containerIdStr The container ID * @param nmId The Node Manager NodeId * @param redirectedFromNode Whether this is a redirected request from NM + * @param clusterId clusterId the id of the cluster + * @param manualRedirection whether to return a response with a Location + * instead of an automatic redirection * @return The log file's name and current file size */ @GET @@ -242,6 +245,9 @@ public class LogWebService implements AppInfoProvider { * @param size the size of the log file * @param nmId The Node Manager NodeId * @param redirectedFromNode Whether this is the redirect request from NM + * @param clusterId the id of the cluster + * @param manualRedirection whether to return a response with a Location + * instead of an automatic redirection * @return The contents of the container's log file */ @GET diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WrappedLogMetaRequest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WrappedLogMetaRequest.java index d39eef8cee4..59a88c26186 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WrappedLogMetaRequest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WrappedLogMetaRequest.java @@ -155,6 +155,7 @@ public class WrappedLogMetaRequest { * * @return list of {@link ContainerLogMeta} objects that belong * to the application, attempt or container + * @throws IOException if there are I/O errors. 
*/ public List getContainerLogMetas() throws IOException { ApplicationId applicationId = ApplicationId.fromString(getAppId()); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/FederationStateStoreBaseTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/FederationStateStoreBaseTest.java index d5493f6614f..7fb1e327e85 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/FederationStateStoreBaseTest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/FederationStateStoreBaseTest.java @@ -26,12 +26,14 @@ import java.util.HashSet; import java.util.TimeZone; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.io.Text; import org.apache.hadoop.security.token.delegation.DelegationKey; import org.apache.hadoop.test.LambdaTestUtils; import org.apache.hadoop.util.Time; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ReservationId; import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.store.FederationStateStore; import org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreException; import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterRequest; @@ -74,6 +76,9 @@ import org.apache.hadoop.yarn.server.federation.store.records.UpdateReservationH import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKey; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyRequest; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyResponse; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; import org.apache.hadoop.yarn.util.MonotonicClock; import org.junit.After; import org.junit.Assert; @@ -92,6 +97,12 @@ public abstract class FederationStateStoreBaseTest { protected abstract FederationStateStore createStateStore(); + protected abstract void checkRouterMasterKey(DelegationKey delegationKey, + RouterMasterKey routerMasterKey) throws YarnException, IOException; + + protected abstract void checkRouterStoreToken(RMDelegationTokenIdentifier identifier, + RouterStoreToken token) throws YarnException, IOException; + private Configuration conf; @Before @@ -871,6 +882,8 @@ public abstract class FederationStateStoreBaseTest { Assert.assertEquals(routerMasterKey.getKeyId(), routerMasterKeyResp.getKeyId()); Assert.assertEquals(routerMasterKey.getKeyBytes(), routerMasterKeyResp.getKeyBytes()); Assert.assertEquals(routerMasterKey.getExpiryDate(), routerMasterKeyResp.getExpiryDate()); + + checkRouterMasterKey(key, routerMasterKey); } @Test @@ -922,4 +935,114 @@ public abstract class FederationStateStoreBaseTest { Assert.assertEquals(routerMasterKey.getKeyBytes(), routerMasterKeyResp.getKeyBytes()); Assert.assertEquals(routerMasterKey.getExpiryDate(), routerMasterKeyResp.getExpiryDate()); } + + @Test + public void 
testStoreNewToken() throws IOException, YarnException { + // prepare parameters + RMDelegationTokenIdentifier identifier = new RMDelegationTokenIdentifier( + new Text("owner1"), new Text("renewer1"), new Text("realuser1")); + int sequenceNumber = 1; + identifier.setSequenceNumber(sequenceNumber); + Long renewDate = Time.now(); + + // store new rm-token + RouterStoreToken storeToken = RouterStoreToken.newInstance(identifier, renewDate); + RouterRMTokenRequest request = RouterRMTokenRequest.newInstance(storeToken); + RouterRMTokenResponse routerRMTokenResponse = stateStore.storeNewToken(request); + + // Verify the returned result to ensure that the returned Response is not empty + // and the returned result is consistent with the input parameters. + Assert.assertNotNull(routerRMTokenResponse); + RouterStoreToken storeTokenResp = routerRMTokenResponse.getRouterStoreToken(); + Assert.assertNotNull(storeTokenResp); + Assert.assertEquals(storeToken.getRenewDate(), storeTokenResp.getRenewDate()); + Assert.assertEquals(storeToken.getTokenIdentifier(), storeTokenResp.getTokenIdentifier()); + + checkRouterStoreToken(identifier, storeToken); + checkRouterStoreToken(identifier, storeTokenResp); + } + + @Test + public void testUpdateStoredToken() throws IOException, YarnException { + // prepare saveToken parameters + RMDelegationTokenIdentifier identifier = new RMDelegationTokenIdentifier( + new Text("owner2"), new Text("renewer2"), new Text("realuser2")); + int sequenceNumber = 2; + identifier.setSequenceNumber(sequenceNumber); + Long renewDate = Time.now(); + + // store new rm-token + RouterStoreToken storeToken = RouterStoreToken.newInstance(identifier, renewDate); + RouterRMTokenRequest request = RouterRMTokenRequest.newInstance(storeToken); + RouterRMTokenResponse routerRMTokenResponse = stateStore.storeNewToken(request); + Assert.assertNotNull(routerRMTokenResponse); + + // prepare updateToken parameters + Long renewDate2 = Time.now(); + int sequenceNumber2 = 3; + identifier.setSequenceNumber(sequenceNumber2); + + // update rm-token + RouterStoreToken updateToken = RouterStoreToken.newInstance(identifier, renewDate2); + RouterRMTokenRequest updateTokenRequest = RouterRMTokenRequest.newInstance(updateToken); + RouterRMTokenResponse updateTokenResponse = stateStore.updateStoredToken(updateTokenRequest); + + Assert.assertNotNull(updateTokenResponse); + RouterStoreToken updateTokenResp = updateTokenResponse.getRouterStoreToken(); + Assert.assertNotNull(updateTokenResp); + Assert.assertEquals(updateToken.getRenewDate(), updateTokenResp.getRenewDate()); + Assert.assertEquals(updateToken.getTokenIdentifier(), updateTokenResp.getTokenIdentifier()); + + checkRouterStoreToken(identifier, updateTokenResp); + } + + @Test + public void testRemoveStoredToken() throws IOException, YarnException { + // prepare saveToken parameters + RMDelegationTokenIdentifier identifier = new RMDelegationTokenIdentifier( + new Text("owner3"), new Text("renewer3"), new Text("realuser3")); + int sequenceNumber = 3; + identifier.setSequenceNumber(sequenceNumber); + Long renewDate = Time.now(); + + // store new rm-token + RouterStoreToken storeToken = RouterStoreToken.newInstance(identifier, renewDate); + RouterRMTokenRequest request = RouterRMTokenRequest.newInstance(storeToken); + RouterRMTokenResponse routerRMTokenResponse = stateStore.storeNewToken(request); + Assert.assertNotNull(routerRMTokenResponse); + + // remove rm-token + RouterRMTokenResponse removeTokenResponse = stateStore.removeStoredToken(request); + 
Assert.assertNotNull(removeTokenResponse); + RouterStoreToken removeTokenResp = removeTokenResponse.getRouterStoreToken(); + Assert.assertNotNull(removeTokenResp); + Assert.assertEquals(removeTokenResp.getRenewDate(), storeToken.getRenewDate()); + Assert.assertEquals(removeTokenResp.getTokenIdentifier(), storeToken.getTokenIdentifier()); + } + + @Test + public void testGetTokenByRouterStoreToken() throws IOException, YarnException { + // prepare saveToken parameters + RMDelegationTokenIdentifier identifier = new RMDelegationTokenIdentifier( + new Text("owner4"), new Text("renewer4"), new Text("realuser4")); + int sequenceNumber = 4; + identifier.setSequenceNumber(sequenceNumber); + Long renewDate = Time.now(); + + // store new rm-token + RouterStoreToken storeToken = RouterStoreToken.newInstance(identifier, renewDate); + RouterRMTokenRequest request = RouterRMTokenRequest.newInstance(storeToken); + RouterRMTokenResponse routerRMTokenResponse = stateStore.storeNewToken(request); + Assert.assertNotNull(routerRMTokenResponse); + + // getTokenByRouterStoreToken + RouterRMTokenResponse getRouterRMTokenResp = stateStore.getTokenByRouterStoreToken(request); + Assert.assertNotNull(getRouterRMTokenResp); + RouterStoreToken getStoreTokenResp = getRouterRMTokenResp.getRouterStoreToken(); + Assert.assertNotNull(getStoreTokenResp); + Assert.assertEquals(getStoreTokenResp.getRenewDate(), storeToken.getRenewDate()); + Assert.assertEquals(getStoreTokenResp.getTokenIdentifier(), storeToken.getTokenIdentifier()); + + checkRouterStoreToken(identifier, getStoreTokenResp); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/HSQLDBFederationStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/HSQLDBFederationStateStore.java index e90f1dc099e..b3bb0764dfa 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/HSQLDBFederationStateStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/HSQLDBFederationStateStore.java @@ -325,7 +325,7 @@ public class HSQLDBFederationStateStore extends SQLFederationStateStore { try { conf.setInt(YarnConfiguration.FEDERATION_STATESTORE_MAX_APPLICATIONS, 10); super.init(conf); - conn = super.conn; + conn = super.getConn(); LOG.info("Database Init: Start"); @@ -365,7 +365,7 @@ public class HSQLDBFederationStateStore extends SQLFederationStateStore { public void initConnection(Configuration conf) { try { super.init(conf); - conn = super.conn; + conn = super.getConn(); } catch (YarnException e1) { LOG.error("ERROR: failed open connection to HSQLDB DB {}.", e1.getMessage()); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestMemoryFederationStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestMemoryFederationStateStore.java index 70dda2227d0..0ea714ff06e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestMemoryFederationStateStore.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestMemoryFederationStateStore.java @@ -18,14 +18,29 @@ package org.apache.hadoop.yarn.server.federation.store.impl; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.security.token.delegation.DelegationKey; import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; +import org.apache.hadoop.yarn.security.client.YARNDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.store.FederationStateStore; +import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKey; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMDTSecretManagerState; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.util.Map; +import java.util.Set; + +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.assertEquals; /** * Unit tests for MemoryFederationStateStore. */ -public class TestMemoryFederationStateStore - extends FederationStateStoreBaseTest { +public class TestMemoryFederationStateStore extends FederationStateStoreBaseTest { @Override protected FederationStateStore createStateStore() { @@ -34,4 +49,43 @@ public class TestMemoryFederationStateStore super.setConf(conf); return new MemoryFederationStateStore(); } + + @Override + protected void checkRouterMasterKey(DelegationKey delegationKey, + RouterMasterKey routerMasterKey) throws YarnException, IOException { + MemoryFederationStateStore memoryStateStore = + MemoryFederationStateStore.class.cast(this.getStateStore()); + RouterRMDTSecretManagerState secretManagerState = + memoryStateStore.getRouterRMSecretManagerState(); + assertNotNull(secretManagerState); + + Set delegationKeys = secretManagerState.getMasterKeyState(); + assertNotNull(delegationKeys); + + assertTrue(delegationKeys.contains(delegationKey)); + + RouterMasterKey resultRouterMasterKey = RouterMasterKey.newInstance(delegationKey.getKeyId(), + ByteBuffer.wrap(delegationKey.getEncodedKey()), delegationKey.getExpiryDate()); + assertEquals(resultRouterMasterKey, routerMasterKey); + } + + @Override + protected void checkRouterStoreToken(RMDelegationTokenIdentifier identifier, + RouterStoreToken token) throws YarnException, IOException { + MemoryFederationStateStore memoryStateStore = + MemoryFederationStateStore.class.cast(this.getStateStore()); + RouterRMDTSecretManagerState secretManagerState = + memoryStateStore.getRouterRMSecretManagerState(); + assertNotNull(secretManagerState); + + Map tokenStateMap = + secretManagerState.getTokenState(); + assertNotNull(tokenStateMap); + + assertTrue(tokenStateMap.containsKey(identifier)); + + YARNDelegationTokenIdentifier tokenIdentifier = token.getTokenIdentifier(); + assertTrue(tokenIdentifier instanceof RMDelegationTokenIdentifier); + assertEquals(identifier, tokenIdentifier); + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestSQLFederationStateStore.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestSQLFederationStateStore.java index 6f5f19877c0..befdf489763 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestSQLFederationStateStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestSQLFederationStateStore.java @@ -18,12 +18,14 @@ package org.apache.hadoop.yarn.server.federation.store.impl; import org.apache.commons.lang3.NotImplementedException; +import org.apache.hadoop.security.token.delegation.DelegationKey; import org.apache.hadoop.test.LambdaTestUtils; import org.apache.hadoop.util.Time; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ReservationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.store.FederationStateStore; import org.apache.hadoop.yarn.server.federation.store.metrics.FederationStateStoreClientMetrics; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; @@ -33,6 +35,8 @@ import org.apache.hadoop.yarn.server.federation.store.records.ReservationHomeSub import org.apache.hadoop.yarn.server.federation.store.records.AddReservationHomeSubClusterRequest; import org.apache.hadoop.yarn.server.federation.store.records.UpdateReservationHomeSubClusterRequest; import org.apache.hadoop.yarn.server.federation.store.records.DeleteReservationHomeSubClusterRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKey; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; import org.apache.hadoop.yarn.server.federation.store.utils.FederationStateStoreUtils; import org.junit.Assert; import org.junit.Test; @@ -447,7 +451,7 @@ public class TestSQLFederationStateStore extends FederationStateStoreBaseTest { SQLFederationStateStore sqlFederationStateStore = (SQLFederationStateStore) stateStore; - Connection conn = sqlFederationStateStore.conn; + Connection conn = sqlFederationStateStore.getConn(); conn.prepareStatement(SP_DROP_ADDRESERVATIONHOMESUBCLUSTER).execute(); conn.prepareStatement(SP_ADDRESERVATIONHOMESUBCLUSTER2).execute(); @@ -484,7 +488,7 @@ public class TestSQLFederationStateStore extends FederationStateStoreBaseTest { SQLFederationStateStore sqlFederationStateStore = (SQLFederationStateStore) stateStore; - Connection conn = sqlFederationStateStore.conn; + Connection conn = sqlFederationStateStore.getConn(); conn.prepareStatement(SP_DROP_UPDATERESERVATIONHOMESUBCLUSTER).execute(); conn.prepareStatement(SP_UPDATERESERVATIONHOMESUBCLUSTER2).execute(); @@ -530,7 +534,7 @@ public class TestSQLFederationStateStore extends FederationStateStoreBaseTest { SQLFederationStateStore sqlFederationStateStore = (SQLFederationStateStore) stateStore; - Connection conn = sqlFederationStateStore.conn; + Connection conn = sqlFederationStateStore.getConn(); conn.prepareStatement(SP_DROP_DELETERESERVATIONHOMESUBCLUSTER).execute(); conn.prepareStatement(SP_DELETERESERVATIONHOMESUBCLUSTER2).execute(); @@ -572,4 +576,38 @@ public class TestSQLFederationStateStore extends FederationStateStoreBaseTest { public void 
testRemoveStoredMasterKey() throws YarnException, IOException { super.testRemoveStoredMasterKey(); } + + @Test(expected = NotImplementedException.class) + public void testStoreNewToken() throws IOException, YarnException { + super.testStoreNewToken(); + } + + @Test(expected = NotImplementedException.class) + public void testUpdateStoredToken() throws IOException, YarnException { + super.testUpdateStoredToken(); + } + + @Test(expected = NotImplementedException.class) + public void testRemoveStoredToken() throws IOException, YarnException { + super.testRemoveStoredToken(); + } + + @Test(expected = NotImplementedException.class) + public void testGetTokenByRouterStoreToken() throws IOException, YarnException { + super.testGetTokenByRouterStoreToken(); + } + + @Override + protected void checkRouterMasterKey(DelegationKey delegationKey, + RouterMasterKey routerMasterKey) throws YarnException, IOException { + // TODO: This part of the code will be completed in YARN-11349 and + // will be used to verify whether the RouterMasterKey stored in the DB is as expected. + } + + @Override + protected void checkRouterStoreToken(RMDelegationTokenIdentifier identifier, + RouterStoreToken token) throws YarnException, IOException { + // TODO: This part of the code will be completed in YARN-11349 and + // will be used to verify whether the RouterStoreToken stored in the DB is as expected. + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestZookeeperFederationStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestZookeeperFederationStateStore.java index 4571371eb6d..ba22a1e1894 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestZookeeperFederationStateStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/impl/TestZookeeperFederationStateStore.java @@ -17,9 +17,11 @@ package org.apache.hadoop.yarn.server.federation.store.impl; +import java.io.ByteArrayInputStream; +import java.io.DataInputStream; import java.io.IOException; +import java.nio.ByteBuffer; -import org.apache.commons.lang3.NotImplementedException; import org.apache.curator.framework.CuratorFramework; import org.apache.curator.framework.CuratorFrameworkFactory; import org.apache.curator.retry.RetryNTimes; @@ -29,27 +31,52 @@ import org.apache.hadoop.fs.CommonConfigurationKeys; import org.apache.hadoop.metrics2.MetricsRecord; import org.apache.hadoop.metrics2.impl.MetricsCollectorImpl; import org.apache.hadoop.metrics2.impl.MetricsRecords; +import org.apache.hadoop.security.token.delegation.DelegationKey; import org.apache.hadoop.util.Time; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.store.FederationStateStore; +import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKey; +import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyResponse; +import 
org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; +import org.apache.hadoop.yarn.util.Records; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import static org.apache.hadoop.util.curator.ZKCuratorManager.getNodePath; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.assertNotNull; /** * Unit tests for ZookeeperFederationStateStore. */ -public class TestZookeeperFederationStateStore - extends FederationStateStoreBaseTest { +public class TestZookeeperFederationStateStore extends FederationStateStoreBaseTest { private static final Logger LOG = LoggerFactory.getLogger(TestZookeeperFederationStateStore.class); + private static final String ZNODE_FEDERATIONSTORE = + "/federationstore"; + private static final String ZNODE_ROUTER_RM_DT_SECRET_MANAGER_ROOT = + "/router_rm_dt_secret_manager_root"; + private static final String ZNODE_ROUTER_RM_DELEGATION_TOKENS_ROOT_ZNODE_NAME = + "/router_rm_delegation_tokens_root"; + private static final String ZNODE_ROUTER_RM_DT_MASTER_KEYS_ROOT_ZNODE_NAME = + "/router_rm_dt_master_keys_root/"; + private static final String ROUTER_RM_DELEGATION_TOKEN_PREFIX = "rm_delegation_token_"; + private static final String ROUTER_RM_DELEGATION_KEY_PREFIX = "delegation_key_"; + + private static final String ZNODE_DT_PREFIX = ZNODE_FEDERATIONSTORE + + ZNODE_ROUTER_RM_DT_SECRET_MANAGER_ROOT + ZNODE_ROUTER_RM_DELEGATION_TOKENS_ROOT_ZNODE_NAME; + private static final String ZNODE_MASTER_KEY_PREFIX = ZNODE_FEDERATIONSTORE + + ZNODE_ROUTER_RM_DT_SECRET_MANAGER_ROOT + ZNODE_ROUTER_RM_DT_MASTER_KEYS_ROOT_ZNODE_NAME; + /** Zookeeper test server. */ private static TestingServer curatorTestingServer; private static CuratorFramework curatorFramework; @@ -171,18 +198,82 @@ public class TestZookeeperFederationStateStore MetricsRecords.assertMetric(record, "UpdateReservationHomeSubClusterNumOps", expectOps); } - @Test(expected = NotImplementedException.class) - public void testStoreNewMasterKey() throws Exception { - super.testStoreNewMasterKey(); + private RouterStoreToken getStoreTokenFromZK(String nodePath) + throws YarnException { + try { + byte[] data = curatorFramework.getData().forPath(nodePath); + if ((data == null) || (data.length == 0)) { + return null; + } + ByteArrayInputStream bin = new ByteArrayInputStream(data); + DataInputStream din = new DataInputStream(bin); + RouterStoreToken storeToken = Records.newRecord(RouterStoreToken.class); + storeToken.readFields(din); + return storeToken; + } catch (Exception e) { + throw new YarnException(e); + } } - @Test(expected = NotImplementedException.class) - public void testGetMasterKeyByDelegationKey() throws YarnException, IOException { - super.testGetMasterKeyByDelegationKey(); + private RouterMasterKey getRouterMasterKeyFromZK(String nodePath) throws YarnException { + try { + byte[] data = curatorFramework.getData().forPath(nodePath); + ByteArrayInputStream bin = new ByteArrayInputStream(data); + DataInputStream din = new DataInputStream(bin); + DelegationKey zkDT = new DelegationKey(); + zkDT.readFields(din); + RouterMasterKey zkRouterMasterKey = RouterMasterKey.newInstance( + zkDT.getKeyId(), ByteBuffer.wrap(zkDT.getEncodedKey()), zkDT.getExpiryDate()); + return zkRouterMasterKey; + } catch (Exception e) { + throw new YarnException(e); + } } - @Test(expected = NotImplementedException.class) - public void testRemoveStoredMasterKey() throws YarnException, 
IOException { - super.testRemoveStoredMasterKey(); + private boolean isExists(String path) throws YarnException { + try { + return (curatorFramework.checkExists().forPath(path) != null); + } catch (Exception e) { + throw new YarnException(e); + } + } + + protected void checkRouterMasterKey(DelegationKey delegationKey, + RouterMasterKey routerMasterKey) throws YarnException, IOException { + // Check for MasterKey stored in ZK + RouterMasterKeyRequest routerMasterKeyRequest = + RouterMasterKeyRequest.newInstance(routerMasterKey); + + // Get Data From zk. + String nodeName = ROUTER_RM_DELEGATION_KEY_PREFIX + delegationKey.getKeyId(); + String nodePath = ZNODE_MASTER_KEY_PREFIX + nodeName; + RouterMasterKey zkRouterMasterKey = getRouterMasterKeyFromZK(nodePath); + + // Call the getMasterKeyByDelegationKey interface to get the returned result. + // The zk data should be consistent with the returned data. + RouterMasterKeyResponse response = getStateStore(). + getMasterKeyByDelegationKey(routerMasterKeyRequest); + assertNotNull(response); + RouterMasterKey respRouterMasterKey = response.getRouterMasterKey(); + assertEquals(routerMasterKey, respRouterMasterKey); + assertEquals(routerMasterKey, zkRouterMasterKey); + assertEquals(zkRouterMasterKey, respRouterMasterKey); + } + + protected void checkRouterStoreToken(RMDelegationTokenIdentifier identifier, + RouterStoreToken token) throws YarnException, IOException { + // Get delegationToken Path + String nodeName = ROUTER_RM_DELEGATION_TOKEN_PREFIX + identifier.getSequenceNumber(); + String nodePath = getNodePath(ZNODE_DT_PREFIX, nodeName); + + // Check if the path exists, we expect the result to exist. + assertTrue(isExists(nodePath)); + + // Check whether the token (paramStoreToken) + // We generated is consistent with the data stored in zk. + // We expect data to be consistent. 
+ RouterStoreToken zkRouterStoreToken = getStoreTokenFromZK(nodePath); + assertNotNull(zkRouterStoreToken); + assertEquals(token, zkRouterStoreToken); } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/records/TestFederationProtocolRecords.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/records/TestFederationProtocolRecords.java index 174b4288528..bc20856e8c5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/records/TestFederationProtocolRecords.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/store/records/TestFederationProtocolRecords.java @@ -17,6 +17,7 @@ package org.apache.hadoop.yarn.server.federation.store.records; +import org.apache.hadoop.util.Time; import org.apache.hadoop.yarn.api.BasePBImplRecordsTest; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ReservationId; @@ -48,11 +49,15 @@ import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.SubClu import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.SubClusterRegisterResponseProto; import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.UpdateApplicationHomeSubClusterRequestProto; import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.UpdateApplicationHomeSubClusterResponseProto; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterStoreTokenProto; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterRMTokenRequestProto; +import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterRMTokenResponseProto; import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.GetReservationHomeSubClusterRequestProto; import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterMasterKeyProto; import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterMasterKeyRequestProto; import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.RouterMasterKeyResponseProto; import org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.ApplicationHomeSubClusterProto; +import org.apache.hadoop.yarn.server.federation.policies.dao.WeightedPolicyInfo; import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.AddApplicationHomeSubClusterRequestPBImpl; import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.AddApplicationHomeSubClusterResponsePBImpl; import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.DeleteApplicationHomeSubClusterRequestPBImpl; @@ -84,12 +89,21 @@ import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.UpdateAppl import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.RouterMasterKeyPBImpl; import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.RouterMasterKeyRequestPBImpl; import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.RouterMasterKeyResponsePBImpl; +import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.RouterStoreTokenPBImpl; +import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.RouterRMTokenRequestPBImpl; 
+import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.RouterRMTokenResponsePBImpl; +import org.apache.hadoop.yarn.security.client.YARNDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.ApplicationHomeSubClusterPBImpl; import org.apache.hadoop.yarn.server.federation.store.records.impl.pb.GetReservationHomeSubClusterRequestPBImpl; import org.apache.hadoop.yarn.server.records.Version; import org.junit.BeforeClass; import org.junit.Test; +import java.nio.ByteBuffer; + +import static org.junit.Assert.assertEquals; +import static org.mockito.Mockito.mock; + /** * Test class for federation protocol records. */ @@ -104,6 +118,8 @@ public class TestFederationProtocolRecords extends BasePBImplRecordsTest { generateByNewInstance(ApplicationHomeSubCluster.class); generateByNewInstance(SubClusterPolicyConfiguration.class); generateByNewInstance(RouterMasterKey.class); + generateByNewInstance(YARNDelegationTokenIdentifier.class); + generateByNewInstance(RouterStoreToken.class); generateByNewInstance(ReservationId.class); } @@ -291,6 +307,21 @@ public class TestFederationProtocolRecords extends BasePBImplRecordsTest { validatePBImplRecord(RouterMasterKeyResponsePBImpl.class, RouterMasterKeyResponseProto.class); } + @Test + public void testRouterStoreToken() throws Exception { + validatePBImplRecord(RouterStoreTokenPBImpl.class, RouterStoreTokenProto.class); + } + + @Test + public void testRouterRMTokenRequest() throws Exception { + validatePBImplRecord(RouterRMTokenRequestPBImpl.class, RouterRMTokenRequestProto.class); + } + + @Test + public void testRouterRMTokenResponse() throws Exception { + validatePBImplRecord(RouterRMTokenResponsePBImpl.class, RouterRMTokenResponseProto.class); + } + @Test public void testApplicationHomeSubCluster() throws Exception { validatePBImplRecord(ApplicationHomeSubClusterPBImpl.class, @@ -302,4 +333,92 @@ public class TestFederationProtocolRecords extends BasePBImplRecordsTest { validatePBImplRecord(GetReservationHomeSubClusterRequestPBImpl.class, GetReservationHomeSubClusterRequestProto.class); } + + @Test + public void testValidateApplicationHomeSubClusterEqual() throws Exception { + long now = Time.now(); + + ApplicationId appId1 = ApplicationId.newInstance(now, 1); + SubClusterId subClusterId1 = SubClusterId.newInstance("SC-1"); + ApplicationHomeSubCluster applicationHomeSubCluster1 = + ApplicationHomeSubCluster.newInstance(appId1, subClusterId1); + + ApplicationId appId2 = ApplicationId.newInstance(now, 1); + SubClusterId subClusterId2 = SubClusterId.newInstance("SC-1"); + ApplicationHomeSubCluster applicationHomeSubCluster2 = + ApplicationHomeSubCluster.newInstance(appId2, subClusterId2); + + assertEquals(applicationHomeSubCluster1, applicationHomeSubCluster2); + } + + @Test + public void testValidateReservationHomeSubClusterEqual() throws Exception { + long now = Time.now(); + + ReservationId reservationId1 = ReservationId.newInstance(now, 1); + SubClusterId subClusterId1 = SubClusterId.newInstance("SC-1"); + ReservationHomeSubCluster reservationHomeSubCluster1 = + ReservationHomeSubCluster.newInstance(reservationId1, subClusterId1); + + ReservationId reservationId2 = ReservationId.newInstance(now, 1); + SubClusterId subClusterId2 = SubClusterId.newInstance("SC-1"); + ReservationHomeSubCluster reservationHomeSubCluster2 = + ReservationHomeSubCluster.newInstance(reservationId2, subClusterId2); + + assertEquals(reservationHomeSubCluster1, reservationHomeSubCluster2); + } + + @Test + public void 
testSubClusterIdEqual() throws Exception { + SubClusterId subClusterId1 = SubClusterId.newInstance("SC-1"); + SubClusterId subClusterId2 = SubClusterId.newInstance("SC-1"); + assertEquals(subClusterId1, subClusterId2); + } + + @Test + public void testSubClusterIdInfoEqual() throws Exception { + SubClusterIdInfo subClusterIdInfo1 = new SubClusterIdInfo("SC-1"); + SubClusterIdInfo subClusterIdInfo2 = new SubClusterIdInfo("SC-1"); + assertEquals(subClusterIdInfo1, subClusterIdInfo2); + } + + @Test + public void testSubClusterPolicyConfigurationEqual() throws Exception { + + String queue1 = "queue1"; + WeightedPolicyInfo policyInfo1 = mock(WeightedPolicyInfo.class); + ByteBuffer buf1 = policyInfo1.toByteBuffer(); + SubClusterPolicyConfiguration configuration1 = SubClusterPolicyConfiguration + .newInstance(queue1, policyInfo1.getClass().getCanonicalName(), buf1); + + String queue2 = "queue1"; + WeightedPolicyInfo policyInfo2 = mock(WeightedPolicyInfo.class); + ByteBuffer buf2 = policyInfo1.toByteBuffer(); + SubClusterPolicyConfiguration configuration2 = SubClusterPolicyConfiguration + .newInstance(queue2, policyInfo2.getClass().getCanonicalName(), buf2); + + assertEquals(configuration1, configuration2); + } + + @Test + public void testSubClusterInfoEqual() throws Exception { + + String scAmRMAddress = "5.6.7.8:5"; + String scClientRMAddress = "5.6.7.8:6"; + String scRmAdminAddress = "5.6.7.8:7"; + String scWebAppAddress = "127.0.0.1:8080"; + String capabilityJson = "-"; + + SubClusterInfo sc1 = + SubClusterInfo.newInstance(SubClusterId.newInstance("SC-1"), + scAmRMAddress, scClientRMAddress, scRmAdminAddress, scWebAppAddress, + SubClusterState.SC_RUNNING, Time.now(), capabilityJson); + + SubClusterInfo sc2 = + SubClusterInfo.newInstance(SubClusterId.newInstance("SC-1"), + scAmRMAddress, scClientRMAddress, scRmAdminAddress, scWebAppAddress, + SubClusterState.SC_RUNNING, Time.now(), capabilityJson); + + assertEquals(sc1, sc2); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/TestFederationStateStoreFacade.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/TestFederationStateStoreFacade.java index 1bfa6b90ff3..92dd426f513 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/TestFederationStateStoreFacade.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/TestFederationStateStoreFacade.java @@ -26,10 +26,15 @@ import java.util.Set; import java.util.HashSet; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.io.Text; import org.apache.hadoop.security.token.delegation.DelegationKey; +import org.apache.hadoop.test.LambdaTestUtils; +import org.apache.hadoop.util.Time; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; +import org.apache.hadoop.yarn.security.client.YARNDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.store.FederationStateStore; import org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore; import 
org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster; @@ -37,6 +42,9 @@ import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterPolicyConfiguration; import org.apache.hadoop.yarn.server.federation.store.records.RouterRMDTSecretManagerState; +import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; import org.junit.After; import org.junit.Assert; import org.junit.Before; @@ -269,4 +277,97 @@ public class TestFederationStateStoreFacade { federationStateStore.getRouterRMSecretManagerState(); Assert.assertEquals(keySet, secretManagerState.getMasterKeyState()); } + + @Test + public void testStoreNewToken() throws YarnException, IOException { + // store new rm-token + RMDelegationTokenIdentifier dtId1 = new RMDelegationTokenIdentifier( + new Text("owner1"), new Text("renewer1"), new Text("realuser1")); + int sequenceNumber = 1; + dtId1.setSequenceNumber(sequenceNumber); + Long renewDate1 = Time.now(); + facade.storeNewToken(dtId1, renewDate1); + + // get RouterStoreToken from StateStore + RouterStoreToken routerStoreToken = RouterStoreToken.newInstance(dtId1, renewDate1); + RouterRMTokenRequest rmTokenRequest = RouterRMTokenRequest.newInstance(routerStoreToken); + RouterRMTokenResponse rmTokenResponse = stateStore.getTokenByRouterStoreToken(rmTokenRequest); + Assert.assertNotNull(rmTokenResponse); + + RouterStoreToken resultStoreToken = rmTokenResponse.getRouterStoreToken(); + YARNDelegationTokenIdentifier resultTokenIdentifier = resultStoreToken.getTokenIdentifier(); + Assert.assertNotNull(resultStoreToken); + Assert.assertNotNull(resultTokenIdentifier); + Assert.assertNotNull(resultStoreToken.getRenewDate()); + + Assert.assertEquals(dtId1, resultTokenIdentifier); + Assert.assertEquals(renewDate1, resultStoreToken.getRenewDate()); + Assert.assertEquals(sequenceNumber, resultTokenIdentifier.getSequenceNumber()); + } + + @Test + public void testUpdateNewToken() throws YarnException, IOException { + // store new rm-token + RMDelegationTokenIdentifier dtId1 = new RMDelegationTokenIdentifier( + new Text("owner2"), new Text("renewer2"), new Text("realuser2")); + int sequenceNumber = 2; + dtId1.setSequenceNumber(sequenceNumber); + Long renewDate1 = Time.now(); + facade.storeNewToken(dtId1, renewDate1); + + Long renewDate2 = Time.now(); + int sequenceNumber2 = 3; + dtId1.setSequenceNumber(sequenceNumber2); + facade.updateStoredToken(dtId1, renewDate2); + + // get RouterStoreToken from StateStore + RouterStoreToken routerStoreToken = RouterStoreToken.newInstance(dtId1, renewDate1); + RouterRMTokenRequest rmTokenRequest = RouterRMTokenRequest.newInstance(routerStoreToken); + RouterRMTokenResponse rmTokenResponse = stateStore.getTokenByRouterStoreToken(rmTokenRequest); + Assert.assertNotNull(rmTokenResponse); + + RouterStoreToken resultStoreToken = rmTokenResponse.getRouterStoreToken(); + YARNDelegationTokenIdentifier resultTokenIdentifier = resultStoreToken.getTokenIdentifier(); + Assert.assertNotNull(resultStoreToken); + Assert.assertNotNull(resultTokenIdentifier); + Assert.assertNotNull(resultStoreToken.getRenewDate()); + + Assert.assertEquals(dtId1, resultTokenIdentifier); + Assert.assertEquals(renewDate2, 
resultStoreToken.getRenewDate()); + Assert.assertEquals(sequenceNumber2, resultTokenIdentifier.getSequenceNumber()); + } + + @Test + public void testRemoveStoredToken() throws Exception { + // store new rm-token + RMDelegationTokenIdentifier dtId1 = new RMDelegationTokenIdentifier( + new Text("owner3"), new Text("renewer3"), new Text("realuser3")); + int sequenceNumber = 3; + dtId1.setSequenceNumber(sequenceNumber); + Long renewDate1 = Time.now(); + facade.storeNewToken(dtId1, renewDate1); + + // get RouterStoreToken from StateStore + RouterStoreToken routerStoreToken = RouterStoreToken.newInstance(dtId1, renewDate1); + RouterRMTokenRequest rmTokenRequest = RouterRMTokenRequest.newInstance(routerStoreToken); + RouterRMTokenResponse rmTokenResponse = stateStore.getTokenByRouterStoreToken(rmTokenRequest); + Assert.assertNotNull(rmTokenResponse); + + RouterStoreToken resultStoreToken = rmTokenResponse.getRouterStoreToken(); + YARNDelegationTokenIdentifier resultTokenIdentifier = resultStoreToken.getTokenIdentifier(); + Assert.assertNotNull(resultStoreToken); + Assert.assertNotNull(resultTokenIdentifier); + Assert.assertNotNull(resultStoreToken.getRenewDate()); + + Assert.assertEquals(dtId1, resultTokenIdentifier); + Assert.assertEquals(renewDate1, resultStoreToken.getRenewDate()); + Assert.assertEquals(sequenceNumber, resultTokenIdentifier.getSequenceNumber()); + + // remove rm-token + facade.removeStoredToken(dtId1); + + // Call again(getTokenByRouterStoreToken) after remove will throw IOException(not exist) + LambdaTestUtils.intercept(IOException.class, "RMDelegationToken: " + dtId1 + " does not exist.", + () -> stateStore.getTokenByRouterStoreToken(rmTokenRequest)); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java index e899215291b..ea4595dffc4 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java @@ -175,8 +175,12 @@ public class LinuxContainerExecutor extends ContainerExecutor { COULD_NOT_CREATE_WORK_DIRECTORIES(35), COULD_NOT_CREATE_APP_LOG_DIRECTORIES(36), COULD_NOT_CREATE_TMP_DIRECTORIES(37), - ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS(38); - + ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS(38), + CANNOT_GET_EXECUTABLE_NAME_FROM_READLINK(80), + TOO_LONG_EXECUTOR_PATH(81), + CANNOT_GET_EXECUTABLE_NAME_FROM_KERNEL(82), + CANNOT_GET_EXECUTABLE_NAME_FROM_PID(83), + WRONG_PATH_OF_EXECUTABLE(84); private final int code; ExitCode(int exitCode) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java index 0acc4de4704..7d91e5d395f 100644 --- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java @@ -246,7 +246,7 @@ public class ContainerLaunch implements Callable { launchContext.setCommands(newCmds); // The actual expansion of environment variables happens after calling - // sanitizeEnv. This allows variables specified in NM_ADMIN_USER_ENV + // addConfigsToEnv. This allows variables specified in NM_ADMIN_USER_ENV // to reference user or container-defined variables. Map environment = launchContext.getEnvironment(); // /////////////////////////// End of variable expansion @@ -340,13 +340,15 @@ public class ContainerLaunch implements Callable { try (DataOutputStream containerScriptOutStream = lfs.create(nmPrivateContainerScriptPath, EnumSet.of(CREATE, OVERWRITE))) { + addConfigsToEnv(environment); + + expandAllEnvironmentVars(environment, containerLogDir); + // Sanitize the container's environment sanitizeEnv(environment, containerWorkDir, appDirs, userLocalDirs, containerLogDirs, localResources, nmPrivateClasspathJarDir, nmEnvVars); - expandAllEnvironmentVars(environment, containerLogDir); - // Add these if needed after expanding so we don't expand key values. if (keystore != null) { addKeystoreVars(environment, containerWorkDir); @@ -1641,13 +1643,35 @@ public class ContainerLaunch implements Callable { addToEnvMap(environment, nmVars, "JVM_PID", "$$"); } + // TODO: Remove Windows check and use this approach on all platforms after + // additional testing. See YARN-358. + if (Shell.WINDOWS) { + sanitizeWindowsEnv(environment, pwd, + resources, nmPrivateClasspathJarDir); + } + + // put AuxiliaryService data to environment + for (Map.Entry meta : containerManager + .getAuxServiceMetaData().entrySet()) { + AuxiliaryServiceHelper.setServiceDataIntoEnv( + meta.getKey(), meta.getValue(), environment); + nmVars.add(AuxiliaryServiceHelper.getPrefixServiceName(meta.getKey())); + } + } + + /** + * There are some configurations (such as {@value YarnConfiguration#NM_ADMIN_USER_ENV}) whose + * values need to be added to the environment variables. + * + * @param environment The environment variables map to add the configuration values to. + */ + public void addConfigsToEnv(Map environment) { // variables here will be forced in, even if the container has // specified them. Note: we do not track these in nmVars, to // allow them to be ordered properly if they reference variables // defined by the user. 
String defEnvStr = conf.get(YarnConfiguration.DEFAULT_NM_ADMIN_USER_ENV); - Apps.setEnvFromInputProperty(environment, - YarnConfiguration.NM_ADMIN_USER_ENV, defEnvStr, conf, + Apps.setEnvFromInputProperty(environment, YarnConfiguration.NM_ADMIN_USER_ENV, defEnvStr, conf, File.pathSeparator); if (!Shell.WINDOWS) { @@ -1658,39 +1682,21 @@ public class ContainerLaunch implements Callable { String userPath = environment.get(Environment.PATH.name()); environment.remove(Environment.PATH.name()); if (userPath == null || userPath.isEmpty()) { - Apps.addToEnvironment(environment, Environment.PATH.name(), - forcePath, File.pathSeparator); - Apps.addToEnvironment(environment, Environment.PATH.name(), - "$PATH", File.pathSeparator); + Apps.addToEnvironment(environment, Environment.PATH.name(), forcePath, + File.pathSeparator); + Apps.addToEnvironment(environment, Environment.PATH.name(), "$PATH", File.pathSeparator); } else { - Apps.addToEnvironment(environment, Environment.PATH.name(), - forcePath, File.pathSeparator); - Apps.addToEnvironment(environment, Environment.PATH.name(), - userPath, File.pathSeparator); + Apps.addToEnvironment(environment, Environment.PATH.name(), forcePath, + File.pathSeparator); + Apps.addToEnvironment(environment, Environment.PATH.name(), userPath, File.pathSeparator); } } } - - // TODO: Remove Windows check and use this approach on all platforms after - // additional testing. See YARN-358. - if (Shell.WINDOWS) { - - sanitizeWindowsEnv(environment, pwd, - resources, nmPrivateClasspathJarDir); - } - // put AuxiliaryService data to environment - for (Map.Entry meta : containerManager - .getAuxServiceMetaData().entrySet()) { - AuxiliaryServiceHelper.setServiceDataIntoEnv( - meta.getKey(), meta.getValue(), environment); - nmVars.add(AuxiliaryServiceHelper.getPrefixServiceName(meta.getKey())); - } } private void sanitizeWindowsEnv(Map environment, Path pwd, Map> resources, Path nmPrivateClasspathJarDir) throws IOException { - String inputClassPath = environment.get(Environment.CLASSPATH.name()); if (inputClassPath != null && !inputClassPath.isEmpty()) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java index c89ac520f4b..14f5ffeefe0 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java @@ -208,6 +208,8 @@ public class DockerLinuxContainerRuntime extends OCIContainerRuntime { private static final Pattern dockerImagePattern = Pattern.compile(DOCKER_IMAGE_PATTERN); + private static final Pattern DOCKER_DIGEST_PATTERN = Pattern.compile("^sha256:[a-z0-9]{12,64}$"); + private static final String DEFAULT_PROCFS = "/proc"; @InterfaceAudience.Private @@ -1201,9 +1203,17 @@ public class DockerLinuxContainerRuntime extends OCIContainerRuntime { throw new ContainerExecutionException( ENV_DOCKER_CONTAINER_IMAGE + " not set!"); } - if 
(!dockerImagePattern.matcher(imageName).matches()) { - throw new ContainerExecutionException("Image name '" + imageName - + "' doesn't match docker image name pattern"); + // check if digest is part of imageName, extract and validate it. + String digest = null; + if (imageName.contains("@sha256")) { + String[] digestParts = imageName.split("@"); + digest = digestParts[1]; + imageName = digestParts[0]; + } + if (!dockerImagePattern.matcher(imageName).matches() || (digest != null + && !DOCKER_DIGEST_PATTERN.matcher(digest).matches())) { + throw new ContainerExecutionException( + "Image name '" + imageName + "' doesn't match docker image name pattern"); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/gpu/GpuDeviceInformationParser.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/gpu/GpuDeviceInformationParser.java index 5c166571ccc..d21d6a0b72a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/gpu/GpuDeviceInformationParser.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/gpu/GpuDeviceInformationParser.java @@ -18,20 +18,27 @@ package org.apache.hadoop.yarn.server.nodemanager.webapp.dao.gpu; -import org.apache.hadoop.classification.InterfaceAudience; -import org.apache.hadoop.classification.InterfaceStability; -import org.apache.hadoop.yarn.exceptions.YarnException; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; -import org.xml.sax.InputSource; -import org.xml.sax.XMLReader; - +import java.io.StringReader; +import javax.xml.XMLConstants; import javax.xml.bind.JAXBContext; import javax.xml.bind.JAXBException; import javax.xml.bind.Unmarshaller; import javax.xml.parsers.SAXParserFactory; import javax.xml.transform.sax.SAXSource; -import java.io.StringReader; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.yarn.exceptions.YarnException; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.xml.sax.InputSource; +import org.xml.sax.XMLReader; + +import static org.apache.hadoop.util.XMLUtils.EXTERNAL_GENERAL_ENTITIES; +import static org.apache.hadoop.util.XMLUtils.EXTERNAL_PARAMETER_ENTITIES; +import static org.apache.hadoop.util.XMLUtils.LOAD_EXTERNAL_DECL; +import static org.apache.hadoop.util.XMLUtils.VALIDATION; /** * Parse XML and get GPU device information @@ -68,10 +75,11 @@ public class GpuDeviceInformationParser { */ private SAXParserFactory initSaxParserFactory() throws Exception { SAXParserFactory spf = SAXParserFactory.newInstance(); - spf.setFeature( - "http://apache.org/xml/features/nonvalidating/load-external-dtd", - false); - spf.setFeature("http://xml.org/sax/features/validation", false); + spf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true); + spf.setFeature(LOAD_EXTERNAL_DECL, false); + spf.setFeature(EXTERNAL_GENERAL_ENTITIES, false); + spf.setFeature(EXTERNAL_PARAMETER_ENTITIES, false); + spf.setFeature(VALIDATION, false); return spf; } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c index e1ec293cd47..b027e51bd31 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c @@ -56,17 +56,17 @@ char *__get_exec_readproc(char *procfn) { filename = malloc(EXECUTOR_PATH_MAX); if (!filename) { fprintf(ERRORFILE,"cannot allocate memory for filename before readlink: %s\n",strerror(errno)); - exit(-1); + exit(OUT_OF_MEMORY); } len = readlink(procfn, filename, EXECUTOR_PATH_MAX); if (len == -1) { - fprintf(ERRORFILE,"Can't get executable name from %s - %s\n", procfn, + fprintf(ERRORFILE,"Cannot get executable name from %s - %s\n", procfn, strerror(errno)); - exit(-1); + exit(CANNOT_GET_EXECUTABLE_NAME_FROM_READLINK); } else if (len >= EXECUTOR_PATH_MAX) { fprintf(ERRORFILE,"Resolved path for %s [%s] is longer than %d characters.\n", procfn, filename, EXECUTOR_PATH_MAX); - exit(-1); + exit(TOO_LONG_EXECUTOR_PATH); } filename[len] = '\0'; return filename; @@ -88,14 +88,14 @@ char *__get_exec_sysctl(int *mib) len = sizeof(buffer); if (sysctl(mib, 4, buffer, &len, NULL, 0) == -1) { - fprintf(ERRORFILE,"Can't get executable name from kernel: %s\n", + fprintf(ERRORFILE,"Cannot get executable name from kernel: %s\n", strerror(errno)); - exit(-1); + exit(CANNOT_GET_EXECUTABLE_NAME_FROM_KERNEL); } filename=malloc(EXECUTOR_PATH_MAX); if (!filename) { fprintf(ERRORFILE,"cannot allocate memory for filename after sysctl: %s\n",strerror(errno)); - exit(-1); + exit(OUT_OF_MEMORY); } snprintf(filename,EXECUTOR_PATH_MAX,"%s",buffer); return filename; @@ -120,13 +120,13 @@ char* get_executable(char *argv0) { filename = malloc(PROC_PIDPATHINFO_MAXSIZE); if (!filename) { fprintf(ERRORFILE,"cannot allocate memory for filename before proc_pidpath: %s\n",strerror(errno)); - exit(-1); + exit(OUT_OF_MEMORY); } pid = getpid(); if (proc_pidpath(pid,filename,PROC_PIDPATHINFO_MAXSIZE) <= 0) { - fprintf(ERRORFILE,"Can't get executable name from pid %u - %s\n", pid, + fprintf(ERRORFILE,"Cannot get executable name from pid %u - %s\n", pid, strerror(errno)); - exit(-1); + exit(CANNOT_GET_EXECUTABLE_NAME_FROM_PID); } return filename; } @@ -194,7 +194,7 @@ char* get_executable (char *argv0) { if (!filename) { fprintf(ERRORFILE,"realpath of executable: %s\n",strerror(errno)); - exit(-1); + exit(WRONG_PATH_OF_EXECUTABLE); } return filename; } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c index c8ee7b461e6..33a388fc646 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c @@ -337,6 +337,16 @@ const char *get_error_message(const int error_code) { return "runC run failed"; case ERROR_RUNC_REAP_LAYER_MOUNTS_FAILED: return "runC reap layer mounts failed"; + case CANNOT_GET_EXECUTABLE_NAME_FROM_READLINK: + return "Cannot get executable name from readlink"; + case TOO_LONG_EXECUTOR_PATH: + return "Too long 
executor path"; + case CANNOT_GET_EXECUTABLE_NAME_FROM_KERNEL: + return "Cannot get executable name from kernel"; + case CANNOT_GET_EXECUTABLE_NAME_FROM_PID: + return "Cannot get executable name from pid"; + case WRONG_PATH_OF_EXECUTABLE: + return "Wrong path of executable"; default: return "Unknown error code"; } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h index 920888f1eff..73dfeb629d7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h @@ -104,7 +104,12 @@ enum errorcodes { ERROR_RUNC_SETUP_FAILED = 76, ERROR_RUNC_RUN_FAILED = 77, ERROR_RUNC_REAP_LAYER_MOUNTS_FAILED = 78, - ERROR_DOCKER_CONTAINER_EXEC_FAILED = 79 + ERROR_DOCKER_CONTAINER_EXEC_FAILED = 79, + CANNOT_GET_EXECUTABLE_NAME_FROM_READLINK = 80, + TOO_LONG_EXECUTOR_PATH = 81, + CANNOT_GET_EXECUTABLE_NAME_FROM_KERNEL = 82, + CANNOT_GET_EXECUTABLE_NAME_FROM_PID = 83, + WRONG_PATH_OF_EXECUTABLE = 84 }; /* Macros for min/max. */ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java index 511013cad19..6b8d928b7b1 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java @@ -118,6 +118,7 @@ import org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerIn import org.apache.hadoop.yarn.server.security.ApplicationACLsManager; import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.server.utils.YarnServerBuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.After; import org.junit.Assert; import org.junit.Before; @@ -243,7 +244,7 @@ public class TestNodeStatusUpdater extends NodeManagerTestBase { ContainerId.newContainerId(appAttemptID, heartBeatID.get()); ContainerLaunchContext launchContext = recordFactory .newRecordInstance(ContainerLaunchContext.class); - Resource resource = BuilderUtils.newResource(2, 1); + Resource resource = Resources.createResource(2, 1); long currentTime = System.currentTimeMillis(); String user = "testUser"; ContainerTokenIdentifier containerToken = BuilderUtils @@ -286,7 +287,7 @@ public class TestNodeStatusUpdater extends NodeManagerTestBase { .newRecordInstance(ContainerLaunchContext.class); long currentTime = System.currentTimeMillis(); String user = "testUser"; - Resource resource = BuilderUtils.newResource(3, 1); + Resource resource = Resources.createResource(3, 1); ContainerTokenIdentifier containerToken = BuilderUtils .newContainerTokenIdentifier(BuilderUtils.newContainerToken( secondContainerID, 0, InetAddress.getByName("localhost") @@ -990,7 +991,7 @@ public class TestNodeStatusUpdater extends 
NodeManagerTestBase { ContainerId cId = ContainerId.newContainerId(appAttemptId, 1); Token containerToken = BuilderUtils.newContainerToken(cId, 0, "anyHost", 1234, "anyUser", - BuilderUtils.newResource(1024, 1), 0, 123, + Resources.createResource(1024), 0, 123, "password".getBytes(), 0); Container anyCompletedContainer = new ContainerImpl(conf, null, null, null, null, @@ -1012,7 +1013,7 @@ public class TestNodeStatusUpdater extends NodeManagerTestBase { ContainerId.newContainerId(appAttemptId, 3); Token runningContainerToken = BuilderUtils.newContainerToken(runningContainerId, 0, "anyHost", - 1234, "anyUser", BuilderUtils.newResource(1024, 1), 0, 123, + 1234, "anyUser", Resources.createResource(1024), 0, 123, "password".getBytes(), 0); Container runningContainer = new ContainerImpl(conf, null, null, null, null, @@ -1071,7 +1072,7 @@ public class TestNodeStatusUpdater extends NodeManagerTestBase { ContainerId containerId = ContainerId.newContainerId(appAttemptId, 1); Token containerToken = BuilderUtils.newContainerToken(containerId, 0, "host", 1234, "user", - BuilderUtils.newResource(1024, 1), 0, 123, + Resources.createResource(1024), 0, 123, "password".getBytes(), 0); Container completedContainer = new ContainerImpl(conf, null, @@ -1110,7 +1111,7 @@ public class TestNodeStatusUpdater extends NodeManagerTestBase { ContainerId cId = ContainerId.newContainerId(appAttemptId, 1); Token containerToken = BuilderUtils.newContainerToken(cId, 0, "anyHost", 1234, "anyUser", - BuilderUtils.newResource(1024, 1), 0, 123, + Resources.createResource(1024), 0, 123, "password".getBytes(), 0); Container anyCompletedContainer = new ContainerImpl(conf, null, null, null, null, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java index 9ee3ce6bc8b..37045dc74b8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java @@ -93,6 +93,7 @@ import org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecret import org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM; import org.apache.hadoop.yarn.server.security.ApplicationACLsManager; import org.apache.hadoop.yarn.server.utils.BuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.After; import org.junit.Assert; import org.junit.Before; @@ -439,7 +440,7 @@ public abstract class BaseContainerManagerTest { NMContainerTokenSecretManager containerTokenSecretManager, LogAggregationContext logAggregationContext) throws IOException { - Resource r = BuilderUtils.newResource(1024, 1); + Resource r = Resources.createResource(1024); return createContainerToken(cId, rmIdentifier, nodeId, user, r, containerTokenSecretManager, logAggregationContext); } @@ -449,7 +450,7 @@ public abstract class BaseContainerManagerTest { NMContainerTokenSecretManager containerTokenSecretManager, LogAggregationContext logAggregationContext, ContainerType containerType) throws 
IOException { - Resource r = BuilderUtils.newResource(1024, 1); + Resource r = Resources.createResource(1024); return createContainerToken(cId, rmIdentifier, nodeId, user, r, containerTokenSecretManager, logAggregationContext, containerType); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestContainer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestContainer.java index a2ef9d9186a..3241cca2760 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestContainer.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestContainer.java @@ -106,6 +106,7 @@ import org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdater; import org.apache.hadoop.yarn.server.nodemanager.recovery.NMNullStateStoreService; import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.ControlledClock; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; import org.junit.Test; import org.mockito.ArgumentMatcher; @@ -1391,7 +1392,7 @@ public class TestContainer { cId = BuilderUtils.newContainerId(appId, 1, timestamp, id); when(mockContainer.getId()).thenReturn(cId); - Resource resource = BuilderUtils.newResource(1024, 1); + Resource resource = Resources.createResource(1024); when(mockContainer.getResource()).thenReturn(resource); String host = "127.0.0.1"; int port = 1234; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java index 6b0732b4e5c..15dac3daa54 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java @@ -126,6 +126,7 @@ import org.apache.hadoop.yarn.util.Apps; import org.apache.hadoop.yarn.util.AuxiliaryServiceHelper; import org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin; import org.apache.hadoop.yarn.util.ResourceCalculatorPlugin; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; import org.junit.Assume; import org.junit.Before; @@ -808,6 +809,7 @@ public class TestContainerLaunch extends BaseContainerManagerTest { resources.put(userjar, lpaths); Path nmp = new Path(testDir); + launch.addConfigsToEnv(userSetEnv); launch.sanitizeEnv(userSetEnv, pwd, appDirs, userLocalDirs, containerLogs, resources, nmp, nmEnvTrack); Assert.assertTrue(userSetEnv.containsKey("MALLOC_ARENA_MAX")); @@ -864,6 +866,7 @@ public class TestContainerLaunch extends BaseContainerManagerTest { ContainerLaunch launch = new ContainerLaunch(distContext, conf, dispatcher, 
exec, null, container, dirsHandler, containerManager); + launch.addConfigsToEnv(userSetEnv); launch.sanitizeEnv(userSetEnv, pwd, appDirs, userLocalDirs, containerLogs, resources, nmp, nmEnvTrack); @@ -876,6 +879,7 @@ public class TestContainerLaunch extends BaseContainerManagerTest { containerLaunchContext.setEnvironment(userSetEnv); when(container.getLaunchContext()).thenReturn(containerLaunchContext); + launch.addConfigsToEnv(userSetEnv); launch.sanitizeEnv(userSetEnv, pwd, appDirs, userLocalDirs, containerLogs, resources, nmp, nmEnvTrack); @@ -1478,7 +1482,7 @@ public class TestContainerLaunch extends BaseContainerManagerTest { protected Token createContainerToken(ContainerId cId, Priority priority, long createTime) throws InvalidToken { - Resource r = BuilderUtils.newResource(1024, 1); + Resource r = Resources.createResource(1024); ContainerTokenIdentifier containerTokenIdentifier = new ContainerTokenIdentifier(cId, context.getNodeId().toString(), user, r, System.currentTimeMillis() + 10000L, 123, DUMMY_RM_IDENTIFIER, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainersLauncher.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainersLauncher.java index f48d7855695..d1b16507317 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainersLauncher.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainersLauncher.java @@ -18,7 +18,6 @@ package org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.test.Whitebox; import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ContainerId; @@ -40,8 +39,6 @@ import org.mockito.MockitoAnnotations; import java.io.IOException; import java.util.ArrayList; -import java.util.Collections; -import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; @@ -123,10 +120,8 @@ public class TestContainersLauncher { @SuppressWarnings("unchecked") @Test public void testLaunchContainerEvent() - throws IllegalArgumentException, IllegalAccessException { - Map dummyMap = - (Map) Whitebox.getInternalState(spy, - "running"); + throws IllegalArgumentException { + Map dummyMap = spy.running; when(event.getType()) .thenReturn(ContainersLauncherEventType.LAUNCH_CONTAINER); assertEquals(0, dummyMap.size()); @@ -139,10 +134,8 @@ public class TestContainersLauncher { @SuppressWarnings("unchecked") @Test public void testRelaunchContainerEvent() - throws IllegalArgumentException, IllegalAccessException { - Map dummyMap = - (Map) Whitebox.getInternalState(spy, - "running"); + throws IllegalArgumentException { + Map dummyMap = spy.running; when(event.getType()) .thenReturn(ContainersLauncherEventType.RELAUNCH_CONTAINER); assertEquals(0, dummyMap.size()); @@ -159,10 +152,8 @@ public class TestContainersLauncher { @SuppressWarnings("unchecked") @Test public void testRecoverContainerEvent() - throws 
IllegalArgumentException, IllegalAccessException { - Map dummyMap = - (Map) Whitebox.getInternalState(spy, - "running"); + throws IllegalArgumentException { + Map dummyMap = spy.running; when(event.getType()) .thenReturn(ContainersLauncherEventType.RECOVER_CONTAINER); assertEquals(0, dummyMap.size()); @@ -178,7 +169,7 @@ public class TestContainersLauncher { @Test public void testRecoverPausedContainerEvent() - throws IllegalArgumentException, IllegalAccessException { + throws IllegalArgumentException { when(event.getType()) .thenReturn(ContainersLauncherEventType.RECOVER_PAUSED_CONTAINER); spy.handle(event); @@ -189,16 +180,14 @@ public class TestContainersLauncher { @Test public void testCleanupContainerEvent() throws IllegalArgumentException, IllegalAccessException, IOException { - Map dummyMap = Collections - .synchronizedMap(new HashMap()); - dummyMap.put(containerId, containerLaunch); - Whitebox.setInternalState(spy, "running", dummyMap); + spy.running.clear(); + spy.running.put(containerId, containerLaunch); when(event.getType()) .thenReturn(ContainersLauncherEventType.CLEANUP_CONTAINER); - assertEquals(1, dummyMap.size()); + assertEquals(1, spy.running.size()); spy.handle(event); - assertEquals(0, dummyMap.size()); + assertEquals(0, spy.running.size()); Mockito.verify(containerLauncher, Mockito.times(1)) .submit(Mockito.any(ContainerCleanup.class)); } @@ -206,10 +195,8 @@ public class TestContainersLauncher { @Test public void testCleanupContainerForReINITEvent() throws IllegalArgumentException, IllegalAccessException, IOException { - Map dummyMap = Collections - .synchronizedMap(new HashMap()); - dummyMap.put(containerId, containerLaunch); - Whitebox.setInternalState(spy, "running", dummyMap); + spy.running.clear(); + spy.running.put(containerId, containerLaunch); when(event.getType()) .thenReturn(ContainersLauncherEventType.CLEANUP_CONTAINER_FOR_REINIT); @@ -226,9 +213,6 @@ public class TestContainersLauncher { @Test public void testSignalContainerEvent() throws IllegalArgumentException, IllegalAccessException, IOException { - Map dummyMap = Collections - .synchronizedMap(new HashMap()); - dummyMap.put(containerId, containerLaunch); SignalContainersLauncherEvent dummyEvent = mock(SignalContainersLauncherEvent.class); @@ -238,7 +222,8 @@ public class TestContainersLauncher { when(containerId.getApplicationAttemptId().getApplicationId()) .thenReturn(appId); - Whitebox.setInternalState(spy, "running", dummyMap); + spy.running.clear(); + spy.running.put(containerId, containerLaunch); when(dummyEvent.getType()) .thenReturn(ContainersLauncherEventType.SIGNAL_CONTAINER); when(dummyEvent.getCommand()) @@ -246,7 +231,7 @@ public class TestContainersLauncher { doNothing().when(containerLaunch) .signalContainer(SignalContainerCommand.GRACEFUL_SHUTDOWN); spy.handle(dummyEvent); - assertEquals(1, dummyMap.size()); + assertEquals(1, spy.running.size()); Mockito.verify(containerLaunch, Mockito.times(1)) .signalContainer(SignalContainerCommand.GRACEFUL_SHUTDOWN); } @@ -254,30 +239,26 @@ public class TestContainersLauncher { @Test public void testPauseContainerEvent() throws IllegalArgumentException, IllegalAccessException, IOException { - Map dummyMap = Collections - .synchronizedMap(new HashMap()); - dummyMap.put(containerId, containerLaunch); - Whitebox.setInternalState(spy, "running", dummyMap); + spy.running.clear(); + spy.running.put(containerId, containerLaunch); when(event.getType()) .thenReturn(ContainersLauncherEventType.PAUSE_CONTAINER); 
doNothing().when(containerLaunch).pauseContainer(); spy.handle(event); - assertEquals(1, dummyMap.size()); + assertEquals(1, spy.running.size()); Mockito.verify(containerLaunch, Mockito.times(1)).pauseContainer(); } @Test public void testResumeContainerEvent() throws IllegalArgumentException, IllegalAccessException, IOException { - Map dummyMap = Collections - .synchronizedMap(new HashMap()); - dummyMap.put(containerId, containerLaunch); - Whitebox.setInternalState(spy, "running", dummyMap); + spy.running.clear(); + spy.running.put(containerId, containerLaunch); when(event.getType()) .thenReturn(ContainersLauncherEventType.RESUME_CONTAINER); doNothing().when(containerLaunch).resumeContainer(); spy.handle(event); - assertEquals(1, dummyMap.size()); + assertEquals(1, spy.running.size()); Mockito.verify(containerLaunch, Mockito.times(1)).resumeContainer(); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java index f0ae037f9ff..ea7c2138093 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java @@ -2033,19 +2033,27 @@ public class TestDockerContainerRuntime { @Test public void testDockerImageNamePattern() throws Exception { - String[] validNames = - { "ubuntu", "fedora/httpd:version1.0", - "fedora/httpd:version1.0.test", - "fedora/httpd:version1.0.TEST", - "myregistryhost:5000/ubuntu", - "myregistryhost:5000/fedora/httpd:version1.0", - "myregistryhost:5000/fedora/httpd:version1.0.test", - "myregistryhost:5000/fedora/httpd:version1.0.TEST"}; + String[] validNames = {"ubuntu", "fedora/httpd:version1.0", "fedora/httpd:version1.0.test", + "fedora/httpd:version1.0.TEST", "myregistryhost:5000/ubuntu", + "myregistryhost:5000/fedora/httpd:version1.0", + "myregistryhost:5000/fedora/httpd:version1.0.test", + "myregistryhost:5000/fedora/httpd:version1.0.TEST", + "123456789123.dkr.ecr.us-east-1.amazonaws.com/emr-docker-examples:pyspark-example" + + "@sha256:f1d4ae3f7261a72e98c6ebefe9985cf10a0ea5bd762585a43e0700ed99863807"}; - String[] invalidNames = { "Ubuntu", "ubuntu || fedora", "ubuntu#", - "myregistryhost:50AB0/ubuntu", "myregistry#host:50AB0/ubuntu", - ":8080/ubuntu" - }; + String[] invalidNames = {"Ubuntu", "ubuntu || fedora", "ubuntu#", "myregistryhost:50AB0/ubuntu", + "myregistry#host:50AB0/ubuntu", ":8080/ubuntu", + + // Invalid: contains "@sha256" but doesn't really contain a digest. + "123456789123.dkr.ecr.us-east-1.amazonaws.com/emr-docker-examples:pyspark-example@sha256", + + // Invalid: digest is too short. 
+ "123456789123.dkr.ecr.us-east-1.amazonaws.com/emr-docker-examples:pyspark-example" + + "@sha256:f1d4", + + // Invalid: digest is too long + "123456789123.dkr.ecr.us-east-1.amazonaws.com/emr-docker-examples:pyspark-example" + + "@sha256:f1d4ae3f7261a72e98c6ebefe9985cf10a0ea5bd762585a43e0700ed99863807f"}; for (String name : validNames) { DockerLinuxContainerRuntime.validateImageName(name); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java index 4ec8f462f51..746826a1362 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java @@ -143,6 +143,7 @@ import org.apache.hadoop.yarn.server.nodemanager.executor.DeletionAsUserContext; import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.ConverterUtils; import org.apache.hadoop.yarn.util.Records; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; import org.junit.Test; import org.mockito.ArgumentCaptor; @@ -2293,7 +2294,7 @@ public class TestLogAggregationService extends BaseContainerManagerTest { long cId, ContainerType containerType) { ContainerId containerId = BuilderUtils.newContainerId(appAttemptId1, cId); - Resource r = BuilderUtils.newResource(1024, 1); + Resource r = Resources.createResource(1024); ContainerTokenIdentifier containerToken = new ContainerTokenIdentifier( containerId, context.getNodeId().toString(), user, r, System.currentTimeMillis() + 100000L, 123, DUMMY_RM_IDENTIFIER, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitor.java index 1719e1b11db..c4a2cfc0d7c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitor.java @@ -86,6 +86,7 @@ import org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin; import org.apache.hadoop.yarn.util.ProcfsBasedProcessTree; import org.apache.hadoop.yarn.util.ResourceCalculatorPlugin; import org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; import org.junit.Before; import org.junit.Test; @@ -301,7 +302,7 @@ public class TestContainersMonitor extends BaseContainerManagerTest { commands.add("/bin/bash"); commands.add(scriptFile.getAbsolutePath()); 
containerLaunchContext.setCommands(commands); - Resource r = BuilderUtils.newResource(0, 0); + Resource r = Resources.createResource(0); ContainerTokenIdentifier containerIdentifier = new ContainerTokenIdentifier(cId, context.getNodeId().toString(), user, r, System.currentTimeMillis() + 120000, 123, DUMMY_RM_IDENTIFIER, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerBehaviorCompatibility.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerBehaviorCompatibility.java index 5b99285d2e5..f7d0b1f1aff 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerBehaviorCompatibility.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerBehaviorCompatibility.java @@ -25,7 +25,7 @@ import org.apache.hadoop.yarn.api.records.ContainerLaunchContext; import org.apache.hadoop.yarn.api.records.ExecutionType; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.nodemanager.containermanager.BaseContainerManagerTest; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; import org.junit.Before; import org.junit.Test; @@ -70,7 +70,7 @@ public class TestContainerSchedulerBehaviorCompatibility // on the RM side it won't check vcores at all. 
list.add(StartContainerRequest.newInstance(containerLaunchContext, createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, - context.getNodeId(), user, BuilderUtils.newResource(2048, 4), + context.getNodeId(), user, Resources.createResource(2048, 4), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerOppContainersByResources.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerOppContainersByResources.java index 30fbbde1299..e4be5f49932 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerOppContainersByResources.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerOppContainersByResources.java @@ -33,7 +33,7 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.nodemanager.containermanager.BaseContainerManagerTest; import org.apache.hadoop.yarn.server.nodemanager.containermanager.BaseContainerSchedulerTest; import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerState; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; import org.junit.Test; @@ -161,7 +161,7 @@ public class TestContainerSchedulerOppContainersByResources recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); @@ -170,7 +170,7 @@ public class TestContainerSchedulerOppContainersByResources recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); @@ -219,7 +219,7 @@ public class TestContainerSchedulerOppContainersByResources recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(i), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerQueuing.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerQueuing.java index 218d03afe7e..fab3061304e 100644 --- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerQueuing.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerQueuing.java @@ -41,7 +41,7 @@ import org.apache.hadoop.yarn.server.nodemanager.containermanager.BaseContainerS import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEventType; import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerState; import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerChain; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; import org.junit.Test; @@ -86,14 +86,14 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(1024, 1), + user, Resources.createResource(1024), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(1024, 1), + user, Resources.createResource(1024), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); @@ -141,14 +141,14 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(3072, 1), + user, Resources.createResource(3072), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(3072, 1), + user, Resources.createResource(3072), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); @@ -198,21 +198,21 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(1024, 1), + user, Resources.createResource(1024), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(2), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(1024, 1), + user, Resources.createResource(1024), 
context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); @@ -269,7 +269,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); @@ -281,7 +281,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(i), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); } @@ -354,21 +354,21 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(2), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); @@ -447,7 +447,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); @@ -463,7 +463,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); allRequests = @@ -560,7 +560,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { containerLaunchContext, createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); @@ -573,42 +573,42 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), 
context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(2), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(3), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(4), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(5), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(6), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); @@ -681,7 +681,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { containerLaunchContext, createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); @@ -694,14 +694,14 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(2), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); @@ -743,21 +743,21 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), 
DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(2), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); @@ -770,7 +770,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(3), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(1500, 1), + user, Resources.createResource(1500), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); @@ -824,7 +824,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(i), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); } @@ -840,7 +840,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(i), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); } @@ -888,21 +888,21 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(2), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); @@ -994,14 +994,14 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(0), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(2048, 1), + user, Resources.createResource(2048), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); list.add(StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(createContainerId(1), DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(1024, 1), + user, 
Resources.createResource(1024), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC))); @@ -1044,7 +1044,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { // Promote Queued Opportunistic Container Token updateToken = createContainerToken(createContainerId(1), 1, DUMMY_RM_IDENTIFIER, - context.getNodeId(), user, BuilderUtils.newResource(1024, 1), + context.getNodeId(), user, Resources.createResource(1024), context.getContainerTokenSecretManager(), null, ExecutionType.GUARANTEED); List updateTokens = new ArrayList(); @@ -1115,7 +1115,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { StartContainerRequest.newInstance( recordFactory.newRecordInstance(ContainerLaunchContext.class), createContainerToken(cId, DUMMY_RM_IDENTIFIER, - context.getNodeId(), user, BuilderUtils.newResource(512, 1), + context.getNodeId(), user, Resources.createResource(512), context.getContainerTokenSecretManager(), null)); List list = new ArrayList<>(); list.add(scRequest); @@ -1130,7 +1130,7 @@ public class TestContainerSchedulerQueuing extends BaseContainerSchedulerTest { List updateTokens = new ArrayList<>(); Token containerToken = createContainerToken(cId, 1, DUMMY_RM_IDENTIFIER, context.getNodeId(), - user, BuilderUtils.newResource(512, 1), + user, Resources.createResource(512), context.getContainerTokenSecretManager(), null, ExecutionType.OPPORTUNISTIC); updateTokens.add(containerToken); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/security/TestNMContainerTokenSecretManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/security/TestNMContainerTokenSecretManager.java index f2a46adaf8a..2f6698229ba 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/security/TestNMContainerTokenSecretManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/security/TestNMContainerTokenSecretManager.java @@ -38,6 +38,7 @@ import org.apache.hadoop.yarn.server.api.records.MasterKey; import org.apache.hadoop.yarn.server.nodemanager.recovery.NMMemoryStateStoreService; import org.apache.hadoop.yarn.server.security.BaseContainerTokenSecretManager; import org.apache.hadoop.yarn.server.utils.BuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Test; public class TestNMContainerTokenSecretManager { @@ -122,7 +123,7 @@ public class TestNMContainerTokenSecretManager { long rmid = cid.getApplicationAttemptId().getApplicationId() .getClusterTimestamp(); ContainerTokenIdentifier ctid = new ContainerTokenIdentifier(cid, - nodeId.toString(), user, BuilderUtils.newResource(1024, 1), + nodeId.toString(), user, Resources.createResource(1024), System.currentTimeMillis() + 100000L, secretMgr.getCurrentKey().getKeyId(), rmid, Priority.newInstance(0), 0); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/MockContainer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/MockContainer.java index 6e07fa5034d..274d84858b2 100644 --- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/MockContainer.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/MockContainer.java @@ -42,6 +42,7 @@ import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Reso import org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceSet; import org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime.ContainerExecutionException; import org.apache.hadoop.yarn.server.utils.BuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import java.io.IOException; import java.util.HashMap; @@ -73,7 +74,7 @@ public class MockContainer implements Container { this.containerTokenIdentifier = BuilderUtils.newContainerTokenIdentifier(BuilderUtils .newContainerToken(id, 0, "127.0.0.1", 1234, user, - BuilderUtils.newResource(1024, 1), currentTime + 10000, 123, + Resources.createResource(1024), currentTime + 10000, 123, "password".getBytes(), currentTime)); this.state = ContainerState.NEW; } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java index cbfaa177921..3d070ea3d10 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java @@ -53,7 +53,7 @@ import org.apache.hadoop.yarn.server.nodemanager.recovery.NMNullStateStoreServic import org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService; import org.apache.hadoop.yarn.server.security.ApplicationACLsManager; import org.apache.hadoop.yarn.server.utils.BuilderUtils; - +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.After; import org.junit.Assert; import org.junit.Before; @@ -211,7 +211,7 @@ public class TestNMWebServer { long currentTime = System.currentTimeMillis(); Token containerToken = BuilderUtils.newContainerToken(containerId, 0, "127.0.0.1", 1234, - user, BuilderUtils.newResource(1024, 1), currentTime + 10000L, + user, Resources.createResource(1024), currentTime + 10000L, 123, "password".getBytes(), currentTime); Context context = mock(Context.class); Container container = diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java index 30b73c1acc7..a7222f62e8a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java @@ -35,6 +35,7 @@ import 
org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.fs.Path; import org.apache.hadoop.http.JettyUtils; import org.apache.hadoop.util.VersionInfo; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ContainerId; @@ -432,10 +433,9 @@ public class TestNMWebServices extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML+ "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); - InputSource is = new InputSource(); - is.setCharacterStream(new StringReader(xml)); + InputSource is = new InputSource(new StringReader(xml)); Document dom = db.parse(is); NodeList nodes = dom.getElementsByTagName("nodeInfo"); assertEquals("incorrect number of elements", 1, nodes.getLength()); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesApps.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesApps.java index ab06c0f9f33..204cd00dd97 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesApps.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesApps.java @@ -40,6 +40,7 @@ import javax.xml.parsers.DocumentBuilderFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.http.JettyUtils; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.conf.YarnConfiguration; @@ -486,7 +487,7 @@ public class TestNMWebServicesApps extends JerseyTestBase { response.getType().toString()); String msg = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(msg)); @@ -651,7 +652,7 @@ public class TestNMWebServicesApps extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -676,7 +677,7 @@ public class TestNMWebServicesApps extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder 
db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesAuxServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesAuxServices.java index 7ec8fcd47d3..20e1fc98895 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesAuxServices.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesAuxServices.java @@ -40,6 +40,7 @@ import com.sun.jersey.api.client.filter.LoggingFilter; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.http.JettyUtils; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.nodemanager.Context; @@ -257,7 +258,7 @@ public class TestNMWebServicesAuxServices extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesContainers.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesContainers.java index 175a0b02470..e348b1559ea 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesContainers.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServicesContainers.java @@ -39,6 +39,7 @@ import com.sun.jersey.api.client.filter.LoggingFilter; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.http.JettyUtils; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; import org.apache.hadoop.yarn.api.records.ContainerId; import org.apache.hadoop.yarn.api.records.NodeId; @@ -447,7 +448,7 @@ public class TestNMWebServicesContainers extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -476,7 +477,7 @@ public class 
TestNMWebServicesContainers extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java index 74ecec33d07..08c664472a8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java @@ -199,6 +199,7 @@ import org.apache.hadoop.yarn.util.UTCClock; import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.yarn.util.resource.ResourceUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.apache.hadoop.yarn.util.timeline.TimelineUtils; @@ -405,22 +406,11 @@ public class ClientRMService extends AbstractService implements throw new ApplicationNotFoundException("Invalid application id: null"); } - UserGroupInformation callerUGI; - try { - callerUGI = UserGroupInformation.getCurrentUser(); - } catch (IOException ie) { - LOG.info("Error getting UGI ", ie); - throw RPCUtil.getRemoteException(ie); - } + UserGroupInformation callerUGI = getCallerUgi(applicationId, + AuditConstants.GET_APP_REPORT); - RMApp application = this.rmContext.getRMApps().get(applicationId); - if (application == null) { - // If the RM doesn't have the application, throw - // ApplicationNotFoundException and let client to handle. - throw new ApplicationNotFoundException("Application with id '" - + applicationId + "' doesn't exist in RM. 
Please check " - + "that the job submission was successful."); - } + RMApp application = verifyUserAccessForRMApp(applicationId, callerUGI, + AuditConstants.GET_APP_REPORT, ApplicationAccessType.VIEW_APP, false); boolean allowAccess = checkAccess(callerUGI, application.getUser(), ApplicationAccessType.VIEW_APP, application); @@ -860,12 +850,14 @@ public class ClientRMService extends AbstractService implements .newRecordInstance(YarnClusterMetrics.class); ymetrics.setNumNodeManagers(this.rmContext.getRMNodes().size()); ClusterMetrics clusterMetrics = ClusterMetrics.getMetrics(); + ymetrics.setNumDecommissioningNodeManagers(clusterMetrics.getNumDecommissioningNMs()); ymetrics.setNumDecommissionedNodeManagers(clusterMetrics .getNumDecommisionedNMs()); ymetrics.setNumActiveNodeManagers(clusterMetrics.getNumActiveNMs()); ymetrics.setNumLostNodeManagers(clusterMetrics.getNumLostNMs()); ymetrics.setNumUnhealthyNodeManagers(clusterMetrics.getUnhealthyNMs()); ymetrics.setNumRebootedNodeManagers(clusterMetrics.getNumRebootedNMs()); + ymetrics.setNumShutdownNodeManagers(clusterMetrics.getNumShutdownNMs()); response.setClusterMetrics(ymetrics); return response; } @@ -878,13 +870,8 @@ public class ClientRMService extends AbstractService implements @Override public GetApplicationsResponse getApplications(GetApplicationsRequest request) throws YarnException { - UserGroupInformation callerUGI; - try { - callerUGI = UserGroupInformation.getCurrentUser(); - } catch (IOException ie) { - LOG.info("Error getting UGI ", ie); - throw RPCUtil.getRemoteException(ie); - } + UserGroupInformation callerUGI = getCallerUgi(null, + AuditConstants.GET_APPLICATIONS_REQUEST); Set applicationTypes = getLowerCasedAppTypes(request); EnumSet applicationStates = @@ -1046,13 +1033,8 @@ public class ClientRMService extends AbstractService implements @Override public GetQueueInfoResponse getQueueInfo(GetQueueInfoRequest request) throws YarnException { - UserGroupInformation callerUGI; - try { - callerUGI = UserGroupInformation.getCurrentUser(); - } catch (IOException ie) { - LOG.info("Error getting UGI ", ie); - throw RPCUtil.getRemoteException(ie); - } + UserGroupInformation callerUGI = getCallerUgi(null, + AuditConstants.GET_QUEUE_INFO_REQUEST); GetQueueInfoResponse response = recordFactory.newRecordInstance(GetQueueInfoResponse.class); @@ -1105,7 +1087,7 @@ public class ClientRMService extends AbstractService implements private NodeReport createNodeReports(RMNode rmNode) { SchedulerNodeReport schedulerNodeReport = scheduler.getNodeReport(rmNode.getNodeID()); - Resource used = BuilderUtils.newResource(0, 0); + Resource used = Resources.createResource(0); int numContainers = 0; if (schedulerNodeReport != null) { used = schedulerNodeReport.getUsedResource(); @@ -1718,16 +1700,10 @@ public class ClientRMService extends AbstractService implements SignalContainerRequest request) throws YarnException, IOException { ContainerId containerId = request.getContainerId(); - UserGroupInformation callerUGI; - try { - callerUGI = UserGroupInformation.getCurrentUser(); - } catch (IOException ie) { - LOG.info("Error getting UGI ", ie); - throw RPCUtil.getRemoteException(ie); - } - ApplicationId applicationId = containerId.getApplicationAttemptId(). 
getApplicationId(); + UserGroupInformation callerUGI = getCallerUgi(applicationId, + AuditConstants.SIGNAL_CONTAINER); RMApp application = this.rmContext.getRMApps().get(applicationId); if (application == null) { RMAuditLogger.logFailure(callerUGI.getUserName(), diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java index bab7e521dc0..98fb9b34942 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java @@ -407,7 +407,7 @@ final class DefaultAMSProcessor implements ApplicationMasterServiceProcessor { RMNode rmNode = rmNodeEntry.getKey(); SchedulerNodeReport schedulerNodeReport = getScheduler().getNodeReport(rmNode.getNodeID()); - Resource used = BuilderUtils.newResource(0, 0); + Resource used = Resources.createResource(0); int numContainers = 0; if (schedulerNodeReport != null) { used = schedulerNodeReport.getUsedResource(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java index f847152c47d..4c7cdb125d1 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java @@ -28,9 +28,11 @@ import java.util.concurrent.Future; import org.apache.hadoop.yarn.api.records.Container; import org.apache.hadoop.yarn.api.records.NodeId; +import org.apache.hadoop.yarn.conf.HAUtil; import org.apache.hadoop.yarn.security.ConfiguredYarnAuthorizer; import org.apache.hadoop.yarn.security.Permission; import org.apache.hadoop.yarn.security.PrivilegedEntity; +import org.apache.hadoop.yarn.server.resourcemanager.federation.FederationStateStoreService; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueuePath; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -114,6 +116,7 @@ public class RMAppManager implements EventHandler, private boolean nodeLabelsEnabled; private Set exclusiveEnforcedPartitions; private String amDefaultNodeLabel; + private FederationStateStoreService federationStateStoreService; private static final String USER_ID_PREFIX = "userid="; @@ -347,6 +350,7 @@ public class RMAppManager implements EventHandler, + ", removing app " + removeApp.getApplicationId() + " from state store."); rmContext.getStateStore().removeApplication(removeApp); + removeApplicationIdFromStateStore(removeId); completedAppsInStateStore--; } @@ -358,6 +362,7 @@ public class RMAppManager implements EventHandler, + this.maxCompletedAppsInMemory + ", removing app " + removeId + " from memory: "); rmContext.getRMApps().remove(removeId); + 
removeApplicationIdFromStateStore(removeId); this.applicationACLsManager.removeApplication(removeId); } } @@ -1002,7 +1007,7 @@ public class RMAppManager implements EventHandler, .checkAccess(callerUGI, QueueACL.SUBMIT_APPLICATIONS, queue)) { usernameUsedForPlacement = userNameFromAppTag; } else { - LOG.warn("User '{}' from application tag does not have access to " + + LOG.warn("Proxy user '{}' from application tag does not have access to " + " queue '{}'. " + "The placement is done for user '{}'", userNameFromAppTag, queue, user); } @@ -1054,4 +1059,42 @@ public class RMAppManager implements EventHandler, context.setQueue(placementContext.getQueue()); } } + + @VisibleForTesting + public void setFederationStateStoreService(FederationStateStoreService stateStoreService) { + this.federationStateStoreService = stateStoreService; + } + + /** + * Remove the given ApplicationId from the federation state store. + * + * @param appId the application to remove + */ + private void removeApplicationIdFromStateStore(ApplicationId appId) { + if (HAUtil.isFederationEnabled(conf) && federationStateStoreService != null) { + try { + boolean cleanUpResult = + federationStateStoreService.cleanUpFinishApplicationsWithRetries(appId, true); + if (cleanUpResult) { + LOG.info("ApplicationId {} removed from state store successfully.", appId); + } else { + LOG.warn("Failed to remove applicationId {} from state store.", appId); + } + } catch (Exception e) { + LOG.error("Error removing applicationId {} from state store.", appId, e); + } + } + } + + // Exposed only for testing. + @VisibleForTesting + public void checkAppNumCompletedLimit4Test() { + checkAppNumCompletedLimit(); + } + + // Exposed only for testing. + @VisibleForTesting + public void finishApplication4Test(ApplicationId applicationId) { + finishApplication(applicationId); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java index 854b6ca64e2..cc54d0b5861 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java @@ -57,6 +57,7 @@ public class RMAuditLogger { public static final String GET_APP_PRIORITY = "Get Application Priority"; public static final String GET_APP_QUEUE = "Get Application Queue"; public static final String GET_APP_ATTEMPTS = "Get Application Attempts"; + public static final String GET_APP_REPORT = "Get Application Report"; public static final String GET_APP_ATTEMPT_REPORT = "Get Application Attempt Report"; public static final String GET_CONTAINERS = "Get Containers"; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java index 79115902bcd..de7a7626600 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java +++
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java @@ -453,6 +453,17 @@ public class RMServerUtils { } } + public static YarnApplicationAttemptState convertRmAppAttemptStateToYarnApplicationAttemptState( + RMAppAttemptState currentState, + RMAppAttemptState previousState + ) { + return createApplicationAttemptState( + currentState == RMAppAttemptState.FINAL_SAVING + ? previousState + : currentState + ); + } + public static YarnApplicationAttemptState createApplicationAttemptState( RMAppAttemptState rmAppAttemptState) { switch (rmAppAttemptState) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java index 8adcff42a69..1bcfdbbafa8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java @@ -917,6 +917,7 @@ public class ResourceManager extends CompositeService } federationStateStoreService = createFederationStateStoreService(); addIfService(federationStateStoreService); + rmAppManager.setFederationStateStoreService(federationStateStoreService); LOG.info("Initialized Federation membership."); } @@ -996,6 +997,13 @@ public class ResourceManager extends CompositeService RMState state = rmStore.loadState(); recover(state); LOG.info("Recovery ended"); + + // Make sure that the App is cleaned up after the RM memory is restored. + if (HAUtil.isFederationEnabled(conf)) { + federationStateStoreService. 
+ createCleanUpFinishApplicationThread("Recovery"); + } + } catch (Exception e) { // the Exception from loadState() needs to be handled for // HA and we need to give up master status if we got fenced diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/federation/FederationStateStoreService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/federation/FederationStateStoreService.java index 060540d01ee..92376872919 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/federation/FederationStateStoreService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/federation/FederationStateStoreService.java @@ -20,18 +20,22 @@ package org.apache.hadoop.yarn.server.resourcemanager.federation; import java.io.IOException; import java.net.InetSocketAddress; +import java.util.HashMap; +import java.util.Map; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; +import java.util.List; -import org.apache.commons.lang3.NotImplementedException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.io.retry.RetryPolicy; import org.apache.hadoop.net.NetUtils; import org.apache.hadoop.service.AbstractService; import org.apache.hadoop.util.concurrent.HadoopExecutors; +import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; +import org.apache.hadoop.yarn.server.federation.retry.FederationActionRetry; import org.apache.hadoop.yarn.server.federation.store.FederationStateStore; import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterRequest; import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterResponse; @@ -74,10 +78,14 @@ import org.apache.hadoop.yarn.server.federation.store.records.UpdateReservationH import org.apache.hadoop.yarn.server.federation.store.records.UpdateReservationHomeSubClusterResponse; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyRequest; import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyResponse; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; +import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster; import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; import org.apache.hadoop.yarn.server.records.Version; import org.apache.hadoop.yarn.server.resourcemanager.RMContext; import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager; +import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp; import org.apache.hadoop.yarn.webapp.util.WebAppUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -102,6 +110,9 @@ public class FederationStateStoreService extends AbstractService private long heartbeatInterval; private long heartbeatInitialDelay; private RMContext rmContext; + private String cleanUpThreadNamePrefix = 
"FederationStateStoreService-Clean-Thread"; + private int cleanUpRetryCountNum; + private long cleanUpRetrySleepTime; public FederationStateStoreService(RMContext rmContext) { super(FederationStateStoreService.class.getName()); @@ -149,6 +160,15 @@ public class FederationStateStoreService extends AbstractService heartbeatInitialDelay = YarnConfiguration.DEFAULT_FEDERATION_STATESTORE_HEARTBEAT_INITIAL_DELAY; } + + cleanUpRetryCountNum = conf.getInt(YarnConfiguration.FEDERATION_STATESTORE_CLEANUP_RETRY_COUNT, + YarnConfiguration.DEFAULT_FEDERATION_STATESTORE_CLEANUP_RETRY_COUNT); + + cleanUpRetrySleepTime = conf.getTimeDuration( + YarnConfiguration.FEDERATION_STATESTORE_CLEANUP_RETRY_SLEEP_TIME, + YarnConfiguration.DEFAULT_FEDERATION_STATESTORE_CLEANUP_RETRY_SLEEP_TIME, + TimeUnit.MILLISECONDS); + LOG.info("Initialized federation membership service."); super.serviceInit(conf); @@ -364,18 +384,223 @@ public class FederationStateStoreService extends AbstractService @Override public RouterMasterKeyResponse storeNewMasterKey(RouterMasterKeyRequest request) throws YarnException, IOException { - throw new NotImplementedException("Code is not implemented"); + return stateStoreClient.storeNewMasterKey(request); } @Override public RouterMasterKeyResponse removeStoredMasterKey(RouterMasterKeyRequest request) throws YarnException, IOException { - throw new NotImplementedException("Code is not implemented"); + return stateStoreClient.removeStoredMasterKey(request); } @Override public RouterMasterKeyResponse getMasterKeyByDelegationKey(RouterMasterKeyRequest request) throws YarnException, IOException { - throw new NotImplementedException("Code is not implemented"); + return stateStoreClient.getMasterKeyByDelegationKey(request); } + + @Override + public RouterRMTokenResponse storeNewToken(RouterRMTokenRequest request) + throws YarnException, IOException { + return stateStoreClient.storeNewToken(request); + } + + @Override + public RouterRMTokenResponse updateStoredToken(RouterRMTokenRequest request) + throws YarnException, IOException { + return stateStoreClient.updateStoredToken(request); + } + + @Override + public RouterRMTokenResponse removeStoredToken(RouterRMTokenRequest request) + throws YarnException, IOException { + return stateStoreClient.removeStoredToken(request); + } + + @Override + public RouterRMTokenResponse getTokenByRouterStoreToken(RouterRMTokenRequest request) + throws YarnException, IOException { + return stateStoreClient.getTokenByRouterStoreToken(request); + } + + @Override + public int incrementDelegationTokenSeqNum() { + return stateStoreClient.incrementDelegationTokenSeqNum(); + } + + @Override + public int getDelegationTokenSeqNum() { + return stateStoreClient.getDelegationTokenSeqNum(); + } + + @Override + public void setDelegationTokenSeqNum(int seqNum) { + stateStoreClient.setDelegationTokenSeqNum(seqNum); + } + + @Override + public int getCurrentKeyId() { + return stateStoreClient.getCurrentKeyId(); + } + + @Override + public int incrementCurrentKeyId() { + return stateStoreClient.incrementCurrentKeyId(); + } + + /** + * Create a thread that cleans up the app. + * @param stage rm-start/rm-stop. 
+ */ + public void createCleanUpFinishApplicationThread(String stage) { + String threadName = cleanUpThreadNamePrefix + "-" + stage; + Thread finishApplicationThread = new Thread(createCleanUpFinishApplicationThread()); + finishApplicationThread.setName(threadName); + finishApplicationThread.start(); + LOG.info("CleanUpFinishApplicationThread has been started {}.", threadName); + } + + /** + * Create a thread that cleans up the apps. + * + * @return thread object. + */ + private Runnable createCleanUpFinishApplicationThread() { + return () -> { + createCleanUpFinishApplication(); + }; + } + + /** + * cleans up the apps. + */ + private void createCleanUpFinishApplication() { + try { + // Get the current RM's App list based on subClusterId + GetApplicationsHomeSubClusterRequest request = + GetApplicationsHomeSubClusterRequest.newInstance(subClusterId); + GetApplicationsHomeSubClusterResponse response = + getApplicationsHomeSubCluster(request); + List applicationHomeSCs = response.getAppsHomeSubClusters(); + + // Traverse the app list and clean up the app. + long successCleanUpAppCount = 0; + + // Save a local copy of the map so that it won't change with the map + Map rmApps = new HashMap<>(this.rmContext.getRMApps()); + + // Need to make sure there is app list in RM memory. + if (rmApps != null && !rmApps.isEmpty()) { + for (ApplicationHomeSubCluster applicationHomeSC : applicationHomeSCs) { + ApplicationId applicationId = applicationHomeSC.getApplicationId(); + if (!rmApps.containsKey(applicationId)) { + try { + Boolean cleanUpSuccess = cleanUpFinishApplicationsWithRetries(applicationId, false); + if (cleanUpSuccess) { + LOG.info("application = {} has been cleaned up successfully.", applicationId); + successCleanUpAppCount++; + } + } catch (Exception e) { + LOG.error("problem during application = {} cleanup.", applicationId, e); + } + } + } + } + + // print app cleanup log + LOG.info("cleanup finished applications size = {}, number = {} successful cleanup.", + applicationHomeSCs.size(), successCleanUpAppCount); + } catch (Exception e) { + LOG.error("problem during cleanup applications.", e); + } + } + + /** + * Clean up the federation completed Application. + * + * @param appId app id. + * @param isQuery true, need to query from statestore, false not query. + * @throws Exception exception occurs. + * @return true, successfully deleted; false, failed to delete or no need to delete + */ + public boolean cleanUpFinishApplicationsWithRetries(ApplicationId appId, boolean isQuery) + throws Exception { + + // Generate a request to delete data + DeleteApplicationHomeSubClusterRequest req = + DeleteApplicationHomeSubClusterRequest.newInstance(appId); + + // CleanUp Finish App. + return ((FederationActionRetry) (retry) -> invokeCleanUpFinishApp(appId, isQuery, req)) + .runWithRetries(cleanUpRetryCountNum, cleanUpRetrySleepTime); + } + + /** + * CleanUp Finish App. + * + * @param applicationId app id. + * @param isQuery true, need to query from statestore, false not query. 
+ * @param delRequest delete Application Request + * @return true, successfully deleted; false, failed to delete or no need to delete + * @throws YarnException + */ + private boolean invokeCleanUpFinishApp(ApplicationId applicationId, boolean isQuery, + DeleteApplicationHomeSubClusterRequest delRequest) throws YarnException { + boolean isAppNeedClean = true; + // If we need to query the StateStore + if (isQuery) { + isAppNeedClean = isApplicationNeedClean(applicationId); + } + // When the App needs to be cleaned up, clean up the App. + if (isAppNeedClean) { + DeleteApplicationHomeSubClusterResponse response = + deleteApplicationHomeSubCluster(delRequest); + if (response != null) { + LOG.info("The applicationId = {} has been successfully cleaned up.", applicationId); + return true; + } + } + return false; + } + + /** + * Used to determine whether the Application is cleaned up. + * + * When the app in the RM is completed, + * the HomeSC corresponding to the app will be queried in the StateStore. + * If the current RM is the HomeSC, the completed app will be cleaned up. + * + * @param applicationId applicationId + * @return true, app needs to be cleaned up; + * false, app doesn't need to be cleaned up. + */ + private boolean isApplicationNeedClean(ApplicationId applicationId) { + GetApplicationHomeSubClusterRequest queryRequest = + GetApplicationHomeSubClusterRequest.newInstance(applicationId); + // Here we need to use try...catch, + // because getApplicationHomeSubCluster may throw not exist exception + try { + GetApplicationHomeSubClusterResponse queryResp = + getApplicationHomeSubCluster(queryRequest); + if (queryResp != null) { + ApplicationHomeSubCluster appHomeSC = queryResp.getApplicationHomeSubCluster(); + SubClusterId homeSubClusterId = appHomeSC.getHomeSubCluster(); + if (!subClusterId.equals(homeSubClusterId)) { + LOG.warn("The homeSubCluster of applicationId = {} belong subCluster = {}, " + + " not belong subCluster = {} and is not allowed to delete.", + applicationId, homeSubClusterId, subClusterId); + return false; + } + } else { + LOG.warn("The applicationId = {} not belong subCluster = {} " + + " and is not allowed to delete.", applicationId, subClusterId); + return false; + } + } catch (Exception e) { + LOG.warn("query applicationId = {} error.", applicationId, e); + return false; + } + return true; + } + } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/CSMappingPlacementRule.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/CSMappingPlacementRule.java index cefed1dd9fd..e0fab39b053 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/CSMappingPlacementRule.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/CSMappingPlacementRule.java @@ -228,7 +228,11 @@ public class CSMappingPlacementRule extends PlacementRule { ApplicationSubmissionContext asc, String user) { VariableContext vctx = new VariableContext(); - vctx.put("%user", cleanName(user)); + String cleanedName = cleanName(user); + if (!user.equals(cleanedName)) { + vctx.putOriginal("%user", user); + } + vctx.put("%user", cleanedName); //If the specified matches 
the default it means NO queue have been specified //as per ClientRMService#submitApplication which sets the queue to default //when no queue is provided. @@ -239,7 +243,7 @@ public class CSMappingPlacementRule extends PlacementRule { //Adding specified as empty will prevent it to be undefined and it won't //try to place the application to a queue named '%specified', queue path //validation will reject the empty path or the path with empty parts, - //so we sill still hit the fallback action of this rule if no queue + //so we still hit the fallback action of this rule if no queue //is specified vctx.put("%specified", ""); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/VariableContext.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/VariableContext.java index d60e7b5630a..e8e419c64eb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/VariableContext.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/VariableContext.java @@ -39,6 +39,7 @@ public class VariableContext { * This is our actual variable store. */ private Map variables = new HashMap<>(); + private Map originalVariables = new HashMap<>(); /** * This is our conditional variable store. @@ -124,6 +125,10 @@ public class VariableContext { return this; } + public void putOriginal(String name, String value) { + originalVariables.put(name, value); + } + /** * This method is used to add a conditional variable to the variable context. * @param name Name of the variable @@ -150,6 +155,10 @@ public class VariableContext { return ret == null ? "" : ret; } + public String getOriginal(String name) { + return originalVariables.get(name); + } + /** * Adds a set to the context, each name can only be added once. 
The extra * dataset is different from the regular variables because it cannot be diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/csmappingrule/MappingRuleMatchers.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/csmappingrule/MappingRuleMatchers.java index 9d56e89121c..0466dcffe97 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/csmappingrule/MappingRuleMatchers.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/csmappingrule/MappingRuleMatchers.java @@ -87,6 +87,12 @@ public class MappingRuleMatchers { } String substituted = variables.replaceVariables(value); + + String originalVariableValue = variables.getOriginal(variable); + if (originalVariableValue != null) { + return substituted.equals(originalVariableValue); + } + return substituted.equals(variables.get(variable)); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java index cfd91e902c9..5d78c8b354d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java @@ -206,6 +206,14 @@ public interface RMAppAttempt extends EventHandler { */ RMAppAttemptState getState(); + /** + * The previous state of the {@link RMAppAttempt} before the current state. + * + * @return the previous state of the {@link RMAppAttempt} before the current state + * for this application attempt. + */ + RMAppAttemptState getPreviousState(); + /** * Create the external user-facing state of the attempt of ApplicationMaster * from the current state of the {@link RMAppAttempt}. 
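
A minimal sketch (not part of the patch) of how the FINAL_SAVING handling above is expected to behave, assuming the existing RMAppAttemptState and YarnApplicationAttemptState enums; RMServerUtils.convertRmAppAttemptStateToYarnApplicationAttemptState is the helper added in this patch, while the example class name below is hypothetical:

    import org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState;
    import org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils;
    import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptState;

    // Hypothetical illustration class, not part of the patch.
    public class AttemptStateConversionSketch {
      public static void main(String[] args) {
        // While an attempt is persisting its final state (FINAL_SAVING), the
        // user-facing state should reflect the state held before saving.
        YarnApplicationAttemptState whileSaving =
            RMServerUtils.convertRmAppAttemptStateToYarnApplicationAttemptState(
                RMAppAttemptState.FINAL_SAVING, RMAppAttemptState.RUNNING);
        System.out.println(whileSaving); // expected: RUNNING

        // Any other current state is converted directly; the previous state is ignored.
        YarnApplicationAttemptState normal =
            RMServerUtils.convertRmAppAttemptStateToYarnApplicationAttemptState(
                RMAppAttemptState.ALLOCATED, RMAppAttemptState.SUBMITTED);
        System.out.println(normal); // expected: ALLOCATED
      }
    }
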
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java index 5c34fe8f918..6010bd21a18 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java @@ -2220,13 +2220,22 @@ public class RMAppAttemptImpl implements RMAppAttempt, Recoverable { } @Override - public YarnApplicationAttemptState createApplicationAttemptState() { - RMAppAttemptState state = getState(); - // If AppAttempt is in FINAL_SAVING state, return its previous state. - if (state.equals(RMAppAttemptState.FINAL_SAVING)) { - state = stateBeforeFinalSaving; + public RMAppAttemptState getPreviousState() { + this.readLock.lock(); + + try { + return this.stateMachine.getPreviousState(); + } finally { + this.readLock.unlock(); } - return RMServerUtils.createApplicationAttemptState(state); + } + + @Override + public YarnApplicationAttemptState createApplicationAttemptState() { + return RMServerUtils.convertRmAppAttemptStateToYarnApplicationAttemptState( + getState(), + stateBeforeFinalSaving + ); } private void launchAttempt(){ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbsoluteResourceCapacityCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbsoluteResourceCapacityCalculator.java new file mode 100644 index 00000000000..33b45741079 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbsoluteResourceCapacityCalculator.java @@ -0,0 +1,135 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
    + * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; +import org.apache.hadoop.yarn.util.UnitsConversionUtil; + +import java.util.Map; + +import static org.apache.hadoop.yarn.api.records.ResourceInformation.MEMORY_URI; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueUpdateWarning.QueueUpdateWarningType.BRANCH_DOWNSCALED; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ResourceCalculationDriver.MB_UNIT; + +public class AbsoluteResourceCapacityCalculator extends AbstractQueueCapacityCalculator { + + @Override + public void calculateResourcePrerequisites(ResourceCalculationDriver resourceCalculationDriver) { + setNormalizedResourceRatio(resourceCalculationDriver); + } + + @Override + public double calculateMinimumResource( + ResourceCalculationDriver resourceCalculationDriver, CalculationContext context, + String label) { + String resourceName = context.getResourceName(); + double normalizedRatio = resourceCalculationDriver.getNormalizedResourceRatios().getOrDefault( + label, ResourceVector.of(1)).getValue(resourceName); + double remainingResourceRatio = resourceCalculationDriver.getRemainingRatioOfResource( + label, resourceName); + + return normalizedRatio * remainingResourceRatio * context.getCurrentMinimumCapacityEntry( + label).getResourceValue(); + } + + @Override + public double calculateMaximumResource( + ResourceCalculationDriver resourceCalculationDriver, CalculationContext context, + String label) { + return context.getCurrentMaximumCapacityEntry(label).getResourceValue(); + } + + @Override + public void updateCapacitiesAfterCalculation( + ResourceCalculationDriver resourceCalculationDriver, CSQueue queue, String label) { + CapacitySchedulerQueueCapacityHandler.setQueueCapacities( + resourceCalculationDriver.getUpdateContext().getUpdatedClusterResource(label), queue, label); + } + + @Override + public ResourceUnitCapacityType getCapacityType() { + return ResourceUnitCapacityType.ABSOLUTE; + } + + /** + * Calculates the normalized resource ratio of a parent queue, under which children are defined + * with absolute capacity type. If the effective resource of the parent is less, than the + * aggregated configured absolute resource of its children, the resource ratio will be less, + * than 1. 
+ * + * @param calculationDriver the driver, which contains the parent queue that will form the base + * of the normalization calculation + */ + public static void setNormalizedResourceRatio(ResourceCalculationDriver calculationDriver) { + CSQueue queue = calculationDriver.getQueue(); + + for (String label : queue.getConfiguredNodeLabels()) { + // ManagedParents assign zero capacity to queues in case of overutilization, downscaling is + // turned off for their children + if (queue instanceof ManagedParentQueue) { + return; + } + + for (String resourceName : queue.getConfiguredCapacityVector(label).getResourceNames()) { + long childrenConfiguredResource = 0; + long effectiveMinResource = queue.getQueueResourceQuotas().getEffectiveMinResource( + label).getResourceValue(resourceName); + + // Total configured min resources of direct children of the queue + for (CSQueue childQueue : queue.getChildQueues()) { + if (!childQueue.getConfiguredNodeLabels().contains(label)) { + continue; + } + QueueCapacityVector capacityVector = childQueue.getConfiguredCapacityVector(label); + if (capacityVector.isResourceOfType(resourceName, ResourceUnitCapacityType.ABSOLUTE)) { + childrenConfiguredResource += capacityVector.getResource(resourceName) + .getResourceValue(); + } + } + // If no children is using ABSOLUTE capacity type, normalization is not needed + if (childrenConfiguredResource == 0) { + continue; + } + // Factor to scale down effective resource: When cluster has sufficient + // resources, effective_min_resources will be same as configured + // min_resources. + float numeratorForMinRatio = childrenConfiguredResource; + if (effectiveMinResource < childrenConfiguredResource) { + numeratorForMinRatio = queue.getQueueResourceQuotas().getEffectiveMinResource(label) + .getResourceValue(resourceName); + calculationDriver.getUpdateContext().addUpdateWarning(BRANCH_DOWNSCALED.ofQueue( + queue.getQueuePath())); + } + + String unit = resourceName.equals(MEMORY_URI) ? 
MB_UNIT : ""; + long convertedValue = UnitsConversionUtil.convert(unit, calculationDriver.getUpdateContext() + .getUpdatedClusterResource(label).getResourceInformation(resourceName).getUnits(), + childrenConfiguredResource); + + if (convertedValue != 0) { + Map normalizedResourceRatios = + calculationDriver.getNormalizedResourceRatios(); + normalizedResourceRatios.putIfAbsent(label, ResourceVector.newInstance()); + normalizedResourceRatios.get(label).setValue(resourceName, numeratorForMinRatio / + convertedValue); + } + } + } + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java index 1a5a1ce0fd4..f9304cc9604 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java @@ -29,10 +29,8 @@ import org.apache.hadoop.util.Time; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.Priority; import org.apache.hadoop.yarn.api.records.QueueACL; -import org.apache.hadoop.yarn.api.records.QueueConfigurations; import org.apache.hadoop.yarn.api.records.QueueInfo; import org.apache.hadoop.yarn.api.records.QueueState; -import org.apache.hadoop.yarn.api.records.QueueStatistics; import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.api.records.ResourceInformation; import org.apache.hadoop.yarn.exceptions.YarnException; @@ -117,6 +115,7 @@ public abstract class AbstractCSQueue implements CSQueue { CapacityConfigType.NONE; protected Map configuredCapacityVectors; + protected Map configuredMaxCapacityVectors; private final RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); @@ -381,7 +380,10 @@ public abstract class AbstractCSQueue implements CSQueue { this.configuredCapacityVectors = configuration .parseConfiguredResourceVector(queuePath.getFullPath(), this.queueNodeLabelsSettings.getConfiguredNodeLabels()); - + this.configuredMaxCapacityVectors = configuration + .parseConfiguredMaximumCapacityVector(queuePath.getFullPath(), + this.queueNodeLabelsSettings.getConfiguredNodeLabels(), + QueueCapacityVector.newInstance()); // Update metrics CSQueueUtils.updateQueueStatistics(resourceCalculator, clusterResource, this, labelManager, null); @@ -535,7 +537,8 @@ public abstract class AbstractCSQueue implements CSQueue { private void validateAbsoluteVsPercentageCapacityConfig( CapacityConfigType localType) { if (!queuePath.isRoot() - && !this.capacityConfigType.equals(localType)) { + && !this.capacityConfigType.equals(localType) && + queueContext.getConfiguration().isLegacyQueueMode()) { throw new IllegalArgumentException("Queue '" + getQueuePath() + "' should use either percentage based capacity" + " configuration or absolute resource."); @@ -574,93 +577,31 @@ public abstract class AbstractCSQueue implements CSQueue { } @Override - public QueueCapacityVector getConfiguredCapacityVector( - String label) { + public QueueCapacityVector 
getConfiguredCapacityVector(String label) { return configuredCapacityVectors.get(label); } + @Override + public QueueCapacityVector getConfiguredMaxCapacityVector(String label) { + return configuredMaxCapacityVectors.get(label); + } + + @Override + public void setConfiguredMinCapacityVector(String label, QueueCapacityVector minCapacityVector) { + configuredCapacityVectors.put(label, minCapacityVector); + } + + @Override + public void setConfiguredMaxCapacityVector(String label, QueueCapacityVector maxCapacityVector) { + configuredMaxCapacityVectors.put(label, maxCapacityVector); + } + protected QueueInfo getQueueInfo() { // Deliberately doesn't use lock here, because this method will be invoked // from schedulerApplicationAttempt, to avoid deadlock, sacrifice // consistency here. // TODO, improve this - QueueInfo queueInfo = recordFactory.newRecordInstance(QueueInfo.class); - queueInfo.setQueueName(queuePath.getLeafName()); - queueInfo.setQueuePath(queuePath.getFullPath()); - queueInfo.setAccessibleNodeLabels(queueNodeLabelsSettings.getAccessibleNodeLabels()); - queueInfo.setCapacity(queueCapacities.getCapacity()); - queueInfo.setMaximumCapacity(queueCapacities.getMaximumCapacity()); - queueInfo.setQueueState(getState()); - queueInfo.setDefaultNodeLabelExpression(queueNodeLabelsSettings.getDefaultLabelExpression()); - queueInfo.setCurrentCapacity(getUsedCapacity()); - queueInfo.setQueueStatistics(getQueueStatistics()); - queueInfo.setPreemptionDisabled(preemptionSettings.isPreemptionDisabled()); - queueInfo.setIntraQueuePreemptionDisabled( - getIntraQueuePreemptionDisabled()); - queueInfo.setQueueConfigurations(getQueueConfigurations()); - queueInfo.setWeight(queueCapacities.getWeight()); - queueInfo.setMaxParallelApps(queueAppLifetimeSettings.getMaxParallelApps()); - return queueInfo; - } - - public QueueStatistics getQueueStatistics() { - // Deliberately doesn't use lock here, because this method will be invoked - // from schedulerApplicationAttempt, to avoid deadlock, sacrifice - // consistency here. 
- // TODO, improve this - QueueStatistics stats = recordFactory.newRecordInstance( - QueueStatistics.class); - stats.setNumAppsSubmitted(getMetrics().getAppsSubmitted()); - stats.setNumAppsRunning(getMetrics().getAppsRunning()); - stats.setNumAppsPending(getMetrics().getAppsPending()); - stats.setNumAppsCompleted(getMetrics().getAppsCompleted()); - stats.setNumAppsKilled(getMetrics().getAppsKilled()); - stats.setNumAppsFailed(getMetrics().getAppsFailed()); - stats.setNumActiveUsers(getMetrics().getActiveUsers()); - stats.setAvailableMemoryMB(getMetrics().getAvailableMB()); - stats.setAllocatedMemoryMB(getMetrics().getAllocatedMB()); - stats.setPendingMemoryMB(getMetrics().getPendingMB()); - stats.setReservedMemoryMB(getMetrics().getReservedMB()); - stats.setAvailableVCores(getMetrics().getAvailableVirtualCores()); - stats.setAllocatedVCores(getMetrics().getAllocatedVirtualCores()); - stats.setPendingVCores(getMetrics().getPendingVirtualCores()); - stats.setReservedVCores(getMetrics().getReservedVirtualCores()); - stats.setPendingContainers(getMetrics().getPendingContainers()); - stats.setAllocatedContainers(getMetrics().getAllocatedContainers()); - stats.setReservedContainers(getMetrics().getReservedContainers()); - return stats; - } - - public Map getQueueConfigurations() { - Map queueConfigurations = new HashMap<>(); - Set nodeLabels = getNodeLabelsForQueue(); - QueueResourceQuotas queueResourceQuotas = usageTracker.getQueueResourceQuotas(); - for (String nodeLabel : nodeLabels) { - QueueConfigurations queueConfiguration = - recordFactory.newRecordInstance(QueueConfigurations.class); - float capacity = queueCapacities.getCapacity(nodeLabel); - float absoluteCapacity = queueCapacities.getAbsoluteCapacity(nodeLabel); - float maxCapacity = queueCapacities.getMaximumCapacity(nodeLabel); - float absMaxCapacity = - queueCapacities.getAbsoluteMaximumCapacity(nodeLabel); - float maxAMPercentage = - queueCapacities.getMaxAMResourcePercentage(nodeLabel); - queueConfiguration.setCapacity(capacity); - queueConfiguration.setAbsoluteCapacity(absoluteCapacity); - queueConfiguration.setMaxCapacity(maxCapacity); - queueConfiguration.setAbsoluteMaxCapacity(absMaxCapacity); - queueConfiguration.setMaxAMPercentage(maxAMPercentage); - queueConfiguration.setConfiguredMinCapacity( - queueResourceQuotas.getConfiguredMinResource(nodeLabel)); - queueConfiguration.setConfiguredMaxCapacity( - queueResourceQuotas.getConfiguredMaxResource(nodeLabel)); - queueConfiguration.setEffectiveMinCapacity( - queueResourceQuotas.getEffectiveMinResource(nodeLabel)); - queueConfiguration.setEffectiveMaxCapacity( - queueResourceQuotas.getEffectiveMaxResource(nodeLabel)); - queueConfigurations.put(nodeLabel, queueConfiguration); - } - return queueConfigurations; + return CSQueueInfoProvider.getQueueInfo(this); } @Private @@ -769,6 +710,11 @@ public abstract class AbstractCSQueue implements CSQueue { return readLock; } + @Override + public ReentrantReadWriteLock.WriteLock getWriteLock() { + return writeLock; + } + private Resource getCurrentLimitResource(String nodePartition, Resource clusterResource, ResourceLimits currentResourceLimits, SchedulingMode schedulingMode) { @@ -905,6 +851,11 @@ public abstract class AbstractCSQueue implements CSQueue { } + @Override + public Set getConfiguredNodeLabels() { + return queueNodeLabelsSettings.getConfiguredNodeLabels(); + } + private static String ensurePartition(String partition) { return Optional.ofNullable(partition).orElse(NO_LABEL); } diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractLeafQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractLeafQueue.java index 08fedb578ca..72ea63a2fc5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractLeafQueue.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractLeafQueue.java @@ -88,6 +88,9 @@ import org.slf4j.LoggerFactory; import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getACLsForFlexibleAutoCreatedLeafQueue; +import static org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.NO_LABEL; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType.PERCENTAGE; + public class AbstractLeafQueue extends AbstractCSQueue { private static final Logger LOG = LoggerFactory.getLogger(AbstractLeafQueue.class); @@ -164,7 +167,7 @@ public class AbstractLeafQueue extends AbstractCSQueue { resourceCalculator); // One time initialization is enough since it is static ordering policy - this.pendingOrderingPolicy = new FifoOrderingPolicyForPendingApps(); + this.pendingOrderingPolicy = new FifoOrderingPolicyForPendingApps<>(); } @SuppressWarnings("checkstyle:nowhitespaceafter") @@ -1936,6 +1939,49 @@ public class AbstractLeafQueue extends AbstractCSQueue { currentResourceLimits.getLimit())); } + @Override + public void refreshAfterResourceCalculation(Resource clusterResource, + ResourceLimits resourceLimits) { + lastClusterResource = clusterResource; + // Update maximum applications for the queue and for users + updateMaximumApplications(); + + updateCurrentResourceLimits(resourceLimits, clusterResource); + + // Update headroom info based on new cluster resource value + // absoluteMaxCapacity now, will be replaced with absoluteMaxAvailCapacity + // during allocation + setQueueResourceLimitsInfo(clusterResource); + + // Update user consumedRatios + recalculateQueueUsageRatio(clusterResource, null); + + // Update metrics + CSQueueUtils.updateQueueStatistics(resourceCalculator, clusterResource, + this, labelManager, null); + // Update configured capacity/max-capacity for default partition only + CSQueueUtils.updateConfiguredCapacityMetrics(resourceCalculator, + labelManager.getResourceByLabel(null, clusterResource), + NO_LABEL, this); + + // queue metrics are updated, more resource may be available + // activate the pending applications if possible + activateApplications(); + + // In case of any resource change, invalidate recalculateULCount to clear + // the computed user-limit. 
+ usersManager.userLimitNeedsRecompute(); + + // Update application properties + for (FiCaSchedulerApp application : orderingPolicy + .getSchedulableEntities()) { + computeUserLimitAndSetHeadroom(application, clusterResource, + NO_LABEL, + SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY, null); + + } + } + @Override public void updateClusterResource(Resource clusterResource, ResourceLimits currentResourceLimits) { @@ -2225,10 +2271,12 @@ public class AbstractLeafQueue extends AbstractCSQueue { } public void setCapacity(float capacity) { + configuredCapacityVectors.put(NO_LABEL, QueueCapacityVector.of(capacity * 100, PERCENTAGE)); queueCapacities.setCapacity(capacity); } public void setCapacity(String nodeLabel, float capacity) { + configuredCapacityVectors.put(nodeLabel, QueueCapacityVector.of(capacity * 100, PERCENTAGE)); queueCapacities.setCapacity(nodeLabel, capacity); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractQueueCapacityCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractQueueCapacityCalculator.java new file mode 100644 index 00000000000..8b48da88ff8 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractQueueCapacityCalculator.java @@ -0,0 +1,109 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
    + * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; +import java.util.Set; + +/** + * A strategy class to encapsulate queue capacity setup and resource calculation + * logic. + */ +public abstract class AbstractQueueCapacityCalculator { + + /** + * Sets the metrics and statistics after effective resource values calculation. + * + * @param queue the queue on which the calculations are based + * @param resourceCalculationDriver driver that contains the intermediate calculation results for + * a queue branch + * @param label node label + */ + public abstract void updateCapacitiesAfterCalculation( + ResourceCalculationDriver resourceCalculationDriver, CSQueue queue, String label); + + + /** + * Returns the capacity type the calculator could handle. + * + * @return capacity type + */ + public abstract ResourceUnitCapacityType getCapacityType(); + + /** + * Calculates the minimum effective resource. + * + * @param resourceCalculationDriver driver that contains the intermediate calculation results for + * a queue branch + * @param context the units evaluated in the current iteration phase + * @param label node label + * @return minimum effective resource + */ + public abstract double calculateMinimumResource(ResourceCalculationDriver resourceCalculationDriver, + CalculationContext context, + String label); + + /** + * Calculates the maximum effective resource. + * + * @param resourceCalculationDriver driver that contains the intermediate calculation results for + * a queue branch + * @param context the units evaluated in the current iteration phase + * @param label node label + * @return minimum effective resource + */ + public abstract double calculateMaximumResource(ResourceCalculationDriver resourceCalculationDriver, + CalculationContext context, + String label); + + /** + * Executes all logic that must be called prior to the effective resource value calculations. + * + * @param resourceCalculationDriver driver that contains the parent queue on which the + * prerequisite calculation should be made + */ + public abstract void calculateResourcePrerequisites( + ResourceCalculationDriver resourceCalculationDriver); + + /** + * Returns all resource names that are defined for the capacity type that is + * handled by the calculator. + * + * @param queue queue for which the capacity vector is defined + * @param label node label + * @return resource names + */ + protected Set getResourceNames(CSQueue queue, String label) { + return getResourceNames(queue, label, getCapacityType()); + } + + /** + * Returns all resource names that are defined for a capacity type. 
+ * + * @param queue queue for which the capacity vector is defined + * @param label node label + * @param capacityType capacity type for which the resource names are defined + * @return resource names + */ + protected Set getResourceNames(CSQueue queue, String label, + ResourceUnitCapacityType capacityType) { + return queue.getConfiguredCapacityVector(label) + .getResourceNamesByCapacityType(capacityType); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java index e2aeaab4180..91dab98ce76 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java @@ -121,7 +121,7 @@ public interface CSQueue extends SchedulerQueue { * cumulative capacity in the cluster */ public float getAbsoluteCapacity(); - + /** * Get the configured maximum-capacity of the queue. * @return the configured maximum-capacity of the queue @@ -169,7 +169,7 @@ public interface CSQueue extends SchedulerQueue { * @return max-parallel-applications */ public int getMaxParallelApps(); - + /** * Get child queues * @return child queues @@ -270,6 +270,9 @@ public interface CSQueue extends SchedulerQueue { public void reinitialize(CSQueue newlyParsedQueue, Resource clusterResource) throws IOException; + public void refreshAfterResourceCalculation( + Resource clusterResource, ResourceLimits resourceLimits); + /** * Update the cluster resource for queues as we add/remove nodes * @param clusterResource the current cluster resource @@ -388,6 +391,12 @@ public interface CSQueue extends SchedulerQueue { */ public ReentrantReadWriteLock.ReadLock getReadLock(); + /** + * Get writeLock associated with the Queue. + * @return writeLock of corresponding queue. + */ + ReentrantReadWriteLock.WriteLock getWriteLock(); + /** * Validate submitApplication api so that moveApplication do a pre-check. * @param applicationId Application ID @@ -433,13 +442,37 @@ public interface CSQueue extends SchedulerQueue { Resource getEffectiveCapacity(String label); /** - * Get configured capacity resource vector parsed from the capacity config + * Get configured capacity vector parsed from the capacity config * of the queue. * @param label node label (partition) * @return capacity resource vector */ QueueCapacityVector getConfiguredCapacityVector(String label); + /** + * Get configured maximum capacity vector parsed from the capacity config + * of the queue. + * @param label node label (partition) + * @return capacity resource vector + */ + QueueCapacityVector getConfiguredMaxCapacityVector(String label); + + /** + * Sets the configured minimum capacity vector to a specific value. + * @param label node label (partition) + * @param minCapacityVector capacity vector + */ + void setConfiguredMinCapacityVector(String label, QueueCapacityVector minCapacityVector); + + /** + * Sets the configured maximum capacity vector to a specific value. 
+ * @param label node label (partition) + * @param maxCapacityVector capacity vector + */ + void setConfiguredMaxCapacityVector(String label, QueueCapacityVector maxCapacityVector); + + Set getConfiguredNodeLabels(); + /** * Get effective capacity of queue. If min/max resource is configured, * preference will be given to absolute configuration over normal capacity. diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueInfoProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueInfoProvider.java new file mode 100644 index 00000000000..8daca2bc26b --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueInfoProvider.java @@ -0,0 +1,117 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.api.records.QueueConfigurations; +import org.apache.hadoop.yarn.api.records.QueueInfo; +import org.apache.hadoop.yarn.api.records.QueueStatistics; +import org.apache.hadoop.yarn.factories.RecordFactory; +import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas; + +import java.util.HashMap; +import java.util.Map; +import java.util.Set; + +public final class CSQueueInfoProvider { + + private static final RecordFactory RECORD_FACTORY = + RecordFactoryProvider.getRecordFactory(null); + + private CSQueueInfoProvider() { + } + + public static QueueInfo getQueueInfo(AbstractCSQueue csQueue) { + QueueInfo queueInfo = RECORD_FACTORY.newRecordInstance(QueueInfo.class); + queueInfo.setQueueName(csQueue.getQueuePathObject().getLeafName()); + queueInfo.setQueuePath(csQueue.getQueuePathObject().getFullPath()); + queueInfo.setAccessibleNodeLabels(csQueue.getAccessibleNodeLabels()); + queueInfo.setCapacity(csQueue.getCapacity()); + queueInfo.setMaximumCapacity(csQueue.getMaximumCapacity()); + queueInfo.setQueueState(csQueue.getState()); + queueInfo.setDefaultNodeLabelExpression(csQueue.getDefaultNodeLabelExpression()); + queueInfo.setCurrentCapacity(csQueue.getUsedCapacity()); + queueInfo.setQueueStatistics(getQueueStatistics(csQueue)); + queueInfo.setPreemptionDisabled(csQueue.getPreemptionDisabled()); + queueInfo.setIntraQueuePreemptionDisabled( + csQueue.getIntraQueuePreemptionDisabled()); + queueInfo.setQueueConfigurations(getQueueConfigurations(csQueue)); + queueInfo.setWeight(csQueue.getQueueCapacities().getWeight()); + queueInfo.setMaxParallelApps(csQueue.getMaxParallelApps()); + return queueInfo; + } + + private static QueueStatistics getQueueStatistics(AbstractCSQueue csQueue) { + QueueStatistics stats = RECORD_FACTORY.newRecordInstance( + QueueStatistics.class); + CSQueueMetrics queueMetrics = csQueue.getMetrics(); + stats.setNumAppsSubmitted(queueMetrics.getAppsSubmitted()); + stats.setNumAppsRunning(queueMetrics.getAppsRunning()); + stats.setNumAppsPending(queueMetrics.getAppsPending()); + stats.setNumAppsCompleted(queueMetrics.getAppsCompleted()); + stats.setNumAppsKilled(queueMetrics.getAppsKilled()); + stats.setNumAppsFailed(queueMetrics.getAppsFailed()); + stats.setNumActiveUsers(queueMetrics.getActiveUsers()); + stats.setAvailableMemoryMB(queueMetrics.getAvailableMB()); + stats.setAllocatedMemoryMB(queueMetrics.getAllocatedMB()); + stats.setPendingMemoryMB(queueMetrics.getPendingMB()); + stats.setReservedMemoryMB(queueMetrics.getReservedMB()); + stats.setAvailableVCores(queueMetrics.getAvailableVirtualCores()); + stats.setAllocatedVCores(queueMetrics.getAllocatedVirtualCores()); + stats.setPendingVCores(queueMetrics.getPendingVirtualCores()); + stats.setReservedVCores(queueMetrics.getReservedVirtualCores()); + stats.setPendingContainers(queueMetrics.getPendingContainers()); + stats.setAllocatedContainers(queueMetrics.getAllocatedContainers()); + stats.setReservedContainers(queueMetrics.getReservedContainers()); + return stats; + } + + private static Map getQueueConfigurations(AbstractCSQueue csQueue) { + Map queueConfigurations = new HashMap<>(); + Set nodeLabels = csQueue.getNodeLabelsForQueue(); + QueueResourceQuotas queueResourceQuotas = csQueue.getQueueResourceQuotas(); + for (String nodeLabel : nodeLabels) { + QueueConfigurations queueConfiguration = + 
RECORD_FACTORY.newRecordInstance(QueueConfigurations.class); + QueueCapacities queueCapacities = csQueue.getQueueCapacities(); + float capacity = queueCapacities.getCapacity(nodeLabel); + float absoluteCapacity = queueCapacities.getAbsoluteCapacity(nodeLabel); + float maxCapacity = queueCapacities.getMaximumCapacity(nodeLabel); + float absMaxCapacity = + queueCapacities.getAbsoluteMaximumCapacity(nodeLabel); + float maxAMPercentage = + queueCapacities.getMaxAMResourcePercentage(nodeLabel); + queueConfiguration.setCapacity(capacity); + queueConfiguration.setAbsoluteCapacity(absoluteCapacity); + queueConfiguration.setMaxCapacity(maxCapacity); + queueConfiguration.setAbsoluteMaxCapacity(absMaxCapacity); + queueConfiguration.setMaxAMPercentage(maxAMPercentage); + queueConfiguration.setConfiguredMinCapacity( + queueResourceQuotas.getConfiguredMinResource(nodeLabel)); + queueConfiguration.setConfiguredMaxCapacity( + queueResourceQuotas.getConfiguredMaxResource(nodeLabel)); + queueConfiguration.setEffectiveMinCapacity( + queueResourceQuotas.getEffectiveMinResource(nodeLabel)); + queueConfiguration.setEffectiveMaxCapacity( + queueResourceQuotas.getEffectiveMaxResource(nodeLabel)); + queueConfigurations.put(nodeLabel, queueConfiguration); + } + return queueConfigurations; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUsageTracker.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUsageTracker.java index 0f18e944e9a..dd6b9b17ac0 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUsageTracker.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUsageTracker.java @@ -75,4 +75,5 @@ public class CSQueueUsageTracker { public QueueResourceQuotas getQueueResourceQuotas() { return queueResourceQuotas; } + } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CalculationContext.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CalculationContext.java new file mode 100644 index 00000000000..7ec85e19b1f --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CalculationContext.java @@ -0,0 +1,72 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueCapacityVectorEntry; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; + +/** + * A storage class that wraps arguments used in a resource calculation iteration. + */ +public class CalculationContext { + private final String resourceName; + private final ResourceUnitCapacityType capacityType; + private final CSQueue queue; + + public CalculationContext(String resourceName, ResourceUnitCapacityType capacityType, + CSQueue queue) { + this.resourceName = resourceName; + this.capacityType = capacityType; + this.queue = queue; + } + + public String getResourceName() { + return resourceName; + } + + public ResourceUnitCapacityType getCapacityType() { + return capacityType; + } + + public CSQueue getQueue() { + return queue; + } + + /** + * A shorthand to return the minimum capacity vector entry for the currently evaluated child and + * resource name. + * + * @param label node label + * @return capacity vector entry + */ + public QueueCapacityVectorEntry getCurrentMinimumCapacityEntry(String label) { + return queue.getConfiguredCapacityVector(label).getResource(resourceName); + } + + /** + * A shorthand to return the maximum capacity vector entry for the currently evaluated child and + * resource name. 
+ * + * @param label node label + * @return capacity vector entry + */ + public QueueCapacityVectorEntry getCurrentMaximumCapacityEntry(String label) { + return queue.getConfiguredMaxCapacityVector(label).getResource(resourceName); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java index 51616da14b6..757120e1621 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java @@ -423,6 +423,8 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur private static final QueueCapacityConfigParser queueCapacityConfigParser = new QueueCapacityConfigParser(); + private static final String LEGACY_QUEUE_MODE_ENABLED = PREFIX + "legacy-queue-mode.enabled"; + public static final boolean DEFAULT_LEGACY_QUEUE_MODE = true; private ConfigurationProperties configurationProperties; @@ -572,8 +574,10 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur String configuredCapacity = get(getQueuePrefix(queue.getFullPath()) + CAPACITY); boolean absoluteResourceConfigured = (configuredCapacity != null) && RESOURCE_PATTERN.matcher(configuredCapacity).find(); + boolean isCapacityVectorFormat = queueCapacityConfigParser + .isCapacityVectorFormat(configuredCapacity); if (absoluteResourceConfigured || configuredWeightAsCapacity( - configuredCapacity)) { + configuredCapacity) || isCapacityVectorFormat) { // Return capacity in percentage as 0 for non-root queues and 100 for // root.From AbstractCSQueue, absolute resource will be parsed and // updated. 
Once nodes are added/removed in cluster, capacity in @@ -623,7 +627,8 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur public float getNonLabeledQueueMaximumCapacity(QueuePath queue) { String configuredCapacity = get(getQueuePrefix(queue.getFullPath()) + MAXIMUM_CAPACITY); boolean matcher = (configuredCapacity != null) - && RESOURCE_PATTERN.matcher(configuredCapacity).find(); + && RESOURCE_PATTERN.matcher(configuredCapacity).find() + || queueCapacityConfigParser.isCapacityVectorFormat(configuredCapacity); if (matcher) { // Return capacity in percentage as 0 for non-root queues and 100 for // root.From AbstractCSQueue, absolute resource will be parsed and @@ -819,6 +824,16 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur return Collections.unmodifiableSet(set); } + public void setCapacityVector(String queuePath, String label, String capacityVector) { + String capacityPropertyName = getNodeLabelPrefix(queuePath, label) + CAPACITY; + set(capacityPropertyName, capacityVector); + } + + public void setMaximumCapacityVector(String queuePath, String label, String capacityVector) { + String capacityPropertyName = getNodeLabelPrefix(queuePath, label) + MAXIMUM_CAPACITY; + set(capacityPropertyName, capacityVector); + } + private boolean configuredWeightAsCapacity(String configureValue) { if (configureValue == null) { return false; @@ -843,7 +858,7 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur (configuredCapacity != null) && RESOURCE_PATTERN.matcher( configuredCapacity).find(); if (absoluteResourceConfigured || configuredWeightAsCapacity( - configuredCapacity)) { + configuredCapacity) || queueCapacityConfigParser.isCapacityVectorFormat(configuredCapacity)) { // Return capacity in percentage as 0 for non-root queues and 100 for // root.From AbstractCSQueue, absolute resource, and weight will be parsed // and updated separately. 
Once nodes are added/removed in cluster, @@ -2701,7 +2716,28 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur String queuePath, Set labels) { Map queueResourceVectors = new HashMap<>(); for (String label : labels) { - queueResourceVectors.put(label, queueCapacityConfigParser.parse(this, queuePath, label)); + String propertyName = CapacitySchedulerConfiguration.getNodeLabelPrefix( + queuePath, label) + CapacitySchedulerConfiguration.CAPACITY; + String capacityString = get(propertyName); + queueResourceVectors.put(label, queueCapacityConfigParser.parse(capacityString, queuePath)); + } + + return queueResourceVectors; + } + + public Map parseConfiguredMaximumCapacityVector( + String queuePath, Set labels, QueueCapacityVector defaultVector) { + Map queueResourceVectors = new HashMap<>(); + for (String label : labels) { + String propertyName = CapacitySchedulerConfiguration.getNodeLabelPrefix( + queuePath, label) + CapacitySchedulerConfiguration.MAXIMUM_CAPACITY; + String capacityString = get(propertyName); + QueueCapacityVector capacityVector = queueCapacityConfigParser.parse(capacityString, + queuePath); + if (capacityVector.isEmpty()) { + capacityVector = defaultVector; + } + queueResourceVectors.put(label, capacityVector); } return queueResourceVectors; @@ -2806,6 +2842,11 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur } String units = getUnits(splits[1]); + + if (!UnitsConversionUtil.KNOWN_UNITS.contains(units)) { + return; + } + Long resourceValue = Long .valueOf(splits[1].substring(0, splits[1].length() - units.length())); @@ -2888,6 +2929,14 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur return normalizePolicyName(policyClassName.trim()); } + public boolean isLegacyQueueMode() { + return getBoolean(LEGACY_QUEUE_MODE_ENABLED, DEFAULT_LEGACY_QUEUE_MODE); + } + + public void setLegacyQueueModeEnabled(boolean value) { + setBoolean(LEGACY_QUEUE_MODE_ENABLED, value); + } + public boolean getMultiNodePlacementEnabled() { return getBoolean(MULTI_NODE_PLACEMENT_ENABLED, DEFAULT_MULTI_NODE_PLACEMENT_ENABLED); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueCapacityHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueCapacityHandler.java new file mode 100644 index 00000000000..f197ccf6be2 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueCapacityHandler.java @@ -0,0 +1,221 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.commons.collections.CollectionUtils; +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; +import org.apache.hadoop.yarn.util.resource.ResourceCalculator; +import org.apache.hadoop.yarn.util.resource.ResourceUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.Collection; +import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedHashSet; +import java.util.Map; +import java.util.Set; + +import static org.apache.hadoop.yarn.api.records.ResourceInformation.MEMORY_URI; +import static org.apache.hadoop.yarn.api.records.ResourceInformation.VCORES_URI; +import static org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.NO_LABEL; + +/** + * Controls how capacity and resource values are set and calculated for a queue. + * Effective minimum and maximum resource values are set for each label and resource separately. + */ +public class CapacitySchedulerQueueCapacityHandler { + + private static final Logger LOG = + LoggerFactory.getLogger(CapacitySchedulerQueueCapacityHandler.class); + + private final Map + calculators; + private final AbstractQueueCapacityCalculator rootCalculator = + new RootQueueCapacityCalculator(); + private final RMNodeLabelsManager labelsManager; + private final Collection definedResources = new LinkedHashSet<>(); + + public CapacitySchedulerQueueCapacityHandler(RMNodeLabelsManager labelsManager) { + this.calculators = new HashMap<>(); + this.labelsManager = labelsManager; + + this.calculators.put(ResourceUnitCapacityType.ABSOLUTE, + new AbsoluteResourceCapacityCalculator()); + this.calculators.put(ResourceUnitCapacityType.PERCENTAGE, + new PercentageQueueCapacityCalculator()); + this.calculators.put(ResourceUnitCapacityType.WEIGHT, + new WeightQueueCapacityCalculator()); + + loadResourceNames(); + } + + /** + * Updates the resource and metrics values of all children under a specific queue. + * These values are calculated at runtime. + * + * @param clusterResource resource of the cluster + * @param queue parent queue whose children will be updated + * @return update context that contains information about the update phase + */ + public QueueCapacityUpdateContext updateChildren(Resource clusterResource, CSQueue queue) { + ResourceLimits resourceLimits = new ResourceLimits(clusterResource); + QueueCapacityUpdateContext updateContext = + new QueueCapacityUpdateContext(clusterResource, labelsManager); + + update(queue, updateContext, resourceLimits); + return updateContext; + } + + /** + * Updates the resource and metrics value of the root queue. Root queue always has percentage + * capacity type and is assigned the cluster resource as its minimum and maximum effective + * resource. 
+ * @param rootQueue root queue + * @param clusterResource cluster resource + */ + public void updateRoot(CSQueue rootQueue, Resource clusterResource) { + ResourceLimits resourceLimits = new ResourceLimits(clusterResource); + QueueCapacityUpdateContext updateContext = + new QueueCapacityUpdateContext(clusterResource, labelsManager); + + RootCalculationDriver rootCalculationDriver = new RootCalculationDriver(rootQueue, + updateContext, + rootCalculator, definedResources); + rootCalculationDriver.calculateResources(); + rootQueue.refreshAfterResourceCalculation(updateContext.getUpdatedClusterResource(), + resourceLimits); + } + + private void update( + CSQueue queue, QueueCapacityUpdateContext updateContext, ResourceLimits resourceLimits) { + if (queue == null || CollectionUtils.isEmpty(queue.getChildQueues())) { + return; + } + + ResourceCalculationDriver resourceCalculationDriver = new ResourceCalculationDriver( + queue, updateContext, calculators, definedResources); + resourceCalculationDriver.calculateResources(); + + updateChildrenAfterCalculation(resourceCalculationDriver, resourceLimits); + } + + private void updateChildrenAfterCalculation( + ResourceCalculationDriver resourceCalculationDriver, ResourceLimits resourceLimits) { + ParentQueue parentQueue = (ParentQueue) resourceCalculationDriver.getQueue(); + for (CSQueue childQueue : parentQueue.getChildQueues()) { + updateQueueCapacities(resourceCalculationDriver, childQueue); + + ResourceLimits childLimit = parentQueue.getResourceLimitsOfChild(childQueue, + resourceCalculationDriver.getUpdateContext().getUpdatedClusterResource(), + resourceLimits, NO_LABEL, false); + childQueue.refreshAfterResourceCalculation(resourceCalculationDriver.getUpdateContext() + .getUpdatedClusterResource(), childLimit); + + update(childQueue, resourceCalculationDriver.getUpdateContext(), childLimit); + } + } + + /** + * Updates the capacity values of the currently evaluated child. + * @param queue queue on which the capacities are set + */ + private void updateQueueCapacities( + ResourceCalculationDriver resourceCalculationDriver, CSQueue queue) { + queue.getWriteLock().lock(); + try { + for (String label : queue.getConfiguredNodeLabels()) { + QueueCapacityVector capacityVector = queue.getConfiguredCapacityVector(label); + if (capacityVector.isMixedCapacityVector()) { + // Post update capacities based on the calculated effective resource values + setQueueCapacities(resourceCalculationDriver.getUpdateContext().getUpdatedClusterResource( + label), queue, label); + } else { + // Update capacities according to the legacy logic + for (ResourceUnitCapacityType capacityType : + queue.getConfiguredCapacityVector(label).getDefinedCapacityTypes()) { + AbstractQueueCapacityCalculator calculator = calculators.get(capacityType); + calculator.updateCapacitiesAfterCalculation(resourceCalculationDriver, queue, label); + } + } + } + } finally { + queue.getWriteLock().unlock(); + } + } + + /** + * Sets capacity and absolute capacity values of a queue based on minimum and + * maximum effective resources. 
+ * + * @param clusterResource overall cluster resource + * @param queue child queue for which the capacities are set + * @param label node label + */ + public static void setQueueCapacities(Resource clusterResource, CSQueue queue, String label) { + if (!(queue instanceof AbstractCSQueue)) { + return; + } + + AbstractCSQueue csQueue = (AbstractCSQueue) queue; + ResourceCalculator resourceCalculator = csQueue.resourceCalculator; + + CSQueue parent = queue.getParent(); + if (parent == null) { + return; + } + // Update capacity with a double calculated from the parent's minResources + // and the recently changed queue minResources. + // capacity = effectiveMinResource / {parent's effectiveMinResource} + float result = resourceCalculator.divide(clusterResource, + queue.getQueueResourceQuotas().getEffectiveMinResource(label), + parent.getQueueResourceQuotas().getEffectiveMinResource(label)); + queue.getQueueCapacities().setCapacity(label, + Float.isInfinite(result) ? 0 : result); + + // Update maxCapacity with a double calculated from the parent's maxResources + // and the recently changed queue maxResources. + // maxCapacity = effectiveMaxResource / parent's effectiveMaxResource + result = resourceCalculator.divide(clusterResource, + queue.getQueueResourceQuotas().getEffectiveMaxResource(label), + parent.getQueueResourceQuotas().getEffectiveMaxResource(label)); + queue.getQueueCapacities().setMaximumCapacity(label, + Float.isInfinite(result) ? 0 : result); + + csQueue.updateAbsoluteCapacities(); + } + + private void loadResourceNames() { + Set resources = new HashSet<>(ResourceUtils.getResourceTypes().keySet()); + if (resources.contains(MEMORY_URI)) { + resources.remove(MEMORY_URI); + definedResources.add(MEMORY_URI); + } + + if (resources.contains(VCORES_URI)) { + resources.remove(VCORES_URI); + definedResources.add(VCORES_URI); + } + + definedResources.addAll(resources); + } +} \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueManager.java index ba6849cb780..d8108c0f007 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueManager.java @@ -81,6 +81,7 @@ public class CapacitySchedulerQueueManager implements SchedulerQueueManager< private CSQueue root; private final RMNodeLabelsManager labelManager; private AppPriorityACLsManager appPriorityACLManager; + private CapacitySchedulerQueueCapacityHandler queueCapacityHandler; private QueueStateManager queueStateManager; @@ -100,6 +101,7 @@ public class CapacitySchedulerQueueManager implements SchedulerQueueManager< this.queueStateManager = new QueueStateManager<>(); this.appPriorityACLManager = appPriorityACLManager; this.configuredNodeLabels = new ConfiguredNodeLabels(); + this.queueCapacityHandler = new CapacitySchedulerQueueCapacityHandler(labelManager); } @Override @@ -413,6 +415,10 @@ public class 
CapacitySchedulerQueueManager implements SchedulerQueueManager< return this.queueStateManager; } + public CapacitySchedulerQueueCapacityHandler getQueueCapacityHandler() { + return queueCapacityHandler; + } + /** * Removes an {@code AutoCreatedLeafQueue} from the manager collection and * from its parent children collection. diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/DefaultQueueResourceRoundingStrategy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/DefaultQueueResourceRoundingStrategy.java new file mode 100644 index 00000000000..3a0254cdc53 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/DefaultQueueResourceRoundingStrategy.java @@ -0,0 +1,48 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueCapacityVectorEntry; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; + +/** + * The default rounding strategy for resource calculation. Uses floor for all types except WEIGHT, + * which is always the last type to consider, therefore it is safe to round up. + */ +public class DefaultQueueResourceRoundingStrategy implements QueueResourceRoundingStrategy { + private final ResourceUnitCapacityType lastCapacityType; + + public DefaultQueueResourceRoundingStrategy( + ResourceUnitCapacityType[] capacityTypePrecedence) { + if (capacityTypePrecedence.length == 0) { + throw new IllegalArgumentException("Capacity type precedence collection is empty"); + } + + lastCapacityType = capacityTypePrecedence[capacityTypePrecedence.length - 1]; + } + + @Override + public double getRoundedResource(double resourceValue, QueueCapacityVectorEntry capacityVectorEntry) { + if (capacityVectorEntry.getVectorResourceType().equals(lastCapacityType)) { + return Math.round(resourceValue); + } else { + return Math.floor(resourceValue); + } + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java index 0949d512a79..a816b91034c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java @@ -302,94 +302,97 @@ public class ParentQueue extends AbstractCSQueue { void setChildQueues(Collection childQueues) throws IOException { writeLock.lock(); try { - QueueCapacityType childrenCapacityType = - getCapacityConfigurationTypeForQueues(childQueues); - QueueCapacityType parentCapacityType = - getCapacityConfigurationTypeForQueues(ImmutableList.of(this)); + boolean isLegacyQueueMode = queueContext.getConfiguration().isLegacyQueueMode(); + if (isLegacyQueueMode) { + QueueCapacityType childrenCapacityType = + getCapacityConfigurationTypeForQueues(childQueues); + QueueCapacityType parentCapacityType = + getCapacityConfigurationTypeForQueues(ImmutableList.of(this)); - if (childrenCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE - || parentCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE) { - // We don't allow any mixed absolute + {weight, percentage} between - // children and parent - if (childrenCapacityType != parentCapacityType && !this.getQueuePath() - .equals(CapacitySchedulerConfiguration.ROOT)) { - throw new IOException("Parent=" + this.getQueuePath() - + ": When absolute minResource is used, we must make sure both 
" - + "parent and child all use absolute minResource"); - } - - // Ensure that for each parent queue: parent.min-resource >= - // Σ(child.min-resource). - for (String nodeLabel : queueCapacities.getExistingNodeLabels()) { - Resource minRes = Resources.createResource(0, 0); - for (CSQueue queue : childQueues) { - // Accumulate all min/max resource configured for all child queues. - Resources.addTo(minRes, queue.getQueueResourceQuotas() - .getConfiguredMinResource(nodeLabel)); - } - Resource resourceByLabel = labelManager.getResourceByLabel(nodeLabel, - queueContext.getClusterResource()); - Resource parentMinResource = - usageTracker.getQueueResourceQuotas().getConfiguredMinResource(nodeLabel); - if (!parentMinResource.equals(Resources.none()) && Resources.lessThan( - resourceCalculator, resourceByLabel, parentMinResource, minRes)) { - throw new IOException( - "Parent Queues" + " capacity: " + parentMinResource - + " is less than" + " to its children:" + minRes - + " for queue:" + getQueueName()); - } - } - } - - // When child uses percent - if (childrenCapacityType == QueueCapacityType.PERCENT) { - float childrenPctSum = 0; - // check label capacities - for (String nodeLabel : queueCapacities.getExistingNodeLabels()) { - // check children's labels - childrenPctSum = 0; - for (CSQueue queue : childQueues) { - childrenPctSum += queue.getQueueCapacities().getCapacity(nodeLabel); + if (childrenCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE + || parentCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE) { + // We don't allow any mixed absolute + {weight, percentage} between + // children and parent + if (childrenCapacityType != parentCapacityType && !this.getQueuePath() + .equals(CapacitySchedulerConfiguration.ROOT)) { + throw new IOException("Parent=" + this.getQueuePath() + + ": When absolute minResource is used, we must make sure both " + + "parent and child all use absolute minResource"); } - if (Math.abs(1 - childrenPctSum) > PRECISION) { - // When children's percent sum != 100% - if (Math.abs(childrenPctSum) > PRECISION) { - // It is wrong when percent sum != {0, 1} + // Ensure that for each parent queue: parent.min-resource >= + // Σ(child.min-resource). + for (String nodeLabel : queueCapacities.getExistingNodeLabels()) { + Resource minRes = Resources.createResource(0, 0); + for (CSQueue queue : childQueues) { + // Accumulate all min/max resource configured for all child queues. + Resources.addTo(minRes, queue.getQueueResourceQuotas() + .getConfiguredMinResource(nodeLabel)); + } + Resource resourceByLabel = labelManager.getResourceByLabel(nodeLabel, + queueContext.getClusterResource()); + Resource parentMinResource = + usageTracker.getQueueResourceQuotas().getConfiguredMinResource(nodeLabel); + if (!parentMinResource.equals(Resources.none()) && Resources.lessThan( + resourceCalculator, resourceByLabel, parentMinResource, minRes)) { throw new IOException( - "Illegal capacity sum of " + childrenPctSum - + " for children of queue " + getQueueName() + " for label=" - + nodeLabel + ". 
It should be either 0 or 1.0"); - } else{ - // We also allow children's percent sum = 0 under the following - // conditions - // - Parent uses weight mode - // - Parent uses percent mode, and parent has - // (capacity=0 OR allowZero) - if (parentCapacityType == QueueCapacityType.PERCENT) { - if ((Math.abs(queueCapacities.getCapacity(nodeLabel)) - > PRECISION) && (!allowZeroCapacitySum)) { - throw new IOException( - "Illegal capacity sum of " + childrenPctSum - + " for children of queue " + getQueueName() - + " for label=" + nodeLabel - + ". It is set to 0, but parent percent != 0, and " - + "doesn't allow children capacity to set to 0"); + "Parent Queues" + " capacity: " + parentMinResource + + " is less than" + " to its children:" + minRes + + " for queue:" + getQueueName()); + } + } + } + + // When child uses percent + if (childrenCapacityType == QueueCapacityType.PERCENT) { + float childrenPctSum = 0; + // check label capacities + for (String nodeLabel : queueCapacities.getExistingNodeLabels()) { + // check children's labels + childrenPctSum = 0; + for (CSQueue queue : childQueues) { + childrenPctSum += queue.getQueueCapacities().getCapacity(nodeLabel); + } + + if (Math.abs(1 - childrenPctSum) > PRECISION) { + // When children's percent sum != 100% + if (Math.abs(childrenPctSum) > PRECISION) { + // It is wrong when percent sum != {0, 1} + throw new IOException( + "Illegal" + " capacity sum of " + childrenPctSum + + " for children of queue " + getQueueName() + " for label=" + + nodeLabel + ". It should be either 0 or 1.0"); + } else { + // We also allow children's percent sum = 0 under the following + // conditions + // - Parent uses weight mode + // - Parent uses percent mode, and parent has + // (capacity=0 OR allowZero) + if (parentCapacityType == QueueCapacityType.PERCENT) { + if ((Math.abs(queueCapacities.getCapacity(nodeLabel)) + > PRECISION) && (!allowZeroCapacitySum)) { + throw new IOException( + "Illegal" + " capacity sum of " + childrenPctSum + + " for children of queue " + getQueueName() + + " for label=" + nodeLabel + + ". It is set to 0, but parent percent != 0, and " + + "doesn't allow children capacity to set to 0"); + } } } - } - } else { - // Even if child pct sum == 1.0, we will make sure parent has - // positive percent. - if (parentCapacityType == QueueCapacityType.PERCENT && Math.abs( - queueCapacities.getCapacity(nodeLabel)) <= 0f - && !allowZeroCapacitySum) { - throw new IOException( - "Illegal capacity sum of " + childrenPctSum - + " for children of queue " + getQueueName() + " for label=" - + nodeLabel + ". queue=" + getQueueName() - + " has zero capacity, but child" - + "queues have positive capacities"); + } else { + // Even if child pct sum == 1.0, we will make sure parent has + // positive percent. + if (parentCapacityType == QueueCapacityType.PERCENT && Math.abs( + queueCapacities.getCapacity(nodeLabel)) <= 0f + && !allowZeroCapacitySum) { + throw new IOException( + "Illegal" + " capacity sum of " + childrenPctSum + + " for children of queue " + getQueueName() + " for label=" + + nodeLabel + ". 
queue=" + getQueueName() + + " has zero capacity, but child" + + "queues have positive capacities"); + } } } } @@ -1057,7 +1060,7 @@ public class ParentQueue extends AbstractCSQueue { return accept; } - private ResourceLimits getResourceLimitsOfChild(CSQueue child, + public ResourceLimits getResourceLimitsOfChild(CSQueue child, Resource clusterResource, ResourceLimits parentLimits, String nodePartition, boolean netLimit) { // Set resource-limit of a given child, child.limit = @@ -1208,6 +1211,17 @@ public class ParentQueue extends AbstractCSQueue { } } + @Override + public void refreshAfterResourceCalculation(Resource clusterResource, + ResourceLimits resourceLimits) { + CSQueueUtils.updateQueueStatistics(resourceCalculator, clusterResource, + this, labelManager, null); + // Update configured capacity/max-capacity for default partition only + CSQueueUtils.updateConfiguredCapacityMetrics(resourceCalculator, + labelManager.getResourceByLabel(null, clusterResource), + RMNodeLabelsManager.NO_LABEL, this); + } + @Override public void updateClusterResource(Resource clusterResource, ResourceLimits resourceLimits) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PercentageQueueCapacityCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PercentageQueueCapacityCalculator.java new file mode 100644 index 00000000000..6a73459aaf4 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PercentageQueueCapacityCalculator.java @@ -0,0 +1,72 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; + +public class PercentageQueueCapacityCalculator extends AbstractQueueCapacityCalculator { + + @Override + public double calculateMinimumResource( + ResourceCalculationDriver resourceCalculationDriver, CalculationContext context, + String label) { + String resourceName = context.getResourceName(); + + double parentAbsoluteCapacity = resourceCalculationDriver.getParentAbsoluteMinCapacity(label, + resourceName); + double remainingPerEffectiveResourceRatio = + resourceCalculationDriver.getRemainingRatioOfResource(label, resourceName); + double absoluteCapacity = parentAbsoluteCapacity * remainingPerEffectiveResourceRatio + * context.getCurrentMinimumCapacityEntry(label).getResourceValue() / 100; + + return resourceCalculationDriver.getUpdateContext().getUpdatedClusterResource(label) + .getResourceValue(resourceName) * absoluteCapacity; + } + + @Override + public double calculateMaximumResource( + ResourceCalculationDriver resourceCalculationDriver, CalculationContext context, + String label) { + String resourceName = context.getResourceName(); + + double parentAbsoluteMaxCapacity = + resourceCalculationDriver.getParentAbsoluteMaxCapacity(label, resourceName); + double absoluteMaxCapacity = parentAbsoluteMaxCapacity + * context.getCurrentMaximumCapacityEntry(label).getResourceValue() / 100; + + return resourceCalculationDriver.getUpdateContext().getUpdatedClusterResource(label) + .getResourceValue(resourceName) * absoluteMaxCapacity; + } + + @Override + public void calculateResourcePrerequisites(ResourceCalculationDriver resourceCalculationDriver) { + + } + + @Override + public void updateCapacitiesAfterCalculation(ResourceCalculationDriver resourceCalculationDriver, + CSQueue queue, String label) { + ((AbstractCSQueue) queue).updateAbsoluteCapacities(); + } + + @Override + public ResourceUnitCapacityType getCapacityType() { + return ResourceUnitCapacityType.PERCENTAGE; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueCapacityUpdateContext.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueCapacityUpdateContext.java new file mode 100644 index 00000000000..4eb270be515 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueCapacityUpdateContext.java @@ -0,0 +1,76 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager; + +import java.util.ArrayList; +import java.util.List; + +/** + * A storage that encapsulates intermediate calculation values throughout a + * full queue capacity update phase. + */ +public class QueueCapacityUpdateContext { + private final Resource updatedClusterResource; + private final RMNodeLabelsManager labelsManager; + + private final List warnings = new ArrayList(); + + public QueueCapacityUpdateContext(Resource updatedClusterResource, + RMNodeLabelsManager labelsManager) { + this.updatedClusterResource = updatedClusterResource; + this.labelsManager = labelsManager; + } + + /** + * Returns the overall cluster resource available for the update phase. + * + * @param label node label + * @return cluster resource + */ + public Resource getUpdatedClusterResource(String label) { + return labelsManager.getResourceByLabel(label, updatedClusterResource); + } + + /** + * Returns the overall cluster resource available for the update phase of empty label. + * @return cluster resource + */ + public Resource getUpdatedClusterResource() { + return updatedClusterResource; + } + + /** + * Adds an update warning to the context. + * @param warning warning during update phase + */ + public void addUpdateWarning(QueueUpdateWarning warning) { + warnings.add(warning); + } + + /** + * Returns all update warnings occurred in this update phase. 
+ * @return update warnings + */ + public List getUpdateWarnings() { + return warnings; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueCapacityVector.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueCapacityVector.java index 9f6e0e264a3..bcce996b279 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueCapacityVector.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueCapacityVector.java @@ -39,9 +39,9 @@ public class QueueCapacityVector implements private static final String VALUE_DELIMITER = "="; private final ResourceVector resource; - private final Map capacityTypes + private final Map capacityTypes = new HashMap<>(); - private final Map> capacityTypePerResource + private final Map> capacityTypePerResource = new HashMap<>(); public QueueCapacityVector() { @@ -61,9 +61,9 @@ public class QueueCapacityVector implements public static QueueCapacityVector newInstance() { QueueCapacityVector newCapacityVector = new QueueCapacityVector(ResourceVector.newInstance()); - for (Map.Entry resourceEntry : newCapacityVector.resource) { + for (Map.Entry resourceEntry : newCapacityVector.resource) { newCapacityVector.storeResourceType(resourceEntry.getKey(), - QueueCapacityType.ABSOLUTE); + ResourceUnitCapacityType.ABSOLUTE); } return newCapacityVector; @@ -78,10 +78,10 @@ public class QueueCapacityVector implements * @return uniform capacity vector */ public static QueueCapacityVector of( - float value, QueueCapacityType capacityType) { + double value, ResourceUnitCapacityType capacityType) { QueueCapacityVector newCapacityVector = new QueueCapacityVector(ResourceVector.of(value)); - for (Map.Entry resourceEntry : newCapacityVector.resource) { + for (Map.Entry resourceEntry : newCapacityVector.resource) { newCapacityVector.storeResourceType(resourceEntry.getKey(), capacityType); } @@ -109,8 +109,8 @@ public class QueueCapacityVector implements * @param value value of the resource * @param capacityType type of the resource */ - public void setResource(String resourceName, float value, - QueueCapacityType capacityType) { + public void setResource(String resourceName, double value, + ResourceUnitCapacityType capacityType) { // Necessary due to backward compatibility (memory = memory-mb) String convertedResourceName = resourceName; if (resourceName.equals("memory")) { @@ -125,10 +125,14 @@ public class QueueCapacityVector implements * * @return value of memory resource */ - public float getMemory() { + public double getMemory() { return resource.getValue(ResourceInformation.MEMORY_URI); } + public boolean isEmpty() { + return resource.isEmpty() && capacityTypePerResource.isEmpty() && capacityTypes.isEmpty(); + } + /** * Returns the name of all resources that are defined in the given capacity * type. 
@@ -137,13 +141,20 @@ public class QueueCapacityVector implements * @return all resource names for the given capacity type */ public Set getResourceNamesByCapacityType( - QueueCapacityType capacityType) { - return capacityTypePerResource.getOrDefault(capacityType, - Collections.emptySet()); + ResourceUnitCapacityType capacityType) { + return new HashSet<>(capacityTypePerResource.getOrDefault(capacityType, + Collections.emptySet())); } + /** + * Checks whether a resource unit is defined as a specific type. + * + * @param resourceName resource unit name + * @param capacityType capacity type + * @return true, if resource unit is defined as the given type + */ public boolean isResourceOfType( - String resourceName, QueueCapacityType capacityType) { + String resourceName, ResourceUnitCapacityType capacityType) { return capacityTypes.containsKey(resourceName) && capacityTypes.get(resourceName).equals(capacityType); } @@ -151,7 +162,7 @@ public class QueueCapacityVector implements @Override public Iterator iterator() { return new Iterator() { - private final Iterator> resources = + private final Iterator> resources = resource.iterator(); private int i = 0; @@ -162,7 +173,7 @@ public class QueueCapacityVector implements @Override public QueueCapacityVectorEntry next() { - Map.Entry resourceInformation = resources.next(); + Map.Entry resourceInformation = resources.next(); i++; return new QueueCapacityVectorEntry( capacityTypes.get(resourceInformation.getKey()), @@ -172,16 +183,29 @@ public class QueueCapacityVector implements } /** - * Returns a set of all capacity type defined for this vector. + * Returns a set of all capacity types defined for this vector. * * @return capacity types */ - public Set getDefinedCapacityTypes() { + public Set getDefinedCapacityTypes() { return capacityTypePerResource.keySet(); } + /** + * Checks whether the vector is a mixed capacity vector (more than one capacity type is used, + * therefore it is not uniform). + * @return true, if the vector is mixed + */ + public boolean isMixedCapacityVector() { + return getDefinedCapacityTypes().size() > 1; + } + + public Set getResourceNames() { + return resource.getResourceNames(); + } + private void storeResourceType( - String resourceName, QueueCapacityType resourceType) { + String resourceName, ResourceUnitCapacityType resourceType) { if (capacityTypes.get(resourceName) != null && !capacityTypes.get(resourceName).equals(resourceType)) { capacityTypePerResource.get(capacityTypes.get(resourceName)) @@ -199,7 +223,7 @@ public class QueueCapacityVector implements stringVector.append(START_PARENTHESES); int resourceCount = 0; - for (Map.Entry resourceEntry : resource) { + for (Map.Entry resourceEntry : resource) { resourceCount++; stringVector.append(resourceEntry.getKey()) .append(VALUE_DELIMITER) @@ -218,11 +242,11 @@ public class QueueCapacityVector implements /** * Represents a capacity type associated with its syntax postfix. 
*/ - public enum QueueCapacityType { + public enum ResourceUnitCapacityType { PERCENTAGE("%"), ABSOLUTE(""), WEIGHT("w"); private final String postfix; - QueueCapacityType(String postfix) { + ResourceUnitCapacityType(String postfix) { this.postfix = postfix; } @@ -232,22 +256,22 @@ public class QueueCapacityVector implements } public static class QueueCapacityVectorEntry { - private final QueueCapacityType vectorResourceType; - private final float resourceValue; + private final ResourceUnitCapacityType vectorResourceType; + private final double resourceValue; private final String resourceName; - public QueueCapacityVectorEntry(QueueCapacityType vectorResourceType, - String resourceName, float resourceValue) { + public QueueCapacityVectorEntry(ResourceUnitCapacityType vectorResourceType, + String resourceName, double resourceValue) { this.vectorResourceType = vectorResourceType; this.resourceValue = resourceValue; this.resourceName = resourceName; } - public QueueCapacityType getVectorResourceType() { + public ResourceUnitCapacityType getVectorResourceType() { return vectorResourceType; } - public float getResourceValue() { + public double getResourceValue() { return resourceValue; } diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationWrapper.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueResourceRoundingStrategy.java similarity index 51% rename from hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationWrapper.java rename to hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueResourceRoundingStrategy.java index 6f67a16715e..ef753316e84 100644 --- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/AuthenticationWrapper.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueResourceRoundingStrategy.java @@ -6,9 +6,9 @@ * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. @@ -16,32 +16,21 @@ * limitations under the License. */ -package org.apache.hadoop.fs.swift.auth; +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueCapacityVectorEntry; /** - * This class is used for correct hierarchy mapping of - * Keystone authentication model and java code - * THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON. - * DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS. + * Represents an approach on how to convert a calculated resource from floating point to a whole + * number. */ -public class AuthenticationWrapper { +public interface QueueResourceRoundingStrategy { /** - * authentication response field + * Returns a whole number converted from the calculated resource value. + * @param resourceValue calculated resource value + * @param capacityVectorEntry configured capacity entry + * @return rounded resource value */ - private AuthenticationResponse access; - - /** - * @return authentication response - */ - public AuthenticationResponse getAccess() { - return access; - } - - /** - * @param access sets authentication response - */ - public void setAccess(AuthenticationResponse access) { - this.access = access; - } + double getRoundedResource(double resourceValue, QueueCapacityVectorEntry capacityVectorEntry); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueUpdateWarning.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueUpdateWarning.java new file mode 100644 index 00000000000..43c345b1bc3 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueUpdateWarning.java @@ -0,0 +1,78 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +/** + * Represents a warning event that occurred during a queue capacity update phase. + */ +public class QueueUpdateWarning { + private final String queue; + private final QueueUpdateWarningType warningType; + private String info = ""; + + public QueueUpdateWarning(QueueUpdateWarningType queueUpdateWarningType, String queue) { + this.warningType = queueUpdateWarningType; + this.queue = queue; + } + + public enum QueueUpdateWarningType { + BRANCH_UNDERUTILIZED("Remaining resource found in branch under parent queue '%s'. %s"), + QUEUE_OVERUTILIZED("Queue '%s' is configured to use more resources than what is available " + + "under its parent. %s"), + QUEUE_ZERO_RESOURCE("Queue '%s' is assigned zero resource. %s"), + BRANCH_DOWNSCALED("Child queues with absolute configured capacity under parent queue '%s' are" + + " downscaled due to insufficient cluster resource. %s"), + QUEUE_EXCEEDS_MAX_RESOURCE("Queue '%s' exceeds its maximum available resources. %s"), + QUEUE_MAX_RESOURCE_EXCEEDS_PARENT("Maximum resources of queue '%s' are greater than its " + + "parent's. %s"); + + private final String template; + + QueueUpdateWarningType(String template) { + this.template = template; + } + + public QueueUpdateWarning ofQueue(String queue) { + return new QueueUpdateWarning(this, queue); + } + + public String getTemplate() { + return template; + } + } + + public QueueUpdateWarning withInfo(String info) { + this.info = info; + + return this; + } + + public String getQueue() { + return queue; + } + + public QueueUpdateWarningType getWarningType() { + return warningType; + } + + @Override + public String toString() { + return String.format(warningType.getTemplate(), queue, info); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ResourceCalculationDriver.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ResourceCalculationDriver.java new file mode 100644 index 00000000000..5993042c0e5 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ResourceCalculationDriver.java @@ -0,0 +1,336 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.commons.lang3.tuple.ImmutablePair; +import org.apache.commons.lang3.tuple.Pair; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueCapacityVectorEntry; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueUpdateWarning.QueueUpdateWarningType; + +import java.util.Collection; +import java.util.HashMap; +import java.util.Map; + +import static org.apache.hadoop.yarn.api.records.ResourceInformation.MEMORY_URI; + +/** + * Drives the main logic of resource calculation for all children under a queue. Acts as a + * bookkeeper of disposable update information that is used by all children under the common parent. + */ +public class ResourceCalculationDriver { + private static final ResourceUnitCapacityType[] CALCULATOR_PRECEDENCE = + new ResourceUnitCapacityType[] { + ResourceUnitCapacityType.ABSOLUTE, + ResourceUnitCapacityType.PERCENTAGE, + ResourceUnitCapacityType.WEIGHT}; + static final String MB_UNIT = "Mi"; + + protected final QueueResourceRoundingStrategy roundingStrategy = + new DefaultQueueResourceRoundingStrategy(CALCULATOR_PRECEDENCE); + protected final CSQueue queue; + protected final QueueCapacityUpdateContext updateContext; + protected final Map calculators; + protected final Collection definedResources; + + protected final Map overallRemainingResourcePerLabel = new HashMap<>(); + protected final Map batchRemainingResourcePerLabel = new HashMap<>(); + // Used by ABSOLUTE capacity types + protected final Map normalizedResourceRatioPerLabel = new HashMap<>(); + // Used by WEIGHT capacity types + protected final Map> sumWeightsPerLabel = new HashMap<>(); + protected Map usedResourceByCurrentCalculatorPerLabel = new HashMap<>(); + + public ResourceCalculationDriver( + CSQueue queue, QueueCapacityUpdateContext updateContext, + Map calculators, + Collection definedResources) { + this.queue = queue; + this.updateContext = updateContext; + this.calculators = calculators; + this.definedResources = definedResources; + } + + + /** + * Returns the parent that is driving the calculation. + * + * @return a common parent queue + */ + public CSQueue getQueue() { + return queue; + } + + /** + * Returns all the children defined under the driver parent queue. + * + * @return child queues + */ + public Collection getChildQueues() { + return queue.getChildQueues(); + } + + /** + * Returns the context that is used throughout the whole update phase. + * + * @return update context + */ + public QueueCapacityUpdateContext getUpdateContext() { + return updateContext; + } + + /** + * Increments the aggregated weight. 
+ * + * @param label node label + * @param resourceName resource unit name + * @param value weight value + */ + public void incrementWeight(String label, String resourceName, double value) { + sumWeightsPerLabel.putIfAbsent(label, new HashMap<>()); + sumWeightsPerLabel.get(label).put(resourceName, + sumWeightsPerLabel.get(label).getOrDefault(resourceName, 0d) + value); + } + + /** + * Returns the aggregated children weights. + * + * @param label node label + * @param resourceName resource unit name + * @return aggregated weights of children + */ + public double getSumWeightsByResource(String label, String resourceName) { + return sumWeightsPerLabel.get(label).get(resourceName); + } + + /** + * Returns the ratio of the summary of children absolute configured resources and the parent's + * effective minimum resource. + * + * @return normalized resource ratio for all labels + */ + public Map getNormalizedResourceRatios() { + return normalizedResourceRatioPerLabel; + } + + /** + * Returns the remaining resource ratio under the parent queue. The remaining resource is only + * decremented after a capacity type is fully evaluated. + * + * @param label node label + * @param resourceName name of resource unit + * @return resource ratio + */ + public double getRemainingRatioOfResource(String label, String resourceName) { + return batchRemainingResourcePerLabel.get(label).getValue(resourceName) + / queue.getEffectiveCapacity(label).getResourceValue(resourceName); + } + + /** + * Returns the ratio of the parent queue's effective minimum resource relative to the full cluster + * resource. + * + * @param label node label + * @param resourceName name of resource unit + * @return absolute minimum capacity + */ + public double getParentAbsoluteMinCapacity(String label, String resourceName) { + return (double) queue.getEffectiveCapacity(label).getResourceValue(resourceName) + / getUpdateContext().getUpdatedClusterResource(label).getResourceValue(resourceName); + } + + /** + * Returns the ratio of the parent queue's effective maximum resource relative to the full cluster + * resource. + * + * @param label node label + * @param resourceName name of resource unit + * @return absolute maximum capacity + */ + public double getParentAbsoluteMaxCapacity(String label, String resourceName) { + return (double) queue.getEffectiveMaxCapacity(label).getResourceValue(resourceName) + / getUpdateContext().getUpdatedClusterResource(label).getResourceValue(resourceName); + } + + /** + * Returns the remaining resources of a parent that is still available for its + * children. Decremented only after the calculator is finished its work on the corresponding + * resources. + * + * @param label node label + * @return remaining resources + */ + public ResourceVector getBatchRemainingResource(String label) { + batchRemainingResourcePerLabel.putIfAbsent(label, ResourceVector.newInstance()); + return batchRemainingResourcePerLabel.get(label); + } + + /** + * Calculates and sets the minimum and maximum effective resources for all children under the + * parent queue with which this driver was initialized. 
+ */ + public void calculateResources() { + // Reset both remaining resource storage to the parent's available resource + for (String label : queue.getConfiguredNodeLabels()) { + overallRemainingResourcePerLabel.put(label, + ResourceVector.of(queue.getEffectiveCapacity(label))); + batchRemainingResourcePerLabel.put(label, + ResourceVector.of(queue.getEffectiveCapacity(label))); + } + + for (AbstractQueueCapacityCalculator capacityCalculator : calculators.values()) { + capacityCalculator.calculateResourcePrerequisites(this); + } + + for (String resourceName : definedResources) { + for (ResourceUnitCapacityType capacityType : CALCULATOR_PRECEDENCE) { + for (CSQueue childQueue : getChildQueues()) { + CalculationContext context = new CalculationContext(resourceName, capacityType, + childQueue); + calculateResourceOnChild(context); + } + + // Flush aggregated used resource by labels at the end of a calculator phase + for (Map.Entry entry : usedResourceByCurrentCalculatorPerLabel.entrySet()) { + batchRemainingResourcePerLabel.get(entry.getKey()).decrement(resourceName, + entry.getValue()); + } + + usedResourceByCurrentCalculatorPerLabel = new HashMap<>(); + } + } + + validateRemainingResource(); + } + + private void calculateResourceOnChild(CalculationContext context) { + context.getQueue().getWriteLock().lock(); + try { + for (String label : context.getQueue().getConfiguredNodeLabels()) { + if (!context.getQueue().getConfiguredCapacityVector(label).isResourceOfType( + context.getResourceName(), context.getCapacityType())) { + continue; + } + double usedResourceByChild = setChildResources(context, label); + double aggregatedUsedResource = usedResourceByCurrentCalculatorPerLabel.getOrDefault(label, + 0d); + double resourceUsedByLabel = aggregatedUsedResource + usedResourceByChild; + + overallRemainingResourcePerLabel.get(label).decrement(context.getResourceName(), + usedResourceByChild); + usedResourceByCurrentCalculatorPerLabel.put(label, resourceUsedByLabel); + } + } finally { + context.getQueue().getWriteLock().unlock(); + } + } + + private double setChildResources(CalculationContext context, String label) { + QueueCapacityVectorEntry capacityVectorEntry = context.getQueue().getConfiguredCapacityVector( + label).getResource(context.getResourceName()); + QueueCapacityVectorEntry maximumCapacityVectorEntry = context.getQueue() + .getConfiguredMaxCapacityVector(label).getResource(context.getResourceName()); + AbstractQueueCapacityCalculator maximumCapacityCalculator = calculators.get( + maximumCapacityVectorEntry.getVectorResourceType()); + + double minimumResource = + calculators.get(context.getCapacityType()).calculateMinimumResource(this, context, label); + double maximumResource = maximumCapacityCalculator.calculateMaximumResource(this, context, + label); + + minimumResource = roundingStrategy.getRoundedResource(minimumResource, capacityVectorEntry); + maximumResource = roundingStrategy.getRoundedResource(maximumResource, + maximumCapacityVectorEntry); + Pair resources = validateCalculatedResources(context, label, + new ImmutablePair<>( + minimumResource, maximumResource)); + minimumResource = resources.getLeft(); + maximumResource = resources.getRight(); + + context.getQueue().getQueueResourceQuotas().getEffectiveMinResource(label).setResourceValue( + context.getResourceName(), (long) minimumResource); + context.getQueue().getQueueResourceQuotas().getEffectiveMaxResource(label).setResourceValue( + context.getResourceName(), (long) maximumResource); + + return minimumResource; + } + + private 
Pair validateCalculatedResources(CalculationContext context, + String label, Pair calculatedResources) { + double minimumResource = calculatedResources.getLeft(); + long minimumMemoryResource = + context.getQueue().getQueueResourceQuotas().getEffectiveMinResource(label).getMemorySize(); + + double remainingResourceUnderParent = overallRemainingResourcePerLabel.get(label).getValue( + context.getResourceName()); + + long parentMaximumResource = queue.getEffectiveMaxCapacity(label).getResourceValue( + context.getResourceName()); + double maximumResource = calculatedResources.getRight(); + + // Memory is the primary resource, if its zero, all other resource units are zero as well. + if (!context.getResourceName().equals(MEMORY_URI) && minimumMemoryResource == 0) { + minimumResource = 0; + } + + if (maximumResource != 0 && maximumResource > parentMaximumResource) { + updateContext.addUpdateWarning(QueueUpdateWarningType.QUEUE_MAX_RESOURCE_EXCEEDS_PARENT + .ofQueue(context.getQueue().getQueuePath())); + } + maximumResource = maximumResource == 0 ? parentMaximumResource : Math.min(maximumResource, + parentMaximumResource); + + if (maximumResource < minimumResource) { + updateContext.addUpdateWarning(QueueUpdateWarningType.QUEUE_EXCEEDS_MAX_RESOURCE.ofQueue( + context.getQueue().getQueuePath())); + minimumResource = maximumResource; + } + + if (minimumResource > remainingResourceUnderParent) { + // Legacy auto queues are assigned a zero resource if not enough resource is left + if (queue instanceof ManagedParentQueue) { + minimumResource = 0; + } else { + updateContext.addUpdateWarning( + QueueUpdateWarningType.QUEUE_OVERUTILIZED.ofQueue( + context.getQueue().getQueuePath()).withInfo( + "Resource name: " + context.getResourceName() + + " resource value: " + minimumResource)); + minimumResource = remainingResourceUnderParent; + } + } + + if (minimumResource == 0) { + updateContext.addUpdateWarning(QueueUpdateWarningType.QUEUE_ZERO_RESOURCE.ofQueue( + context.getQueue().getQueuePath()) + .withInfo("Resource name: " + context.getResourceName())); + } + + return new ImmutablePair<>(minimumResource, maximumResource); + } + + private void validateRemainingResource() { + for (String label : queue.getConfiguredNodeLabels()) { + if (!batchRemainingResourcePerLabel.get(label).equals(ResourceVector.newInstance())) { + updateContext.addUpdateWarning(QueueUpdateWarningType.BRANCH_UNDERUTILIZED.ofQueue( + queue.getQueuePath()).withInfo("Label: " + label)); + } + } + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ResourceVector.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ResourceVector.java index 88c09af6b09..8a417b0e66b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ResourceVector.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ResourceVector.java @@ -25,13 +25,13 @@ import org.apache.hadoop.yarn.util.resource.ResourceUtils; import java.util.HashMap; import java.util.Iterator; import java.util.Map; +import java.util.Set; /** - * Represents a simple resource floating point value storage - * grouped 
by resource names. + * Represents a simple resource floating point value grouped by resource names. */ -public class ResourceVector implements Iterable> { - private final Map resourcesByName = new HashMap<>(); +public class ResourceVector implements Iterable> { + private final Map resourcesByName = new HashMap<>(); /** * Creates a new {@code ResourceVector} with all pre-defined resources set to @@ -53,7 +53,7 @@ public class ResourceVector implements Iterable> { * @param value the value to set all resources to * @return uniform resource vector */ - public static ResourceVector of(float value) { + public static ResourceVector of(double value) { ResourceVector emptyResourceVector = new ResourceVector(); for (ResourceInformation resource : ResourceUtils.getResourceTypesArray()) { emptyResourceVector.setValue(resource.getName(), value); @@ -79,34 +79,51 @@ public class ResourceVector implements Iterable> { } /** - * Subtract values for each resource defined in the given resource vector. + * Decrements values for each resource defined in the given resource vector. * @param otherResourceVector rhs resource vector of the subtraction */ - public void subtract(ResourceVector otherResourceVector) { - for (Map.Entry resource : otherResourceVector) { + public void decrement(ResourceVector otherResourceVector) { + for (Map.Entry resource : otherResourceVector) { setValue(resource.getKey(), getValue(resource.getKey()) - resource.getValue()); } } + /** + * Decrements the given resource by the specified value. + * @param resourceName name of the resource + * @param value value to be subtracted from the resource's current value + */ + public void decrement(String resourceName, double value) { + setValue(resourceName, getValue(resourceName) - value); + } + /** * Increments the given resource by the specified value. * @param resourceName name of the resource * @param value value to be added to the resource's current value */ - public void increment(String resourceName, float value) { + public void increment(String resourceName, double value) { setValue(resourceName, getValue(resourceName) + value); } - public Float getValue(String resourceName) { + public double getValue(String resourceName) { return resourcesByName.get(resourceName); } - public void setValue(String resourceName, float value) { + public void setValue(String resourceName, double value) { resourcesByName.put(resourceName, value); } + public boolean isEmpty() { + return resourcesByName.isEmpty(); + } + + public Set getResourceNames() { + return resourcesByName.keySet(); + } + @Override - public Iterator> iterator() { + public Iterator> iterator() { return resourcesByName.entrySet().iterator(); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/RootCalculationDriver.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/RootCalculationDriver.java new file mode 100644 index 00000000000..530c5c1086f --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/RootCalculationDriver.java @@ -0,0 +1,64 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *

    + * http://www.apache.org/licenses/LICENSE-2.0 + *

    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import java.util.Collection; +import java.util.Collections; + +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType.PERCENTAGE; + +/** + * A special case that contains the resource calculation of the root queue. + */ +public final class RootCalculationDriver extends ResourceCalculationDriver { + private final AbstractQueueCapacityCalculator rootCalculator; + + public RootCalculationDriver(CSQueue rootQueue, QueueCapacityUpdateContext updateContext, + AbstractQueueCapacityCalculator rootCalculator, + Collection definedResources) { + super(rootQueue, updateContext, Collections.emptyMap(), definedResources); + this.rootCalculator = rootCalculator; + } + + @Override + public void calculateResources() { + for (String label : queue.getConfiguredNodeLabels()) { + for (QueueCapacityVector.QueueCapacityVectorEntry capacityVectorEntry : + queue.getConfiguredCapacityVector(label)) { + String resourceName = capacityVectorEntry.getResourceName(); + + CalculationContext context = new CalculationContext(resourceName, PERCENTAGE, queue); + double minimumResource = rootCalculator.calculateMinimumResource(this, context, label); + double maximumResource = rootCalculator.calculateMaximumResource(this, context, label); + long roundedMinResource = (long) roundingStrategy + .getRoundedResource(minimumResource, capacityVectorEntry); + long roundedMaxResource = (long) roundingStrategy + .getRoundedResource(maximumResource, + queue.getConfiguredMaxCapacityVector(label).getResource(resourceName)); + queue.getQueueResourceQuotas().getEffectiveMinResource(label).setResourceValue( + resourceName, roundedMinResource); + queue.getQueueResourceQuotas().getEffectiveMaxResource(label).setResourceValue( + resourceName, roundedMaxResource); + } + rootCalculator.updateCapacitiesAfterCalculation(this, queue, label); + } + + rootCalculator.calculateResourcePrerequisites(this); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/RootQueueCapacityCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/RootQueueCapacityCalculator.java new file mode 100644 index 00000000000..8da1aeab282 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/RootQueueCapacityCalculator.java @@ -0,0 +1,59 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *

    + * http://www.apache.org/licenses/LICENSE-2.0 + *

    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; + +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType.PERCENTAGE; + +public class RootQueueCapacityCalculator extends AbstractQueueCapacityCalculator { + + @Override + public void calculateResourcePrerequisites(ResourceCalculationDriver resourceCalculationDriver) { + AbsoluteResourceCapacityCalculator.setNormalizedResourceRatio(resourceCalculationDriver); + } + + @Override + public double calculateMinimumResource(ResourceCalculationDriver resourceCalculationDriver, + CalculationContext context, String label) { + return resourceCalculationDriver.getUpdateContext().getUpdatedClusterResource(label) + .getResourceValue(context.getResourceName()); + } + + @Override + public double calculateMaximumResource(ResourceCalculationDriver resourceCalculationDriver, + CalculationContext context, String label) { + return resourceCalculationDriver.getUpdateContext().getUpdatedClusterResource(label) + .getResourceValue(context.getResourceName()); + } + + @Override + public void updateCapacitiesAfterCalculation( + ResourceCalculationDriver resourceCalculationDriver, CSQueue queue, String label) { + queue.getQueueCapacities().setAbsoluteCapacity(label, 1); + if (queue.getQueueCapacities().getWeight(label) == 1) { + queue.getQueueCapacities().setNormalizedWeight(label, 1); + } + } + + @Override + public ResourceUnitCapacityType getCapacityType() { + return PERCENTAGE; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/WeightQueueCapacityCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/WeightQueueCapacityCalculator.java new file mode 100644 index 00000000000..4121a6bf056 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/WeightQueueCapacityCalculator.java @@ -0,0 +1,103 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *

    + * http://www.apache.org/licenses/LICENSE-2.0 + *

    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; + +import java.util.Collection; + +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType.WEIGHT; + +public class WeightQueueCapacityCalculator extends AbstractQueueCapacityCalculator { + + @Override + public void calculateResourcePrerequisites(ResourceCalculationDriver resourceCalculationDriver) { + // Precalculate the summary of children's weight + for (CSQueue childQueue : resourceCalculationDriver.getChildQueues()) { + for (String label : childQueue.getConfiguredNodeLabels()) { + for (String resourceName : childQueue.getConfiguredCapacityVector(label) + .getResourceNamesByCapacityType(getCapacityType())) { + resourceCalculationDriver.incrementWeight(label, resourceName, childQueue + .getConfiguredCapacityVector(label).getResource(resourceName).getResourceValue()); + } + } + } + } + + @Override + public double calculateMinimumResource(ResourceCalculationDriver resourceCalculationDriver, + CalculationContext context, + String label) { + String resourceName = context.getResourceName(); + double normalizedWeight = context.getCurrentMinimumCapacityEntry(label).getResourceValue() / + resourceCalculationDriver.getSumWeightsByResource(label, resourceName); + + double remainingResource = resourceCalculationDriver.getBatchRemainingResource(label) + .getValue(resourceName); + + // Due to rounding loss it is better to use all remaining resources if no other resource uses + // weight + if (normalizedWeight == 1) { + return remainingResource; + } + + double remainingResourceRatio = resourceCalculationDriver.getRemainingRatioOfResource( + label, resourceName); + double parentAbsoluteCapacity = resourceCalculationDriver.getParentAbsoluteMinCapacity( + label, resourceName); + double queueAbsoluteCapacity = parentAbsoluteCapacity * remainingResourceRatio + * normalizedWeight; + + return resourceCalculationDriver.getUpdateContext() + .getUpdatedClusterResource(label).getResourceValue(resourceName) * queueAbsoluteCapacity; + } + + @Override + public double calculateMaximumResource(ResourceCalculationDriver resourceCalculationDriver, + CalculationContext context, + String label) { + throw new IllegalStateException("Resource " + context.getCurrentMinimumCapacityEntry( + label).getResourceName() + + " has " + "WEIGHT maximum capacity type, which is not supported"); + } + + @Override + public ResourceUnitCapacityType getCapacityType() { + return WEIGHT; + } + + @Override + public void updateCapacitiesAfterCalculation( + ResourceCalculationDriver resourceCalculationDriver, CSQueue queue, String label) { + double sumCapacityPerResource = 0f; + + Collection resourceNames = getResourceNames(queue, label); + for (String resourceName : resourceNames) { + double sumBranchWeight = resourceCalculationDriver.getSumWeightsByResource(label, + resourceName); + double capacity = queue.getConfiguredCapacityVector( + label).getResource(resourceName).getResourceValue() / sumBranchWeight; + sumCapacityPerResource += capacity; + } + + 
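+ // The normalized weight stored for the label is the average, across the configured
+ // resources, of this queue's weight divided by the total weight configured under the
+ // same parent for that resource.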
queue.getQueueCapacities().setNormalizedWeight(label, + (float) (sumCapacityPerResource / resourceNames.size())); + ((AbstractCSQueue) queue).updateAbsoluteCapacities(); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/ConfigurationUpdateAssembler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/ConfigurationUpdateAssembler.java new file mode 100644 index 00000000000..88c93019680 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/ConfigurationUpdateAssembler.java @@ -0,0 +1,181 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.thirdparty.com.google.common.base.Joiner; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; +import org.apache.hadoop.yarn.webapp.dao.QueueConfigInfo; +import org.apache.hadoop.yarn.webapp.dao.SchedConfUpdateInfo; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.ORDERING_POLICY; + +public final class ConfigurationUpdateAssembler { + + private ConfigurationUpdateAssembler() { + } + + public static Map constructKeyValueConfUpdate( + CapacitySchedulerConfiguration proposedConf, + SchedConfUpdateInfo mutationInfo) throws IOException { + + Map confUpdate = new HashMap<>(); + for (String queueToRemove : mutationInfo.getRemoveQueueInfo()) { + removeQueue(queueToRemove, proposedConf, confUpdate); + } + for (QueueConfigInfo addQueueInfo : mutationInfo.getAddQueueInfo()) { + addQueue(addQueueInfo, proposedConf, confUpdate); + } + for (QueueConfigInfo updateQueueInfo : mutationInfo.getUpdateQueueInfo()) { + updateQueue(updateQueueInfo, proposedConf, confUpdate); + } + for (Map.Entry global : mutationInfo.getGlobalParams() + .entrySet()) { + confUpdate.put(global.getKey(), global.getValue()); + } + return confUpdate; + } + + private static void removeQueue( + String queueToRemove, CapacitySchedulerConfiguration proposedConf, + Map confUpdate) throws IOException { + if (queueToRemove == null) { + return; + } + if (queueToRemove.lastIndexOf('.') == -1) { + throw new IOException("Can't remove queue " 
+ queueToRemove); + } + String queueName = queueToRemove.substring( + queueToRemove.lastIndexOf('.') + 1); + List siblingQueues = getSiblingQueues(queueToRemove, + proposedConf); + if (!siblingQueues.contains(queueName)) { + throw new IOException("Queue " + queueToRemove + " not found"); + } + siblingQueues.remove(queueName); + String parentQueuePath = queueToRemove.substring(0, queueToRemove + .lastIndexOf('.')); + proposedConf.setQueues(parentQueuePath, siblingQueues.toArray( + new String[0])); + String queuesConfig = CapacitySchedulerConfiguration.PREFIX + + parentQueuePath + CapacitySchedulerConfiguration.DOT + + CapacitySchedulerConfiguration.QUEUES; + if (siblingQueues.isEmpty()) { + confUpdate.put(queuesConfig, null); + // Unset Ordering Policy of Leaf Queue converted from + // Parent Queue after removeQueue + String queueOrderingPolicy = CapacitySchedulerConfiguration.PREFIX + + parentQueuePath + CapacitySchedulerConfiguration.DOT + + ORDERING_POLICY; + proposedConf.unset(queueOrderingPolicy); + confUpdate.put(queueOrderingPolicy, null); + } else { + confUpdate.put(queuesConfig, Joiner.on(',').join(siblingQueues)); + } + for (Map.Entry confRemove : proposedConf.getValByRegex( + ".*" + queueToRemove + "\\..*") + .entrySet()) { + proposedConf.unset(confRemove.getKey()); + confUpdate.put(confRemove.getKey(), null); + } + } + + private static void addQueue( + QueueConfigInfo addInfo, CapacitySchedulerConfiguration proposedConf, + Map confUpdate) throws IOException { + if (addInfo == null) { + return; + } + String queuePath = addInfo.getQueue(); + String queueName = queuePath.substring(queuePath.lastIndexOf('.') + 1); + if (queuePath.lastIndexOf('.') == -1) { + throw new IOException("Can't add invalid queue " + queuePath); + } else if (getSiblingQueues(queuePath, proposedConf).contains( + queueName)) { + throw new IOException("Can't add existing queue " + queuePath); + } + String parentQueue = queuePath.substring(0, queuePath.lastIndexOf('.')); + String[] siblings = proposedConf.getQueues(parentQueue); + List siblingQueues = siblings == null ? 
new ArrayList<>() : + new ArrayList<>(Arrays.asList(siblings)); + siblingQueues.add(queuePath.substring(queuePath.lastIndexOf('.') + 1)); + proposedConf.setQueues(parentQueue, + siblingQueues.toArray(new String[0])); + confUpdate.put(CapacitySchedulerConfiguration.PREFIX + + parentQueue + CapacitySchedulerConfiguration.DOT + + CapacitySchedulerConfiguration.QUEUES, + Joiner.on(',').join(siblingQueues)); + String keyPrefix = CapacitySchedulerConfiguration.PREFIX + + queuePath + CapacitySchedulerConfiguration.DOT; + for (Map.Entry kv : addInfo.getParams().entrySet()) { + String keyValue = kv.getValue(); + if (keyValue == null || keyValue.isEmpty()) { + proposedConf.unset(keyPrefix + kv.getKey()); + confUpdate.put(keyPrefix + kv.getKey(), null); + } else { + proposedConf.set(keyPrefix + kv.getKey(), keyValue); + confUpdate.put(keyPrefix + kv.getKey(), keyValue); + } + } + // Unset Ordering Policy of Parent Queue converted from + // Leaf Queue after addQueue + String queueOrderingPolicy = CapacitySchedulerConfiguration.PREFIX + + parentQueue + CapacitySchedulerConfiguration.DOT + ORDERING_POLICY; + if (siblingQueues.size() == 1) { + proposedConf.unset(queueOrderingPolicy); + confUpdate.put(queueOrderingPolicy, null); + } + } + + private static void updateQueue(QueueConfigInfo updateInfo, + CapacitySchedulerConfiguration proposedConf, + Map confUpdate) { + if (updateInfo == null) { + return; + } + String queuePath = updateInfo.getQueue(); + String keyPrefix = CapacitySchedulerConfiguration.PREFIX + + queuePath + CapacitySchedulerConfiguration.DOT; + for (Map.Entry kv : updateInfo.getParams().entrySet()) { + String keyValue = kv.getValue(); + if (keyValue == null || keyValue.isEmpty()) { + proposedConf.unset(keyPrefix + kv.getKey()); + confUpdate.put(keyPrefix + kv.getKey(), null); + } else { + proposedConf.set(keyPrefix + kv.getKey(), keyValue); + confUpdate.put(keyPrefix + kv.getKey(), keyValue); + } + } + } + + private static List getSiblingQueues(String queuePath, Configuration conf) { + String parentQueue = queuePath.substring(0, queuePath.lastIndexOf('.')); + String childQueuesKey = CapacitySchedulerConfiguration.PREFIX + + parentQueue + CapacitySchedulerConfiguration.DOT + + CapacitySchedulerConfiguration.QUEUES; + return new ArrayList<>(conf.getTrimmedStringCollection(childQueuesKey)); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/MutableCSConfigurationProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/MutableCSConfigurationProvider.java index c4cb273b495..cbd217f4f2b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/MutableCSConfigurationProvider.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/MutableCSConfigurationProvider.java @@ -19,7 +19,6 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf; import org.apache.hadoop.classification.VisibleForTesting; -import org.apache.hadoop.thirdparty.com.google.common.base.Joiner; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.hadoop.conf.Configuration; @@ 
-31,19 +30,12 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ConfigurationMuta import org.apache.hadoop.yarn.server.resourcemanager.scheduler.MutableConfigurationProvider; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.YarnConfigurationStore.LogMutation; -import org.apache.hadoop.yarn.webapp.dao.QueueConfigInfo; import org.apache.hadoop.yarn.webapp.dao.SchedConfUpdateInfo; import java.io.IOException; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.HashMap; -import java.util.List; import java.util.Map; import java.util.concurrent.locks.ReentrantReadWriteLock; -import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.ORDERING_POLICY; - /** * CS configuration provider which implements * {@link MutableConfigurationProvider} for modifying capacity scheduler @@ -79,15 +71,7 @@ public class MutableCSConfigurationProvider implements CSConfigurationProvider, @Override public void init(Configuration config) throws IOException { this.confStore = YarnConfigurationStoreFactory.getStore(config); - Configuration initialSchedConf = getInitSchedulerConfig(); - initialSchedConf.addResource(YarnConfiguration.CS_CONFIGURATION_FILE); - this.schedConf = new Configuration(false); - // We need to explicitly set the key-values in schedConf, otherwise - // these configuration keys cannot be deleted when - // configuration is reloaded. - for (Map.Entry kv : initialSchedConf) { - schedConf.set(kv.getKey(), kv.getValue()); - } + initializeSchedConf(); try { confStore.initialize(config, schedConf, rmContext); confStore.checkVersion(); @@ -108,7 +92,7 @@ public class MutableCSConfigurationProvider implements CSConfigurationProvider, } @VisibleForTesting - public YarnConfigurationStore getConfStore() { + protected YarnConfigurationStore getConfStore() { return confStore; } @@ -142,7 +126,7 @@ public class MutableCSConfigurationProvider implements CSConfigurationProvider, CapacitySchedulerConfiguration proposedConf = new CapacitySchedulerConfiguration(schedConf, false); Map kvUpdate - = constructKeyValueConfUpdate(proposedConf, confUpdate); + = ConfigurationUpdateAssembler.constructKeyValueConfUpdate(proposedConf, confUpdate); LogMutation log = new LogMutation(kvUpdate, user.getShortUserName()); confStore.logMutation(log); applyMutation(proposedConf, kvUpdate); @@ -155,7 +139,7 @@ public class MutableCSConfigurationProvider implements CSConfigurationProvider, CapacitySchedulerConfiguration proposedConf = new CapacitySchedulerConfiguration(oldConfiguration, false); Map kvUpdate - = constructKeyValueConfUpdate(proposedConf, confUpdate); + = ConfigurationUpdateAssembler.constructKeyValueConfUpdate(proposedConf, confUpdate); applyMutation(proposedConf, kvUpdate); return proposedConf; } @@ -177,15 +161,7 @@ public class MutableCSConfigurationProvider implements CSConfigurationProvider, try { confStore.format(); oldConf = new Configuration(schedConf); - Configuration initialSchedConf = new Configuration(false); - initialSchedConf.addResource(YarnConfiguration.CS_CONFIGURATION_FILE); - this.schedConf = new Configuration(false); - // We need to explicitly set the key-values in schedConf, otherwise - // these configuration keys cannot be deleted when - // configuration is reloaded. 
- for (Map.Entry kv : initialSchedConf) { - schedConf.set(kv.getKey(), kv.getValue()); - } + initializeSchedConf(); confStore.initialize(config, schedConf, rmContext); confStore.checkVersion(); } catch (Exception e) { @@ -195,6 +171,17 @@ public class MutableCSConfigurationProvider implements CSConfigurationProvider, } } + private void initializeSchedConf() { + Configuration initialSchedConf = getInitSchedulerConfig(); + this.schedConf = new Configuration(false); + // We need to explicitly set the key-values in schedConf, otherwise + // these configuration keys cannot be deleted when + // configuration is reloaded. + for (Map.Entry kv : initialSchedConf) { + schedConf.set(kv.getKey(), kv.getValue()); + } + } + @Override public void revertToOldConfig(Configuration config) throws Exception { formatLock.writeLock().lock(); @@ -233,147 +220,4 @@ public class MutableCSConfigurationProvider implements CSConfigurationProvider, formatLock.readLock().unlock(); } } - - private List getSiblingQueues(String queuePath, Configuration conf) { - String parentQueue = queuePath.substring(0, queuePath.lastIndexOf('.')); - String childQueuesKey = CapacitySchedulerConfiguration.PREFIX + - parentQueue + CapacitySchedulerConfiguration.DOT + - CapacitySchedulerConfiguration.QUEUES; - return new ArrayList<>(conf.getTrimmedStringCollection(childQueuesKey)); - } - - private Map constructKeyValueConfUpdate( - CapacitySchedulerConfiguration proposedConf, - SchedConfUpdateInfo mutationInfo) throws IOException { - - Map confUpdate = new HashMap<>(); - for (String queueToRemove : mutationInfo.getRemoveQueueInfo()) { - removeQueue(queueToRemove, proposedConf, confUpdate); - } - for (QueueConfigInfo addQueueInfo : mutationInfo.getAddQueueInfo()) { - addQueue(addQueueInfo, proposedConf, confUpdate); - } - for (QueueConfigInfo updateQueueInfo : mutationInfo.getUpdateQueueInfo()) { - updateQueue(updateQueueInfo, proposedConf, confUpdate); - } - for (Map.Entry global : mutationInfo.getGlobalParams() - .entrySet()) { - confUpdate.put(global.getKey(), global.getValue()); - } - return confUpdate; - } - - private void removeQueue( - String queueToRemove, CapacitySchedulerConfiguration proposedConf, - Map confUpdate) throws IOException { - if (queueToRemove == null) { - return; - } else { - String queueName = queueToRemove.substring( - queueToRemove.lastIndexOf('.') + 1); - if (queueToRemove.lastIndexOf('.') == -1) { - throw new IOException("Can't remove queue " + queueToRemove); - } else { - List siblingQueues = getSiblingQueues(queueToRemove, - proposedConf); - if (!siblingQueues.contains(queueName)) { - throw new IOException("Queue " + queueToRemove + " not found"); - } - siblingQueues.remove(queueName); - String parentQueuePath = queueToRemove.substring(0, queueToRemove - .lastIndexOf('.')); - proposedConf.setQueues(parentQueuePath, siblingQueues.toArray( - new String[0])); - String queuesConfig = CapacitySchedulerConfiguration.PREFIX - + parentQueuePath + CapacitySchedulerConfiguration.DOT - + CapacitySchedulerConfiguration.QUEUES; - if (siblingQueues.size() == 0) { - confUpdate.put(queuesConfig, null); - // Unset Ordering Policy of Leaf Queue converted from - // Parent Queue after removeQueue - String queueOrderingPolicy = CapacitySchedulerConfiguration.PREFIX - + parentQueuePath + CapacitySchedulerConfiguration.DOT - + ORDERING_POLICY; - proposedConf.unset(queueOrderingPolicy); - confUpdate.put(queueOrderingPolicy, null); - } else { - confUpdate.put(queuesConfig, Joiner.on(',').join(siblingQueues)); - } - for (Map.Entry 
confRemove : proposedConf.getValByRegex( - ".*" + queueToRemove.replaceAll("\\.", "\\.") + "\\..*") - .entrySet()) { - proposedConf.unset(confRemove.getKey()); - confUpdate.put(confRemove.getKey(), null); - } - } - } - } - - private void addQueue( - QueueConfigInfo addInfo, CapacitySchedulerConfiguration proposedConf, - Map confUpdate) throws IOException { - if (addInfo == null) { - return; - } else { - String queuePath = addInfo.getQueue(); - String queueName = queuePath.substring(queuePath.lastIndexOf('.') + 1); - if (queuePath.lastIndexOf('.') == -1) { - throw new IOException("Can't add invalid queue " + queuePath); - } else if (getSiblingQueues(queuePath, proposedConf).contains( - queueName)) { - throw new IOException("Can't add existing queue " + queuePath); - } - String parentQueue = queuePath.substring(0, queuePath.lastIndexOf('.')); - String[] siblings = proposedConf.getQueues(parentQueue); - List siblingQueues = siblings == null ? new ArrayList<>() : - new ArrayList<>(Arrays.asList(siblings)); - siblingQueues.add(queuePath.substring(queuePath.lastIndexOf('.') + 1)); - proposedConf.setQueues(parentQueue, - siblingQueues.toArray(new String[0])); - confUpdate.put(CapacitySchedulerConfiguration.PREFIX - + parentQueue + CapacitySchedulerConfiguration.DOT - + CapacitySchedulerConfiguration.QUEUES, - Joiner.on(',').join(siblingQueues)); - String keyPrefix = CapacitySchedulerConfiguration.PREFIX - + queuePath + CapacitySchedulerConfiguration.DOT; - for (Map.Entry kv : addInfo.getParams().entrySet()) { - if (kv.getValue() == null) { - proposedConf.unset(keyPrefix + kv.getKey()); - } else { - proposedConf.set(keyPrefix + kv.getKey(), kv.getValue()); - } - confUpdate.put(keyPrefix + kv.getKey(), kv.getValue()); - } - // Unset Ordering Policy of Parent Queue converted from - // Leaf Queue after addQueue - String queueOrderingPolicy = CapacitySchedulerConfiguration.PREFIX - + parentQueue + CapacitySchedulerConfiguration.DOT + ORDERING_POLICY; - if (siblingQueues.size() == 1) { - proposedConf.unset(queueOrderingPolicy); - confUpdate.put(queueOrderingPolicy, null); - } - } - } - - private void updateQueue(QueueConfigInfo updateInfo, - CapacitySchedulerConfiguration proposedConf, - Map confUpdate) { - if (updateInfo == null) { - return; - } else { - String queuePath = updateInfo.getQueue(); - String keyPrefix = CapacitySchedulerConfiguration.PREFIX - + queuePath + CapacitySchedulerConfiguration.DOT; - for (Map.Entry kv : updateInfo.getParams().entrySet()) { - String keyValue = kv.getValue(); - if (keyValue == null || keyValue.isEmpty()) { - keyValue = null; - proposedConf.unset(keyPrefix + kv.getKey()); - } else { - proposedConf.set(keyPrefix + kv.getKey(), keyValue); - } - confUpdate.put(keyPrefix + kv.getKey(), keyValue); - } - } - } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/QueueCapacityConfigParser.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/QueueCapacityConfigParser.java index 28eb33c5536..79786a11b3c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/QueueCapacityConfigParser.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/QueueCapacityConfigParser.java @@ -20,7 +20,7 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector; -import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueCapacityType; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; import org.apache.hadoop.yarn.util.UnitsConversionUtil; import java.util.ArrayList; @@ -61,22 +61,16 @@ public class QueueCapacityConfigParser { /** * Creates a {@code QueueCapacityVector} parsed from the capacity configuration * property set for a queue. - * @param conf configuration object + * @param capacityString capacity string to parse * @param queuePath queue for which the capacity property is parsed - * @param label node label * @return a parsed capacity vector */ - public QueueCapacityVector parse(CapacitySchedulerConfiguration conf, - String queuePath, String label) { + public QueueCapacityVector parse(String capacityString, String queuePath) { if (queuePath.equals(CapacitySchedulerConfiguration.ROOT)) { - return QueueCapacityVector.of(100f, QueueCapacityType.PERCENTAGE); + return QueueCapacityVector.of(100f, ResourceUnitCapacityType.PERCENTAGE); } - String propertyName = CapacitySchedulerConfiguration.getNodeLabelPrefix( - queuePath, label) + CapacitySchedulerConfiguration.CAPACITY; - String capacityString = conf.get(propertyName); - if (capacityString == null) { return new QueueCapacityVector(); } @@ -101,13 +95,13 @@ public class QueueCapacityConfigParser { * @return a parsed capacity vector */ private QueueCapacityVector uniformParser(Matcher matcher) { - QueueCapacityType capacityType = null; + ResourceUnitCapacityType capacityType = null; String value = matcher.group(1); if (matcher.groupCount() == 2) { String matchedSuffix = matcher.group(2); - for (QueueCapacityType suffix : QueueCapacityType.values()) { + for (ResourceUnitCapacityType suffix : ResourceUnitCapacityType.values()) { // Absolute uniform syntax is not supported - if (suffix.equals(QueueCapacityType.ABSOLUTE)) { + if (suffix.equals(ResourceUnitCapacityType.ABSOLUTE)) { continue; } // when capacity is given in percentage, we do not need % symbol @@ -164,7 +158,7 @@ public class QueueCapacityConfigParser { private void setCapacityVector( QueueCapacityVector resource, String resourceName, String resourceValue) { - QueueCapacityType capacityType = QueueCapacityType.ABSOLUTE; + ResourceUnitCapacityType capacityType = ResourceUnitCapacityType.ABSOLUTE; // Extract suffix from a value e.g. for 6w extract w String suffix = resourceValue.replaceAll(FLOAT_DIGIT_REGEX, ""); @@ -180,7 +174,7 @@ public class QueueCapacityConfigParser { // Convert all incoming units to MB if units is configured. 
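// For example, a memory value configured as 2Gi is stored as 2048 once converted to Mi.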
convertedValue = UnitsConversionUtil.convert(suffix, "Mi", (long) parsedResourceValue); } else { - for (QueueCapacityType capacityTypeSuffix : QueueCapacityType.values()) { + for (ResourceUnitCapacityType capacityTypeSuffix : ResourceUnitCapacityType.values()) { if (capacityTypeSuffix.getPostfix().equals(suffix)) { capacityType = capacityTypeSuffix; } @@ -198,8 +192,12 @@ public class QueueCapacityConfigParser { * false otherwise */ public boolean isCapacityVectorFormat(String configuredCapacity) { - return configuredCapacity != null - && RESOURCE_PATTERN.matcher(configuredCapacity).find(); + if (configuredCapacity == null) { + return false; + } + + String formattedCapacityString = configuredCapacity.replaceAll(" ", ""); + return RESOURCE_PATTERN.matcher(formattedCapacityString).find(); } private static class Parser { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java index ad8022a0d08..7a5bad5d580 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java @@ -17,17 +17,27 @@ */ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair; -import org.apache.hadoop.classification.VisibleForTesting; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; +import java.io.IOException; +import java.net.URL; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import javax.xml.parsers.DocumentBuilder; +import javax.xml.parsers.DocumentBuilderFactory; +import javax.xml.parsers.ParserConfigurationException; + import org.apache.hadoop.classification.InterfaceAudience.Public; import org.apache.hadoop.classification.InterfaceStability.Unstable; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.UnsupportedFileSystemException; import org.apache.hadoop.security.authorize.AccessControlList; import org.apache.hadoop.service.AbstractService; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.QueueACL; import org.apache.hadoop.yarn.security.AccessType; import org.apache.hadoop.yarn.security.Permission; @@ -39,19 +49,14 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.allocation.A import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.allocation.QueueProperties; import org.apache.hadoop.yarn.util.Clock; import org.apache.hadoop.yarn.util.SystemClock; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList; import org.xml.sax.SAXException; -import javax.xml.parsers.DocumentBuilder; -import javax.xml.parsers.DocumentBuilderFactory; -import javax.xml.parsers.ParserConfigurationException; -import java.io.IOException; -import java.net.URL; -import 
java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; + import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.allocation.AllocationFileQueueParser.EVERYBODY_ACL; import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.allocation.AllocationFileQueueParser.ROOT; @@ -236,8 +241,7 @@ public class AllocationFileLoaderService extends AbstractService { LOG.info("Loading allocation file " + allocFile); // Read and parse the allocations file. - DocumentBuilderFactory docBuilderFactory = - DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory docBuilderFactory = XMLUtils.newSecureDocumentBuilderFactory(); docBuilderFactory.setIgnoringComments(true); DocumentBuilder builder = docBuilderFactory.newDocumentBuilder(); Document doc = builder.parse(fs.open(allocFile)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java index 2a74d56d925..878546a4398 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java @@ -34,7 +34,6 @@ import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.api.records.ResourceInformation; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.ResourceNotFoundException; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.UnitsConversionUtil; import org.apache.hadoop.yarn.util.resource.ResourceUtils; import org.apache.hadoop.yarn.util.resource.Resources; @@ -629,7 +628,7 @@ public class FairSchedulerConfiguration extends Configuration { final int memory = parseOldStyleResourceMemory(resources); final int vcores = parseOldStyleResourceVcores(resources); return new ConfigurableResource( - BuilderUtils.newResource(memory, vcores)); + Resources.createResource(memory, vcores)); } private static String[] findOldStyleResourcesInSpaceSeparatedInput( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSQueueConverter.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSQueueConverter.java index 0e9b3894885..c29d020b10c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSQueueConverter.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSQueueConverter.java @@ -17,12 +17,15 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter; import static 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.PREFIX; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.DOT; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.USER_LIMIT_FACTOR; import java.util.List; import java.util.stream.Collectors; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.ConfigurableResource; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSQueue; @@ -79,6 +82,7 @@ public class FSQueueConverter { emitMaxParallelApps(queueName, queue); emitMaxAllocations(queueName, queue); emitPreemptionDisabled(queueName, queue); + emitDefaultUserLimitFactor(queueName, children); emitChildCapacity(queue); emitMaximumCapacity(queueName, queue); @@ -215,6 +219,15 @@ public class FSQueueConverter { } } + public void emitDefaultUserLimitFactor(String queueName, List children) { + if (children.isEmpty()) { + capacitySchedulerConfig.setFloat( + CapacitySchedulerConfiguration. + PREFIX + queueName + DOT + USER_LIMIT_FACTOR, + -1.0f); + } + } + /** * yarn.scheduler.fair.sizebasedweight ==> * yarn.scheduler.capacity.<queue-path> diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/DeSelectFields.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/DeSelectFields.java index 2023ebd7dea..d63fb2d7792 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/DeSelectFields.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/DeSelectFields.java @@ -27,7 +27,7 @@ import org.apache.hadoop.yarn.webapp.BadRequestException; /** * DeSelectFields make the /apps api more flexible. - * It can be used to strip off more fields if there's such use case in future. + * It can be used to strip off more fields if there's such use case in the future. * You can simply extend it via two steps: *
1. add a DeSelectType enum with a string literal *
    2. write your logical based on @@ -60,10 +60,10 @@ public class DeSelectFields { if (type == null) { LOG.warn("Invalid deSelects string " + literals.trim()); DeSelectType[] typeArray = DeSelectType.values(); - String allSuppportLiterals = Arrays.toString(typeArray); + String allSupportLiterals = Arrays.toString(typeArray); throw new BadRequestException("Invalid deSelects string " + literals.trim() + " specified. It should be one of " - + allSuppportLiterals); + + allSupportLiterals); } else { this.types.add(type); } @@ -74,7 +74,7 @@ public class DeSelectFields { } /** - * Determine the deselect type should be handled or not. + * Determine to deselect type should be handled or not. * @param type deselected type * @return true if the deselect type should be handled */ @@ -83,7 +83,7 @@ public class DeSelectFields { } /** - * Deselect field type, can be boost in future. + * Deselect field type, can be boosted in the future. */ public enum DeSelectType { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java index f6202cbcc51..c74e2ae3e1b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java @@ -129,6 +129,12 @@ public class FairSchedulerAppsBlock extends HtmlBlock { return true; } + private static String printAppInfo(long value) { + if (value == -1) { + return "N/A"; + } + return String.valueOf(value); + } @Override public void render(Block html) { TBODY> tbody = html. @@ -193,16 +199,16 @@ public class FairSchedulerAppsBlock extends HtmlBlock { .append(appInfo.getFinishTime()).append("\",\"") .append(appInfo.getState()).append("\",\"") .append(appInfo.getFinalStatus()).append("\",\"") - .append(appInfo.getRunningContainers() == -1 ? "N/A" : String - .valueOf(appInfo.getRunningContainers())).append("\",\"") - .append(appInfo.getAllocatedVCores() == -1 ? "N/A" : String - .valueOf(appInfo.getAllocatedVCores())).append("\",\"") - .append(appInfo.getAllocatedMB() == -1 ? "N/A" : String - .valueOf(appInfo.getAllocatedMB())).append("\",\"") - .append(appInfo.getReservedVCores() == -1 ? "N/A" : String - .valueOf(appInfo.getReservedVCores())).append("\",\"") - .append(appInfo.getReservedMB() == -1 ? "N/A" : String - .valueOf(appInfo.getReservedMB())).append("\",\"") + .append(printAppInfo(appInfo.getRunningContainers())) + .append("\",\"") + .append(printAppInfo(appInfo.getAllocatedVCores())) + .append("\",\"") + .append(printAppInfo(appInfo.getAllocatedMB())) + .append("\",\"") + .append(printAppInfo(appInfo.getReservedVCores())) + .append("\",\"") + .append(printAppInfo(appInfo.getReservedMB())) + .append("\",\"") // Progress bar .append("
    >(); map.put(node.getNodeId(), ImmutableSet.of("y")); labelsMgr.replaceLabelsOnNode(map); - + // Now query for UNHEALTHY nodes request = GetClusterNodesRequest.newInstance(EnumSet.of(NodeState.UNHEALTHY)); nodeReports = client.getClusterNodes(request).getNodeReports(); Assert.assertEquals(1, nodeReports.size()); Assert.assertEquals("Node is expected to be unhealthy!", NodeState.UNHEALTHY, nodeReports.get(0).getNodeState()); - + Assert.assertTrue(nodeReports.get(0).getNodeLabels().contains("y")); Assert.assertNull(nodeReports.get(0).getDecommissioningTimeout()); Assert.assertNull(nodeReports.get(0).getNodeUpdateType()); - + // Remove labels of host1 map = new HashMap>(); map.put(node.getNodeId(), ImmutableSet.of("y")); labelsMgr.removeLabelsFromNode(map); - + // Query all states should return all nodes rm.registerNode("host3:1236", 1024); request = GetClusterNodesRequest.newInstance(EnumSet.allOf(NodeState.class)); nodeReports = client.getClusterNodes(request).getNodeReports(); Assert.assertEquals(3, nodeReports.size()); - + // All host1-3's label should be empty (instead of null) for (NodeReport report : nodeReports) { Assert.assertTrue(report.getNodeLabels() != null @@ -343,11 +351,8 @@ public class TestClientRMService { Assert.assertNull(report.getDecommissioningTimeout()); Assert.assertNull(report.getNodeUpdateType()); } - - rpc.stopProxy(client, conf); - rm.close(); } - + @Test public void testNonExistingApplicationReport() throws YarnException { RMContext rmContext = mock(RMContext.class); @@ -355,7 +360,6 @@ public class TestClientRMService { new ConcurrentHashMap()); ClientRMService rmService = new ClientRMService(rmContext, null, null, null, null, null); - RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); GetApplicationReportRequest request = recordFactory .newRecordInstance(GetApplicationReportRequest.class); request.setApplicationId(ApplicationId.newInstance(0, 0)); @@ -370,7 +374,7 @@ public class TestClientRMService { } } - @Test + @Test public void testGetApplicationReport() throws Exception { ResourceScheduler scheduler = mock(ResourceScheduler.class); RMContext rmContext = mock(RMContext.class); @@ -386,14 +390,13 @@ public class TestClientRMService { ClientRMService rmService = new ClientRMService(rmContext, scheduler, null, mockAclsManager, null, null); try { - RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); GetApplicationReportRequest request = recordFactory .newRecordInstance(GetApplicationReportRequest.class); request.setApplicationId(appId1); - GetApplicationReportResponse response = + GetApplicationReportResponse response = rmService.getApplicationReport(request); ApplicationReport report = response.getApplicationReport(); - ApplicationResourceUsageReport usageReport = + ApplicationResourceUsageReport usageReport = report.getApplicationResourceUsageReport(); Assert.assertEquals(10, usageReport.getMemorySeconds()); Assert.assertEquals(3, usageReport.getVcoreSeconds()); @@ -440,12 +443,11 @@ public class TestClientRMService { rmService.close(); } } - + @Test public void testGetApplicationAttemptReport() throws YarnException, IOException { ClientRMService rmService = createRMService(); - RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); GetApplicationAttemptReportRequest request = recordFactory .newRecordInstance(GetApplicationAttemptReportRequest.class); ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance( @@ -474,7 +476,7 @@ public class TestClientRMService { public void 
handle(Event event) { } }); - ApplicationSubmissionContext asContext = + ApplicationSubmissionContext asContext = mock(ApplicationSubmissionContext.class); YarnConfiguration config = new YarnConfiguration(); RMAppAttemptImpl rmAppAttemptImpl = new RMAppAttemptImpl(attemptId, @@ -487,7 +489,6 @@ public class TestClientRMService { @Test public void testGetApplicationAttempts() throws YarnException, IOException { ClientRMService rmService = createRMService(); - RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); GetApplicationAttemptsRequest request = recordFactory .newRecordInstance(GetApplicationAttemptsRequest.class); ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance( @@ -509,7 +510,6 @@ public class TestClientRMService { @Test public void testGetContainerReport() throws YarnException, IOException { ClientRMService rmService = createRMService(); - RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); GetContainerReportRequest request = recordFactory .newRecordInstance(GetContainerReportRequest.class); ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance( @@ -530,7 +530,6 @@ public class TestClientRMService { @Test public void testGetContainers() throws YarnException, IOException { ClientRMService rmService = createRMService(); - RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); GetContainersRequest request = recordFactory .newRecordInstance(GetContainersRequest.class); ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance( @@ -595,12 +594,13 @@ public class TestClientRMService { @Test public void testApplicationTagsValidation() throws IOException { - YarnConfiguration conf = new YarnConfiguration(); + conf = new YarnConfiguration(); int maxtags = 3, appMaxTagLength = 5; conf.setInt(YarnConfiguration.RM_APPLICATION_MAX_TAGS, maxtags); conf.setInt(YarnConfiguration.RM_APPLICATION_MAX_TAG_LENGTH, appMaxTagLength); MockRM rm = new MockRM(conf); + resourceManager = rm; rm.init(conf); rm.start(); @@ -622,7 +622,6 @@ public class TestClientRMService { tags = Arrays.asList("tãg1", "tag2#"); validateApplicationTag(rmService, tags, "A tag can only have ASCII characters! 
Invalid tag - tãg1"); - rm.close(); } private void validateApplicationTag(ClientRMService rmService, @@ -640,9 +639,10 @@ public class TestClientRMService { @Test public void testForceKillApplication() throws Exception { - YarnConfiguration conf = new YarnConfiguration(); + conf = new YarnConfiguration(); conf.setBoolean(MockRM.ENABLE_WEBAPP, true); MockRM rm = new MockRM(conf); + resourceManager = rm; rm.init(conf); rm.start(); @@ -698,7 +698,7 @@ public class TestClientRMService { assertEquals("Incorrect number of apps in the RM", 2, rmService.getApplications(getRequest).getApplicationList().size()); } - + @Test (expected = ApplicationNotFoundException.class) public void testMoveAbsentApplication() throws YarnException { RMContext rmContext = mock(RMContext.class); @@ -1202,7 +1202,7 @@ public class TestClientRMService { new byte[]{123, 123, 123, 123})); Server.getCurCall().set(mockCall); } - + @Test (timeout = 30000) @SuppressWarnings ("rawtypes") public void testAppSubmit() throws Exception { @@ -1350,7 +1350,7 @@ public class TestClientRMService { ApplicationId[] appIds = {getApplicationId(101), getApplicationId(102), getApplicationId(103)}; List tags = Arrays.asList("Tag1", "Tag2", "Tag3"); - + long[] submitTimeMillis = new long[3]; // Submit applications for (int i = 0; i < appIds.length; i++) { @@ -1377,23 +1377,23 @@ public class TestClientRMService { request.setLimit(1L); assertEquals("Failed to limit applications", 1, rmService.getApplications(request).getApplicationList().size()); - + // Check start range request = GetApplicationsRequest.newInstance(); request.setStartRange(submitTimeMillis[0] + 1, System.currentTimeMillis()); - + // 2 applications are submitted after first timeMills - assertEquals("Incorrect number of matching start range", + assertEquals("Incorrect number of matching start range", 2, rmService.getApplications(request).getApplicationList().size()); - + // 1 application is submitted after the second timeMills request.setStartRange(submitTimeMillis[1] + 1, System.currentTimeMillis()); - assertEquals("Incorrect number of matching start range", + assertEquals("Incorrect number of matching start range", 1, rmService.getApplications(request).getApplicationList().size()); - + // no application is submitted after the third timeMills request.setStartRange(submitTimeMillis[2] + 1, System.currentTimeMillis()); - assertEquals("Incorrect number of matching start range", + assertEquals("Incorrect number of matching start range", 0, rmService.getApplications(request).getApplicationList().size()); // Check queue @@ -1465,7 +1465,7 @@ public class TestClientRMService { assertEquals("Incorrect number of applications for the scope", 3, rmService.getApplications(request).getApplicationList().size()); } - + @Test(timeout=4000) public void testConcurrentAppSubmit() throws IOException, InterruptedException, BrokenBarrierException, @@ -1484,7 +1484,7 @@ public class TestClientRMService { appId1, null, null); final SubmitApplicationRequest submitRequest2 = mockSubmitAppRequest( appId2, null, null); - + final CyclicBarrier startBarrier = new CyclicBarrier(2); final CyclicBarrier endBarrier = new CyclicBarrier(2); @@ -1526,7 +1526,7 @@ public class TestClientRMService { } }; t.start(); - + // submit another app, so go through while the first app is blocked startBarrier.await(); rmService.submitApplication(submitRequest2); @@ -1612,21 +1612,21 @@ public class TestClientRMService { private ConcurrentHashMap getRMApps( RMContext rmContext, YarnScheduler yarnScheduler) { - ConcurrentHashMap 
apps = - new ConcurrentHashMap(); + ConcurrentHashMap apps = + new ConcurrentHashMap(); ApplicationId applicationId1 = getApplicationId(1); ApplicationId applicationId2 = getApplicationId(2); ApplicationId applicationId3 = getApplicationId(3); YarnConfiguration config = new YarnConfiguration(); apps.put(applicationId1, getRMApp(rmContext, yarnScheduler, applicationId1, - config, "testqueue", 10, 3,null,null)); + config, "testqueue", 10, 3, null, null)); apps.put(applicationId2, getRMApp(rmContext, yarnScheduler, applicationId2, - config, "a", 20, 2,null,"")); + config, "a", 20, 2, null, "")); apps.put(applicationId3, getRMApp(rmContext, yarnScheduler, applicationId3, - config, "testqueue", 40, 5,"high-mem","high-mem")); + config, "testqueue", 40, 5, "high-mem", "high-mem")); return apps; } - + private List getSchedulerApps( Map apps) { List schedApps = new ArrayList(); @@ -1639,7 +1639,7 @@ public class TestClientRMService { private static ApplicationId getApplicationId(int id) { return ApplicationId.newInstance(123456, id); } - + private static ApplicationAttemptId getApplicationAttemptId(int id) { return ApplicationAttemptId.newInstance(getApplicationId(id), 1); } @@ -1664,7 +1664,7 @@ public class TestClientRMService { String clientUserName, boolean allowAccess) { ApplicationReport report = super.createAndGetApplicationReport( clientUserName, allowAccess); - ApplicationResourceUsageReport usageReport = + ApplicationResourceUsageReport usageReport = report.getApplicationResourceUsageReport(); usageReport.setMemorySeconds(memorySeconds); usageReport.setVcoreSeconds(vcoreSeconds); @@ -1684,8 +1684,7 @@ public class TestClientRMService { RMContainerImpl containerimpl = spy(new RMContainerImpl(container, SchedulerRequestKey.extractFrom(container), attemptId, null, "", rmContext)); - Map attempts = - new HashMap(); + Map attempts = new HashMap<>(); attempts.put(attemptId, rmAppAttemptImpl); when(app.getCurrentAppAttempt()).thenReturn(rmAppAttemptImpl); when(app.getAppAttempts()).thenReturn(attempts); @@ -1747,6 +1746,7 @@ public class TestClientRMService { ResourceScheduler.class); conf.setBoolean(YarnConfiguration.RM_RESERVATION_SYSTEM_ENABLE, true); MockRM rm = new MockRM(conf); + resourceManager = rm; rm.start(); try { rm.registerNode("127.0.0.1:1", 102400, 100); @@ -1787,8 +1787,8 @@ public class TestClientRMService { @Test public void testCreateReservation() { - ResourceManager rm = setupResourceManager(); - ClientRMService clientService = rm.getClientRMService(); + resourceManager = setupResourceManager(); + ClientRMService clientService = resourceManager.getClientRMService(); Clock clock = new UTCClock(); long arrival = clock.getTime(); long duration = 60000; @@ -1818,14 +1818,12 @@ public class TestClientRMService { } catch (Exception e) { Assert.assertTrue(e instanceof YarnException); } - - rm.stop(); } @Test public void testUpdateReservation() { - ResourceManager rm = setupResourceManager(); - ClientRMService clientService = rm.getClientRMService(); + resourceManager = setupResourceManager(); + ClientRMService clientService = resourceManager.getClientRMService(); Clock clock = new UTCClock(); long arrival = clock.getTime(); long duration = 60000; @@ -1854,14 +1852,12 @@ public class TestClientRMService { } Assert.assertNotNull(uResponse); System.out.println("Update reservation response: " + uResponse); - - rm.stop(); } @Test public void testListReservationsByReservationId() { - ResourceManager rm = setupResourceManager(); - ClientRMService clientService = rm.getClientRMService(); + 
resourceManager = setupResourceManager(); + ClientRMService clientService = resourceManager.getClientRMService(); Clock clock = new UTCClock(); long arrival = clock.getTime(); long duration = 60000; @@ -1885,14 +1881,12 @@ public class TestClientRMService { .getReservationId().getId(), reservationID.getId()); Assert.assertEquals(response.getReservationAllocationState().get(0) .getResourceAllocationRequests().size(), 0); - - rm.stop(); } @Test public void testListReservationsByTimeInterval() { - ResourceManager rm = setupResourceManager(); - ClientRMService clientService = rm.getClientRMService(); + resourceManager = setupResourceManager(); + ClientRMService clientService = resourceManager.getClientRMService(); Clock clock = new UTCClock(); long arrival = clock.getTime(); long duration = 60000; @@ -1944,14 +1938,12 @@ public class TestClientRMService { reservationRequests.getInterpreter().toString()); Assert.assertTrue(reservationRequests.getReservationResources().get(0) .getDuration() == duration); - - rm.stop(); } @Test public void testListReservationsByInvalidTimeInterval() { - ResourceManager rm = setupResourceManager(); - ClientRMService clientService = rm.getClientRMService(); + resourceManager = setupResourceManager(); + ClientRMService clientService = resourceManager.getClientRMService(); Clock clock = new UTCClock(); long arrival = clock.getTime(); long duration = 60000; @@ -1988,14 +1980,12 @@ public class TestClientRMService { Assert.assertEquals(1, response.getReservationAllocationState().size()); Assert.assertEquals(response.getReservationAllocationState().get(0) .getReservationId().getId(), sRequest.getReservationId().getId()); - - rm.stop(); } @Test public void testListReservationsByTimeIntervalContainingNoReservations() { - ResourceManager rm = setupResourceManager(); - ClientRMService clientService = rm.getClientRMService(); + resourceManager = setupResourceManager(); + ClientRMService clientService = resourceManager.getClientRMService(); Clock clock = new UTCClock(); long arrival = clock.getTime(); long duration = 60000; @@ -2070,14 +2060,12 @@ public class TestClientRMService { // Ensure all reservations are filtered out. Assert.assertNotNull(response); assertThat(response.getReservationAllocationState()).isEmpty(); - - rm.stop(); } @Test public void testReservationDelete() { - ResourceManager rm = setupResourceManager(); - ClientRMService clientService = rm.getClientRMService(); + resourceManager = setupResourceManager(); + ClientRMService clientService = resourceManager.getClientRMService(); Clock clock = new UTCClock(); long arrival = clock.getTime(); long duration = 60000; @@ -2111,8 +2099,6 @@ public class TestClientRMService { } Assert.assertNotNull(response); Assert.assertEquals(0, response.getReservationAllocationState().size()); - - rm.stop(); } @Test @@ -2125,6 +2111,7 @@ public class TestClientRMService { .getRMDelegationTokenSecretManager()); }; }; + resourceManager = rm; rm.start(); NodeLabel labelX = NodeLabel.newInstance("x", false); NodeLabel labelY = NodeLabel.newInstance("y"); @@ -2139,12 +2126,12 @@ public class TestClientRMService { labelsMgr.replaceLabelsOnNode(map); // Create a client. 
- Configuration conf = new Configuration(); - YarnRPC rpc = YarnRPC.create(conf); + conf = new Configuration(); + rpc = YarnRPC.create(conf); InetSocketAddress rmAddress = rm.getClientRMService().getBindAddress(); LOG.info("Connecting to ResourceManager at " + rmAddress); - ApplicationClientProtocol client = (ApplicationClientProtocol) rpc - .getProxy(ApplicationClientProtocol.class, rmAddress, conf); + client = (ApplicationClientProtocol) rpc.getProxy( + ApplicationClientProtocol.class, rmAddress, conf); // Get node labels collection GetClusterNodeLabelsResponse response = client @@ -2165,9 +2152,6 @@ public class TestClientRMService { // Below label "x" is not present in the response as exclusivity is true Assert.assertFalse(nodeToLabels.get(node1).containsAll( Arrays.asList(NodeLabel.newInstance("x")))); - - rpc.stopProxy(client, conf); - rm.stop(); } @Test @@ -2180,6 +2164,7 @@ public class TestClientRMService { .getRMDelegationTokenSecretManager()); }; }; + resourceManager = rm; rm.start(); NodeLabel labelX = NodeLabel.newInstance("x", false); @@ -2202,12 +2187,12 @@ public class TestClientRMService { labelsMgr.replaceLabelsOnNode(map); // Create a client. - Configuration conf = new Configuration(); - YarnRPC rpc = YarnRPC.create(conf); + conf = new Configuration(); + rpc = YarnRPC.create(conf); InetSocketAddress rmAddress = rm.getClientRMService().getBindAddress(); LOG.info("Connecting to ResourceManager at " + rmAddress); - ApplicationClientProtocol client = (ApplicationClientProtocol) rpc - .getProxy(ApplicationClientProtocol.class, rmAddress, conf); + client = (ApplicationClientProtocol) rpc.getProxy( + ApplicationClientProtocol.class, rmAddress, conf); // Get node labels collection GetClusterNodeLabelsResponse response = client @@ -2241,9 +2226,6 @@ public class TestClientRMService { Assert.assertTrue(labelsToNodes.get(labelZ.getName()).containsAll( Arrays.asList(node1B, node3B))); assertThat(labelsToNodes.get(labelY.getName())).isNull(); - - rpc.stopProxy(client, conf); - rm.close(); } @Test(timeout = 120000) @@ -2256,6 +2238,7 @@ public class TestClientRMService { this.getRMContext().getRMDelegationTokenSecretManager()); } }; + resourceManager = rm; rm.start(); NodeAttributesManager mgr = rm.getRMContext().getNodeAttributesManager(); @@ -2275,12 +2258,12 @@ public class TestClientRMService { nodes.put(host2.getHost(), ImmutableSet.of(docker)); mgr.addNodeAttributes(nodes); // Create a client. 
- Configuration conf = new Configuration(); - YarnRPC rpc = YarnRPC.create(conf); + conf = new Configuration(); + rpc = YarnRPC.create(conf); InetSocketAddress rmAddress = rm.getClientRMService().getBindAddress(); LOG.info("Connecting to ResourceManager at " + rmAddress); - ApplicationClientProtocol client = (ApplicationClientProtocol) rpc - .getProxy(ApplicationClientProtocol.class, rmAddress, conf); + client = (ApplicationClientProtocol) rpc.getProxy( + ApplicationClientProtocol.class, rmAddress, conf); GetClusterNodeAttributesRequest request = GetClusterNodeAttributesRequest.newInstance(); @@ -2292,8 +2275,6 @@ public class TestClientRMService { Assert.assertTrue(attributes.contains(NodeAttributeInfo.newInstance(os))); Assert .assertTrue(attributes.contains(NodeAttributeInfo.newInstance(docker))); - rpc.stopProxy(client, conf); - rm.close(); } @Test(timeout = 120000) @@ -2306,6 +2287,7 @@ public class TestClientRMService { this.getRMContext().getRMDelegationTokenSecretManager()); } }; + resourceManager = rm; rm.start(); NodeAttributesManager mgr = rm.getRMContext().getNodeAttributesManager(); @@ -2328,12 +2310,12 @@ public class TestClientRMService { nodes.put(node2, ImmutableSet.of(docker, dist)); mgr.addNodeAttributes(nodes); // Create a client. - Configuration conf = new Configuration(); - YarnRPC rpc = YarnRPC.create(conf); + conf = new Configuration(); + rpc = YarnRPC.create(conf); InetSocketAddress rmAddress = rm.getClientRMService().getBindAddress(); LOG.info("Connecting to ResourceManager at " + rmAddress); - ApplicationClientProtocol client = (ApplicationClientProtocol) rpc - .getProxy(ApplicationClientProtocol.class, rmAddress, conf); + client = (ApplicationClientProtocol) rpc.getProxy( + ApplicationClientProtocol.class, rmAddress, conf); GetAttributesToNodesRequest request = GetAttributesToNodesRequest.newInstance(); @@ -2374,8 +2356,6 @@ public class TestClientRMService { attrs3.get(os.getAttributeKey()))); Assert.assertTrue(findHostnameAndValInMapping(node2, "docker0", attrs3.get(docker.getAttributeKey()))); - rpc.stopProxy(client, conf); - rm.close(); } private boolean findHostnameAndValInMapping(String hostname, String attrVal, @@ -2398,6 +2378,7 @@ public class TestClientRMService { this.getRMContext().getRMDelegationTokenSecretManager()); } }; + resourceManager = rm; rm.start(); NodeAttributesManager mgr = rm.getRMContext().getNodeAttributesManager(); @@ -2420,12 +2401,12 @@ public class TestClientRMService { nodes.put(node2, ImmutableSet.of(docker, dist)); mgr.addNodeAttributes(nodes); // Create a client. - Configuration conf = new Configuration(); - YarnRPC rpc = YarnRPC.create(conf); + conf = new Configuration(); + rpc = YarnRPC.create(conf); InetSocketAddress rmAddress = rm.getClientRMService().getBindAddress(); LOG.info("Connecting to ResourceManager at " + rmAddress); - ApplicationClientProtocol client = (ApplicationClientProtocol) rpc - .getProxy(ApplicationClientProtocol.class, rmAddress, conf); + client = (ApplicationClientProtocol) rpc.getProxy( + ApplicationClientProtocol.class, rmAddress, conf); // Specify null for hostnames. 
GetNodesToAttributesRequest request1 = @@ -2468,8 +2449,6 @@ public class TestClientRMService { client.getNodesToAttributes(request4); hostToAttrs = response4.getNodeToAttributes(); Assert.assertEquals(0, hostToAttrs.size()); - rpc.stopProxy(client, conf); - rm.close(); } @Test(timeout = 120000) @@ -2477,13 +2456,14 @@ public class TestClientRMService { throws Exception { int maxPriority = 10; int appPriority = 5; - YarnConfiguration conf = new YarnConfiguration(); + conf = new YarnConfiguration(); Assume.assumeFalse("FairScheduler does not support Application Priorities", conf.get(YarnConfiguration.RM_SCHEDULER) .equals(FairScheduler.class.getName())); conf.setInt(YarnConfiguration.MAX_CLUSTER_LEVEL_APPLICATION_PRIORITY, maxPriority); MockRM rm = new MockRM(conf); + resourceManager = rm; rm.init(conf); rm.start(); MockRMAppSubmissionData data = MockRMAppSubmissionData.Builder @@ -2495,20 +2475,20 @@ public class TestClientRMService { testApplicationPriorityUpdation(rmService, app1, appPriority, appPriority); rm.killApp(app1.getApplicationId()); rm.waitForState(app1.getApplicationId(), RMAppState.KILLED); - rm.stop(); } @Test(timeout = 120000) public void testUpdateApplicationPriorityRequest() throws Exception { int maxPriority = 10; int appPriority = 5; - YarnConfiguration conf = new YarnConfiguration(); + conf = new YarnConfiguration(); Assume.assumeFalse("FairScheduler does not support Application Priorities", conf.get(YarnConfiguration.RM_SCHEDULER) .equals(FairScheduler.class.getName())); conf.setInt(YarnConfiguration.MAX_CLUSTER_LEVEL_APPLICATION_PRIORITY, maxPriority); MockRM rm = new MockRM(conf); + resourceManager = rm; rm.init(conf); rm.start(); rm.registerNode("host1:1234", 1024); @@ -2552,8 +2532,6 @@ public class TestClientRMService { Assert.assertEquals("Incorrect priority has been set to application", appPriority, rmService.updateApplicationPriority(updateRequest) .getApplicationPriority().getPriority()); - - rm.stop(); } private void testApplicationPriorityUpdation(ClientRMService rmService, @@ -2573,55 +2551,53 @@ public class TestClientRMService { updateApplicationPriority.getApplicationPriority().getPriority()); } - private void createExcludeFile(String filename) throws IOException { - File file = new File(filename); - if (file.exists()) { - file.delete(); + private File createExcludeFile(File testDir) throws IOException { + File excludeFile = new File(testDir, "excludeFile"); + try (FileOutputStream out = new FileOutputStream(excludeFile)) { + out.write("decommisssionedHost".getBytes(UTF_8)); } - - FileOutputStream out = new FileOutputStream(file); - out.write("decommisssionedHost".getBytes()); - out.close(); + return excludeFile; } @Test public void testRMStartWithDecommissionedNode() throws Exception { - String excludeFile = "excludeFile"; - createExcludeFile(excludeFile); - YarnConfiguration conf = new YarnConfiguration(); - conf.set(YarnConfiguration.RM_NODES_EXCLUDE_FILE_PATH, - excludeFile); - MockRM rm = new MockRM(conf) { - protected ClientRMService createClientRMService() { - return new ClientRMService(this.rmContext, scheduler, - this.rmAppManager, this.applicationACLsManager, this.queueACLsManager, - this.getRMContext().getRMDelegationTokenSecretManager()); + File testDir = GenericTestUtils.getRandomizedTestDir(); + assertTrue("Failed to create test directory: " + testDir.getAbsolutePath(), testDir.mkdirs()); + try { + File excludeFile = createExcludeFile(testDir); + conf = new YarnConfiguration(); + conf.set(YarnConfiguration.RM_NODES_EXCLUDE_FILE_PATH, + 
excludeFile.getAbsolutePath()); + MockRM rm = new MockRM(conf) { + protected ClientRMService createClientRMService() { + return new ClientRMService(this.rmContext, scheduler, + this.rmAppManager, this.applicationACLsManager, this.queueACLsManager, + this.getRMContext().getRMDelegationTokenSecretManager()); + }; }; - }; - rm.start(); + resourceManager = rm; + rm.start(); - YarnRPC rpc = YarnRPC.create(conf); - InetSocketAddress rmAddress = rm.getClientRMService().getBindAddress(); - LOG.info("Connecting to ResourceManager at " + rmAddress); - ApplicationClientProtocol client = - (ApplicationClientProtocol) rpc - .getProxy(ApplicationClientProtocol.class, rmAddress, conf); + rpc = YarnRPC.create(conf); + InetSocketAddress rmAddress = rm.getClientRMService().getBindAddress(); + LOG.info("Connecting to ResourceManager at " + rmAddress); + client = (ApplicationClientProtocol) rpc.getProxy( + ApplicationClientProtocol.class, rmAddress, conf); - // Make call - GetClusterNodesRequest request = - GetClusterNodesRequest.newInstance(EnumSet.allOf(NodeState.class)); - List nodeReports = client.getClusterNodes(request).getNodeReports(); - Assert.assertEquals(1, nodeReports.size()); - - rm.stop(); - rpc.stopProxy(client, conf); - new File(excludeFile).delete(); + // Make call + GetClusterNodesRequest request = + GetClusterNodesRequest.newInstance(EnumSet.allOf(NodeState.class)); + List nodeReports = client.getClusterNodes(request).getNodeReports(); + assertEquals(1, nodeReports.size()); + } finally { + FileUtil.fullyDelete(testDir); + } } @Test public void testGetResourceTypesInfoWhenResourceProfileDisabled() throws Exception { - YarnConfiguration conf = new YarnConfiguration(); + conf = new YarnConfiguration(); MockRM rm = new MockRM(conf) { protected ClientRMService createClientRMService() { return new ClientRMService(this.rmContext, scheduler, @@ -2629,14 +2605,14 @@ public class TestClientRMService { this.getRMContext().getRMDelegationTokenSecretManager()); } }; + resourceManager = rm; rm.start(); - YarnRPC rpc = YarnRPC.create(conf); + rpc = YarnRPC.create(conf); InetSocketAddress rmAddress = rm.getClientRMService().getBindAddress(); LOG.info("Connecting to ResourceManager at " + rmAddress); - ApplicationClientProtocol client = - (ApplicationClientProtocol) rpc - .getProxy(ApplicationClientProtocol.class, rmAddress, conf); + client = (ApplicationClientProtocol) rpc.getProxy( + ApplicationClientProtocol.class, rmAddress, conf); // Make call GetAllResourceTypeInfoRequest request = @@ -2656,9 +2632,6 @@ public class TestClientRMService { response.getResourceTypeInfo().get(1).getName()); Assert.assertEquals(ResourceInformation.VCORES.getUnits(), response.getResourceTypeInfo().get(1).getDefaultUnit()); - - rm.stop(); - rpc.stopProxy(client, conf); } @Test @@ -2752,11 +2725,12 @@ public class TestClientRMService { this.getRMContext().getRMDelegationTokenSecretManager()); }; }; + resourceManager = rm; rm.start(); - Resource resource = BuilderUtils.newResource(1024, 1); + Resource resource = Resources.createResource(976562); resource.setResourceInformation("memory-mb", - ResourceInformation.newInstance("memory-mb", "G", 1024)); + ResourceInformation.newInstance("memory-mb", "G", 976562)); resource.setResourceInformation("resource1", ResourceInformation.newInstance("resource1", "T", 1)); resource.setResourceInformation("resource2", @@ -2766,13 +2740,12 @@ public class TestClientRMService { node.nodeHeartbeat(true); // Create a client. 
- Configuration conf = new Configuration(); - YarnRPC rpc = YarnRPC.create(conf); + conf = new Configuration(); + rpc = YarnRPC.create(conf); InetSocketAddress rmAddress = rm.getClientRMService().getBindAddress(); LOG.info("Connecting to ResourceManager at " + rmAddress); - ApplicationClientProtocol client = - (ApplicationClientProtocol) rpc - .getProxy(ApplicationClientProtocol.class, rmAddress, conf); + client = (ApplicationClientProtocol) rpc.getProxy( + ApplicationClientProtocol.class, rmAddress, conf); // Make call GetClusterNodesRequest request = @@ -2798,21 +2771,74 @@ public class TestClientRMService { Assert.assertEquals(1000000000, nodeReports.get(0).getCapability(). getResourceInformation("resource2").getValue()); - //Resource 'memory-mb' has been passed as 1024G while registering NM - //1024G should be converted to 976562Mi + //Resource 'memory-mb' has been passed as 976562G while registering NM + //976562G should be converted to 976562Mi Assert.assertEquals("Mi", nodeReports.get(0).getCapability(). getResourceInformation("memory-mb").getUnits()); Assert.assertEquals(976562, nodeReports.get(0).getCapability(). getResourceInformation("memory-mb").getValue()); + } - rpc.stopProxy(client, conf); - rm.close(); + @Test + public void testGetClusterMetrics() throws Exception { + MockRM rm = new MockRM() { + protected ClientRMService createClientRMService() { + return new ClientRMService(this.rmContext, scheduler, + this.rmAppManager, this.applicationACLsManager, this.queueACLsManager, + this.getRMContext().getRMDelegationTokenSecretManager()); + }; + }; + resourceManager = rm; + rm.start(); + + ClusterMetrics clusterMetrics = ClusterMetrics.getMetrics(); + clusterMetrics.incrDecommissioningNMs(); + repeat(2, clusterMetrics::incrDecommisionedNMs); + repeat(3, clusterMetrics::incrNumActiveNodes); + repeat(4, clusterMetrics::incrNumLostNMs); + repeat(5, clusterMetrics::incrNumUnhealthyNMs); + repeat(6, clusterMetrics::incrNumRebootedNMs); + repeat(7, clusterMetrics::incrNumShutdownNMs); + + // Create a client. 
+ conf = new Configuration(); + rpc = YarnRPC.create(conf); + InetSocketAddress rmAddress = rm.getClientRMService().getBindAddress(); + LOG.info("Connecting to ResourceManager at " + rmAddress); + client = (ApplicationClientProtocol) rpc.getProxy( + ApplicationClientProtocol.class, rmAddress, conf); + + YarnClusterMetrics ymetrics = client.getClusterMetrics( + GetClusterMetricsRequest.newInstance()).getClusterMetrics(); + + Assert.assertEquals(0, ymetrics.getNumNodeManagers()); + Assert.assertEquals(1, ymetrics.getNumDecommissioningNodeManagers()); + Assert.assertEquals(2, ymetrics.getNumDecommissionedNodeManagers()); + Assert.assertEquals(3, ymetrics.getNumActiveNodeManagers()); + Assert.assertEquals(4, ymetrics.getNumLostNodeManagers()); + Assert.assertEquals(5, ymetrics.getNumUnhealthyNodeManagers()); + Assert.assertEquals(6, ymetrics.getNumRebootedNodeManagers()); + Assert.assertEquals(7, ymetrics.getNumShutdownNodeManagers()); } @After - public void tearDown(){ + public void tearDown() throws Exception { if (resourceTypesFile != null && resourceTypesFile.exists()) { resourceTypesFile.delete(); } + ClusterMetrics.destroy(); + DefaultMetricsSystem.shutdown(); + if (conf != null && client != null && rpc != null) { + rpc.stopProxy(client, conf); + } + if (resourceManager != null) { + resourceManager.close(); + } + } + + private static void repeat(int n, Runnable r) { + for (int i = 0; i < n; ++i) { + r.run(); + } } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java index 90ba8126328..928f5f50664 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java @@ -77,6 +77,7 @@ import org.apache.hadoop.yarn.server.security.ApplicationACLsManager; import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.ConverterUtils; import org.apache.hadoop.yarn.util.Records; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Before; import org.junit.BeforeClass; import org.junit.Test; @@ -570,9 +571,9 @@ public class TestClientRMTokens { private static ResourceScheduler createMockScheduler(Configuration conf) { ResourceScheduler mockSched = mock(ResourceScheduler.class); - doReturn(BuilderUtils.newResource(512, 0)).when(mockSched) + doReturn(Resources.createResource(512)).when(mockSched) .getMinimumResourceCapability(); - doReturn(BuilderUtils.newResource(5120, 0)).when(mockSched) + doReturn(Resources.createResource(5120)).when(mockSched) .getMaximumResourceCapability(); return mockSched; } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestContainerResourceUsage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestContainerResourceUsage.java index 9a01087ce3a..54fd6509647 100644 --- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestContainerResourceUsage.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestContainerResourceUsage.java @@ -348,7 +348,7 @@ public class TestContainerResourceUsage { // If keepRunningContainers is false, all live containers should now // be completed. Calculate the resource usage metrics for all of them. for (RMContainer c : rmContainers) { - waitforContainerCompletion(rm, nm, amContainerId, c); + MockRM.waitForContainerCompletion(rm, nm, amContainerId, c); AggregateAppResourceUsage ru = calculateContainerResourceMetrics(c); memorySeconds += ru.getMemorySeconds(); vcoreSeconds += ru.getVcoreSeconds(); @@ -400,7 +400,7 @@ public class TestContainerResourceUsage { // Calculate container usage metrics for second attempt. for (RMContainer c : rmContainers) { - waitforContainerCompletion(rm, nm, amContainerId, c); + MockRM.waitForContainerCompletion(rm, nm, amContainerId, c); AggregateAppResourceUsage ru = calculateContainerResourceMetrics(c); memorySeconds += ru.getMemorySeconds(); vcoreSeconds += ru.getVcoreSeconds(); @@ -417,20 +417,6 @@ public class TestContainerResourceUsage { return; } - private void waitforContainerCompletion(MockRM rm, MockNM nm, - ContainerId amContainerId, RMContainer container) throws Exception { - ContainerId containerId = container.getContainerId(); - if (null != rm.scheduler.getRMContainer(containerId)) { - if (containerId.equals(amContainerId)) { - rm.waitForState(nm, containerId, RMContainerState.COMPLETED); - } else { - rm.waitForState(nm, containerId, RMContainerState.KILLED); - } - } else { - rm.drainEvents(); - } - } - private AggregateAppResourceUsage calculateContainerResourceMetrics( RMContainer rmContainer) { Resource resource = rmContainer.getContainer().getResource(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMServerUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMServerUtils.java index 78fa90ded2f..95cc9833929 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMServerUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMServerUtils.java @@ -43,10 +43,12 @@ import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.api.records.ResourceRequest; import org.apache.hadoop.yarn.api.records.UpdateContainerError; import org.apache.hadoop.yarn.api.records.UpdateContainerRequest; +import org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState; import org.apache.hadoop.yarn.api.records.impl.pb.UpdateContainerRequestPBImpl; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.event.Dispatcher; import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager; +import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptState; import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer; import 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ContainerUpdates; @@ -409,6 +411,30 @@ public class TestRMServerUtils { Assert.assertEquals(15, RMServerUtils.getApplicableNodeCountForAM(rmContext, conf, reqs)); } + @Test + public void testConvertRmAppAttemptStateToYarnApplicationAttemptState() { + Assert.assertEquals( + YarnApplicationAttemptState.FAILED, + RMServerUtils.convertRmAppAttemptStateToYarnApplicationAttemptState( + RMAppAttemptState.FINAL_SAVING, + RMAppAttemptState.FAILED + ) + ); + Assert.assertEquals( + YarnApplicationAttemptState.SCHEDULED, + RMServerUtils.convertRmAppAttemptStateToYarnApplicationAttemptState( + RMAppAttemptState.FINAL_SAVING, + RMAppAttemptState.SCHEDULED + ) + ); + Assert.assertEquals( + YarnApplicationAttemptState.NEW, + RMServerUtils.convertRmAppAttemptStateToYarnApplicationAttemptState( + RMAppAttemptState.NEW, + null + ) + ); + } private ResourceRequest createResourceRequest(String resource, boolean relaxLocality, String nodeLabel) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java index 9feb54c7d40..e4f0b79e373 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java @@ -26,6 +26,7 @@ import org.apache.hadoop.security.Credentials; import org.apache.hadoop.security.token.Token; import org.apache.hadoop.security.token.delegation.web.DelegationTokenIdentifier; import org.apache.hadoop.util.Sets; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.nodelabels.NodeAttributeStore; import org.apache.hadoop.yarn.nodelabels.NodeLabelUtil; import org.apache.hadoop.yarn.server.api.ResourceTracker; @@ -55,13 +56,16 @@ import java.util.HashMap; import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.Objects; import java.util.Set; import java.util.HashSet; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Supplier; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.transform.Transformer; import javax.xml.transform.TransformerFactory; @@ -142,6 +146,7 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter; import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.server.utils.YarnServerBuilderUtils; import org.apache.hadoop.yarn.util.Records; +import org.apache.hadoop.yarn.util.resource.Resources; import org.apache.hadoop.yarn.util.YarnVersionInfo; import org.junit.After; import org.junit.Assert; @@ -609,7 +614,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest req = Records.newRecord( RegisterNodeManagerRequest.class); NodeId nodeId = 
NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); req.setResource(capability); req.setNodeId(nodeId); req.setHttpPort(1234); @@ -651,7 +656,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest registerReq = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); registerReq.setResource(capability); registerReq.setNodeId(nodeId); registerReq.setHttpPort(1234); @@ -700,7 +705,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest registerReq = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); registerReq.setResource(capability); registerReq.setNodeId(nodeId); registerReq.setHttpPort(1234); @@ -753,7 +758,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest req = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); req.setResource(capability); req.setNodeId(nodeId); req.setHttpPort(1234); @@ -804,7 +809,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest req = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); req.setResource(capability); req.setNodeId(nodeId); req.setHttpPort(1234); @@ -847,7 +852,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest registerReq = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); NodeAttribute nodeAttribute1 = NodeAttribute .newInstance(NodeAttribute.PREFIX_DISTRIBUTED, "Attr1", NodeAttributeType.STRING, "V1"); @@ -894,7 +899,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest req = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); NodeAttribute validNodeAttribute = NodeAttribute .newInstance(NodeAttribute.PREFIX_DISTRIBUTED, "Attr1", NodeAttributeType.STRING, "V1"); @@ -997,7 +1002,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest registerReq = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); registerReq.setResource(capability); registerReq.setNodeId(nodeId); registerReq.setHttpPort(1234); @@ -1069,7 +1074,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest registerReq = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId 
= NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); registerReq.setResource(capability); registerReq.setNodeId(nodeId); registerReq.setHttpPort(1234); @@ -1146,7 +1151,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest registerReq = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); registerReq.setResource(capability); registerReq.setNodeId(nodeId); registerReq.setHttpPort(1234); @@ -1280,7 +1285,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest registerReq = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); registerReq.setResource(capability); registerReq.setNodeId(nodeId); registerReq.setHttpPort(1234); @@ -1437,7 +1442,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest registerReq = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); registerReq.setResource(capability); registerReq.setNodeId(nodeId); registerReq.setHttpPort(1234); @@ -1490,7 +1495,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest req = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); req.setResource(capability); req.setNodeId(nodeId); req.setHttpPort(1234); @@ -1538,7 +1543,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest req = Records.newRecord( RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); req.setResource(capability); req.setNodeId(nodeId); req.setHttpPort(1234); @@ -1607,7 +1612,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { NodeId nodeId = BuilderUtils.newNodeId("host", 1234); req.setNodeId(nodeId); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); req.setResource(capability); RegisterNodeManagerResponse response1 = resourceTrackerService.registerNodeManager(req); @@ -2344,8 +2349,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { } //Test decommed/ing node that transitions to untracked,timer should remove - testNodeRemovalUtilDecomToUntracked(rmContext, conf, nm1, nm2, nm3, - maxThreadSleeptime, doGraceful); + testNodeRemovalUtilDecomToUntracked(rmContext, conf, nm1, nm2, nm3, doGraceful); rm.stop(); } @@ -2353,41 +2357,41 @@ public class TestResourceTrackerService extends NodeLabelTestBase { // max allowed length. 
private void testNodeRemovalUtilDecomToUntracked( RMContext rmContext, Configuration conf, - MockNM nm1, MockNM nm2, MockNM nm3, - long maxThreadSleeptime, boolean doGraceful) throws Exception { + MockNM nm1, MockNM nm2, MockNM nm3, boolean doGraceful + ) throws Exception { ClusterMetrics metrics = ClusterMetrics.getMetrics(); String ip = NetUtils.normalizeHostName("localhost"); - CountDownLatch latch = new CountDownLatch(1); writeToHostsFile("host1", ip, "host2"); writeToHostsFile(excludeHostFile, "host2"); refreshNodesOption(doGraceful, conf); nm1.nodeHeartbeat(true); //nm2.nodeHeartbeat(true); nm3.nodeHeartbeat(true); - latch.await(maxThreadSleeptime, TimeUnit.MILLISECONDS); - RMNode rmNode = doGraceful ? rmContext.getRMNodes().get(nm2.getNodeId()) : - rmContext.getInactiveRMNodes().get(nm2.getNodeId()); - Assert.assertNotEquals("Timer for this node was not canceled!", - rmNode, null); - Assert.assertTrue("Node should be DECOMMISSIONED or DECOMMISSIONING", - (rmNode.getState() == NodeState.DECOMMISSIONED) || - (rmNode.getState() == NodeState.DECOMMISSIONING)); + Supplier nodeSupplier = doGraceful + ? () -> rmContext.getRMNodes().get(nm2.getNodeId()) + : () -> rmContext.getInactiveRMNodes().get(nm2.getNodeId()); + pollingAssert(() -> nodeSupplier.get() != null, + "Timer for this node was not canceled!"); + final List expectedStates = Arrays.asList( + NodeState.DECOMMISSIONED, + NodeState.DECOMMISSIONING + ); + pollingAssert(() -> expectedStates.contains(nodeSupplier.get().getState()), + "Node should be in one of these states: " + expectedStates); + writeToHostsFile("host1", ip); writeToHostsFile(excludeHostFile, ""); refreshNodesOption(doGraceful, conf); nm2.nodeHeartbeat(true); - latch.await(maxThreadSleeptime, TimeUnit.MILLISECONDS); - rmNode = doGraceful ? rmContext.getRMNodes().get(nm2.getNodeId()) : - rmContext.getInactiveRMNodes().get(nm2.getNodeId()); - Assert.assertEquals("Node should have been forgotten!", - rmNode, null); - Assert.assertEquals("Shutdown nodes should be 0 now", - metrics.getNumDecommisionedNMs(), 0); - Assert.assertEquals("Shutdown nodes should be 0 now", - metrics.getNumShutdownNMs(), 0); - Assert.assertEquals("Active nodes should be 2", - metrics.getNumActiveNMs(), 2); + pollingAssert(() -> nodeSupplier.get() == null, + "Node should have been forgotten!"); + pollingAssert(metrics::getNumDecommisionedNMs, 0, + "metrics#getNumDecommisionedNMs should be 0 now"); + pollingAssert(metrics::getNumShutdownNMs, 0, + "metrics#getNumShutdownNMs should be 0 now"); + pollingAssert(metrics::getNumActiveNMs, 2, + "metrics#getNumActiveNMs should be 2 now"); } private void testNodeRemovalUtilLost(boolean doGraceful) throws Exception { @@ -2662,7 +2666,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { private void writeToHostsXmlFile( File file, Pair... 
hostsAndTimeouts) throws Exception { ensureFileExists(file); - DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbFactory = XMLUtils.newSecureDocumentBuilderFactory(); Document doc = dbFactory.newDocumentBuilder().newDocument(); Element hosts = doc.createElement("hosts"); doc.appendChild(hosts); @@ -2680,7 +2684,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { ); } } - TransformerFactory transformerFactory = TransformerFactory.newInstance(); + TransformerFactory transformerFactory = XMLUtils.newSecureTransformerFactory(); Transformer transformer = transformerFactory.newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.transform(new DOMSource(doc), new StreamResult(file)); @@ -2774,7 +2778,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { RegisterNodeManagerRequest req = Records.newRecord( RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host2", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); NodeStatus mockNodeStatus = createMockNodeStatus(); @@ -2958,6 +2962,18 @@ public class TestResourceTrackerService extends NodeLabelTestBase { mockRM.stop(); } + private void pollingAssert(Supplier supplier, String message) + throws InterruptedException, TimeoutException { + GenericTestUtils.waitFor(supplier, + 100, 10_000, message); + } + + private void pollingAssert(Supplier supplier, T expected, String message) + throws InterruptedException, TimeoutException { + GenericTestUtils.waitFor(() -> Objects.equals(supplier.get(), expected), + 100, 10_000, message); + } + /** * A no-op implementation of NodeAttributeStore for testing */ @@ -3043,7 +3059,7 @@ public class TestResourceTrackerService extends NodeLabelTestBase { recordFactory.newRecordInstance(RegisterNodeManagerRequest.class); request.setNodeId(nodeId); request.setHttpPort(1234); - request.setResource(BuilderUtils.newResource(1024, 1)); + request.setResource(Resources.createResource(1024)); resourceTrackerService.registerNodeManager(request); org.apache.hadoop.yarn.server.api.records.NodeStatus nodeStatus = diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/federation/TestFederationRMStateStoreService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/federation/TestFederationRMStateStoreService.java index e8ebdd5bedd..b8e2ce6ef32 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/federation/TestFederationRMStateStoreService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/federation/TestFederationRMStateStoreService.java @@ -20,22 +20,46 @@ package org.apache.hadoop.yarn.server.resourcemanager.federation; import java.io.IOException; import java.io.StringReader; import java.net.UnknownHostException; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; import javax.xml.bind.JAXBException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.ha.HAServiceProtocol; import org.apache.hadoop.test.GenericTestUtils; +import org.apache.hadoop.test.LambdaTestUtils; +import 
org.apache.hadoop.util.Time; +import org.apache.hadoop.yarn.MockApps; +import org.apache.hadoop.yarn.api.records.ApplicationId; +import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext; +import org.apache.hadoop.yarn.api.records.Priority; +import org.apache.hadoop.yarn.api.records.impl.pb.ApplicationSubmissionContextPBImpl; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.server.federation.store.FederationStateStore; +import org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreException; import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoRequest; import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoResponse; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterRequest; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState; +import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster; +import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterRequest; +import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterRequest; +import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterResponse; +import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationsHomeSubClusterRequest; +import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationsHomeSubClusterResponse; +import org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService; import org.apache.hadoop.yarn.server.resourcemanager.MockRM; +import org.apache.hadoop.yarn.server.resourcemanager.RMAppManager; +import org.apache.hadoop.yarn.server.resourcemanager.RMContext; +import org.apache.hadoop.yarn.server.resourcemanager.recovery.MemoryRMStateStore; +import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp; +import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.YarnScheduler; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo; import org.junit.After; import org.junit.Assert; @@ -46,6 +70,8 @@ import com.sun.jersey.api.json.JSONConfiguration; import com.sun.jersey.api.json.JSONJAXBContext; import com.sun.jersey.api.json.JSONUnmarshaller; +import static org.mockito.Mockito.mock; + /** * Unit tests for FederationStateStoreService. 
*/ @@ -207,4 +233,253 @@ public class TestFederationRMStateStoreService { "Started federation membership heartbeat with interval: 300 and initial delay: 10")); rm.stop(); } + + @Test + public void testCleanUpApplication() throws Exception { + + // set yarn configuration + conf.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true); + conf.setInt(YarnConfiguration.FEDERATION_STATESTORE_HEARTBEAT_INITIAL_DELAY, 10); + conf.set(YarnConfiguration.RM_CLUSTER_ID, subClusterId.getId()); + + // set up MockRM + final MockRM rm = new MockRM(conf); + rm.init(conf); + stateStore = rm.getFederationStateStoreService().getStateStoreClient(); + rm.start(); + + // init subCluster Heartbeat, + // and check that the subCluster is in a running state + FederationStateStoreService stateStoreService = + rm.getFederationStateStoreService(); + FederationStateStoreHeartbeat storeHeartbeat = + stateStoreService.getStateStoreHeartbeatThread(); + storeHeartbeat.run(); + checkSubClusterInfo(SubClusterState.SC_RUNNING); + + // generate an application and join the [SC-1] cluster + ApplicationId appId = ApplicationId.newInstance(Time.now(), 1); + addApplication2StateStore(appId, stateStore); + + // make sure the app can be queried in the stateStore + GetApplicationHomeSubClusterRequest appRequest = + GetApplicationHomeSubClusterRequest.newInstance(appId); + GetApplicationHomeSubClusterResponse response = + stateStore.getApplicationHomeSubCluster(appRequest); + Assert.assertNotNull(response); + ApplicationHomeSubCluster appHomeSubCluster = response.getApplicationHomeSubCluster(); + Assert.assertNotNull(appHomeSubCluster); + Assert.assertNotNull(appHomeSubCluster.getApplicationId()); + Assert.assertEquals(appId, appHomeSubCluster.getApplicationId()); + + // clean up the app. + boolean cleanUpResult = + stateStoreService.cleanUpFinishApplicationsWithRetries(appId, true); + Assert.assertTrue(cleanUpResult); + + // after clean, the app can no longer be queried from the stateStore. + LambdaTestUtils.intercept(FederationStateStoreException.class, + "Application " + appId + " does not exist", + () -> stateStore.getApplicationHomeSubCluster(appRequest)); + + } + + @Test + public void testCleanUpApplicationWhenRMStart() throws Exception { + + // We design such a test case. + // Step1. We add app01, app02, app03 to the stateStore, + // But these apps are not in RM's RMContext, they are finished apps + // Step2. We simulate RM startup, there is only app04 in RMContext. + // Step3. We wait for 5 seconds, the automatic cleanup thread should clean up finished apps. + + // set yarn configuration. + conf.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true); + conf.setInt(YarnConfiguration.FEDERATION_STATESTORE_HEARTBEAT_INITIAL_DELAY, 10); + conf.set(YarnConfiguration.RM_CLUSTER_ID, subClusterId.getId()); + conf.setBoolean(YarnConfiguration.RECOVERY_ENABLED, true); + + // set up MockRM. + MockRM rm = new MockRM(conf); + rm.init(conf); + stateStore = rm.getFederationStateStoreService().getStateStoreClient(); + + // generate an [app01] and join the [SC-1] cluster. + List appIds = new ArrayList<>(); + ApplicationId appId01 = ApplicationId.newInstance(Time.now(), 1); + addApplication2StateStore(appId01, stateStore); + appIds.add(appId01); + + // generate an [app02] and join the [SC-1] cluster. + ApplicationId appId02 = ApplicationId.newInstance(Time.now(), 2); + addApplication2StateStore(appId02, stateStore); + appIds.add(appId02); + + // generate an [app03] and join the [SC-1] cluster. 
+ ApplicationId appId03 = ApplicationId.newInstance(Time.now(), 3); + addApplication2StateStore(appId03, stateStore); + appIds.add(appId03); + + // make sure the apps can be queried in the stateStore. + GetApplicationsHomeSubClusterRequest allRequest = + GetApplicationsHomeSubClusterRequest.newInstance(subClusterId); + GetApplicationsHomeSubClusterResponse allResponse = + stateStore.getApplicationsHomeSubCluster(allRequest); + Assert.assertNotNull(allResponse); + List appHomeSCLists = allResponse.getAppsHomeSubClusters(); + Assert.assertNotNull(appHomeSCLists); + Assert.assertEquals(3, appHomeSCLists.size()); + + // app04 exists in both RM memory and stateStore. + ApplicationId appId04 = ApplicationId.newInstance(Time.now(), 4); + addApplication2StateStore(appId04, stateStore); + addApplication2RMAppManager(rm, appId04); + + // start rm. + rm.start(); + + // wait 5s, wait for the thread to finish cleaning up. + GenericTestUtils.waitFor(() -> { + int appsSize = 0; + try { + List subClusters = + getApplicationsFromStateStore(); + Assert.assertNotNull(subClusters); + appsSize = subClusters.size(); + } catch (YarnException e) { + e.printStackTrace(); + } + return (appsSize == 1); + }, 100, 1000 * 5); + + // check the app to make sure the apps(app01,app02,app03) doesn't exist. + for (ApplicationId appId : appIds) { + GetApplicationHomeSubClusterRequest appRequest = + GetApplicationHomeSubClusterRequest.newInstance(appId); + LambdaTestUtils.intercept(FederationStateStoreException.class, + "Application " + appId + " does not exist", + () -> stateStore.getApplicationHomeSubCluster(appRequest)); + } + + if (rm != null) { + rm.stop(); + rm = null; + } + } + + @Test + public void testCleanUpApplicationWhenRMCompleteOneApp() throws Exception { + + // We design such a test case. + // Step1. We start RM,Set the RM memory to keep a maximum of 1 completed app. + // Step2. Register app[01-03] to RM memory & stateStore. + // Step3. We clean up app01, app02, app03, at this time, + // app01, app02 should be cleaned up from statestore, app03 should remain in statestore. + + // set yarn configuration. + conf.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true); + conf.setInt(YarnConfiguration.FEDERATION_STATESTORE_HEARTBEAT_INITIAL_DELAY, 10); + conf.set(YarnConfiguration.RM_CLUSTER_ID, subClusterId.getId()); + conf.setBoolean(YarnConfiguration.RECOVERY_ENABLED, true); + conf.setInt(YarnConfiguration.RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS, 1); + conf.set(YarnConfiguration.RM_STORE, MemoryRMStateStore.class.getName()); + + // set up MockRM. + MockRM rm = new MockRM(conf); + rm.init(conf); + stateStore = rm.getFederationStateStoreService().getStateStoreClient(); + rm.start(); + + // generate an [app01] and join the [SC-1] cluster. + List appIds = new ArrayList<>(); + ApplicationId appId01 = ApplicationId.newInstance(Time.now(), 1); + addApplication2StateStore(appId01, stateStore); + addApplication2RMAppManager(rm, appId01); + appIds.add(appId01); + + // generate an [app02] and join the [SC-1] cluster. + ApplicationId appId02 = ApplicationId.newInstance(Time.now(), 2); + addApplication2StateStore(appId02, stateStore); + addApplication2RMAppManager(rm, appId02); + appIds.add(appId02); + + // generate an [app03] and join the [SC-1] cluster. 
+ ApplicationId appId03 = ApplicationId.newInstance(Time.now(), 3); + addApplication2StateStore(appId03, stateStore); + addApplication2RMAppManager(rm, appId03); + + // rmAppManager + RMAppManager rmAppManager = rm.getRMAppManager(); + rmAppManager.finishApplication4Test(appId01); + rmAppManager.finishApplication4Test(appId02); + rmAppManager.finishApplication4Test(appId03); + rmAppManager.checkAppNumCompletedLimit4Test(); + + // app01, app02 should be cleaned from statestore + // After the query, it should report the error not exist. + for (ApplicationId appId : appIds) { + GetApplicationHomeSubClusterRequest appRequest = + GetApplicationHomeSubClusterRequest.newInstance(appId); + LambdaTestUtils.intercept(FederationStateStoreException.class, + "Application " + appId + " does not exist", + () -> stateStore.getApplicationHomeSubCluster(appRequest)); + } + + // app03 should remain in statestore + List appHomeScList = getApplicationsFromStateStore(); + Assert.assertNotNull(appHomeScList); + Assert.assertEquals(1, appHomeScList.size()); + ApplicationHomeSubCluster homeSubCluster = appHomeScList.get(0); + Assert.assertNotNull(homeSubCluster); + Assert.assertEquals(appId03, homeSubCluster.getApplicationId()); + } + + private void addApplication2StateStore(ApplicationId appId, + FederationStateStore fedStateStore) throws YarnException { + ApplicationHomeSubCluster appHomeSC = ApplicationHomeSubCluster.newInstance( + appId, subClusterId); + AddApplicationHomeSubClusterRequest addHomeSCRequest = + AddApplicationHomeSubClusterRequest.newInstance(appHomeSC); + fedStateStore.addApplicationHomeSubCluster(addHomeSCRequest); + } + + private List getApplicationsFromStateStore() throws YarnException { + // make sure the apps can be queried in the stateStore + GetApplicationsHomeSubClusterRequest allRequest = + GetApplicationsHomeSubClusterRequest.newInstance(subClusterId); + GetApplicationsHomeSubClusterResponse allResponse = + stateStore.getApplicationsHomeSubCluster(allRequest); + Assert.assertNotNull(allResponse); + List appHomeSCLists = allResponse.getAppsHomeSubClusters(); + Assert.assertNotNull(appHomeSCLists); + return appHomeSCLists; + } + + private void addApplication2RMAppManager(MockRM rm, ApplicationId appId) { + RMContext rmContext = rm.getRMContext(); + Map rmAppMaps = rmContext.getRMApps(); + String user = MockApps.newUserName(); + String name = MockApps.newAppName(); + String queue = MockApps.newQueue(); + + YarnScheduler scheduler = mock(YarnScheduler.class); + + ApplicationMasterService masterService = + new ApplicationMasterService(rmContext, scheduler); + + ApplicationSubmissionContext submissionContext = + new ApplicationSubmissionContextPBImpl(); + + // applicationId will not be used because RMStateStore is mocked, + // but applicationId is still set for safety + submissionContext.setApplicationId(appId); + submissionContext.setPriority(Priority.newInstance(0)); + + RMApp application = new RMAppImpl(appId, rmContext, conf, name, + user, queue, submissionContext, scheduler, masterService, + System.currentTimeMillis(), "YARN", null, + new ArrayList<>()); + + rmAppMaps.putIfAbsent(application.getApplicationId(), application); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/NullRMNodeLabelsManager.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/NullRMNodeLabelsManager.java index b8f3fae7da6..10d98455851 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/NullRMNodeLabelsManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/NullRMNodeLabelsManager.java @@ -27,9 +27,11 @@ import java.util.Set; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.api.records.NodeLabel; +import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager; import org.apache.hadoop.yarn.nodelabels.NodeLabelsStore; +import org.apache.hadoop.yarn.nodelabels.RMNodeLabel; public class NullRMNodeLabelsManager extends RMNodeLabelsManager { Map> lastNodeToLabels = null; @@ -98,4 +100,24 @@ public class NullRMNodeLabelsManager extends RMNodeLabelsManager { conf.setBoolean(YarnConfiguration.NODE_LABELS_ENABLED, true); super.serviceInit(conf); } + + public void setResourceForLabel(String label, Resource resource) { + if (label.equals(NO_LABEL)) { + noNodeLabel = new FakeLabel(resource); + return; + } + + labelCollections.put(label, new FakeLabel(label, resource)); + } + + private static class FakeLabel extends RMNodeLabel { + + FakeLabel(String label, Resource resource) { + super(label, resource, 1, false); + } + + FakeLabel(Resource resource) { + super(NO_LABEL, resource, 1, false); + } + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestRMDelegatedNodeLabelsUpdater.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestRMDelegatedNodeLabelsUpdater.java index 993b05e24e0..def531f9c59 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestRMDelegatedNodeLabelsUpdater.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestRMDelegatedNodeLabelsUpdater.java @@ -34,9 +34,9 @@ import org.apache.hadoop.yarn.server.api.protocolrecords.RegisterNodeManagerRequ import org.apache.hadoop.yarn.server.resourcemanager.MockRM; import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager; import org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.Records; import org.apache.hadoop.yarn.util.YarnVersionInfo; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; import org.junit.Before; import org.junit.Test; @@ -133,7 +133,7 @@ public class TestRMDelegatedNodeLabelsUpdater extends NodeLabelTestBase { rm.getResourceTrackerService(); RegisterNodeManagerRequest req = Records.newRecord(RegisterNodeManagerRequest.class); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = 
Resources.createResource(1024); req.setResource(capability); req.setNodeId(nodeId); req.setHttpPort(1234); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementRuleFS.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementRuleFS.java index 116c0ace70d..1c7e5fa5131 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementRuleFS.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementRuleFS.java @@ -19,6 +19,7 @@ package org.apache.hadoop.yarn.server.resourcemanager.placement; import org.apache.commons.io.IOUtils; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairSchedulerConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager; @@ -188,11 +189,10 @@ public class TestPlacementRuleFS { private Element createConf(String str) { // Create a simple rule element to use in the rule create - DocumentBuilderFactory docBuilderFactory = - DocumentBuilderFactory.newInstance(); - docBuilderFactory.setIgnoringComments(true); Document doc = null; try { + DocumentBuilderFactory docBuilderFactory = XMLUtils.newSecureDocumentBuilderFactory(); + docBuilderFactory.setIgnoringComments(true); DocumentBuilder builder = docBuilderFactory.newDocumentBuilder(); doc = builder.parse(IOUtils.toInputStream(str, StandardCharsets.UTF_8)); } catch (Exception ex) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/csmappingrule/TestCSMappingPlacementRule.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/csmappingrule/TestCSMappingPlacementRule.java index 3e614bcbc96..41ce2b56eab 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/csmappingrule/TestCSMappingPlacementRule.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/csmappingrule/TestCSMappingPlacementRule.java @@ -65,7 +65,7 @@ public class TestCSMappingPlacementRule { @Rule public TemporaryFolder folder = new TemporaryFolder(); - + private Map> userGroups = ImmutableMap.>builder() .put("alice", ImmutableSet.of("p_alice", "unique", "user")) @@ -85,6 +85,7 @@ public class TestCSMappingPlacementRule { .withQueue("root.user.alice") .withQueue("root.user.bob") .withQueue("root.user.test_dot_user") + .withQueue("root.user.testuser") .withQueue("root.groups.main_dot_grp") .withQueue("root.groups.sec_dot_test_dot_grp") .withQueue("root.secondaryTests.unique") @@ -857,6 +858,46 @@ public class TestCSMappingPlacementRule { assertPlace(engine, app, user, "root.man.testGroup0"); } + @Test + public void 
testOriginalUserNameWithDotCanBeUsedInMatchExpression() throws IOException { + List rules = new ArrayList<>(); + rules.add( + new MappingRule( + MappingRuleMatchers.createUserMatcher("test.user"), + (MappingRuleActions.createUpdateDefaultAction("root.user.testuser")) + .setFallbackSkip())); + rules.add(new MappingRule( + MappingRuleMatchers.createUserMatcher("test.user"), + (MappingRuleActions.createPlaceToDefaultAction()) + .setFallbackReject())); + + CSMappingPlacementRule engine = setupEngine(true, rules); + ApplicationSubmissionContext app = createApp("app"); + assertPlace( + "test.user should be placed to root.user", + engine, app, "test.user", "root.user.testuser"); + } + + @Test + public void testOriginalGroupNameWithDotCanBeUsedInMatchExpression() throws IOException { + List rules = new ArrayList<>(); + rules.add( + new MappingRule( + MappingRuleMatchers.createUserGroupMatcher("sec.test.grp"), + (MappingRuleActions.createUpdateDefaultAction("root.user.testuser")) + .setFallbackSkip())); + rules.add(new MappingRule( + MappingRuleMatchers.createUserMatcher("test.user"), + (MappingRuleActions.createPlaceToDefaultAction()) + .setFallbackReject())); + + CSMappingPlacementRule engine = setupEngine(true, rules); + ApplicationSubmissionContext app = createApp("app"); + assertPlace( + "test.user should be placed to root.user", + engine, app, "test.user", "root.user.testuser"); + } + private CSMappingPlacementRule initPlacementEngine(CapacityScheduler cs) throws IOException { CSMappingPlacementRule engine = new CSMappingPlacementRule(); engine.setFailOnConfigError(true); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java index b775f4c4e79..17737e59c2b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java @@ -172,7 +172,7 @@ public class TestFSRMStateStore extends RMStateStoreTestBase { } } - @Test(timeout = 60000) + @Test(timeout = 120000) public void testFSRMStateStore() throws Exception { HdfsConfiguration conf = new HdfsConfiguration(); MiniDFSCluster cluster = diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMExpiry.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMExpiry.java index 017a1e021d7..328f3b46a47 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMExpiry.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMExpiry.java @@ -46,8 +46,8 @@ import org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService; import 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType; import org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Before; import org.junit.Test; @@ -136,7 +136,7 @@ public class TestNMExpiry { String hostname1 = "localhost1"; String hostname2 = "localhost2"; String hostname3 = "localhost3"; - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); NodeStatus mockNodeStatus = createMockNodeStatus(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMReconnect.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMReconnect.java index 817fb9dfc33..f5a3bf85bcc 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMReconnect.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestNMReconnect.java @@ -55,7 +55,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnSched import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType; import org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM; import org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.After; import org.junit.Assert; import org.junit.Before; @@ -133,7 +133,7 @@ public class TestNMReconnect extends ParameterizedSchedulerTestBase { @Test public void testReconnect() throws Exception { String hostname1 = "localhost1"; - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); RegisterNodeManagerRequest request1 = recordFactory .newRecordInstance(RegisterNodeManagerRequest.class); @@ -152,7 +152,7 @@ public class TestNMReconnect extends ParameterizedSchedulerTestBase { rmNodeEvents.clear(); resourceTrackerService.registerNodeManager(request1); - capability = BuilderUtils.newResource(1024, 2); + capability = Resources.createResource(1024, 2); request1.setResource(capability); Assert.assertEquals(RMNodeEventType.RECONNECTED, rmNodeEvents.get(0).getType()); @@ -176,7 +176,7 @@ public class TestNMReconnect extends ParameterizedSchedulerTestBase { dispatcher.register(SchedulerEventType.class, scheduler); String hostname1 = "localhost1"; - Resource capability = BuilderUtils.newResource(4096, 4); + Resource capability = Resources.createResource(4096, 4); RegisterNodeManagerRequest request1 = recordFactory .newRecordInstance(RegisterNodeManagerRequest.class); @@ -195,7 +195,7 @@ public class TestNMReconnect extends ParameterizedSchedulerTestBase { context.getRMNodes().get(nodeId1)); Assert.assertEquals(context.getRMNodes().get(nodeId1). 
getTotalCapability(), capability); - Resource capability1 = BuilderUtils.newResource(2048, 2); + Resource capability1 = Resources.createResource(2048, 2); request1.setResource(capability1); resourceTrackerService.registerNodeManager(request1); Assert.assertNotNull(context.getRMNodes().get(nodeId1)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestRMNMRPCResponseId.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestRMNMRPCResponseId.java index 4f9469548ae..6e36c533a4b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestRMNMRPCResponseId.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resourcetracker/TestRMNMRPCResponseId.java @@ -44,8 +44,8 @@ import org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService; import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType; import org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.After; import org.junit.Assert; import org.junit.Before; @@ -94,7 +94,7 @@ public class TestRMNMRPCResponseId { @Test public void testRPCResponseId() throws IOException, YarnException { String node = "localhost"; - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); RegisterNodeManagerRequest request = recordFactory.newRecordInstance(RegisterNodeManagerRequest.class); nodeId = NodeId.newInstance(node, 1234); request.setNodeId(nodeId); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java index 4e5ff3f7687..9e8f2f5edbd 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java @@ -311,7 +311,7 @@ public class TestRMAppAttemptTransitions { final String queue = MockApps.newQueue(); submissionContext = mock(ApplicationSubmissionContext.class); when(submissionContext.getQueue()).thenReturn(queue); - Resource resource = BuilderUtils.newResource(1536, 1); + Resource resource = Resources.createResource(1536); ContainerLaunchContext amContainerSpec = BuilderUtils.newContainerLaunchContext(null, null, null, null, null, null); @@ -629,7 +629,7 @@ public class TestRMAppAttemptTransitions { // Mock 
the allocation of AM container Container container = mock(Container.class); - Resource resource = BuilderUtils.newResource(2048, 1); + Resource resource = Resources.createResource(2048); when(container.getId()).thenReturn( BuilderUtils.newContainerId(applicationAttempt.getAppAttemptId(), 1)); when(container.getResource()).thenReturn(resource); @@ -1199,7 +1199,7 @@ public class TestRMAppAttemptTransitions { RMAppAttemptEventType.ATTEMPT_ADDED)); Container amContainer = mock(Container.class); - Resource resource = BuilderUtils.newResource(2048, 1); + Resource resource = Resources.createResource(2048); when(amContainer.getId()).thenReturn( BuilderUtils.newContainerId(myApplicationAttempt.getAppAttemptId(), 1)); when(amContainer.getResource()).thenReturn(resource); @@ -1763,7 +1763,7 @@ public class TestRMAppAttemptTransitions { // Mock the allocation of AM container Container container = mock(Container.class); - Resource resource = BuilderUtils.newResource(2048, 1); + Resource resource = Resources.createResource(2048); when(container.getId()).thenReturn( BuilderUtils.newContainerId(applicationAttempt.getAppAttemptId(), 1)); when(container.getResource()).thenReturn(resource); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java index 72f420eea82..c16116f22c3 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java @@ -73,6 +73,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.Alloca import org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.AllocationTagsManager; import org.apache.hadoop.yarn.server.scheduler.SchedulerRequestKey; import org.apache.hadoop.yarn.server.utils.BuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; import org.junit.Test; import org.mockito.ArgumentCaptor; @@ -100,7 +101,7 @@ public class TestRMContainerImpl { ContainerId containerId = BuilderUtils.newContainerId(appAttemptId, 1); ContainerAllocationExpirer expirer = mock(ContainerAllocationExpirer.class); - Resource resource = BuilderUtils.newResource(512, 1); + Resource resource = Resources.createResource(512); Priority priority = BuilderUtils.newPriority(5); Container container = BuilderUtils.newContainer(containerId, nodeId, @@ -206,7 +207,7 @@ public class TestRMContainerImpl { ContainerId containerId = BuilderUtils.newContainerId(appAttemptId, 1); ContainerAllocationExpirer expirer = mock(ContainerAllocationExpirer.class); - Resource resource = BuilderUtils.newResource(512, 1); + Resource resource = Resources.createResource(512); Priority priority = BuilderUtils.newPriority(5); Container container = BuilderUtils.newContainer(containerId, nodeId, @@ -407,7 +408,7 @@ public class TestRMContainerImpl { ContainerId containerId = BuilderUtils.newContainerId(appAttemptId, 1); ContainerAllocationExpirer expirer = mock(ContainerAllocationExpirer.class); - Resource resource = 
BuilderUtils.newResource(512, 1); + Resource resource = Resources.createResource(512); Priority priority = BuilderUtils.newPriority(5); Container container = BuilderUtils.newContainer(containerId, nodeId, @@ -582,7 +583,7 @@ public class TestRMContainerImpl { ContainerId containerId = BuilderUtils.newContainerId(appAttemptId, 1); ContainerAllocationExpirer expirer = mock(ContainerAllocationExpirer.class); - Resource resource = BuilderUtils.newResource(512, 1); + Resource resource = Resources.createResource(512); Priority priority = BuilderUtils.newPriority(5); Container container = BuilderUtils.newContainer(containerId, nodeId, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAbstractYarnScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAbstractYarnScheduler.java index 76fbf0936f1..e53ac5aff2a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAbstractYarnScheduler.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAbstractYarnScheduler.java @@ -84,7 +84,6 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEv import org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM; import org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager; import org.apache.hadoop.yarn.server.scheduler.SchedulerRequestKey; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; @@ -1048,7 +1047,7 @@ public class TestAbstractYarnScheduler extends ParameterizedSchedulerTestBase { // Register node1 String hostname1 = "localhost1"; - Resource capability = BuilderUtils.newResource(4096, 4); + Resource capability = Resources.createResource(4096, 4); RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null); @@ -1068,7 +1067,7 @@ public class TestAbstractYarnScheduler extends ParameterizedSchedulerTestBase { Assert.assertEquals("Initial cluster resources don't match", capability, clusterResource); - Resource newCapability = BuilderUtils.newResource(1024, 1); + Resource newCapability = Resources.createResource(1024); RegisterNodeManagerRequest request2 = recordFactory.newRecordInstance(RegisterNodeManagerRequest.class); request2.setNodeId(nodeId1); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueCalculationTestBase.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueCalculationTestBase.java new file mode 100644 index 00000000000..f62945c7a5a --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueCalculationTestBase.java @@ -0,0 +1,131 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more 
contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager; +import org.apache.hadoop.yarn.server.resourcemanager.MockRM; +import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NullRMNodeLabelsManager; +import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler; +import org.apache.hadoop.yarn.util.resource.ResourceCalculator; +import org.junit.Before; + +import java.io.IOException; + +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoCreatedQueueBase.GB; + +public class CapacitySchedulerQueueCalculationTestBase { + protected static final String A = "root.a"; + protected static final String A1 = "root.a.a1"; + protected static final String A11 = "root.a.a1.a11"; + protected static final String A12 = "root.a.a1.a12"; + protected static final String A2 = "root.a.a2"; + protected static final String B = "root.b"; + protected static final String B1 = "root.b.b1"; + protected static final String C = "root.c"; + + private static final String CAPACITY_VECTOR_TEMPLATE = "[memory=%s, vcores=%s]"; + + protected ResourceCalculator resourceCalculator; + + protected MockRM mockRM; + protected CapacityScheduler cs; + protected CapacitySchedulerConfiguration csConf; + protected NullRMNodeLabelsManager mgr; + + @Before + public void setUp() throws Exception { + csConf = new CapacitySchedulerConfiguration(); + csConf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class, + ResourceScheduler.class); + + csConf.setQueues("root", new String[]{"a", "b"}); + csConf.setCapacity("root.a", 50f); + csConf.setCapacity("root.b", 50f); + csConf.setQueues("root.a", new String[]{"a1", "a2"}); + csConf.setCapacity("root.a.a1", 100f); + csConf.setQueues("root.a.a1", new String[]{"a11", "a12"}); + csConf.setCapacity("root.a.a1.a11", 50f); + csConf.setCapacity("root.a.a1.a12", 50f); + + mgr = new NullRMNodeLabelsManager(); + mgr.init(csConf); + mockRM = new MockRM(csConf) { + protected RMNodeLabelsManager createNodeLabelManager() { + return mgr; + } + }; + cs = (CapacityScheduler) mockRM.getResourceScheduler(); + cs.updatePlacementRules(); + // Policy for new auto created queue's auto deletion when expired + mockRM.start(); + cs.start(); + mockRM.registerNode("h1:1234", 10 * GB); // label = x + resourceCalculator = cs.getResourceCalculator(); + } + protected QueueCapacityUpdateContext update( + QueueAssertionBuilder assertions, Resource clusterResource) + throws IOException { + return update(assertions, clusterResource, 
clusterResource); + } + + protected QueueCapacityUpdateContext update( + QueueAssertionBuilder assertions, Resource clusterResource, Resource emptyLabelResource) + throws IOException { + cs.reinitialize(csConf, mockRM.getRMContext()); + + CapacitySchedulerQueueCapacityHandler queueController = + new CapacitySchedulerQueueCapacityHandler(mgr); + mgr.setResourceForLabel(CommonNodeLabelsManager.NO_LABEL, emptyLabelResource); + + queueController.updateRoot(cs.getQueue("root"), clusterResource); + QueueCapacityUpdateContext updateContext = + queueController.updateChildren(clusterResource, cs.getQueue("root")); + + assertions.finishAssertion(); + + return updateContext; + } + + protected QueueAssertionBuilder createAssertionBuilder() { + return new QueueAssertionBuilder(cs); + } + + protected static String createCapacityVector(Object memory, Object vcores) { + return String.format(CAPACITY_VECTOR_TEMPLATE, memory, vcores); + } + + protected static String absolute(double value) { + return String.valueOf((long) value); + } + + protected static String weight(float value) { + return value + "w"; + } + + protected static String percentage(float value) { + return value + "%"; + } + + protected static Resource createResource(double memory, double vcores) { + return Resource.newInstance((int) memory, (int) vcores); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueAssertionBuilder.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueAssertionBuilder.java new file mode 100644 index 00000000000..1c066719dd0 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/QueueAssertionBuilder.java @@ -0,0 +1,210 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas; +import org.junit.Assert; + +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.function.BiFunction; +import java.util.function.Supplier; + +import static org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.NO_LABEL; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.EPSILON; + +/** + * Provides a fluent API to assert resource and capacity attributes of queues. + */ +class QueueAssertionBuilder { + private static final String EFFECTIVE_MAX_RES_INFO = "Effective Maximum Resource"; + private static final BiFunction EFFECTIVE_MAX_RES = + QueueResourceQuotas::getEffectiveMaxResource; + + private static final String EFFECTIVE_MIN_RES_INFO = "Effective Minimum Resource"; + private static final BiFunction EFFECTIVE_MIN_RES = + QueueResourceQuotas::getEffectiveMinResource; + + private static final String CAPACITY_INFO = "Capacity"; + private static final BiFunction CAPACITY = + QueueCapacities::getCapacity; + + private static final String ABS_CAPACITY_INFO = "Absolute Capacity"; + private static final BiFunction ABS_CAPACITY = + QueueCapacities::getAbsoluteCapacity; + + private static final String ASSERTION_ERROR_MESSAGE = + "'%s' of queue '%s' does not match %f for label %s"; + private static final String RESOURCE_ASSERTION_ERROR_MESSAGE = + "'%s' of queue '%s' does not match %s for label %s"; + private final CapacityScheduler cs; + + QueueAssertionBuilder(CapacityScheduler cs) { + this.cs = cs; + } + + public class QueueAssertion { + private final String queuePath; + private final List assertions = new ArrayList<>(); + + QueueAssertion(String queuePath) { + this.queuePath = queuePath; + } + + + public QueueAssertion withQueue(String queuePath) { + return QueueAssertionBuilder.this.withQueue(queuePath); + } + + public QueueAssertionBuilder build() { + return QueueAssertionBuilder.this.build(); + } + + public QueueAssertion assertEffectiveMaxResource(Resource expected) { + ValueAssertion valueAssertion = new ValueAssertion(expected); + valueAssertion.withResourceSupplier(EFFECTIVE_MAX_RES, EFFECTIVE_MAX_RES_INFO); + assertions.add(valueAssertion); + + return this; + } + + public QueueAssertion assertEffectiveMinResource(Resource expected, String label) { + ValueAssertion valueAssertion = new ValueAssertion(expected); + valueAssertion.withResourceSupplier(EFFECTIVE_MIN_RES, EFFECTIVE_MIN_RES_INFO); + assertions.add(valueAssertion); + valueAssertion.label = label; + + return this; + } + + public QueueAssertion assertEffectiveMinResource(Resource expected) { + return assertEffectiveMinResource(expected, NO_LABEL); + } + + public QueueAssertion assertCapacity(double expected) { + ValueAssertion valueAssertion = new ValueAssertion(expected); + valueAssertion.withCapacitySupplier(CAPACITY, CAPACITY_INFO); + assertions.add(valueAssertion); + + return this; + } + + public QueueAssertion assertAbsoluteCapacity(double 
expected) { + ValueAssertion valueAssertion = new ValueAssertion(expected); + valueAssertion.withCapacitySupplier(ABS_CAPACITY, ABS_CAPACITY_INFO); + assertions.add(valueAssertion); + + return this; + } + + private class ValueAssertion { + private double expectedValue = 0; + private Resource expectedResource = null; + private String assertionType; + private Supplier valueSupplier; + private Supplier resourceSupplier; + private String label = ""; + + ValueAssertion(double expectedValue) { + this.expectedValue = expectedValue; + } + + ValueAssertion(Resource expectedResource) { + this.expectedResource = expectedResource; + } + + public void setLabel(String label) { + this.label = label; + } + + public void withResourceSupplier( + BiFunction assertion, String messageInfo) { + CSQueue queue = cs.getQueue(queuePath); + if (queue == null) { + Assert.fail("Queue " + queuePath + " is not found"); + } + + assertionType = messageInfo; + resourceSupplier = () -> assertion.apply(queue.getQueueResourceQuotas(), label); + } + + public void withCapacitySupplier( + BiFunction assertion, String messageInfo) { + CSQueue queue = cs.getQueue(queuePath); + if (queue == null) { + Assert.fail("Queue " + queuePath + " is not found"); + } + assertionType = messageInfo; + valueSupplier = () -> assertion.apply(queue.getQueueCapacities(), label); + } + } + + } + + private final Map assertions = new LinkedHashMap<>(); + + public QueueAssertionBuilder build() { + return this; + } + + /** + * Creates a new assertion group for a specific queue. + * @param queuePath path of the queue + * @return queue assertion group + */ + public QueueAssertion withQueue(String queuePath) { + assertions.putIfAbsent(queuePath, new QueueAssertion(queuePath)); + return assertions.get(queuePath); + } + + /** + * Executes assertions created for all queues. + */ + public void finishAssertion() { + for (Map.Entry assertionEntry : assertions.entrySet()) { + for (QueueAssertion.ValueAssertion assertion : assertionEntry.getValue().assertions) { + if (assertion.resourceSupplier != null) { + String errorMessage = String.format(RESOURCE_ASSERTION_ERROR_MESSAGE, + assertion.assertionType, assertionEntry.getKey(), + assertion.expectedResource.toString(), assertion.label); + Assert.assertEquals(errorMessage, assertion.expectedResource, + assertion.resourceSupplier.get()); + } else { + String errorMessage = String.format(ASSERTION_ERROR_MESSAGE, + assertion.assertionType, assertionEntry.getKey(), assertion.expectedValue, + assertion.label); + Assert.assertEquals(errorMessage, assertion.expectedValue, + assertion.valueSupplier.get(), EPSILON); + } + } + } + } + + /** + * Returns all queues that have defined assertions. 
+ * @return queue paths + */ + public Set getQueues() { + return assertions.keySet(); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriorityACLs.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriorityACLs.java index cf9a01045b9..d3193537046 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriorityACLs.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriorityACLs.java @@ -36,7 +36,7 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.server.resourcemanager.ACLsTestBase; import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; import org.junit.Test; @@ -108,7 +108,7 @@ public class TestApplicationPriorityACLs extends ACLsTestBase { ApplicationId applicationId = submitterClient .getNewApplication(newAppRequest).getApplicationId(); - Resource resource = BuilderUtils.newResource(1024, 1); + Resource resource = Resources.createResource(1024); ContainerLaunchContext amContainerSpec = ContainerLaunchContext .newInstance(null, null, null, null, null, null); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAmbiguousLeafs.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAmbiguousLeafs.java index 69824e3c3fa..88e6aff2537 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAmbiguousLeafs.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerAmbiguousLeafs.java @@ -24,7 +24,7 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.server.resourcemanager.MockRM; import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; +import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Test; import java.io.IOException; @@ -52,7 +52,7 @@ public class TestCapacitySchedulerAmbiguousLeafs { final ApplicationAttemptId appAttemptId = TestUtils .getMockApplicationAttemptId(appId++, 1); - Resource resource = BuilderUtils.newResource(1024, 1); + Resource resource = Resources.createResource(1024); ContainerLaunchContext amContainerSpec = ContainerLaunchContext .newInstance(null, null, null, null, null, null); ApplicationSubmissionContext asc = ApplicationSubmissionContext 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestMixedQueueResourceCalculation.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestMixedQueueResourceCalculation.java new file mode 100644 index 00000000000..e5b7cc964e3 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestMixedQueueResourceCalculation.java @@ -0,0 +1,536 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; +import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet; +import org.apache.hadoop.yarn.api.records.NodeId; +import org.apache.hadoop.yarn.api.records.QueueState; +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueUpdateWarning.QueueUpdateWarningType; +import org.apache.hadoop.yarn.util.resource.ResourceUtils; +import org.junit.Assert; +import org.junit.Test; + +import java.io.IOException; +import java.util.Collection; +import java.util.Optional; + +import static org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.NO_LABEL; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.ROOT; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoCreatedQueueBase.GB; + +public class TestMixedQueueResourceCalculation extends CapacitySchedulerQueueCalculationTestBase { + private static final long MEMORY = 16384; + private static final long VCORES = 16; + private static final String C_VECTOR_WITH_WARNING = createCapacityVector(weight(3), + absolute(VCORES * 0.25)); + private static final String A11_VECTOR_WITH_WARNING = createCapacityVector(weight(1), + absolute(VCORES * 0.25)); + private static final String A1_VECTOR_WITH_WARNING = createCapacityVector(absolute(2048), + absolute(VCORES * 0.25)); + private static final String C_VECTOR_NO_REMAINING_RESOURCE = createCapacityVector(weight(3), + absolute(VCORES * 0.25)); + private static final String A1_VECTOR_NO_REMAINING_RESOURCE = createCapacityVector(weight(1), + absolute(VCORES * 0.25)); + + private static final Resource A12_EXPECTED_MAX_RESOURCE_MAX_WARNINGS = + createResource(MEMORY * 0.5, VCORES); + private static final Resource A11_EXPECTED_MAX_RESOURCE_MAX_WARNINGS = + createResource(MEMORY * 0.5, 0.1 * VCORES); + private static final Resource A11_EXPECTED_MIN_RESOURCE_MAX_WARNINGS = + createResource(0.5 * 0.5 * MEMORY, 0.1 * VCORES); + private static final Resource A12_EXPECTED_MIN_RESOURCE_MAX_WARNINGS = + createResource(0.5 * 0.5 * MEMORY, 0); + private static final String A11_MAX_VECTOR_MAX_WARNINGS = + createCapacityVector(absolute(MEMORY), percentage(10)); + private static final String A1_MAX_VECTOR_MAX_WARNINGS = + createCapacityVector(absolute(MEMORY * 0.5), + percentage(100)); + + private static final Resource UPDATE_RESOURCE = Resource.newInstance(16384, 16); + private static final Resource ZERO_RESOURCE = Resource.newInstance(0, 0); + + private static final Resource A_COMPLEX_NO_REMAINING_RESOURCE = Resource.newInstance(2486, 9); + private static final Resource A1_COMPLEX_NO_REMAINING_RESOURCE = Resource.newInstance(621, 4); + private static final Resource A11_COMPLEX_NO_REMAINING_RESOURCE = Resource.newInstance(217, 1); + private static final Resource A12_COMPLEX_NO_REMAINING_RESOURCE = Resource.newInstance(403, 3); + private static final Resource 
A2_COMPLEX_NO_REMAINING_RESOURCE = Resource.newInstance(1865, 5); + private static final Resource B_COMPLEX_NO_REMAINING_RESOURCE = Resource.newInstance(8095, 3); + private static final Resource B1_COMPLEX_NO_REMAINING_RESOURCE = Resource.newInstance(8095, 3); + private static final Resource C_COMPLEX_NO_REMAINING_RESOURCE = Resource.newInstance(5803, 4); + + private static final Resource B_WARNING_RESOURCE = Resource.newInstance(8096, 4); + private static final Resource B1_WARNING_RESOURCE = Resource.newInstance(8096, 3); + private static final Resource A_WARNING_RESOURCE = Resource.newInstance(8288, 12); + private static final Resource A1_WARNING_RESOURCE = Resource.newInstance(2048, 4); + private static final Resource A2_WARNING_RESOURCE = Resource.newInstance(2048, 8); + private static final Resource A12_WARNING_RESOURCE = Resource.newInstance(2048, 4); + + private static final String A_VECTOR_ZERO_RESOURCE = + createCapacityVector(percentage(100), weight(6)); + private static final String B_VECTOR_ZERO_RESOURCE = + createCapacityVector(absolute(MEMORY), absolute(VCORES * 0.5)); + + private static final String A_MAX_VECTOR_DIFFERENT_MIN_MAX = + createCapacityVector(absolute(MEMORY), percentage(80)); + private static final Resource B_EXPECTED_MAX_RESOURCE_DIFFERENT_MIN_MAX = + Resource.newInstance(MEMORY, (int) (VCORES * 0.5)); + private static final Resource A_EXPECTED_MAX_RESOURCE_DIFFERENT_MIN_MAX = + Resource.newInstance(MEMORY, (int) (VCORES * 0.8)); + private static final String B_MAX_VECTOR_DIFFERENT_MIN_MAX = + createCapacityVector(absolute(MEMORY), absolute(VCORES * 0.5)); + private static final String A_MIN_VECTOR_DIFFERENT_MIN_MAX = + createCapacityVector(percentage(50), absolute(VCORES * 0.5)); + private static final String B_MIN_VECTOR_DIFFERENT_MIN_MAX = + createCapacityVector(weight(6), percentage(100)); + private static final String B_INVALID_MAX_VECTOR = + createCapacityVector(absolute(MEMORY), weight(10)); + + private static final String X_LABEL = "x"; + private static final String Y_LABEL = "y"; + private static final String Z_LABEL = "z"; + + private static final String H1_NODE = "h1"; + private static final String H2_NODE = "h2"; + private static final String H3_NODE = "h3"; + private static final String H4_NODE = "h4"; + private static final String H5_NODE = "h5"; + private static final int H1_MEMORY = 60 * GB; + private static final int H1_VCORES = 60; + private static final int H2_MEMORY = 10 * GB; + private static final int H2_VCORES = 25; + private static final int H3_VCORES = 35; + private static final int H3_MEMORY = 10 * GB; + private static final int H4_MEMORY = 10 * GB; + private static final int H4_VCORES = 15; + + private static final String A11_MIN_VECTOR_MAX_WARNINGS = + createCapacityVector(percentage(50), percentage(100)); + private static final String A12_MIN_VECTOR_MAX_WARNINGS = + createCapacityVector(percentage(50), percentage(0)); + + private static final Resource A_EXPECTED_MIN_RESOURCE_NO_LABEL = createResource(2048, 8); + private static final Resource A1_EXPECTED_MIN_RESOURCE_NO_LABEL = createResource(1024, 5); + private static final Resource A2_EXPECTED_MIN_RESOURCE_NO_LABEL = createResource(1024, 2); + private static final Resource B_EXPECTED_MIN_RESOURCE_NO_LABEL = createResource(3072, 8); + private static final Resource A_EXPECTED_MIN_RESOURCE_X_LABEL = createResource(30720, 30); + private static final Resource A1_EXPECTED_MIN_RESOURCE_X_LABEL = createResource(20480, 0); + private static final Resource A2_EXPECTED_MIN_RESOURCE_X_LABEL = 
createResource(10240, 30); + private static final Resource B_EXPECTED_MIN_RESOURCE_X_LABEL = createResource(30720, 30); + private static final Resource A_EXPECTED_MIN_RESOURCE_Y_LABEL = createResource(8096, 42); + private static final Resource A1_EXPECTED_MIN_RESOURCE_Y_LABEL = createResource(6186, 21); + private static final Resource A2_EXPECTED_MIN_RESOURCE_Y_LABEL = createResource(1910, 21); + private static final Resource B_EXPECTED_MIN_RESOURCE_Y_LABEL = createResource(12384, 18); + private static final Resource A_EXPECTED_MIN_RESOURCE_Z_LABEL = createResource(7168, 11); + private static final Resource A1_EXPECTED_MIN_RESOURCE_Z_LABEL = createResource(6451, 4); + private static final Resource A2_EXPECTED_MIN_RESOURCE_Z_LABEL = createResource(716, 7); + private static final Resource B_EXPECTED_MIN_RESOURCE_Z_LABEL = createResource(3072, 4); + private static final Resource EMPTY_LABEL_RESOURCE = Resource.newInstance(5 * GB, 16); + + private static final String A_VECTOR_NO_LABEL = + createCapacityVector(absolute(2048), percentage(50)); + private static final String A1_VECTOR_NO_LABEL = + createCapacityVector(absolute(1024), percentage(70)); + private static final String A2_VECTOR_NO_LABEL = + createCapacityVector(absolute(1024), percentage(30)); + private static final String B_VECTOR_NO_LABEL = + createCapacityVector(weight(3), percentage(50)); + private static final String A_VECTOR_X_LABEL = + createCapacityVector(percentage(50), weight(3)); + private static final String A1_VECTOR_X_LABEL = + createCapacityVector(absolute(20480), percentage(10)); + private static final String A2_VECTOR_X_LABEL = + createCapacityVector(absolute(10240), absolute(30)); + private static final String B_VECTOR_X_LABEL = + createCapacityVector(percentage(50), percentage(50)); + private static final String A_VECTOR_Y_LABEL = + createCapacityVector(absolute(8096), weight(1)); + private static final String A1_VECTOR_Y_LABEL = + createCapacityVector(absolute(6186), weight(3)); + private static final String A2_VECTOR_Y_LABEL = + createCapacityVector(weight(3), weight(3)); + private static final String B_VECTOR_Y_LABEL = + createCapacityVector(percentage(100), percentage(30)); + private static final String A_VECTOR_Z_LABEL = + createCapacityVector(percentage(70), absolute(11)); + private static final String A1_VECTOR_Z_LABEL = + createCapacityVector(percentage(90), percentage(40)); + private static final String A2_VECTOR_Z_LABEL = + createCapacityVector(percentage(10), weight(4)); + private static final String B_VECTOR_Z_LABEL = + createCapacityVector(percentage(30), absolute(4)); + + private static final String A_VECTOR_NO_REMAINING_RESOURCE = + createCapacityVector(percentage(30), weight(6)); + private static final String A11_VECTOR_NO_REMAINING_RESOURCE = + createCapacityVector(percentage(35), percentage(25)); + private static final String A12_VECTOR_NO_REMAINING_RESOURCE = + createCapacityVector(percentage(65), percentage(75)); + private static final String A2_VECTOR_NO_REMAINING_RESOURCE = + createCapacityVector(weight(3), percentage(100)); + private static final String B_VECTOR_NO_REMAINING_RESOURCE = + createCapacityVector(absolute(8095), percentage(30)); + private static final String B1_VECTOR_NO_REMAINING_RESOURCE = + createCapacityVector(weight(5), absolute(3)); + private static final String A_VECTOR_WITH_WARNINGS = + createCapacityVector(percentage(100), weight(6)); + private static final String A12_VECTOR_WITH_WARNING = + createCapacityVector(percentage(100), percentage(100)); + private static final 
String A2_VECTOR_WITH_WARNING = + createCapacityVector(absolute(2048), percentage(100)); + private static final String B_VECTOR_WITH_WARNING = + createCapacityVector(absolute(8096), percentage(30)); + private static final String B1_VECTOR_WITH_WARNING = + createCapacityVector(absolute(10256), absolute(3)); + + @Override + public void setUp() throws Exception { + super.setUp(); + csConf.setLegacyQueueModeEnabled(false); + } + + /** + * Tests a complex scenario in which no warning or remaining resource is generated during the + * update phase (except for rounding leftovers, eg. 1 memory or 1 vcores). + * + * -root- + * / \ \ + * A B C + * / \ | + * A1 A2 B1 + * / \ + * A11 A12 + * + * @throws IOException if update is failed + */ + @Test + public void testComplexHierarchyWithoutRemainingResource() throws IOException { + setupQueueHierarchyWithoutRemainingResource(); + + QueueAssertionBuilder assertionBuilder = createAssertionBuilder() + .withQueue(A) + .assertEffectiveMinResource(A_COMPLEX_NO_REMAINING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + A_COMPLEX_NO_REMAINING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(A1) + .assertEffectiveMinResource(A1_COMPLEX_NO_REMAINING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + A1_COMPLEX_NO_REMAINING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(A11) + .assertEffectiveMinResource(A11_COMPLEX_NO_REMAINING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + A11_COMPLEX_NO_REMAINING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(A12) + .assertEffectiveMinResource(A12_COMPLEX_NO_REMAINING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + A12_COMPLEX_NO_REMAINING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(A2) + .assertEffectiveMinResource(A2_COMPLEX_NO_REMAINING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + A2_COMPLEX_NO_REMAINING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(B) + .assertEffectiveMinResource(B_COMPLEX_NO_REMAINING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + B_COMPLEX_NO_REMAINING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(B1) + .assertEffectiveMinResource(B1_COMPLEX_NO_REMAINING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + B1_COMPLEX_NO_REMAINING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(C) + .assertEffectiveMinResource(C_COMPLEX_NO_REMAINING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + C_COMPLEX_NO_REMAINING_RESOURCE, UPDATE_RESOURCE)) + .build(); + + update(assertionBuilder, UPDATE_RESOURCE); + } + + /** + * Tests a complex scenario in which several validation warnings are generated during the update + * phase. 
+ * + * -root- + * / \ \ + * A B C + * / \ | + * A1 A2 B1 + * / \ + * A11 A12 + * + * @throws IOException if update is failed + */ + @Test + public void testComplexHierarchyWithWarnings() throws IOException { + setupQueueHierarchyWithWarnings(); + QueueAssertionBuilder assertionBuilder = createAssertionBuilder() + .withQueue(A) + .assertEffectiveMinResource(A_WARNING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + A_WARNING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(A1) + .assertEffectiveMinResource(A1_WARNING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + A1_WARNING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(A2) + .assertEffectiveMinResource(A2_WARNING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + A2_WARNING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(A11) + .assertEffectiveMinResource(ZERO_RESOURCE) + .assertAbsoluteCapacity(0) + .withQueue(A12) + .assertEffectiveMinResource(A12_WARNING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + A12_WARNING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(B) + .assertEffectiveMinResource(B_WARNING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + B_WARNING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(B1) + .assertEffectiveMinResource(B1_WARNING_RESOURCE) + .assertAbsoluteCapacity(resourceCalculator.divide(UPDATE_RESOURCE, + B1_WARNING_RESOURCE, UPDATE_RESOURCE)) + .withQueue(C) + .assertEffectiveMinResource(ZERO_RESOURCE) + .assertAbsoluteCapacity(0) + .build(); + + QueueCapacityUpdateContext updateContext = update(assertionBuilder, UPDATE_RESOURCE); + Optional queueCZeroResourceWarning = getSpecificWarning( + updateContext.getUpdateWarnings(), QueueUpdateWarningType.QUEUE_ZERO_RESOURCE, C); + Optional queueARemainingResourceWarning = getSpecificWarning( + updateContext.getUpdateWarnings(), QueueUpdateWarningType.BRANCH_UNDERUTILIZED, A); + Optional queueBDownscalingWarning = getSpecificWarning( + updateContext.getUpdateWarnings(), QueueUpdateWarningType.BRANCH_DOWNSCALED, B); + Optional queueA11ZeroResourceWarning = getSpecificWarning( + updateContext.getUpdateWarnings(), QueueUpdateWarningType.QUEUE_ZERO_RESOURCE, A11); + + Assert.assertTrue(queueCZeroResourceWarning.isPresent()); + Assert.assertTrue(queueARemainingResourceWarning.isPresent()); + Assert.assertTrue(queueBDownscalingWarning.isPresent()); + Assert.assertTrue(queueA11ZeroResourceWarning.isPresent()); + } + + @Test + public void testZeroResourceIfNoMemory() throws IOException { + csConf.setCapacityVector(A, NO_LABEL, A_VECTOR_ZERO_RESOURCE); + csConf.setCapacityVector(B, NO_LABEL, B_VECTOR_ZERO_RESOURCE); + + QueueAssertionBuilder assertionBuilder = createAssertionBuilder() + .withQueue(A) + .assertEffectiveMinResource(ZERO_RESOURCE) + .withQueue(B) + .assertEffectiveMinResource(createResource(MEMORY, VCORES * 0.5)) + .build(); + + QueueCapacityUpdateContext updateContext = update(assertionBuilder, UPDATE_RESOURCE); + Optional queueAZeroResourceWarning = getSpecificWarning( + updateContext.getUpdateWarnings(), QueueUpdateWarningType.QUEUE_ZERO_RESOURCE, A); + Optional rootUnderUtilizedWarning = getSpecificWarning( + updateContext.getUpdateWarnings(), QueueUpdateWarningType.BRANCH_UNDERUTILIZED, ROOT); + Assert.assertTrue(queueAZeroResourceWarning.isPresent()); + Assert.assertTrue(rootUnderUtilizedWarning.isPresent()); + } + + @Test + public void testDifferentMinimumAndMaximumCapacityTypes() throws IOException { + csConf.setCapacityVector(A, 
NO_LABEL, A_MIN_VECTOR_DIFFERENT_MIN_MAX); + csConf.setMaximumCapacityVector(A, NO_LABEL, A_MAX_VECTOR_DIFFERENT_MIN_MAX); + csConf.setCapacityVector(B, NO_LABEL, B_MIN_VECTOR_DIFFERENT_MIN_MAX); + csConf.setMaximumCapacityVector(B, NO_LABEL, B_MAX_VECTOR_DIFFERENT_MIN_MAX); + + QueueAssertionBuilder assertionBuilder = createAssertionBuilder() + .withQueue(A) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(UPDATE_RESOURCE, 0.5d)) + .assertEffectiveMaxResource(A_EXPECTED_MAX_RESOURCE_DIFFERENT_MIN_MAX) + .withQueue(B) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(UPDATE_RESOURCE, 0.5d)) + .assertEffectiveMaxResource(B_EXPECTED_MAX_RESOURCE_DIFFERENT_MIN_MAX) + .build(); + + QueueCapacityUpdateContext updateContext = update(assertionBuilder, UPDATE_RESOURCE); + Assert.assertEquals(0, updateContext.getUpdateWarnings().size()); + + // WEIGHT capacity type for maximum capacity is not supported + csConf.setMaximumCapacityVector(B, NO_LABEL, B_INVALID_MAX_VECTOR); + try { + cs.reinitialize(csConf, mockRM.getRMContext()); + update(assertionBuilder, UPDATE_RESOURCE); + Assert.fail("WEIGHT maximum capacity type is not supported, an error should be thrown when " + + "set up"); + } catch (IllegalStateException ignored) { + } + } + + @Test + public void testMaximumResourceWarnings() throws IOException { + csConf.setMaximumCapacityVector(A1, NO_LABEL, A1_MAX_VECTOR_MAX_WARNINGS); + csConf.setCapacityVector(A11, NO_LABEL, A11_MIN_VECTOR_MAX_WARNINGS); + csConf.setCapacityVector(A12, NO_LABEL, A12_MIN_VECTOR_MAX_WARNINGS); + csConf.setMaximumCapacityVector(A11, NO_LABEL, A11_MAX_VECTOR_MAX_WARNINGS); + + QueueAssertionBuilder assertionBuilder = createAssertionBuilder() + .withQueue(A11) + .assertEffectiveMinResource(A11_EXPECTED_MIN_RESOURCE_MAX_WARNINGS) + .assertEffectiveMaxResource(A11_EXPECTED_MAX_RESOURCE_MAX_WARNINGS) + .withQueue(A12) + .assertEffectiveMinResource(A12_EXPECTED_MIN_RESOURCE_MAX_WARNINGS) + .assertEffectiveMaxResource(A12_EXPECTED_MAX_RESOURCE_MAX_WARNINGS) + .build(); + + QueueCapacityUpdateContext updateContext = update(assertionBuilder, UPDATE_RESOURCE); + Optional queueA11ExceedsParentMaxResourceWarning = getSpecificWarning( + updateContext.getUpdateWarnings(), QueueUpdateWarningType.QUEUE_MAX_RESOURCE_EXCEEDS_PARENT, + A11); + Optional queueA11MinExceedsMaxWarning = getSpecificWarning( + updateContext.getUpdateWarnings(), QueueUpdateWarningType.QUEUE_EXCEEDS_MAX_RESOURCE, A11); + Assert.assertTrue(queueA11ExceedsParentMaxResourceWarning.isPresent()); + Assert.assertTrue(queueA11MinExceedsMaxWarning.isPresent()); + } + + @Test + public void testNodeLabels() throws Exception { + setLabeledQueueConfigs(); + + QueueAssertionBuilder assertionBuilder = createAssertionBuilder() + .withQueue(A) + .assertEffectiveMinResource(A_EXPECTED_MIN_RESOURCE_NO_LABEL, NO_LABEL) + .withQueue(A1) + .assertEffectiveMinResource(A1_EXPECTED_MIN_RESOURCE_NO_LABEL, NO_LABEL) + .withQueue(A2) + .assertEffectiveMinResource(A2_EXPECTED_MIN_RESOURCE_NO_LABEL, NO_LABEL) + .withQueue(B) + .assertEffectiveMinResource(B_EXPECTED_MIN_RESOURCE_NO_LABEL, NO_LABEL) + .withQueue(A) + .assertEffectiveMinResource(A_EXPECTED_MIN_RESOURCE_X_LABEL, X_LABEL) + .withQueue(A1) + .assertEffectiveMinResource(A1_EXPECTED_MIN_RESOURCE_X_LABEL, X_LABEL) + .withQueue(A2) + .assertEffectiveMinResource(A2_EXPECTED_MIN_RESOURCE_X_LABEL, X_LABEL) + .withQueue(B) + .assertEffectiveMinResource(B_EXPECTED_MIN_RESOURCE_X_LABEL, X_LABEL) + .withQueue(A) + .assertEffectiveMinResource(A_EXPECTED_MIN_RESOURCE_Y_LABEL, 
Y_LABEL) + .withQueue(A1) + .assertEffectiveMinResource(A1_EXPECTED_MIN_RESOURCE_Y_LABEL, Y_LABEL) + .withQueue(A2) + .assertEffectiveMinResource(A2_EXPECTED_MIN_RESOURCE_Y_LABEL, Y_LABEL) + .withQueue(B) + .assertEffectiveMinResource(B_EXPECTED_MIN_RESOURCE_Y_LABEL, Y_LABEL) + .withQueue(A) + .assertEffectiveMinResource(A_EXPECTED_MIN_RESOURCE_Z_LABEL, Z_LABEL) + .withQueue(A1) + .assertEffectiveMinResource(A1_EXPECTED_MIN_RESOURCE_Z_LABEL, Z_LABEL) + .withQueue(A2) + .assertEffectiveMinResource(A2_EXPECTED_MIN_RESOURCE_Z_LABEL, Z_LABEL) + .withQueue(B) + .assertEffectiveMinResource(B_EXPECTED_MIN_RESOURCE_Z_LABEL, Z_LABEL) + .build(); + + update(assertionBuilder, UPDATE_RESOURCE, EMPTY_LABEL_RESOURCE); + } + + private void setLabeledQueueConfigs() throws Exception { + mgr.addToCluserNodeLabelsWithDefaultExclusivity(ImmutableSet.of(X_LABEL, Y_LABEL, Z_LABEL)); + mgr.addLabelsToNode(ImmutableMap.of(NodeId.newInstance(H1_NODE, 0), + TestUtils.toSet(X_LABEL), NodeId.newInstance(H2_NODE, 0), + TestUtils.toSet(Y_LABEL), NodeId.newInstance(H3_NODE, 0), + TestUtils.toSet(Y_LABEL), NodeId.newInstance(H4_NODE, 0), + TestUtils.toSet(Z_LABEL), NodeId.newInstance(H5_NODE, 0), + RMNodeLabelsManager.EMPTY_STRING_SET)); + + mockRM.registerNode("h1:1234", H1_MEMORY, H1_VCORES); // label = x + mockRM.registerNode("h2:1234", H2_MEMORY, H2_VCORES); // label = y + mockRM.registerNode("h3:1234", H3_MEMORY, H3_VCORES); // label = y + mockRM.registerNode("h4:1234", H4_MEMORY, H4_VCORES); // label = z + + csConf.setCapacityVector(A, NO_LABEL, A_VECTOR_NO_LABEL); + csConf.setCapacityVector(A1, NO_LABEL, A1_VECTOR_NO_LABEL); + csConf.setCapacityVector(A2, NO_LABEL, A2_VECTOR_NO_LABEL); + csConf.setCapacityVector(B, NO_LABEL, B_VECTOR_NO_LABEL); + + csConf.setCapacityVector(A, X_LABEL, A_VECTOR_X_LABEL); + csConf.setCapacityVector(A1, X_LABEL, A1_VECTOR_X_LABEL); + csConf.setCapacityVector(A2, X_LABEL, A2_VECTOR_X_LABEL); + csConf.setCapacityVector(B, X_LABEL, B_VECTOR_X_LABEL); + + csConf.setCapacityVector(A, Y_LABEL, A_VECTOR_Y_LABEL); + csConf.setCapacityVector(A1, Y_LABEL, A1_VECTOR_Y_LABEL); + csConf.setCapacityVector(A2, Y_LABEL, A2_VECTOR_Y_LABEL); + csConf.setCapacityVector(B, Y_LABEL, B_VECTOR_Y_LABEL); + + csConf.setCapacityVector(A, Z_LABEL, A_VECTOR_Z_LABEL); + csConf.setCapacityVector(A1, Z_LABEL, A1_VECTOR_Z_LABEL); + csConf.setCapacityVector(A2, Z_LABEL, A2_VECTOR_Z_LABEL); + csConf.setCapacityVector(B, Z_LABEL, B_VECTOR_Z_LABEL); + + cs.reinitialize(csConf, mockRM.getRMContext()); + } + + private void setupQueueHierarchyWithoutRemainingResource() throws IOException { + csConf.setState(B, QueueState.STOPPED); + cs.reinitialize(csConf, mockRM.getRMContext()); + setQueues(); + + csConf.setState(B, QueueState.RUNNING); + csConf.setCapacityVector(A, NO_LABEL, A_VECTOR_NO_REMAINING_RESOURCE); + csConf.setCapacityVector(A1, NO_LABEL, A1_VECTOR_NO_REMAINING_RESOURCE); + csConf.setCapacityVector(A11, NO_LABEL, A11_VECTOR_NO_REMAINING_RESOURCE); + csConf.setCapacityVector(A12, NO_LABEL, A12_VECTOR_NO_REMAINING_RESOURCE); + csConf.setCapacityVector(A2, NO_LABEL, A2_VECTOR_NO_REMAINING_RESOURCE); + csConf.setCapacityVector(B, NO_LABEL, B_VECTOR_NO_REMAINING_RESOURCE); + csConf.setCapacityVector(B1, NO_LABEL, B1_VECTOR_NO_REMAINING_RESOURCE); + csConf.setCapacityVector(C, NO_LABEL, C_VECTOR_NO_REMAINING_RESOURCE); + + cs.reinitialize(csConf, mockRM.getRMContext()); + } + + private void setupQueueHierarchyWithWarnings() throws IOException { + csConf.setState(B, QueueState.STOPPED); + cs.reinitialize(csConf, 
mockRM.getRMContext()); + setQueues(); + + csConf.setState(B, QueueState.RUNNING); + csConf.setCapacityVector(A, NO_LABEL, A_VECTOR_WITH_WARNINGS); + csConf.setCapacityVector(A1, NO_LABEL, A1_VECTOR_WITH_WARNING); + csConf.setCapacityVector(A11, NO_LABEL, A11_VECTOR_WITH_WARNING); + csConf.setCapacityVector(A12, NO_LABEL, A12_VECTOR_WITH_WARNING); + csConf.setCapacityVector(A2, NO_LABEL, A2_VECTOR_WITH_WARNING); + csConf.setCapacityVector(B, NO_LABEL, B_VECTOR_WITH_WARNING); + csConf.setCapacityVector(B1, NO_LABEL, B1_VECTOR_WITH_WARNING); + csConf.setCapacityVector(C, NO_LABEL, C_VECTOR_WITH_WARNING); + + cs.reinitialize(csConf, mockRM.getRMContext()); + } + + private void setQueues() { + csConf.setQueues("root", new String[]{"a", "b", "c"}); + csConf.setQueues(A, new String[]{"a1", "a2"}); + csConf.setQueues(B, new String[]{"b1"}); + } + + private Optional getSpecificWarning( + Collection warnings, QueueUpdateWarningType warningTypeToSelect, + String queue) { + return warnings.stream().filter((w) -> w.getWarningType().equals(warningTypeToSelect) + && w.getQueue().equals(queue)).findFirst(); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueCapacityVector.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueCapacityVector.java index 058e14bfaf2..18eead5d8ec 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueCapacityVector.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueCapacityVector.java @@ -20,7 +20,7 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; import org.apache.hadoop.util.Lists; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueCapacityType; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueCapacityVectorEntry; import org.apache.hadoop.yarn.util.resource.ResourceUtils; import org.junit.Assert; @@ -50,21 +50,21 @@ public class TestQueueCapacityVector { public void getResourceNamesByCapacityType() { QueueCapacityVector capacityVector = QueueCapacityVector.newInstance(); - capacityVector.setResource(MEMORY_URI, 10, QueueCapacityType.PERCENTAGE); - capacityVector.setResource(VCORES_URI, 6, QueueCapacityType.PERCENTAGE); + capacityVector.setResource(MEMORY_URI, 10, ResourceUnitCapacityType.PERCENTAGE); + capacityVector.setResource(VCORES_URI, 6, ResourceUnitCapacityType.PERCENTAGE); // custom is not set, defaults to 0 Assert.assertEquals(1, capacityVector.getResourceNamesByCapacityType( - QueueCapacityType.ABSOLUTE).size()); + ResourceUnitCapacityType.ABSOLUTE).size()); Assert.assertTrue(capacityVector.getResourceNamesByCapacityType( - QueueCapacityType.ABSOLUTE).contains(CUSTOM_RESOURCE)); + ResourceUnitCapacityType.ABSOLUTE).contains(CUSTOM_RESOURCE)); Assert.assertEquals(2, capacityVector.getResourceNamesByCapacityType( - 
QueueCapacityType.PERCENTAGE).size()); + ResourceUnitCapacityType.PERCENTAGE).size()); Assert.assertTrue(capacityVector.getResourceNamesByCapacityType( - QueueCapacityType.PERCENTAGE).contains(VCORES_URI)); + ResourceUnitCapacityType.PERCENTAGE).contains(VCORES_URI)); Assert.assertTrue(capacityVector.getResourceNamesByCapacityType( - QueueCapacityType.PERCENTAGE).contains(MEMORY_URI)); + ResourceUnitCapacityType.PERCENTAGE).contains(MEMORY_URI)); Assert.assertEquals(10, capacityVector.getResource(MEMORY_URI).getResourceValue(), EPSILON); Assert.assertEquals(6, capacityVector.getResource(VCORES_URI).getResourceValue(), EPSILON); } @@ -73,13 +73,15 @@ public class TestQueueCapacityVector { public void isResourceOfType() { QueueCapacityVector capacityVector = QueueCapacityVector.newInstance(); - capacityVector.setResource(MEMORY_URI, 10, QueueCapacityType.WEIGHT); - capacityVector.setResource(VCORES_URI, 6, QueueCapacityType.PERCENTAGE); - capacityVector.setResource(CUSTOM_RESOURCE, 3, QueueCapacityType.ABSOLUTE); + capacityVector.setResource(MEMORY_URI, 10, ResourceUnitCapacityType.WEIGHT); + capacityVector.setResource(VCORES_URI, 6, ResourceUnitCapacityType.PERCENTAGE); + capacityVector.setResource(CUSTOM_RESOURCE, 3, ResourceUnitCapacityType.ABSOLUTE); - Assert.assertTrue(capacityVector.isResourceOfType(MEMORY_URI, QueueCapacityType.WEIGHT)); - Assert.assertTrue(capacityVector.isResourceOfType(VCORES_URI, QueueCapacityType.PERCENTAGE)); - Assert.assertTrue(capacityVector.isResourceOfType(CUSTOM_RESOURCE, QueueCapacityType.ABSOLUTE)); + Assert.assertTrue(capacityVector.isResourceOfType(MEMORY_URI, ResourceUnitCapacityType.WEIGHT)); + Assert.assertTrue(capacityVector.isResourceOfType(VCORES_URI, + ResourceUnitCapacityType.PERCENTAGE)); + Assert.assertTrue(capacityVector.isResourceOfType(CUSTOM_RESOURCE, + ResourceUnitCapacityType.ABSOLUTE)); } @Test @@ -99,9 +101,9 @@ public class TestQueueCapacityVector { public void testToString() { QueueCapacityVector capacityVector = QueueCapacityVector.newInstance(); - capacityVector.setResource(MEMORY_URI, 10, QueueCapacityType.WEIGHT); - capacityVector.setResource(VCORES_URI, 6, QueueCapacityType.PERCENTAGE); - capacityVector.setResource(CUSTOM_RESOURCE, 3, QueueCapacityType.ABSOLUTE); + capacityVector.setResource(MEMORY_URI, 10, ResourceUnitCapacityType.WEIGHT); + capacityVector.setResource(VCORES_URI, 6, ResourceUnitCapacityType.PERCENTAGE); + capacityVector.setResource(CUSTOM_RESOURCE, 3, ResourceUnitCapacityType.ABSOLUTE); Assert.assertEquals(MIXED_CAPACITY_VECTOR_STRING, capacityVector.toString()); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestResourceVector.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestResourceVector.java index fd6edb1fa5d..c56b37dc990 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestResourceVector.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestResourceVector.java @@ -68,7 +68,7 @@ public class TestResourceVector { public void testSubtract() { ResourceVector lhsResourceVector = ResourceVector.of(13); 
ResourceVector rhsResourceVector = ResourceVector.of(5); - lhsResourceVector.subtract(rhsResourceVector); + lhsResourceVector.decrement(rhsResourceVector); Assert.assertEquals(8, lhsResourceVector.getValue(MEMORY_URI), EPSILON); Assert.assertEquals(8, lhsResourceVector.getValue(VCORES_URI), EPSILON); @@ -77,7 +77,7 @@ public class TestResourceVector { ResourceVector negativeResourceVector = ResourceVector.of(-100); // Check whether overflow causes any issues - negativeResourceVector.subtract(ResourceVector.of(Float.MAX_VALUE)); + negativeResourceVector.decrement(ResourceVector.of(Float.MAX_VALUE)); Assert.assertEquals(-Float.MAX_VALUE, negativeResourceVector.getValue(MEMORY_URI), EPSILON); Assert.assertEquals(-Float.MAX_VALUE, negativeResourceVector.getValue(VCORES_URI), EPSILON); Assert.assertEquals(-Float.MAX_VALUE, negativeResourceVector.getValue(CUSTOM_RESOURCE), @@ -111,7 +111,7 @@ public class TestResourceVector { Assert.assertNotEquals(resource, resourceVector); ResourceVector resourceVectorOne = ResourceVector.of(1); - resourceVectorOther.subtract(resourceVectorOne); + resourceVectorOther.decrement(resourceVectorOne); Assert.assertEquals(resourceVectorOther, resourceVector); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUniformQueueResourceCalculation.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUniformQueueResourceCalculation.java new file mode 100644 index 00000000000..863baaaaf95 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUniformQueueResourceCalculation.java @@ -0,0 +1,191 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
    + * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.util.resource.ResourceUtils; +import org.junit.Test; + +import java.io.IOException; + +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoCreatedQueueBase.GB; + +public class TestUniformQueueResourceCalculation extends CapacitySchedulerQueueCalculationTestBase { + + private static final Resource QUEUE_A_RES = Resource.newInstance(80 * GB, + 10); + private static final Resource QUEUE_B_RES = Resource.newInstance(170 * GB, + 30); + private static final Resource QUEUE_A1_RES = Resource.newInstance(50 * GB, + 4); + private static final Resource QUEUE_A2_RES = Resource.newInstance(30 * GB, + 6); + private static final Resource QUEUE_A11_RES = Resource.newInstance(40 * GB, + 2); + private static final Resource QUEUE_A12_RES = Resource.newInstance(10 * GB, + 2); + private static final Resource UPDATE_RES = Resource.newInstance(250 * GB, 40); + private static final Resource PERCENTAGE_ALL_RES = Resource.newInstance(10 * GB, 20); + + public static final double A_CAPACITY = 0.3; + public static final double B_CAPACITY = 0.7; + public static final double A1_CAPACITY = 0.17; + public static final double A11_CAPACITY = 0.25; + public static final double A12_CAPACITY = 0.75; + public static final double A2_CAPACITY = 0.83; + + public static final float A_WEIGHT = 3; + public static final float B_WEIGHT = 6; + public static final float A1_WEIGHT = 2; + public static final float A11_WEIGHT = 5; + public static final float A12_WEIGHT = 8; + public static final float A2_WEIGHT = 3; + + public static final double A_NORMALIZED_WEIGHT = A_WEIGHT / (A_WEIGHT + B_WEIGHT); + public static final double B_NORMALIZED_WEIGHT = B_WEIGHT / (A_WEIGHT + B_WEIGHT); + public static final double A1_NORMALIZED_WEIGHT = A1_WEIGHT / (A1_WEIGHT + A2_WEIGHT); + public static final double A2_NORMALIZED_WEIGHT = A2_WEIGHT / (A1_WEIGHT + A2_WEIGHT); + public static final double A11_NORMALIZED_WEIGHT = A11_WEIGHT / (A11_WEIGHT + A12_WEIGHT); + public static final double A12_NORMALIZED_WEIGHT = A12_WEIGHT / (A11_WEIGHT + A12_WEIGHT); + + @Test + public void testWeightResourceCalculation() throws IOException { + csConf.setNonLabeledQueueWeight(A, A_WEIGHT); + csConf.setNonLabeledQueueWeight(B, B_WEIGHT); + csConf.setNonLabeledQueueWeight(A1, A1_WEIGHT); + csConf.setNonLabeledQueueWeight(A11, A11_WEIGHT); + csConf.setNonLabeledQueueWeight(A12, A12_WEIGHT); + csConf.setNonLabeledQueueWeight(A2, A2_WEIGHT); + + QueueAssertionBuilder queueAssertionBuilder = createAssertionBuilder() + .withQueue(A) + .assertEffectiveMinResource(ResourceUtils.multiplyRound(UPDATE_RES, A_NORMALIZED_WEIGHT)) + .assertAbsoluteCapacity(A_NORMALIZED_WEIGHT) + .withQueue(B) + .assertEffectiveMinResource(ResourceUtils.multiplyRound(UPDATE_RES, B_NORMALIZED_WEIGHT)) + .assertAbsoluteCapacity(B_NORMALIZED_WEIGHT) + .withQueue(A1) + .assertEffectiveMinResource(ResourceUtils.multiplyRound(UPDATE_RES, + A_NORMALIZED_WEIGHT * A1_NORMALIZED_WEIGHT)) + .assertAbsoluteCapacity(A_NORMALIZED_WEIGHT * A1_NORMALIZED_WEIGHT) + 
.withQueue(A2) + .assertEffectiveMinResource(ResourceUtils.multiplyRound(UPDATE_RES, + A_NORMALIZED_WEIGHT * A2_NORMALIZED_WEIGHT)) + .assertAbsoluteCapacity(A_NORMALIZED_WEIGHT * A2_NORMALIZED_WEIGHT) + .withQueue(A11) + .assertEffectiveMinResource(ResourceUtils.multiplyRound(UPDATE_RES, + A_NORMALIZED_WEIGHT * A1_NORMALIZED_WEIGHT * A11_NORMALIZED_WEIGHT)) + .assertAbsoluteCapacity(A_NORMALIZED_WEIGHT * A1_NORMALIZED_WEIGHT * A11_NORMALIZED_WEIGHT) + .withQueue(A12) + .assertEffectiveMinResource(ResourceUtils.multiplyRound(UPDATE_RES, + A_NORMALIZED_WEIGHT * A1_NORMALIZED_WEIGHT * A12_NORMALIZED_WEIGHT)) + .assertAbsoluteCapacity(A_NORMALIZED_WEIGHT * A1_NORMALIZED_WEIGHT * A12_NORMALIZED_WEIGHT) + .build(); + + update(queueAssertionBuilder, UPDATE_RES); + } + + @Test + public void testPercentageResourceCalculation() throws IOException { + csConf.setCapacity(A, (float) (A_CAPACITY * 100)); + csConf.setCapacity(B, (float) (B_CAPACITY * 100)); + csConf.setCapacity(A1, (float) (A1_CAPACITY * 100)); + csConf.setCapacity(A11, (float) (A11_CAPACITY * 100)); + csConf.setCapacity(A12, (float) (A12_CAPACITY * 100)); + csConf.setCapacity(A2, (float) (A2_CAPACITY * 100)); + + QueueAssertionBuilder queueAssertionBuilder = createAssertionBuilder() + .withQueue(A) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(PERCENTAGE_ALL_RES, A_CAPACITY)) + .assertCapacity(A_CAPACITY) + .assertAbsoluteCapacity(A_CAPACITY) + .withQueue(B) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(PERCENTAGE_ALL_RES, B_CAPACITY)) + .assertCapacity(B_CAPACITY) + .assertAbsoluteCapacity(B_CAPACITY) + .withQueue(A1) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(PERCENTAGE_ALL_RES, + A_CAPACITY * A1_CAPACITY)) + .assertCapacity(A1_CAPACITY) + .assertAbsoluteCapacity(A_CAPACITY * A1_CAPACITY) + .withQueue(A2) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(PERCENTAGE_ALL_RES, + A_CAPACITY * A2_CAPACITY)) + .assertCapacity(A2_CAPACITY) + .assertAbsoluteCapacity(A_CAPACITY * A2_CAPACITY) + .withQueue(A11) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(PERCENTAGE_ALL_RES, + A11_CAPACITY * A_CAPACITY * A1_CAPACITY)) + .assertCapacity(A11_CAPACITY) + .assertAbsoluteCapacity(A11_CAPACITY * A_CAPACITY * A1_CAPACITY) + .withQueue(A12) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(PERCENTAGE_ALL_RES, + A12_CAPACITY * A_CAPACITY * A1_CAPACITY)) + .assertCapacity(A12_CAPACITY) + .assertAbsoluteCapacity(A12_CAPACITY * A_CAPACITY * A1_CAPACITY) + .build(); + + update(queueAssertionBuilder, PERCENTAGE_ALL_RES); + } + + @Test + public void testAbsoluteResourceCalculation() throws IOException { + csConf.setMinimumResourceRequirement("", new QueuePath(A), QUEUE_A_RES); + csConf.setMinimumResourceRequirement("", new QueuePath(B), QUEUE_B_RES); + csConf.setMinimumResourceRequirement("", new QueuePath(A1), QUEUE_A1_RES); + csConf.setMinimumResourceRequirement("", new QueuePath(A2), QUEUE_A2_RES); + csConf.setMinimumResourceRequirement("", new QueuePath(A11), QUEUE_A11_RES); + csConf.setMinimumResourceRequirement("", new QueuePath(A12), QUEUE_A12_RES); + + QueueAssertionBuilder queueAssertionBuilder = createAssertionBuilder() + .withQueue(A) + .assertEffectiveMinResource(QUEUE_A_RES) + .withQueue(B) + .assertEffectiveMinResource(QUEUE_B_RES) + .withQueue(A1) + .assertEffectiveMinResource(QUEUE_A1_RES) + .withQueue(A2) + .assertEffectiveMinResource(QUEUE_A2_RES) + .withQueue(A11) + .assertEffectiveMinResource(QUEUE_A11_RES) + .withQueue(A12) + .assertEffectiveMinResource(QUEUE_A12_RES) + 
.build(); + + update(queueAssertionBuilder, UPDATE_RES); + + QueueAssertionBuilder queueAssertionHalfClusterResource = createAssertionBuilder() + .withQueue(A) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(QUEUE_A_RES, 0.5f)) + .withQueue(B) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(QUEUE_B_RES, 0.5f)) + .withQueue(A1) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(QUEUE_A1_RES, 0.5f)) + .withQueue(A2) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(QUEUE_A2_RES, 0.5f)) + .withQueue(A11) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(QUEUE_A11_RES, 0.5f)) + .withQueue(A12) + .assertEffectiveMinResource(ResourceUtils.multiplyFloor(QUEUE_A12_RES, 0.5f)) + .build(); + + update(queueAssertionHalfClusterResource, ResourceUtils.multiplyFloor(UPDATE_RES, 0.5f)); + } + +} \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestConfigurationUpdateAssembler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestConfigurationUpdateAssembler.java new file mode 100644 index 00000000000..890996ac23e --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestConfigurationUpdateAssembler.java @@ -0,0 +1,173 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; +import org.apache.hadoop.yarn.webapp.dao.QueueConfigInfo; +import org.apache.hadoop.yarn.webapp.dao.SchedConfUpdateInfo; +import org.junit.Before; +import org.junit.Test; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertThrows; +import static org.junit.Assert.assertTrue; + +/** + * Tests {@link ConfigurationUpdateAssembler}. 
+ */ +public class TestConfigurationUpdateAssembler { + + private static final String A_PATH = "root.a"; + private static final String B_PATH = "root.b"; + private static final String C_PATH = "root.c"; + + private static final String CONFIG_NAME = "testConfigName"; + private static final String A_CONFIG_PATH = CapacitySchedulerConfiguration.PREFIX + A_PATH + + CapacitySchedulerConfiguration.DOT + CONFIG_NAME; + private static final String B_CONFIG_PATH = CapacitySchedulerConfiguration.PREFIX + B_PATH + + CapacitySchedulerConfiguration.DOT + CONFIG_NAME; + private static final String C_CONFIG_PATH = CapacitySchedulerConfiguration.PREFIX + C_PATH + + CapacitySchedulerConfiguration.DOT + CONFIG_NAME; + private static final String ROOT_QUEUES_PATH = CapacitySchedulerConfiguration.PREFIX + + CapacitySchedulerConfiguration.ROOT + CapacitySchedulerConfiguration.DOT + + CapacitySchedulerConfiguration.QUEUES; + + private static final String A_INIT_CONFIG_VALUE = "aInitValue"; + private static final String A_CONFIG_VALUE = "aValue"; + private static final String B_INIT_CONFIG_VALUE = "bInitValue"; + private static final String B_CONFIG_VALUE = "bValue"; + private static final String C_CONFIG_VALUE = "cValue"; + + private CapacitySchedulerConfiguration csConfig; + + @Before + public void setUp() { + csConfig = crateInitialCSConfig(); + } + + @Test + public void testAddQueue() throws Exception { + SchedConfUpdateInfo updateInfo = new SchedConfUpdateInfo(); + Map updateMap = new HashMap<>(); + updateMap.put(CONFIG_NAME, C_CONFIG_VALUE); + QueueConfigInfo queueConfigInfo = new QueueConfigInfo(C_PATH, updateMap); + updateInfo.getAddQueueInfo().add(queueConfigInfo); + + Map configurationUpdate = + ConfigurationUpdateAssembler.constructKeyValueConfUpdate(csConfig, updateInfo); + + assertEquals(C_CONFIG_VALUE, configurationUpdate.get(C_CONFIG_PATH)); + assertEquals("a,b,c", configurationUpdate.get(ROOT_QUEUES_PATH)); + } + + @Test + public void testAddExistingQueue() { + SchedConfUpdateInfo updateInfo = new SchedConfUpdateInfo(); + Map updateMap = new HashMap<>(); + updateMap.put(CONFIG_NAME, A_CONFIG_VALUE); + QueueConfigInfo queueConfigInfo = new QueueConfigInfo(A_PATH, updateMap); + updateInfo.getAddQueueInfo().add(queueConfigInfo); + + assertThrows(IOException.class, () -> { + ConfigurationUpdateAssembler.constructKeyValueConfUpdate(csConfig, updateInfo); + }); + } + + @Test + public void testAddInvalidQueue() { + SchedConfUpdateInfo updateInfo = new SchedConfUpdateInfo(); + Map updateMap = new HashMap<>(); + updateMap.put(CONFIG_NAME, A_CONFIG_VALUE); + QueueConfigInfo queueConfigInfo = new QueueConfigInfo("invalidPath", updateMap); + updateInfo.getAddQueueInfo().add(queueConfigInfo); + + assertThrows(IOException.class, () -> { + ConfigurationUpdateAssembler.constructKeyValueConfUpdate(csConfig, updateInfo); + }); + } + + @Test + public void testUpdateQueue() throws Exception { + SchedConfUpdateInfo updateInfo = new SchedConfUpdateInfo(); + Map updateMap = new HashMap<>(); + updateMap.put(CONFIG_NAME, A_CONFIG_VALUE); + QueueConfigInfo queueAConfigInfo = new QueueConfigInfo(A_PATH, updateMap); + updateInfo.getUpdateQueueInfo().add(queueAConfigInfo); + + Map updateMapQueueB = new HashMap<>(); + updateMapQueueB.put(CONFIG_NAME, B_CONFIG_VALUE); + QueueConfigInfo queueBConfigInfo = new QueueConfigInfo(B_PATH, updateMapQueueB); + + updateInfo.getUpdateQueueInfo().add(queueBConfigInfo); + + Map configurationUpdate = + ConfigurationUpdateAssembler.constructKeyValueConfUpdate(csConfig, updateInfo); + + 
assertEquals(A_CONFIG_VALUE, configurationUpdate.get(A_CONFIG_PATH)); + assertEquals(B_CONFIG_VALUE, configurationUpdate.get(B_CONFIG_PATH)); + } + + @Test + public void testRemoveQueue() throws Exception { + SchedConfUpdateInfo updateInfo = new SchedConfUpdateInfo(); + updateInfo.getRemoveQueueInfo().add(A_PATH); + + Map configurationUpdate = + ConfigurationUpdateAssembler.constructKeyValueConfUpdate(csConfig, updateInfo); + + assertTrue(configurationUpdate.containsKey(A_CONFIG_PATH)); + assertNull(configurationUpdate.get(A_CONFIG_PATH)); + assertEquals("b", configurationUpdate.get(ROOT_QUEUES_PATH)); + } + + @Test + public void testRemoveInvalidQueue() { + SchedConfUpdateInfo updateInfo = new SchedConfUpdateInfo(); + updateInfo.getRemoveQueueInfo().add("invalidPath"); + + assertThrows(IOException.class, () -> { + ConfigurationUpdateAssembler.constructKeyValueConfUpdate(csConfig, updateInfo); + }); + } + + @Test + public void testRemoveNonExistingQueue() { + SchedConfUpdateInfo updateInfo = new SchedConfUpdateInfo(); + updateInfo.getRemoveQueueInfo().add("root.d"); + + assertThrows(IOException.class, () -> { + ConfigurationUpdateAssembler.constructKeyValueConfUpdate(csConfig, updateInfo); + }); + } + + private CapacitySchedulerConfiguration crateInitialCSConfig() { + CapacitySchedulerConfiguration csConf = new CapacitySchedulerConfiguration(); + csConf.setQueues(CapacitySchedulerConfiguration.ROOT, new String[] {"a, b"}); + + csConf.set(A_CONFIG_PATH, A_INIT_CONFIG_VALUE); + csConf.set(B_CONFIG_PATH, B_INIT_CONFIG_VALUE); + + return csConf; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestQueueCapacityConfigParser.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestQueueCapacityConfigParser.java index 1aba816abd2..4e8f31e1a85 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestQueueCapacityConfigParser.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestQueueCapacityConfigParser.java @@ -23,7 +23,7 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueCapacityVectorEntry; -import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.QueueCapacityType; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueCapacityVector.ResourceUnitCapacityType; import org.apache.hadoop.yarn.util.resource.ResourceUtils; import org.junit.Assert; import org.junit.Test; @@ -33,7 +33,6 @@ import java.util.List; import static org.apache.hadoop.yarn.api.records.ResourceInformation.GPU_URI; import static org.apache.hadoop.yarn.api.records.ResourceInformation.MEMORY_URI; import static org.apache.hadoop.yarn.api.records.ResourceInformation.VCORES_URI; -import static org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.NO_LABEL; 
import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestQueueMetricsForCustomResources.GB; import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.EPSILON; @@ -70,47 +69,43 @@ public class TestQueueCapacityConfigParser { @Test public void testPercentageCapacityConfig() { - CapacitySchedulerConfiguration conf = new CapacitySchedulerConfiguration(); - conf.setCapacity(QUEUE, PERCENTAGE_VALUE); - - QueueCapacityVector percentageCapacityVector = capacityConfigParser.parse(conf, QUEUE, - NO_LABEL); + QueueCapacityVector percentageCapacityVector = + capacityConfigParser.parse(Float.toString(PERCENTAGE_VALUE), QUEUE); QueueCapacityVectorEntry memory = percentageCapacityVector.getResource(MEMORY_URI); QueueCapacityVectorEntry vcore = percentageCapacityVector.getResource(VCORES_URI); - Assert.assertEquals(QueueCapacityType.PERCENTAGE, memory.getVectorResourceType()); + Assert.assertEquals(ResourceUnitCapacityType.PERCENTAGE, memory.getVectorResourceType()); Assert.assertEquals(PERCENTAGE_VALUE, memory.getResourceValue(), EPSILON); - Assert.assertEquals(QueueCapacityType.PERCENTAGE, vcore.getVectorResourceType()); + Assert.assertEquals(ResourceUnitCapacityType.PERCENTAGE, vcore.getVectorResourceType()); Assert.assertEquals(PERCENTAGE_VALUE, vcore.getResourceValue(), EPSILON); - QueueCapacityVector rootCapacityVector = capacityConfigParser.parse(conf, - CapacitySchedulerConfiguration.ROOT, NO_LABEL); + QueueCapacityVector rootCapacityVector = + capacityConfigParser.parse(Float.toString(PERCENTAGE_VALUE), + CapacitySchedulerConfiguration.ROOT); QueueCapacityVectorEntry memoryRoot = rootCapacityVector.getResource(MEMORY_URI); QueueCapacityVectorEntry vcoreRoot = rootCapacityVector.getResource(VCORES_URI); - Assert.assertEquals(QueueCapacityType.PERCENTAGE, memoryRoot.getVectorResourceType()); + Assert.assertEquals(ResourceUnitCapacityType.PERCENTAGE, memoryRoot.getVectorResourceType()); Assert.assertEquals(100f, memoryRoot.getResourceValue(), EPSILON); - Assert.assertEquals(QueueCapacityType.PERCENTAGE, vcoreRoot.getVectorResourceType()); + Assert.assertEquals(ResourceUnitCapacityType.PERCENTAGE, vcoreRoot.getVectorResourceType()); Assert.assertEquals(100f, vcoreRoot.getResourceValue(), EPSILON); } @Test public void testWeightCapacityConfig() { - CapacitySchedulerConfiguration conf = new CapacitySchedulerConfiguration(); - conf.setNonLabeledQueueWeight(QUEUE, WEIGHT_VALUE); - - QueueCapacityVector weightCapacityVector = capacityConfigParser.parse(conf, QUEUE, NO_LABEL); + QueueCapacityVector weightCapacityVector = capacityConfigParser.parse(WEIGHT_VALUE + "w", + QUEUE); QueueCapacityVectorEntry memory = weightCapacityVector.getResource(MEMORY_URI); QueueCapacityVectorEntry vcore = weightCapacityVector.getResource(VCORES_URI); - Assert.assertEquals(QueueCapacityType.WEIGHT, memory.getVectorResourceType()); + Assert.assertEquals(ResourceUnitCapacityType.WEIGHT, memory.getVectorResourceType()); Assert.assertEquals(WEIGHT_VALUE, memory.getResourceValue(), EPSILON); - Assert.assertEquals(QueueCapacityType.WEIGHT, vcore.getVectorResourceType()); + Assert.assertEquals(ResourceUnitCapacityType.WEIGHT, vcore.getVectorResourceType()); Assert.assertEquals(WEIGHT_VALUE, vcore.getResourceValue(), EPSILON); } @@ -122,26 +117,26 @@ public class TestQueueCapacityConfigParser { conf.set(YarnConfiguration.RESOURCE_TYPES, RESOURCE_TYPES); ResourceUtils.resetResourceTypes(conf); - QueueCapacityVector absoluteCapacityVector = capacityConfigParser.parse(conf, QUEUE, 
NO_LABEL); + QueueCapacityVector absoluteCapacityVector = capacityConfigParser.parse(ABSOLUTE_RESOURCE, + QUEUE); - Assert.assertEquals(QueueCapacityType.ABSOLUTE, absoluteCapacityVector.getResource(MEMORY_URI) - .getVectorResourceType()); + Assert.assertEquals(ResourceUnitCapacityType.ABSOLUTE, + absoluteCapacityVector.getResource(MEMORY_URI).getVectorResourceType()); Assert.assertEquals(12 * GB, absoluteCapacityVector.getResource(MEMORY_URI) .getResourceValue(), EPSILON); - Assert.assertEquals(QueueCapacityType.ABSOLUTE, absoluteCapacityVector.getResource(VCORES_URI) - .getVectorResourceType()); + Assert.assertEquals(ResourceUnitCapacityType.ABSOLUTE, + absoluteCapacityVector.getResource(VCORES_URI).getVectorResourceType()); Assert.assertEquals(VCORE_ABSOLUTE, absoluteCapacityVector.getResource(VCORES_URI) .getResourceValue(), EPSILON); - Assert.assertEquals(QueueCapacityType.ABSOLUTE, absoluteCapacityVector.getResource(GPU_URI) - .getVectorResourceType()); + Assert.assertEquals(ResourceUnitCapacityType.ABSOLUTE, + absoluteCapacityVector.getResource(GPU_URI).getVectorResourceType()); Assert.assertEquals(GPU_ABSOLUTE, absoluteCapacityVector.getResource(GPU_URI) .getResourceValue(), EPSILON); - conf.set(CapacitySchedulerConfiguration.getQueuePrefix(QUEUE) + - CapacitySchedulerConfiguration.CAPACITY, ABSOLUTE_RESOURCE_MEMORY_VCORE); - QueueCapacityVector withoutGpuVector = capacityConfigParser.parse(conf, QUEUE, NO_LABEL); + QueueCapacityVector withoutGpuVector = capacityConfigParser + .parse(ABSOLUTE_RESOURCE_MEMORY_VCORE, QUEUE); Assert.assertEquals(3, withoutGpuVector.getResourceCount()); Assert.assertEquals(0f, withoutGpuVector.getResource(GPU_URI).getResourceValue(), EPSILON); @@ -150,36 +145,31 @@ public class TestQueueCapacityConfigParser { @Test public void testMixedCapacityConfig() { CapacitySchedulerConfiguration conf = new CapacitySchedulerConfiguration(); - conf.set(CapacitySchedulerConfiguration.getQueuePrefix(QUEUE) - + CapacitySchedulerConfiguration.CAPACITY, MIXED_RESOURCE); conf.set(YarnConfiguration.RESOURCE_TYPES, RESOURCE_TYPES); ResourceUtils.resetResourceTypes(conf); QueueCapacityVector mixedCapacityVector = - capacityConfigParser.parse(conf, QUEUE, NO_LABEL); + capacityConfigParser.parse(MIXED_RESOURCE, QUEUE); - Assert.assertEquals(QueueCapacityType.ABSOLUTE, + Assert.assertEquals(ResourceUnitCapacityType.ABSOLUTE, mixedCapacityVector.getResource(MEMORY_URI).getVectorResourceType()); Assert.assertEquals(MEMORY_MIXED, mixedCapacityVector.getResource(MEMORY_URI) .getResourceValue(), EPSILON); - Assert.assertEquals(QueueCapacityType.PERCENTAGE, + Assert.assertEquals(ResourceUnitCapacityType.PERCENTAGE, mixedCapacityVector.getResource(VCORES_URI).getVectorResourceType()); Assert.assertEquals(PERCENTAGE_VALUE, mixedCapacityVector.getResource(VCORES_URI).getResourceValue(), EPSILON); - Assert.assertEquals(QueueCapacityType.WEIGHT, + Assert.assertEquals(ResourceUnitCapacityType.WEIGHT, mixedCapacityVector.getResource(GPU_URI).getVectorResourceType()); Assert.assertEquals(WEIGHT_VALUE, mixedCapacityVector.getResource(GPU_URI).getResourceValue(), EPSILON); // Test undefined capacity type default value - conf.set(CapacitySchedulerConfiguration.getQueuePrefix(QUEUE) - + CapacitySchedulerConfiguration.CAPACITY, ABSOLUTE_RESOURCE_MEMORY_VCORE); - QueueCapacityVector mixedCapacityVectorWithGpuUndefined = - capacityConfigParser.parse(conf, QUEUE, NO_LABEL); - Assert.assertEquals(QueueCapacityType.ABSOLUTE, + capacityConfigParser.parse(ABSOLUTE_RESOURCE_MEMORY_VCORE, QUEUE); + 
Assert.assertEquals(ResourceUnitCapacityType.ABSOLUTE, mixedCapacityVectorWithGpuUndefined.getResource(MEMORY_URI).getVectorResourceType()); Assert.assertEquals(0, mixedCapacityVectorWithGpuUndefined.getResource(GPU_URI) .getResourceValue(), EPSILON); @@ -188,52 +178,38 @@ public class TestQueueCapacityConfigParser { @Test public void testInvalidCapacityConfigs() { - CapacitySchedulerConfiguration conf = new CapacitySchedulerConfiguration(); - - conf.set(CapacitySchedulerConfiguration.getQueuePrefix(QUEUE) - + CapacitySchedulerConfiguration.CAPACITY, NONEXISTINGSUFFIX); QueueCapacityVector capacityVectorWithInvalidSuffix = - capacityConfigParser.parse(conf, QUEUE, NO_LABEL); + capacityConfigParser.parse(NONEXISTINGSUFFIX, QUEUE); List entriesWithInvalidSuffix = Lists.newArrayList(capacityVectorWithInvalidSuffix.iterator()); Assert.assertEquals(0, entriesWithInvalidSuffix.size()); - conf.set(CapacitySchedulerConfiguration.getQueuePrefix(QUEUE) - + CapacitySchedulerConfiguration.CAPACITY, INVALID_CAPACITY_FORMAT); QueueCapacityVector invalidDelimiterCapacityVector = - capacityConfigParser.parse(conf, QUEUE, NO_LABEL); + capacityConfigParser.parse(INVALID_CAPACITY_FORMAT, QUEUE); List invalidDelimiterEntries = Lists.newArrayList(invalidDelimiterCapacityVector.iterator()); Assert.assertEquals(0, invalidDelimiterEntries.size()); - conf.set(CapacitySchedulerConfiguration.getQueuePrefix(QUEUE) - + CapacitySchedulerConfiguration.CAPACITY, INVALID_CAPACITY_BRACKET); QueueCapacityVector invalidCapacityVector = - capacityConfigParser.parse(conf, QUEUE, NO_LABEL); + capacityConfigParser.parse(INVALID_CAPACITY_BRACKET, QUEUE); List resources = Lists.newArrayList(invalidCapacityVector.iterator()); Assert.assertEquals(0, resources.size()); - conf.set(CapacitySchedulerConfiguration.getQueuePrefix(QUEUE) - + CapacitySchedulerConfiguration.CAPACITY, EMPTY_BRACKET); QueueCapacityVector emptyBracketCapacityVector = - capacityConfigParser.parse(conf, QUEUE, NO_LABEL); + capacityConfigParser.parse(EMPTY_BRACKET, QUEUE); List emptyEntries = Lists.newArrayList(emptyBracketCapacityVector.iterator()); Assert.assertEquals(0, emptyEntries.size()); - conf.set(CapacitySchedulerConfiguration.getQueuePrefix(QUEUE) - + CapacitySchedulerConfiguration.CAPACITY, ""); QueueCapacityVector emptyCapacity = - capacityConfigParser.parse(conf, QUEUE, NO_LABEL); + capacityConfigParser.parse("", QUEUE); List emptyResources = Lists.newArrayList(emptyCapacity.iterator()); Assert.assertEquals(emptyResources.size(), 0); - conf.unset(CapacitySchedulerConfiguration.getQueuePrefix(QUEUE) - + CapacitySchedulerConfiguration.CAPACITY); QueueCapacityVector nonSetCapacity = - capacityConfigParser.parse(conf, QUEUE, NO_LABEL); + capacityConfigParser.parse(null, QUEUE); List nonSetResources = Lists.newArrayList(nonSetCapacity.iterator()); Assert.assertEquals(nonSetResources.size(), 0); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java index 5508f0f2bde..a3d8f10ab2f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java @@ -51,7 +51,6 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.AppAddedSchedulerEvent; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.AppAttemptAddedSchedulerEvent; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeAddedSchedulerEvent; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; @@ -124,7 +123,7 @@ public class FairSchedulerTestBase { int memory, int vcores, String host, int priority, int numContainers, boolean relaxLocality) { ResourceRequest request = recordFactory.newRecordInstance(ResourceRequest.class); - request.setCapability(BuilderUtils.newResource(memory, vcores)); + request.setCapability(Resources.createResource(memory, vcores)); request.setResourceName(host); request.setNumContainers(numContainers); Priority prio = recordFactory.newRecordInstance(Priority.class); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java index 2b768bebe92..2f700f3d82a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java @@ -3295,7 +3295,7 @@ public class TestFairScheduler extends FairSchedulerTestBase { scheduler.start(); scheduler.reinitialize(conf, resourceManager.getRMContext()); - RMNode node = MockNodes.newNodeInfo(1, BuilderUtils.newResource(8192, 5)); + RMNode node = MockNodes.newNodeInfo(1, Resources.createResource(8192, 5)); NodeAddedSchedulerEvent nodeEvent = new NodeAddedSchedulerEvent(node); scheduler.handle(nodeEvent); @@ -3337,7 +3337,7 @@ public class TestFairScheduler extends FairSchedulerTestBase { scheduler.start(); scheduler.reinitialize(conf, resourceManager.getRMContext()); - RMNode node = MockNodes.newNodeInfo(1, BuilderUtils.newResource(8192, 7), + RMNode node = MockNodes.newNodeInfo(1, Resources.createResource(8192, 7), 1, "127.0.0.1"); NodeAddedSchedulerEvent nodeEvent = new NodeAddedSchedulerEvent(node); scheduler.handle(nodeEvent); @@ -3375,7 +3375,7 @@ public class TestFairScheduler extends FairSchedulerTestBase { scheduler.start(); scheduler.reinitialize(conf, resourceManager.getRMContext()); - RMNode node = MockNodes.newNodeInfo(1, BuilderUtils.newResource(12288, 12), + RMNode node = MockNodes.newNodeInfo(1, Resources.createResource(12288, 12), 1, "127.0.0.1"); NodeAddedSchedulerEvent nodeEvent = new NodeAddedSchedulerEvent(node); scheduler.handle(nodeEvent); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java index 2ec1af363b1..38fbcd84153 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java @@ -24,7 +24,6 @@ import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.api.records.ResourceInformation; import org.apache.hadoop.yarn.api.records.impl.LightWeightResource; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.UnitsConversionUtil; import org.apache.hadoop.yarn.util.resource.CustomResourceTypesConfigurationProvider; import org.apache.hadoop.yarn.util.resource.DominantResourceCalculator; @@ -125,8 +124,8 @@ public class TestFairSchedulerConfiguration { @Test public void testParseResourceConfigValue() throws Exception { - Resource expected = BuilderUtils.newResource(5 * 1024, 2); - Resource clusterResource = BuilderUtils.newResource(10 * 1024, 4); + Resource expected = Resources.createResource(5 * 1024, 2); + Resource clusterResource = Resources.createResource(10 * 1024, 4); assertEquals(expected, parseResourceConfigValue("5120 mb 2 vcores").getResource()); @@ -155,13 +154,13 @@ public class TestFairSchedulerConfiguration { assertEquals(expected, parseResourceConfigValue("50% Memory, 50% CpU"). getResource(clusterResource)); - assertEquals(BuilderUtils.newResource(5 * 1024, 4), + assertEquals(Resources.createResource(5 * 1024, 4), parseResourceConfigValue("50% memory, 100% cpu"). getResource(clusterResource)); - assertEquals(BuilderUtils.newResource(5 * 1024, 4), + assertEquals(Resources.createResource(5 * 1024, 4), parseResourceConfigValue(" 100% cpu, 50% memory"). getResource(clusterResource)); - assertEquals(BuilderUtils.newResource(5 * 1024, 0), + assertEquals(Resources.createResource(5 * 1024, 0), parseResourceConfigValue("50% memory, 0% cpu"). getResource(clusterResource)); assertEquals(expected, @@ -176,7 +175,7 @@ public class TestFairSchedulerConfiguration { assertEquals(expected, parseResourceConfigValue("50.% memory, 50.% cpu"). getResource(clusterResource)); - assertEquals(BuilderUtils.newResource((int)(1024 * 10 * 0.109), 2), + assertEquals(Resources.createResource((int)(1024 * 10 * 0.109), 2), parseResourceConfigValue("10.9% memory, 50.6% cpu"). 
getResource(clusterResource)); assertEquals(expected, @@ -187,8 +186,8 @@ public class TestFairSchedulerConfiguration { conf.set(YarnConfiguration.RESOURCE_TYPES, "test1"); ResourceUtils.resetResourceTypes(conf); - clusterResource = BuilderUtils.newResource(10 * 1024, 4); - expected = BuilderUtils.newResource(5 * 1024, 2); + clusterResource = Resources.createResource(10 * 1024, 4); + expected = Resources.createResource(5 * 1024, 2); expected.setResourceValue("test1", Long.MAX_VALUE); assertEquals(expected, @@ -233,7 +232,7 @@ public class TestFairSchedulerConfiguration { parseResourceConfigValue(" vcores = 2 , memory-mb = 5120 , " + "test1 = 4 ").getResource()); - expected = BuilderUtils.newResource(4 * 1024, 3); + expected = Resources.createResource(4 * 1024, 3); expected.setResourceValue("test1", 8L); assertEquals(expected, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigConverter.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigConverter.java index c1c774bed09..86bf113d64e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigConverter.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigConverter.java @@ -17,6 +17,7 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter; import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.PREFIX; +import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.USER_LIMIT_FACTOR; import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigRuleHandler.DYNAMIC_MAX_ASSIGN; import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigRuleHandler.MAX_CAPACITY_PERCENTAGE; import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigRuleHandler.MAX_CHILD_CAPACITY; @@ -182,7 +183,26 @@ public class TestFSConfigToCSConfigConverter { conf.get(PREFIX + "root.admins.alice.maximum-am-resource-percent")); assertNull("root.users.joe maximum-am-resource-percent should be null", - conf.get(PREFIX + "root.users.joe maximum-am-resource-percent")); + conf.get(PREFIX + "root.users.joe.maximum-am-resource-percent")); + } + + @Test + public void testDefaultUserLimitFactor() throws Exception { + converter.convert(config); + + Configuration conf = converter.getCapacitySchedulerConfig(); + + assertNull("root.users user-limit-factor should be null", + conf.get(PREFIX + "root.users." 
+ USER_LIMIT_FACTOR)); + + assertEquals("root.default user-limit-factor", "-1.0", + conf.get(PREFIX + "root.default.user-limit-factor")); + + assertEquals("root.users.joe user-limit-factor", "-1.0", + conf.get(PREFIX + "root.users.joe.user-limit-factor")); + + assertEquals("root.admins.bob user-limit-factor", "-1.0", + conf.get(PREFIX + "root.admins.bob.user-limit-factor")); } @Test diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestDominantResourceFairnessPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestDominantResourceFairnessPolicy.java index 197651a64c1..1cb25242b15 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestDominantResourceFairnessPolicy.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestDominantResourceFairnessPolicy.java @@ -39,7 +39,6 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FakeSchedula import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.Schedulable; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.DominantResourceFairnessPolicy.DominantResourceFairnessComparatorN; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.DominantResourceFairnessPolicy.DominantResourceFairnessComparator2; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.resource.ResourceUtils; import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; @@ -84,8 +83,8 @@ public class TestDominantResourceFairnessPolicy { private Schedulable createSchedulable(int memUsage, int cpuUsage, float weights, int minMemShare, int minCpuShare) { - Resource usage = BuilderUtils.newResource(memUsage, cpuUsage); - Resource minShare = BuilderUtils.newResource(minMemShare, minCpuShare); + Resource usage = Resources.createResource(memUsage, cpuUsage); + Resource minShare = Resources.createResource(minMemShare, minCpuShare); return new FakeSchedulable(minShare, Resources.createResource(Integer.MAX_VALUE, Integer.MAX_VALUE), weights, Resources.none(), usage, 0l); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java index 7183f7f782f..b58dd0b6a65 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java @@ -955,10 +955,10 @@ public class TestFifoScheduler { // Ask for a 1 GB container for app 1 List ask1 = new ArrayList(); ask1.add(BuilderUtils.newResourceRequest(BuilderUtils.newPriority(0), 
- "rack1", BuilderUtils.newResource(GB, 1), 1, + "rack1", Resources.createResource(GB), 1, RMNodeLabelsManager.NO_LABEL)); ask1.add(BuilderUtils.newResourceRequest(BuilderUtils.newPriority(0), - ResourceRequest.ANY, BuilderUtils.newResource(GB, 1), 1, + ResourceRequest.ANY, Resources.createResource(GB), 1, RMNodeLabelsManager.NO_LABEL)); fs.allocate(appAttemptId1, ask1, null, emptyId, Collections.singletonList(host_1_0), null, NULL_UPDATE_REQUESTS); @@ -991,7 +991,7 @@ public class TestFifoScheduler { // this time, rack0 is also in blacklist, so only host_1_1 is available to // be assigned ask2.add(BuilderUtils.newResourceRequest(BuilderUtils.newPriority(0), - ResourceRequest.ANY, BuilderUtils.newResource(GB, 1), 1)); + ResourceRequest.ANY, Resources.createResource(GB), 1)); fs.allocate(appAttemptId1, ask2, null, emptyId, Collections.singletonList("rack0"), null, NULL_UPDATE_REQUESTS); @@ -1077,14 +1077,14 @@ public class TestFifoScheduler { // Ask for a 1 GB container for app 1 List ask1 = new ArrayList(); ask1.add(BuilderUtils.newResourceRequest(BuilderUtils.newPriority(0), - ResourceRequest.ANY, BuilderUtils.newResource(GB, 1), 1)); + ResourceRequest.ANY, Resources.createResource(GB), 1)); fs.allocate(appAttemptId1, ask1, null, emptyId, null, null, NULL_UPDATE_REQUESTS); // Ask for a 2 GB container for app 2 List ask2 = new ArrayList(); ask2.add(BuilderUtils.newResourceRequest(BuilderUtils.newPriority(0), - ResourceRequest.ANY, BuilderUtils.newResource(2 * GB, 1), 1)); + ResourceRequest.ANY, Resources.createResource(2 * GB), 1)); fs.allocate(appAttemptId2, ask2, null, emptyId, null, null, NULL_UPDATE_REQUESTS); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java index ce9de643744..b416947c550 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java @@ -57,6 +57,7 @@ import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.authorize.AuthorizationException; import org.apache.hadoop.service.Service.STATE; import org.apache.hadoop.util.VersionInfo; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest; import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsResponse; import org.apache.hadoop.yarn.api.records.ApplicationId; @@ -310,7 +311,7 @@ public class TestRMWebServices extends JerseyTestBase { } public void verifyClusterInfoXML(String xml) throws JSONException, Exception { - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -436,7 +437,7 @@ public class TestRMWebServices extends JerseyTestBase { public void verifyClusterMetricsXML(String xml) throws JSONException, Exception { - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + 
DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -607,7 +608,7 @@ public class TestRMWebServices extends JerseyTestBase { public void verifySchedulerFifoXML(String xml) throws JSONException, Exception { - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -1099,4 +1100,61 @@ public class TestRMWebServices extends JerseyTestBase { return webService; } + @Test + public void testClusterSchedulerOverviewFifo() throws JSONException, Exception { + WebResource r = resource(); + ClientResponse response = r.path("ws").path("v1").path("cluster") + .path("scheduler-overview").accept(MediaType.APPLICATION_JSON) + .get(ClientResponse.class); + + assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, + response.getType().toString()); + JSONObject json = response.getEntity(JSONObject.class); + verifyClusterSchedulerOverView(json, "Fifo Scheduler"); + } + + public static void verifyClusterSchedulerOverView( + JSONObject json, String expectedSchedulerType) throws Exception { + + // The JSON contains 8 elements because the scheduler overview defines 8 fields + assertEquals("incorrect number of elements in: " + json, 8, json.length()); + + // 1. Verify that the schedulerType is as expected + String schedulerType = json.getString("schedulerType"); + assertEquals(expectedSchedulerType, schedulerType); + + // 2. Verify that schedulingResourceType is as expected + String schedulingResourceType = json.getString("schedulingResourceType"); + assertEquals("memory-mb (unit=Mi),vcores", schedulingResourceType); + + // 3. Verify that minimumAllocation is as expected + JSONObject minimumAllocation = json.getJSONObject("minimumAllocation"); + String minMemory = minimumAllocation.getString("memory"); + String minVCores = minimumAllocation.getString("vCores"); + assertEquals("1024", minMemory); + assertEquals("1", minVCores); + + // 4. Verify that maximumAllocation is as expected + JSONObject maximumAllocation = json.getJSONObject("maximumAllocation"); + String maxMemory = maximumAllocation.getString("memory"); + String maxVCores = maximumAllocation.getString("vCores"); + assertEquals("8192", maxMemory); + assertEquals("4", maxVCores); + + // 5. Verify that schedulerBusy is as expected + int schedulerBusy = json.getInt("schedulerBusy"); + assertEquals(-1, schedulerBusy); + + // 6. Verify that rmDispatcherEventQueueSize is as expected + int rmDispatcherEventQueueSize = json.getInt("rmDispatcherEventQueueSize"); + assertEquals(0, rmDispatcherEventQueueSize); + + // 7. Verify that schedulerDispatcherEventQueueSize is as expected + int schedulerDispatcherEventQueueSize = json.getInt("schedulerDispatcherEventQueueSize"); + assertEquals(0, schedulerDispatcherEventQueueSize); + + // 8. Verify that applicationPriority is as expected + int applicationPriority = json.getInt("applicationPriority"); + assertEquals(0, applicationPriority); + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppAttempts.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppAttempts.java index 102f13897fc..4ad37ef8d9d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppAttempts.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppAttempts.java @@ -25,6 +25,7 @@ import com.sun.jersey.guice.spi.container.servlet.GuiceContainer; import com.sun.jersey.test.framework.WebAppDescriptor; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.http.JettyUtils; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.ContainerState; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.MockAM; @@ -395,7 +396,7 @@ public class TestRMWebServicesAppAttempts extends JerseyTestBase { response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java index b304b8a7857..964aab6913c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java @@ -31,6 +31,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.http.JettyUtils; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.util.Sets; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.ContainerState; import org.apache.hadoop.yarn.api.records.FinalApplicationStatus; import org.apache.hadoop.yarn.api.records.ResourceRequest; @@ -189,7 +190,7 @@ public class TestRMWebServicesApps extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -223,7 +224,7 @@ public class TestRMWebServicesApps extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = 
XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -264,7 +265,7 @@ public class TestRMWebServicesApps extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -1724,7 +1725,7 @@ public class TestRMWebServicesApps extends JerseyTestBase { response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppsModification.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppsModification.java index f5f21aac249..0c8f742fae2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppsModification.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppsModification.java @@ -56,6 +56,7 @@ import org.apache.hadoop.io.Text; import org.apache.hadoop.security.Credentials; import org.apache.hadoop.security.authentication.server.AuthenticationFilter; import org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.ApplicationAccessType; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext; @@ -532,7 +533,7 @@ public class TestRMWebServicesAppsModification extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -733,7 +734,7 @@ public class TestRMWebServicesAppsModification extends JerseyTestBase { protected String validateGetNewApplicationXMLResponse(String response) throws ParserConfigurationException, IOException, SAXException { - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(response)); @@ -1299,7 +1300,7 @@ public class TestRMWebServicesAppsModification extends JerseyTestBase { 
assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -1329,7 +1330,7 @@ public class TestRMWebServicesAppsModification extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -1466,7 +1467,7 @@ public class TestRMWebServicesAppsModification extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java index b9ce10aaedd..ec65237fa6e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java @@ -18,9 +18,12 @@ package org.apache.hadoop.yarn.server.resourcemanager.webapp; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; import com.google.inject.Guice; import com.google.inject.servlet.ServletModule; import com.sun.jersey.api.client.ClientResponse; +import com.sun.jersey.api.client.WebResource; import com.sun.jersey.guice.spi.container.servlet.GuiceContainer; import com.sun.jersey.test.framework.WebAppDescriptor; @@ -47,6 +50,7 @@ import javax.xml.transform.stream.StreamResult; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.http.JettyUtils; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.MockRM; @@ -312,7 +316,7 @@ public class TestRMWebServicesCapacitySched extends JerseyTestBase { DOMSource domSource = new DOMSource(document); StringWriter writer = new StringWriter(); StreamResult result = new StreamResult(writer); - TransformerFactory tf = TransformerFactory.newInstance(); + TransformerFactory tf = XMLUtils.newSecureTransformerFactory(); Transformer transformer = tf.newTransformer(); 
transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2"); @@ -321,7 +325,7 @@ public class TestRMWebServicesCapacitySched extends JerseyTestBase { } public static Document loadDocument(String xml) throws Exception { - DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory factory = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder builder = factory.newDocumentBuilder(); InputSource is = new InputSource(new StringReader(xml)); return builder.parse(is); @@ -334,7 +338,16 @@ public class TestRMWebServicesCapacitySched extends JerseyTestBase { JSONObject json = response.getEntity(JSONObject.class); String actual = json.toString(2); updateTestDataAutomatically(expectedResourceFilename, actual); - assertEquals(getResourceAsString(expectedResourceFilename), actual); + assertEquals( + prettyPrintJson(getResourceAsString(expectedResourceFilename)), + prettyPrintJson(actual)); + } + + private static String prettyPrintJson(String in) throws JsonProcessingException { + ObjectMapper objectMapper = new ObjectMapper(); + return objectMapper + .writerWithDefaultPrettyPrinter() + .writeValueAsString(objectMapper.readTree(in)); } public static void assertJsonType(ClientResponse response) { @@ -402,4 +415,16 @@ public class TestRMWebServicesCapacitySched extends JerseyTestBase { YarnConfiguration.SCHEDULER_RM_PLACEMENT_CONSTRAINTS_HANDLER); return new MockRM(conf); } + + @Test + public void testClusterSchedulerOverviewCapacity() throws Exception { + WebResource r = resource(); + ClientResponse response = r.path("ws").path("v1").path("cluster") + .path("scheduler-overview").accept(MediaType.APPLICATION_JSON) + .get(ClientResponse.class); + assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, + response.getType().toString()); + JSONObject json = response.getEntity(JSONObject.class); + TestRMWebServices.verifyClusterSchedulerOverView(json, "Capacity Scheduler"); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokens.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokens.java index 095b076df73..0e497da49e0 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokens.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokens.java @@ -48,6 +48,7 @@ import org.apache.hadoop.security.authentication.server.PseudoAuthenticationHand import org.apache.hadoop.security.token.SecretManager.InvalidToken; import org.apache.hadoop.security.token.Token; import org.apache.hadoop.util.Time; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.resourcemanager.MockRM; @@ -697,7 +698,7 @@ public class TestRMWebServicesDelegationTokens extends JerseyTestBase { public static DelegationToken getDelegationTokenFromXML(String tokenXML) throws IOException, ParserConfigurationException, 
SAXException { - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(tokenXML)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java index 0697ad03219..6af91807041 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java @@ -42,6 +42,7 @@ import javax.xml.parsers.DocumentBuilderFactory; import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableMap; import org.apache.hadoop.http.JettyUtils; import org.apache.hadoop.util.Sets; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.api.records.NodeLabel; import org.apache.hadoop.yarn.api.records.Priority; @@ -258,7 +259,7 @@ public class TestRMWebServicesForCSWithPartitions extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java index c67c49a3610..20bdb64ee32 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java @@ -49,6 +49,7 @@ import org.apache.hadoop.http.JettyUtils; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.authentication.server.AuthenticationFilter; import org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.api.records.ContainerStatus; import org.apache.hadoop.yarn.api.records.NodeAttribute; import org.apache.hadoop.yarn.api.records.NodeAttributeType; @@ -58,7 +59,6 @@ import org.apache.hadoop.yarn.api.records.Resource; import org.apache.hadoop.yarn.api.records.ResourceOption; import org.apache.hadoop.yarn.api.records.ResourceUtilization; 
import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.server.api.records.NodeHealthStatus; import org.apache.hadoop.yarn.server.api.records.NodeStatus; import org.apache.hadoop.yarn.server.api.records.OpportunisticContainersStatus; @@ -82,6 +82,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceOptionInfo; import org.apache.hadoop.yarn.util.Records; import org.apache.hadoop.yarn.util.RackResolver; +import org.apache.hadoop.yarn.util.resource.Resources; import org.apache.hadoop.yarn.util.YarnVersionInfo; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; @@ -578,10 +579,9 @@ public class TestRMWebServicesNodes extends JerseyTestBase { response.getType().toString()); String msg = response.getEntity(String.class); System.out.println(msg); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); - InputSource is = new InputSource(); - is.setCharacterStream(new StringReader(msg)); + InputSource is = new InputSource(new StringReader(msg)); Document dom = db.parse(is); NodeList nodes = dom.getElementsByTagName("RemoteException"); Element element = (Element) nodes.item(0); @@ -646,10 +646,9 @@ public class TestRMWebServicesNodes extends JerseyTestBase { assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); - InputSource is = new InputSource(); - is.setCharacterStream(new StringReader(xml)); + InputSource is = new InputSource(new StringReader(xml)); Document dom = db.parse(is); NodeList nodesApps = dom.getElementsByTagName("nodes"); assertEquals("incorrect number of elements", 1, nodesApps.getLength()); @@ -672,7 +671,7 @@ public class TestRMWebServicesNodes extends JerseyTestBase { response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -694,7 +693,7 @@ public class TestRMWebServicesNodes extends JerseyTestBase { response.getType().toString()); String xml = response.getEntity(String.class); - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -1007,7 +1006,7 @@ public class TestRMWebServicesNodes extends JerseyTestBase { RegisterNodeManagerRequest registerReq = Records.newRecord(RegisterNodeManagerRequest.class); NodeId nodeId = NodeId.newInstance("host1", 1234); - Resource capability = BuilderUtils.newResource(1024, 1); + Resource capability = Resources.createResource(1024); registerReq.setResource(capability); registerReq.setNodeId(nodeId); registerReq.setHttpPort(1234); diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/fairscheduler/TestRMWebServicesFairScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/fairscheduler/TestRMWebServicesFairScheduler.java index bf605e9f5f6..cbc6c417859 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/fairscheduler/TestRMWebServicesFairScheduler.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/fairscheduler/TestRMWebServicesFairScheduler.java @@ -34,6 +34,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager import org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver; import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices; import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.GuiceServletConfig; import org.apache.hadoop.yarn.webapp.JerseyTestBase; @@ -157,4 +158,15 @@ public class TestRMWebServicesFairScheduler extends JerseyTestBase { assertEquals("root", rootQueue.getString("queueName")); } + @Test + public void testClusterSchedulerOverviewFair() throws Exception { + WebResource r = resource(); + ClientResponse response = r.path("ws").path("v1").path("cluster") + .path("scheduler-overview").accept(MediaType.APPLICATION_JSON) + .get(ClientResponse.class); + assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, + response.getType().toString()); + JSONObject json = response.getEntity(JSONObject.class); + TestRMWebServices.verifyClusterSchedulerOverView(json, "Fair Scheduler"); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/helper/XmlCustomResourceTypeTestCase.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/helper/XmlCustomResourceTypeTestCase.java index 0ad92d20bb5..8048a69fbd9 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/helper/XmlCustomResourceTypeTestCase.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/helper/XmlCustomResourceTypeTestCase.java @@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.resourcemanager.webapp.helper; import com.sun.jersey.api.client.WebResource; import org.apache.hadoop.http.JettyUtils; +import org.apache.hadoop.util.XMLUtils; import org.codehaus.jettison.json.JSONObject; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -29,7 +30,6 @@ import org.xml.sax.InputSource; import javax.ws.rs.core.MediaType; import javax.xml.parsers.DocumentBuilder; -import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.transform.*; import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamResult; @@ -84,7 +84,7 @@ public class XmlCustomResourceTypeTestCase 
{ try { String xml = response.getEntity(String.class); DocumentBuilder db = - DocumentBuilderFactory.newInstance().newDocumentBuilder(); + XMLUtils.newSecureDocumentBuilderFactory().newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); @@ -105,7 +105,7 @@ public class XmlCustomResourceTypeTestCase { public static String toXml(Node node) { StringWriter writer; try { - TransformerFactory tf = TransformerFactory.newInstance(); + TransformerFactory tf = XMLUtils.newSecureTransformerFactory(); Transformer transformer = tf.newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.setOutputProperty( diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/webapp/TestRMWithCSRFFilter.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/webapp/TestRMWithCSRFFilter.java index 2925e841120..577e8acbc46 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/webapp/TestRMWithCSRFFilter.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/webapp/TestRMWithCSRFFilter.java @@ -30,6 +30,7 @@ import org.apache.hadoop.http.JettyUtils; import org.apache.hadoop.security.http.RestCsrfPreventionFilter; import org.apache.hadoop.service.Service.STATE; import org.apache.hadoop.util.VersionInfo; +import org.apache.hadoop.util.XMLUtils; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.MockRM; import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager; @@ -153,7 +154,7 @@ public class TestRMWithCSRFFilter extends JerseyTestBase { } public void verifyClusterInfoXML(String xml) throws Exception { - DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); + DocumentBuilderFactory dbf = XMLUtils.newSecureDocumentBuilderFactory(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/pom.xml index aa808801df4..0ba39653a5c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/pom.xml +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/pom.xml @@ -74,6 +74,27 @@ test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.jupiter + junit-jupiter-params + test + + + org.junit.platform + junit-platform-launcher + test + + junit junit @@ -129,6 +150,12 @@ test-jar + + org.glassfish.grizzly + grizzly-http-servlet + test + + diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/Router.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/Router.java index e95b25678bf..24e9ad23c93 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/Router.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/Router.java @@ -27,6 +27,7 @@ import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; import org.apache.hadoop.metrics2.source.JvmMetrics; +import org.apache.hadoop.security.HttpCrossOriginFilterInitializer; import org.apache.hadoop.security.SecurityUtil; import org.apache.hadoop.service.CompositeService; import org.apache.hadoop.util.JvmPauseMonitor; @@ -78,6 +79,7 @@ public class Router extends CompositeService { private WebApp webApp; @VisibleForTesting protected String webAppAddress; + private static long clusterTimeStamp = System.currentTimeMillis(); /** * Priority of the Router shutdown hook. @@ -168,6 +170,16 @@ public class Router extends CompositeService { @VisibleForTesting public void startWepApp() { + // Initialize RouterWeb's CrossOrigin capability. + boolean enableCors = conf.getBoolean(YarnConfiguration.ROUTER_WEBAPP_ENABLE_CORS_FILTER, + YarnConfiguration.DEFAULT_ROUTER_WEBAPP_ENABLE_CORS_FILTER); + if (enableCors) { + conf.setBoolean(HttpCrossOriginFilterInitializer.PREFIX + + HttpCrossOriginFilterInitializer.ENABLED_SUFFIX, true); + } + + LOG.info("Instantiating RouterWebApp at {}.", webAppAddress); + RMWebAppUtil.setupSecurityAndFilters(conf, null); Builder builder = @@ -226,4 +238,8 @@ public class Router extends CompositeService { } return name; } + + public static long getClusterTimeStamp() { + return clusterTimeStamp; + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterAuditLogger.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterAuditLogger.java index a89d0e4462a..f3b428dab4a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterAuditLogger.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterAuditLogger.java @@ -196,6 +196,24 @@ public class RouterAuditLogger { } } + /** + * Create a readable and parsable audit log string for a failed event. + * + * @param user User who made the service request. + * @param operation Operation requested by the user. + * @param perm Target permissions. + * @param target The target on which the operation is being performed. + * @param description Some additional information as to why the operation failed. + * @param subClusterId SubCluster Id in which operation was performed. + */ + public static void logFailure(String user, String operation, String perm, + String target, String description, SubClusterId subClusterId) { + if (LOG.isInfoEnabled()) { + LOG.info(createFailureLog(user, operation, perm, target, description, null, + subClusterId)); + } + } + /** * A helper api for creating an audit log for a failure event. 
*/ diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterMetrics.java index 712072cc279..033aa076658 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterMetrics.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterMetrics.java @@ -107,6 +107,44 @@ public final class RouterMetrics { private MutableGaugeInt numDeleteReservationFailedRetrieved; @Metric("# of listReservation failed to be retrieved") private MutableGaugeInt numListReservationFailedRetrieved; + @Metric("# of getAppActivities failed to be retrieved") + private MutableGaugeInt numGetAppActivitiesFailedRetrieved; + @Metric("# of getAppStatistics failed to be retrieved") + private MutableGaugeInt numGetAppStatisticsFailedRetrieved; + @Metric("# of getAppPriority failed to be retrieved") + private MutableGaugeInt numGetAppPriorityFailedRetrieved; + @Metric("# of getAppQueue failed to be retrieved") + private MutableGaugeInt numGetAppQueueFailedRetrieved; + @Metric("# of updateAppQueue failed to be retrieved") + private MutableGaugeInt numUpdateAppQueueFailedRetrieved; + @Metric("# of getAppTimeout failed to be retrieved") + private MutableGaugeInt numGetAppTimeoutFailedRetrieved; + @Metric("# of getAppTimeouts failed to be retrieved") + private MutableGaugeInt numGetAppTimeoutsFailedRetrieved; + @Metric("# of refreshQueues failed to be retrieved") + private MutableGaugeInt numRefreshQueuesFailedRetrieved; + @Metric("# of getRMNodeLabels failed to be retrieved") + private MutableGaugeInt numGetRMNodeLabelsFailedRetrieved; + @Metric("# of checkUserAccessToQueue failed to be retrieved") + private MutableGaugeInt numCheckUserAccessToQueueFailedRetrieved; + @Metric("# of refreshNodes failed to be retrieved") + private MutableGaugeInt numRefreshNodesFailedRetrieved; + @Metric("# of getDelegationToken failed to be retrieved") + private MutableGaugeInt numGetDelegationTokenFailedRetrieved; + @Metric("# of renewDelegationToken failed to be retrieved") + private MutableGaugeInt numRenewDelegationTokenFailedRetrieved; + @Metric("# of cancelDelegationToken failed to be retrieved") + private MutableGaugeInt numCancelDelegationTokenFailedRetrieved; + @Metric("# of getActivities failed to be retrieved") + private MutableGaugeInt numGetActivitiesFailedRetrieved; + @Metric("# of getBulkActivities failed to be retrieved") + private MutableGaugeInt numGetBulkActivitiesFailedRetrieved; + @Metric("# of getSchedulerInfo failed to be retrieved") + private MutableGaugeInt numGetSchedulerInfoFailedRetrieved; + @Metric("# of refreshSuperUserGroupsConfiguration failed to be retrieved") + private MutableGaugeInt numRefreshSuperUserGroupsConfigurationFailedRetrieved; + @Metric("# of refreshUserToGroupsMappings failed to be retrieved") + private MutableGaugeInt numRefreshUserToGroupsMappingsFailedRetrieved; // Aggregate metrics are shared, and don't have to be looked up per call @Metric("Total number of successful Submitted apps and latency(ms)") @@ -175,6 +213,45 @@ public final class RouterMetrics { private MutableRate totalSucceededDeleteReservationRetrieved; @Metric("Total number of successful Retrieved ListReservation and latency(ms)") 
private MutableRate totalSucceededListReservationRetrieved; + @Metric("Total number of successful Retrieved GetAppActivities and latency(ms)") + private MutableRate totalSucceededGetAppActivitiesRetrieved; + @Metric("Total number of successful Retrieved GetAppStatistics and latency(ms)") + private MutableRate totalSucceededGetAppStatisticsRetrieved; + @Metric("Total number of successful Retrieved GetAppPriority and latency(ms)") + private MutableRate totalSucceededGetAppPriorityRetrieved; + @Metric("Total number of successful Retrieved GetAppQueue and latency(ms)") + private MutableRate totalSucceededGetAppQueueRetrieved; + @Metric("Total number of successful Retrieved UpdateAppQueue and latency(ms)") + private MutableRate totalSucceededUpdateAppQueueRetrieved; + @Metric("Total number of successful Retrieved GetAppTimeout and latency(ms)") + private MutableRate totalSucceededGetAppTimeoutRetrieved; + @Metric("Total number of successful Retrieved GetAppTimeouts and latency(ms)") + private MutableRate totalSucceededGetAppTimeoutsRetrieved; + @Metric("Total number of successful Retrieved RefreshQueues and latency(ms)") + private MutableRate totalSucceededRefreshQueuesRetrieved; + @Metric("Total number of successful Retrieved GetRMNodeLabels and latency(ms)") + private MutableRate totalSucceededGetRMNodeLabelsRetrieved; + @Metric("Total number of successful Retrieved CheckUserAccessToQueue and latency(ms)") + private MutableRate totalSucceededCheckUserAccessToQueueRetrieved; + @Metric("Total number of successful Retrieved RefreshNodes and latency(ms)") + private MutableRate totalSucceededRefreshNodesRetrieved; + @Metric("Total number of successful Retrieved GetDelegationToken and latency(ms)") + private MutableRate totalSucceededGetDelegationTokenRetrieved; + @Metric("Total number of successful Retrieved RenewDelegationToken and latency(ms)") + private MutableRate totalSucceededRenewDelegationTokenRetrieved; + @Metric("Total number of successful Retrieved CancelDelegationToken and latency(ms)") + private MutableRate totalSucceededCancelDelegationTokenRetrieved; + @Metric("Total number of successful Retrieved GetActivities and latency(ms)") + private MutableRate totalSucceededGetActivitiesRetrieved; + @Metric("Total number of successful Retrieved GetBulkActivities and latency(ms)") + private MutableRate totalSucceededGetBulkActivitiesRetrieved; + @Metric("Total number of successful Retrieved RefreshSuperUserGroupsConfig and latency(ms)") + private MutableRate totalSucceededRefreshSuperUserGroupsConfigurationRetrieved; + @Metric("Total number of successful Retrieved RefreshUserToGroupsMappings and latency(ms)") + private MutableRate totalSucceededRefreshUserToGroupsMappingsRetrieved; + + @Metric("Total number of successful Retrieved GetSchedulerInfo and latency(ms)") + private MutableRate totalSucceededGetSchedulerInfoRetrieved; /** * Provide quantile counters for all latencies. 
@@ -212,10 +289,30 @@ public final class RouterMetrics { private MutableQuantiles updateReservationLatency; private MutableQuantiles deleteReservationLatency; private MutableQuantiles listReservationLatency; + private MutableQuantiles getAppActivitiesLatency; + private MutableQuantiles getAppStatisticsLatency; + private MutableQuantiles getAppPriorityLatency; + private MutableQuantiles getAppQueueLatency; + private MutableQuantiles getUpdateQueueLatency; + private MutableQuantiles getAppTimeoutLatency; + private MutableQuantiles getAppTimeoutsLatency; + private MutableQuantiles refreshQueuesLatency; + private MutableQuantiles getRMNodeLabelsLatency; + private MutableQuantiles checkUserAccessToQueueLatency; + private MutableQuantiles refreshNodesLatency; + private MutableQuantiles getDelegationTokenLatency; + private MutableQuantiles renewDelegationTokenLatency; + private MutableQuantiles cancelDelegationTokenLatency; + private MutableQuantiles getActivitiesLatency; + private MutableQuantiles getBulkActivitiesLatency; + private MutableQuantiles getSchedulerInfoRetrievedLatency; + private MutableQuantiles refreshSuperUserGroupsConfLatency; + private MutableQuantiles refreshUserToGroupsMappingsLatency; private static volatile RouterMetrics instance = null; private static MetricsRegistry registry; + @SuppressWarnings("checkstyle:MethodLength") private RouterMetrics() { registry = new MetricsRegistry(RECORD_INFO); registry.tag(RECORD_INFO, "Router"); @@ -342,6 +439,63 @@ public final class RouterMetrics { listReservationLatency = registry.newQuantiles("listReservationLatency", "latency of list reservation timeouts", "ops", "latency", 10); + + getAppActivitiesLatency = registry.newQuantiles("getAppActivitiesLatency", + "latency of get app activities timeouts", "ops", "latency", 10); + + getAppStatisticsLatency = registry.newQuantiles("getAppStatisticsLatency", + "latency of get app statistics timeouts", "ops", "latency", 10); + + getAppPriorityLatency = registry.newQuantiles("getAppPriorityLatency", + "latency of get app priority timeouts", "ops", "latency", 10); + + getAppQueueLatency = registry.newQuantiles("getAppQueueLatency", + "latency of get app queue timeouts", "ops", "latency", 10); + + getUpdateQueueLatency = registry.newQuantiles("getUpdateQueueLatency", + "latency of update app queue timeouts", "ops", "latency", 10); + + getAppTimeoutLatency = registry.newQuantiles("getAppTimeoutLatency", + "latency of get apptimeout timeouts", "ops", "latency", 10); + + getAppTimeoutsLatency = registry.newQuantiles("getAppTimeoutsLatency", + "latency of get apptimeouts timeouts", "ops", "latency", 10); + + refreshQueuesLatency = registry.newQuantiles("refreshQueuesLatency", + "latency of get refresh queues timeouts", "ops", "latency", 10); + + getRMNodeLabelsLatency = registry.newQuantiles("getRMNodeLabelsLatency", + "latency of get rmnodelabels timeouts", "ops", "latency", 10); + + checkUserAccessToQueueLatency = registry.newQuantiles("checkUserAccessToQueueLatency", + "latency of check user access to queue timeouts", "ops", "latency", 10); + + refreshNodesLatency = registry.newQuantiles("refreshNodesLatency", + "latency of get refresh nodes timeouts", "ops", "latency", 10); + + getDelegationTokenLatency = registry.newQuantiles("getDelegationTokenLatency", + "latency of get delegation token timeouts", "ops", "latency", 10); + + renewDelegationTokenLatency = registry.newQuantiles("renewDelegationTokenLatency", + "latency of renew delegation token timeouts", "ops", "latency", 10); + + 
cancelDelegationTokenLatency = registry.newQuantiles("cancelDelegationTokenLatency", + "latency of cancel delegation token timeouts", "ops", "latency", 10); + + getActivitiesLatency = registry.newQuantiles("getActivitiesLatency", + "latency of get activities timeouts", "ops", "latency", 10); + + getBulkActivitiesLatency = registry.newQuantiles("getBulkActivitiesLatency", + "latency of get bulk activities timeouts", "ops", "latency", 10); + + getSchedulerInfoRetrievedLatency = registry.newQuantiles("getSchedulerInfoRetrievedLatency", + "latency of get scheduler info timeouts", "ops", "latency", 10); + + refreshSuperUserGroupsConfLatency = registry.newQuantiles("refreshSuperUserGroupsConfLatency", + "latency of refresh superuser groups configuration timeouts", "ops", "latency", 10); + + refreshUserToGroupsMappingsLatency = registry.newQuantiles("refreshUserToGroupsMappingsLatency", + "latency of refresh user to groups mappings timeouts", "ops", "latency", 10); } public static RouterMetrics getMetrics() { @@ -528,6 +682,96 @@ public final class RouterMetrics { return totalSucceededListReservationRetrieved.lastStat().numSamples(); } + @VisibleForTesting + public long getNumSucceededGetAppActivitiesRetrieved() { + return totalSucceededGetAppActivitiesRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededGetAppStatisticsRetrieved() { + return totalSucceededGetAppStatisticsRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededGetAppPriorityRetrieved() { + return totalSucceededGetAppPriorityRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededGetAppQueueRetrieved() { + return totalSucceededGetAppQueueRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededUpdateAppQueueRetrieved() { + return totalSucceededUpdateAppQueueRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededGetAppTimeoutRetrieved() { + return totalSucceededGetAppTimeoutRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededGetAppTimeoutsRetrieved() { + return totalSucceededGetAppTimeoutsRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededRefreshQueuesRetrieved() { + return totalSucceededRefreshQueuesRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededRefreshNodesRetrieved() { + return totalSucceededRefreshNodesRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededGetRMNodeLabelsRetrieved() { + return totalSucceededGetRMNodeLabelsRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededCheckUserAccessToQueueRetrieved() { + return totalSucceededCheckUserAccessToQueueRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededGetDelegationTokenRetrieved() { + return totalSucceededGetDelegationTokenRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededRenewDelegationTokenRetrieved() { + return totalSucceededRenewDelegationTokenRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededCancelDelegationTokenRetrieved() { + return totalSucceededCancelDelegationTokenRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededGetActivitiesRetrieved() { + return totalSucceededGetActivitiesRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + 
public long getNumSucceededGetBulkActivitiesRetrieved() { + return totalSucceededGetBulkActivitiesRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededGetSchedulerInfoRetrieved() { + return totalSucceededGetSchedulerInfoRetrieved.lastStat().numSamples(); + } + + @VisibleForTesting + public long getNumSucceededRefreshSuperUserGroupsConfigurationRetrieved() { + return totalSucceededRefreshSuperUserGroupsConfigurationRetrieved.lastStat().numSamples(); + } + @VisibleForTesting public double getLatencySucceededAppsCreated() { return totalSucceededAppsCreated.lastStat().mean(); @@ -693,6 +937,96 @@ public final class RouterMetrics { return totalSucceededListReservationRetrieved.lastStat().mean(); } + @VisibleForTesting + public double getLatencySucceededGetAppActivitiesRetrieved() { + return totalSucceededGetAppActivitiesRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededGetAppStatisticsRetrieved() { + return totalSucceededGetAppStatisticsRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededGetAppPriorityRetrieved() { + return totalSucceededGetAppPriorityRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededGetAppQueueRetrieved() { + return totalSucceededGetAppQueueRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededUpdateAppQueueRetrieved() { + return totalSucceededUpdateAppQueueRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededGetAppTimeoutRetrieved() { + return totalSucceededGetAppTimeoutRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededGetAppTimeoutsRetrieved() { + return totalSucceededGetAppTimeoutsRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededRefreshQueuesRetrieved() { + return totalSucceededRefreshQueuesRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededRefreshNodesRetrieved() { + return totalSucceededRefreshNodesRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededGetRMNodeLabelsRetrieved() { + return totalSucceededGetRMNodeLabelsRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededCheckUserAccessToQueueRetrieved() { + return totalSucceededCheckUserAccessToQueueRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededGetDelegationTokenRetrieved() { + return totalSucceededGetDelegationTokenRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededRenewDelegationTokenRetrieved() { + return totalSucceededRenewDelegationTokenRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededCancelDelegationTokenRetrieved() { + return totalSucceededCancelDelegationTokenRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededGetActivitiesRetrieved() { + return totalSucceededGetActivitiesRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededGetBulkActivitiesRetrieved() { + return totalSucceededGetBulkActivitiesRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededGetSchedulerInfoRetrieved() { + return totalSucceededGetSchedulerInfoRetrieved.lastStat().mean(); + } + + @VisibleForTesting + public double getLatencySucceededRefreshSuperUserGroupsConfigurationRetrieved() { + 
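+    // As with the other latency getters, lastStat().mean() reports the average recorded duration
+    // in milliseconds, while the getNumSucceeded* getters report lastStat().numSamples().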
return totalSucceededRefreshSuperUserGroupsConfigurationRetrieved.lastStat().mean(); + } + @VisibleForTesting public int getAppsFailedCreated() { return numAppsFailedCreated.value(); @@ -846,6 +1180,83 @@ public final class RouterMetrics { return numListReservationFailedRetrieved.value(); } + public int getAppActivitiesFailedRetrieved() { + return numGetAppActivitiesFailedRetrieved.value(); + } + + public int getAppStatisticsFailedRetrieved() { + return numGetAppStatisticsFailedRetrieved.value(); + } + + public int getAppPriorityFailedRetrieved() { + return numGetAppPriorityFailedRetrieved.value(); + } + + public int getAppQueueFailedRetrieved() { + return numGetAppQueueFailedRetrieved.value(); + } + + public int getUpdateAppQueueFailedRetrieved() { + return numUpdateAppQueueFailedRetrieved.value(); + } + + public int getAppTimeoutFailedRetrieved() { + return numGetAppTimeoutFailedRetrieved.value(); + } + + public int getAppTimeoutsFailedRetrieved() { + return numGetAppTimeoutsFailedRetrieved.value(); + } + + + public int getRefreshQueuesFailedRetrieved() { + return numRefreshQueuesFailedRetrieved.value(); + } + + public int getRMNodeLabelsFailedRetrieved() { + return numGetRMNodeLabelsFailedRetrieved.value(); + } + + public int getCheckUserAccessToQueueFailedRetrieved() { + return numCheckUserAccessToQueueFailedRetrieved.value(); + } + + public int getNumRefreshNodesFailedRetrieved() { + return numRefreshNodesFailedRetrieved.value(); + } + + public int getNumRefreshSuperUserGroupsConfigurationFailedRetrieved() { + return numRefreshSuperUserGroupsConfigurationFailedRetrieved.value(); + } + + public int getNumRefreshUserToGroupsMappingsFailedRetrieved() { + return numRefreshUserToGroupsMappingsFailedRetrieved.value(); + } + + public int getDelegationTokenFailedRetrieved() { + return numGetDelegationTokenFailedRetrieved.value(); + } + + public int getRenewDelegationTokenFailedRetrieved() { + return numRenewDelegationTokenFailedRetrieved.value(); + } + + public int getCancelDelegationTokenFailedRetrieved() { + return numCancelDelegationTokenFailedRetrieved.value(); + } + + public int getActivitiesFailedRetrieved() { + return numGetActivitiesFailedRetrieved.value(); + } + + public int getBulkActivitiesFailedRetrieved(){ + return numGetBulkActivitiesFailedRetrieved.value(); + } + + public int getSchedulerInfoFailedRetrieved() { + return numGetSchedulerInfoFailedRetrieved.value(); + } + public void succeededAppsCreated(long duration) { totalSucceededAppsCreated.add(duration); getNewApplicationLatency.add(duration); @@ -1011,6 +1422,101 @@ public final class RouterMetrics { listReservationLatency.add(duration); } + public void succeededGetAppActivitiesRetrieved(long duration) { + totalSucceededGetAppActivitiesRetrieved.add(duration); + getAppActivitiesLatency.add(duration); + } + + public void succeededGetAppStatisticsRetrieved(long duration) { + totalSucceededGetAppStatisticsRetrieved.add(duration); + getAppStatisticsLatency.add(duration); + } + + public void succeededGetAppPriorityRetrieved(long duration) { + totalSucceededGetAppPriorityRetrieved.add(duration); + getAppPriorityLatency.add(duration); + } + + public void succeededGetAppQueueRetrieved(long duration) { + totalSucceededGetAppQueueRetrieved.add(duration); + getAppQueueLatency.add(duration); + } + + public void succeededUpdateAppQueueRetrieved(long duration) { + totalSucceededUpdateAppQueueRetrieved.add(duration); + getUpdateQueueLatency.add(duration); + } + + public void succeededGetAppTimeoutRetrieved(long duration) { + 
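+    // Like the other succeeded* hooks, the duration is recorded twice: in the cumulative
+    // MutableRate and in the matching rolling-quantiles metric.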
totalSucceededGetAppTimeoutRetrieved.add(duration); + getAppTimeoutLatency.add(duration); + } + + public void succeededGetAppTimeoutsRetrieved(long duration) { + totalSucceededGetAppTimeoutsRetrieved.add(duration); + getAppTimeoutsLatency.add(duration); + } + + public void succeededRefreshQueuesRetrieved(long duration) { + totalSucceededRefreshQueuesRetrieved.add(duration); + refreshQueuesLatency.add(duration); + } + + public void succeededRefreshNodesRetrieved(long duration) { + totalSucceededRefreshNodesRetrieved.add(duration); + refreshNodesLatency.add(duration); + } + + public void succeededGetRMNodeLabelsRetrieved(long duration) { + totalSucceededGetRMNodeLabelsRetrieved.add(duration); + getRMNodeLabelsLatency.add(duration); + } + + public void succeededCheckUserAccessToQueueRetrieved(long duration) { + totalSucceededCheckUserAccessToQueueRetrieved.add(duration); + checkUserAccessToQueueLatency.add(duration); + } + + public void succeededGetDelegationTokenRetrieved(long duration) { + totalSucceededGetDelegationTokenRetrieved.add(duration); + getDelegationTokenLatency.add(duration); + } + + public void succeededRenewDelegationTokenRetrieved(long duration) { + totalSucceededRenewDelegationTokenRetrieved.add(duration); + renewDelegationTokenLatency.add(duration); + } + + public void succeededCancelDelegationTokenRetrieved(long duration) { + totalSucceededCancelDelegationTokenRetrieved.add(duration); + cancelDelegationTokenLatency.add(duration); + } + + public void succeededGetActivitiesLatencyRetrieved(long duration) { + totalSucceededGetActivitiesRetrieved.add(duration); + getActivitiesLatency.add(duration); + } + + public void succeededGetBulkActivitiesRetrieved(long duration) { + totalSucceededGetBulkActivitiesRetrieved.add(duration); + getBulkActivitiesLatency.add(duration); + } + + public void succeededGetSchedulerInfoRetrieved(long duration) { + totalSucceededGetSchedulerInfoRetrieved.add(duration); + getSchedulerInfoRetrievedLatency.add(duration); + } + + public void succeededRefreshSuperUserGroupsConfRetrieved(long duration) { + totalSucceededRefreshSuperUserGroupsConfigurationRetrieved.add(duration); + refreshSuperUserGroupsConfLatency.add(duration); + } + + public void succeededRefreshUserToGroupsMappingsRetrieved(long duration) { + totalSucceededRefreshUserToGroupsMappingsRetrieved.add(duration); + refreshUserToGroupsMappingsLatency.add(duration); + } + public void incrAppsFailedCreated() { numAppsFailedCreated.incr(); } @@ -1063,11 +1569,11 @@ public final class RouterMetrics { numGetQueueUserAclsFailedRetrieved.incr(); } - public void incrContainerReportFailedRetrieved() { + public void incrGetContainerReportFailedRetrieved() { numGetContainerReportFailedRetrieved.incr(); } - public void incrContainerFailedRetrieved() { + public void incrGetContainersFailedRetrieved() { numGetContainersFailedRetrieved.incr(); } @@ -1142,4 +1648,80 @@ public final class RouterMetrics { public void incrListReservationFailedRetrieved() { numListReservationFailedRetrieved.incr(); } + + public void incrGetAppActivitiesFailedRetrieved() { + numGetAppActivitiesFailedRetrieved.incr(); + } + + public void incrGetAppStatisticsFailedRetrieved() { + numGetAppStatisticsFailedRetrieved.incr(); + } + + public void incrGetAppPriorityFailedRetrieved() { + numGetAppPriorityFailedRetrieved.incr(); + } + + public void incrGetAppQueueFailedRetrieved() { + numGetAppQueueFailedRetrieved.incr(); + } + + public void incrUpdateAppQueueFailedRetrieved() { + numUpdateAppQueueFailedRetrieved.incr(); + } + + public void 
incrGetAppTimeoutFailedRetrieved() { + numGetAppTimeoutFailedRetrieved.incr(); + } + + public void incrGetAppTimeoutsFailedRetrieved() { + numGetAppTimeoutsFailedRetrieved.incr(); + } + + public void incrRefreshQueuesFailedRetrieved() { + numRefreshQueuesFailedRetrieved.incr(); + } + + public void incrGetRMNodeLabelsFailedRetrieved() { + numGetRMNodeLabelsFailedRetrieved.incr(); + } + + public void incrCheckUserAccessToQueueFailedRetrieved() { + numCheckUserAccessToQueueFailedRetrieved.incr(); + } + + public void incrRefreshNodesFailedRetrieved() { + numRefreshNodesFailedRetrieved.incr(); + } + + public void incrRefreshSuperUserGroupsConfigurationFailedRetrieved() { + numRefreshSuperUserGroupsConfigurationFailedRetrieved.incr(); + } + + public void incrRefreshUserToGroupsMappingsFailedRetrieved() { + numRefreshUserToGroupsMappingsFailedRetrieved.incr(); + } + + public void incrGetDelegationTokenFailedRetrieved() { + numGetDelegationTokenFailedRetrieved.incr(); + } + + public void incrRenewDelegationTokenFailedRetrieved() { + numRenewDelegationTokenFailedRetrieved.incr(); + } + + public void incrCancelDelegationTokenFailedRetrieved() { + numCancelDelegationTokenFailedRetrieved.incr(); + } + + public void incrGetActivitiesFailedRetrieved() { + numGetActivitiesFailedRetrieved.incr(); + } + + public void incrGetBulkActivitiesFailedRetrieved() { + numGetBulkActivitiesFailedRetrieved.incr(); + } + + public void incrGetSchedulerInfoFailedRetrieved() { + numGetSchedulerInfoFailedRetrieved.incr(); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterServerUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterServerUtil.java index 36f02dd3e8a..8fa6ca2f055 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterServerUtil.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterServerUtil.java @@ -18,14 +18,28 @@ package org.apache.hadoop.yarn.server.router; +import org.apache.commons.lang3.math.NumberUtils; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.classification.InterfaceAudience.Public; import org.apache.hadoop.classification.InterfaceStability.Unstable; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.security.token.Token; import org.apache.hadoop.util.ReflectionUtils; import org.apache.hadoop.util.StringUtils; +import org.apache.hadoop.yarn.api.records.ReservationRequest; +import org.apache.hadoop.yarn.api.records.Priority; +import org.apache.hadoop.yarn.api.records.ReservationRequestInterpreter; +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.api.records.ReservationRequests; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDefinitionInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationRequestsInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationRequestInfo; +import org.apache.hadoop.yarn.api.records.ReservationDefinition; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo; import org.apache.hadoop.yarn.exceptions.YarnException; import 
org.apache.hadoop.yarn.exceptions.YarnRuntimeException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -34,6 +48,7 @@ import java.lang.reflect.Method; import java.util.ArrayList; import java.util.Collection; import java.util.List; +import java.util.EnumSet; import java.io.IOException; /** @@ -44,6 +59,16 @@ import java.io.IOException; @Unstable public final class RouterServerUtil { + private static final String APPLICATION_ID_PREFIX = "application_"; + + private static final String APP_ATTEMPT_ID_PREFIX = "appattempt_"; + + private static final String CONTAINER_PREFIX = "container_"; + + private static final String EPOCH_PREFIX = "e"; + + private static final String RESERVEIDSTR_PREFIX = "reservation_"; + /** Disable constructor. */ private RouterServerUtil() { } @@ -181,6 +206,28 @@ public final class RouterServerUtil { } } + /** + * Throws an IOException due to an error. + * + * @param t the throwable raised in the called class. + * @param errMsgFormat the error message format string. + * @param args referenced by the format specifiers in the format string. + * @throws IOException on failure + */ + @Public + @Unstable + public static void logAndThrowIOException(Throwable t, String errMsgFormat, Object... args) + throws IOException { + String msg = String.format(errMsgFormat, args); + if (t != null) { + LOG.error(msg, t); + throw new IOException(msg, t); + } else { + LOG.error(msg); + throw new IOException(msg); + } + } + /** * Throws an RunTimeException due to an error. * @@ -222,4 +269,359 @@ public final class RouterServerUtil { throw new RuntimeException(msg); } } + + /** + * Throws an RunTimeException due to an error. + * + * @param t the throwable raised in the called class. + * @param errMsgFormat the error message format string. + * @param args referenced by the format specifiers in the format string. + * @return RuntimeException + */ + @Public + @Unstable + public static RuntimeException logAndReturnRunTimeException( + Throwable t, String errMsgFormat, Object... args) { + String msg = String.format(errMsgFormat, args); + if (t != null) { + LOG.error(msg, t); + return new RuntimeException(msg, t); + } else { + LOG.error(msg); + return new RuntimeException(msg); + } + } + + /** + * Throws an RunTimeException due to an error. + * + * @param errMsgFormat the error message format string. + * @param args referenced by the format specifiers in the format string. + * @return RuntimeException + */ + @Public + @Unstable + public static RuntimeException logAndReturnRunTimeException( + String errMsgFormat, Object... args) { + return logAndReturnRunTimeException(null, errMsgFormat, args); + } + + /** + * Throws an YarnRuntimeException due to an error. + * + * @param t the throwable raised in the called class. + * @param errMsgFormat the error message format string. + * @param args referenced by the format specifiers in the format string. + * @return YarnRuntimeException + */ + @Public + @Unstable + public static YarnRuntimeException logAndReturnYarnRunTimeException( + Throwable t, String errMsgFormat, Object... args) { + String msg = String.format(errMsgFormat, args); + if (t != null) { + LOG.error(msg, t); + return new YarnRuntimeException(msg, t); + } else { + LOG.error(msg); + return new YarnRuntimeException(msg); + } + } + + /** + * Check applicationId is accurate. + * + * We need to ensure that applicationId cannot be empty and + * can be converted to ApplicationId object normally. 
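+   * For example, "application_1673081972798_0001" passes these checks, while
+   * "application_abc_0001" is rejected because both segments after the prefix must be numeric.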
+ * + * @param applicationId applicationId of type string + * @throws IllegalArgumentException If the format of the applicationId is not accurate, + * an IllegalArgumentException needs to be thrown. + */ + @Public + @Unstable + public static void validateApplicationId(String applicationId) + throws IllegalArgumentException { + + // Make Sure applicationId is not empty. + if (applicationId == null || applicationId.isEmpty()) { + throw new IllegalArgumentException("Parameter error, the appId is empty or null."); + } + + // Make sure the prefix information of applicationId is accurate. + if (!applicationId.startsWith(APPLICATION_ID_PREFIX)) { + throw new IllegalArgumentException("Invalid ApplicationId prefix: " + + applicationId + ". The valid ApplicationId should start with prefix application"); + } + + // Check the split position of the string. + int pos1 = APPLICATION_ID_PREFIX.length() - 1; + int pos2 = applicationId.indexOf('_', pos1 + 1); + if (pos2 < 0) { + throw new IllegalArgumentException("Invalid ApplicationId: " + applicationId); + } + + // Confirm that the parsed rmId and appId are numeric types. + String rmId = applicationId.substring(pos1 + 1, pos2); + String appId = applicationId.substring(pos2 + 1); + if(!NumberUtils.isDigits(rmId) || !NumberUtils.isDigits(appId)){ + throw new IllegalArgumentException("Invalid ApplicationId: " + applicationId); + } + } + + /** + * Check appAttemptId is accurate. + * + * We need to ensure that appAttemptId cannot be empty and + * can be converted to ApplicationAttemptId object normally. + * + * @param appAttemptId appAttemptId of type string. + * @throws IllegalArgumentException If the format of the appAttemptId is not accurate, + * an IllegalArgumentException needs to be thrown. + */ + @Public + @Unstable + public static void validateApplicationAttemptId(String appAttemptId) + throws IllegalArgumentException { + + // Make Sure appAttemptId is not empty. + if (appAttemptId == null || appAttemptId.isEmpty()) { + throw new IllegalArgumentException("Parameter error, the appAttemptId is empty or null."); + } + + // Make sure the prefix information of appAttemptId is accurate. + if (!appAttemptId.startsWith(APP_ATTEMPT_ID_PREFIX)) { + throw new IllegalArgumentException("Invalid AppAttemptId prefix: " + appAttemptId); + } + + // Check the split position of the string. + int pos1 = APP_ATTEMPT_ID_PREFIX.length() - 1; + int pos2 = appAttemptId.indexOf('_', pos1 + 1); + if (pos2 < 0) { + throw new IllegalArgumentException("Invalid AppAttemptId: " + appAttemptId); + } + int pos3 = appAttemptId.indexOf('_', pos2 + 1); + if (pos3 < 0) { + throw new IllegalArgumentException("Invalid AppAttemptId: " + appAttemptId); + } + + // Confirm that the parsed rmId and appId and attemptId are numeric types. + String rmId = appAttemptId.substring(pos1 + 1, pos2); + String appId = appAttemptId.substring(pos2 + 1, pos3); + String attemptId = appAttemptId.substring(pos3 + 1); + + if (!NumberUtils.isDigits(rmId) || !NumberUtils.isDigits(appId) + || !NumberUtils.isDigits(attemptId)) { + throw new IllegalArgumentException("Invalid AppAttemptId: " + appAttemptId); + } + } + + /** + * Check containerId is accurate. + * + * We need to ensure that containerId cannot be empty and + * can be converted to ContainerId object normally. + * + * @param containerId containerId of type string. + * @throws IllegalArgumentException If the format of the appAttemptId is not accurate, + * an IllegalArgumentException needs to be thrown. 
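+   * (A well-formed value looks like "container_e17_1673081972798_0001_01_000001";
+   * the epoch segment "e17" is omitted when the epoch is 0.)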
+ */ + @Public + @Unstable + public static void validateContainerId(String containerId) + throws IllegalArgumentException { + + // Make Sure containerId is not empty. + if (containerId == null || containerId.isEmpty()) { + throw new IllegalArgumentException("Parameter error, the containerId is empty or null."); + } + + // Make sure the prefix information of containerId is accurate. + if (!containerId.startsWith(CONTAINER_PREFIX)) { + throw new IllegalArgumentException("Invalid ContainerId prefix: " + containerId); + } + + // Check the split position of the string. + int pos1 = CONTAINER_PREFIX.length() - 1; + + String epoch = "0"; + if (containerId.regionMatches(pos1 + 1, EPOCH_PREFIX, 0, EPOCH_PREFIX.length())) { + int pos2 = containerId.indexOf('_', pos1 + 1); + if (pos2 < 0) { + throw new IllegalArgumentException("Invalid ContainerId: " + containerId); + } + String epochStr = containerId.substring(pos1 + 1 + EPOCH_PREFIX.length(), pos2); + epoch = epochStr; + // rewind the current position + pos1 = pos2; + } + + int pos2 = containerId.indexOf('_', pos1 + 1); + if (pos2 < 0) { + throw new IllegalArgumentException("Invalid ContainerId: " + containerId); + } + + int pos3 = containerId.indexOf('_', pos2 + 1); + if (pos3 < 0) { + throw new IllegalArgumentException("Invalid ContainerId: " + containerId); + } + + int pos4 = containerId.indexOf('_', pos3 + 1); + if (pos4 < 0) { + throw new IllegalArgumentException("Invalid ContainerId: " + containerId); + } + + // Confirm that the parsed appId and clusterTimestamp and attemptId and cid and epoch + // are numeric types. + String appId = containerId.substring(pos2 + 1, pos3); + String clusterTimestamp = containerId.substring(pos1 + 1, pos2); + String attemptId = containerId.substring(pos3 + 1, pos4); + String cid = containerId.substring(pos4 + 1); + + if (!NumberUtils.isDigits(appId) || !NumberUtils.isDigits(clusterTimestamp) + || !NumberUtils.isDigits(attemptId) || !NumberUtils.isDigits(cid) + || !NumberUtils.isDigits(epoch)) { + throw new IllegalArgumentException("Invalid ContainerId: " + containerId); + } + } + + public static boolean isAllowedDelegationTokenOp() throws IOException { + if (UserGroupInformation.isSecurityEnabled()) { + return EnumSet.of(UserGroupInformation.AuthenticationMethod.KERBEROS, + UserGroupInformation.AuthenticationMethod.KERBEROS_SSL, + UserGroupInformation.AuthenticationMethod.CERTIFICATE) + .contains(UserGroupInformation.getCurrentUser() + .getRealAuthenticationMethod()); + } else { + return true; + } + } + + public static String getRenewerForToken(Token token) + throws IOException { + UserGroupInformation user = UserGroupInformation.getCurrentUser(); + UserGroupInformation loginUser = UserGroupInformation.getLoginUser(); + // we can always renew our own tokens + return loginUser.getUserName().equals(user.getUserName()) + ? token.decodeIdentifier().getRenewer().toString() : user.getShortUserName(); + } + + /** + * Set User information. + * + * If the username is empty, we will use the Yarn Router user directly. + * Do not create a proxy user if userName matches the userName on current UGI. + * + * @param userName userName. + * @return UserGroupInformation. + */ + public static UserGroupInformation setupUser(final String userName) { + UserGroupInformation user = null; + try { + // If userName is empty, we will return UserGroupInformation.getCurrentUser. 
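+      // (i.e. the caller's current UGI is reused rather than creating a proxy user).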
+ // Do not create a proxy user if user name matches the user name on + // current UGI + if (userName == null || userName.trim().isEmpty()) { + user = UserGroupInformation.getCurrentUser(); + } else if (UserGroupInformation.isSecurityEnabled()) { + user = UserGroupInformation.createProxyUser(userName, UserGroupInformation.getLoginUser()); + } else if (userName.equalsIgnoreCase(UserGroupInformation.getCurrentUser().getUserName())) { + user = UserGroupInformation.getCurrentUser(); + } else { + user = UserGroupInformation.createProxyUser(userName, + UserGroupInformation.getCurrentUser()); + } + return user; + } catch (IOException e) { + throw RouterServerUtil.logAndReturnYarnRunTimeException(e, + "Error while creating Router Service for user : %s.", user); + } + } + + /** + * Check reservationId is accurate. + * + * We need to ensure that reservationId cannot be empty and + * can be converted to ReservationId object normally. + * + * @param reservationId reservationId. + * @throws IllegalArgumentException If the format of the reservationId is not accurate, + * an IllegalArgumentException needs to be thrown. + */ + @Public + @Unstable + public static void validateReservationId(String reservationId) throws IllegalArgumentException { + + if (reservationId == null || reservationId.isEmpty()) { + throw new IllegalArgumentException("Parameter error, the reservationId is empty or null."); + } + + if (!reservationId.startsWith(RESERVEIDSTR_PREFIX)) { + throw new IllegalArgumentException("Invalid ReservationId: " + reservationId); + } + + String[] resFields = reservationId.split("_"); + if (resFields.length != 3) { + throw new IllegalArgumentException("Invalid ReservationId: " + reservationId); + } + + String clusterTimestamp = resFields[1]; + String id = resFields[2]; + if (!NumberUtils.isDigits(id) || !NumberUtils.isDigits(clusterTimestamp)) { + throw new IllegalArgumentException("Invalid ReservationId: " + reservationId); + } + } + + /** + * Convert ReservationDefinitionInfo to ReservationDefinition. + * + * @param definitionInfo ReservationDefinitionInfo Object. + * @return ReservationDefinition. 
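+   * Note that the requests interpreter is resolved by ordinal via
+   * ReservationRequestInterpreter.values(), so the index in the REST payload must be within range.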
+ */ + public static ReservationDefinition convertReservationDefinition( + ReservationDefinitionInfo definitionInfo) { + if (definitionInfo == null || definitionInfo.getReservationRequests() == null + || definitionInfo.getReservationRequests().getReservationRequest() == null + || definitionInfo.getReservationRequests().getReservationRequest().isEmpty()) { + throw new RuntimeException("definitionInfo Or ReservationRequests is Null."); + } + + // basic variable + long arrival = definitionInfo.getArrival(); + long deadline = definitionInfo.getDeadline(); + + // ReservationRequests reservationRequests + String name = definitionInfo.getReservationName(); + String recurrenceExpression = definitionInfo.getRecurrenceExpression(); + Priority priority = Priority.newInstance(definitionInfo.getPriority()); + + // reservation requests info + List reservationRequestList = new ArrayList<>(); + + ReservationRequestsInfo reservationRequestsInfo = definitionInfo.getReservationRequests(); + + List reservationRequestInfos = + reservationRequestsInfo.getReservationRequest(); + + for (ReservationRequestInfo resRequestInfo : reservationRequestInfos) { + ResourceInfo resourceInfo = resRequestInfo.getCapability(); + Resource capability = + Resource.newInstance(resourceInfo.getMemorySize(), resourceInfo.getvCores()); + ReservationRequest reservationRequest = ReservationRequest.newInstance(capability, + resRequestInfo.getNumContainers(), resRequestInfo.getMinConcurrency(), + resRequestInfo.getDuration()); + reservationRequestList.add(reservationRequest); + } + + ReservationRequestInterpreter[] values = ReservationRequestInterpreter.values(); + ReservationRequestInterpreter reservationRequestInterpreter = + values[reservationRequestsInfo.getReservationRequestsInterpreter()]; + ReservationRequests reservationRequests = ReservationRequests.newInstance( + reservationRequestList, reservationRequestInterpreter); + + ReservationDefinition definition = ReservationDefinition.newInstance( + arrival, deadline, reservationRequests, name, recurrenceExpression, priority); + + return definition; + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/AbstractClientRequestInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/AbstractClientRequestInterceptor.java index 961026d0146..10ed71b6010 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/AbstractClientRequestInterceptor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/AbstractClientRequestInterceptor.java @@ -18,11 +18,10 @@ package org.apache.hadoop.yarn.server.router.clientrm; -import java.io.IOException; - import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.security.UserGroupInformation; -import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; +import org.apache.hadoop.yarn.server.router.RouterServerUtil; +import org.apache.hadoop.yarn.server.router.security.RouterDelegationTokenSecretManager; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -44,6 +43,8 @@ public abstract class AbstractClientRequestInterceptor @SuppressWarnings("checkstyle:visibilitymodifier") protected UserGroupInformation user = null; + private 
RouterDelegationTokenSecretManager tokenSecretManager = null; + /** * Sets the {@link ClientRequestInterceptor} in the chain. */ @@ -77,7 +78,7 @@ public abstract class AbstractClientRequestInterceptor */ @Override public void init(String userName) { - setupUser(userName); + this.user = RouterServerUtil.setupUser(userName); if (this.nextInterceptor != null) { this.nextInterceptor.init(userName); } @@ -101,28 +102,13 @@ public abstract class AbstractClientRequestInterceptor return this.nextInterceptor; } - private void setupUser(String userName) { - - try { - // Do not create a proxy user if user name matches the user name on - // current UGI - if (UserGroupInformation.isSecurityEnabled()) { - user = UserGroupInformation.createProxyUser(userName, UserGroupInformation.getLoginUser()); - } else if (userName.equalsIgnoreCase(UserGroupInformation.getCurrentUser().getUserName())) { - user = UserGroupInformation.getCurrentUser(); - } else { - user = UserGroupInformation.createProxyUser(userName, - UserGroupInformation.getCurrentUser()); - } - } catch (IOException e) { - String message = "Error while creating Router ClientRM Service for user:"; - if (user != null) { - message += ", user: " + user; - } - - LOG.info(message); - throw new YarnRuntimeException(message, e); - } + @Override + public RouterDelegationTokenSecretManager getTokenSecretManager() { + return tokenSecretManager; } + @Override + public void setTokenSecretManager(RouterDelegationTokenSecretManager tokenSecretManager) { + this.tokenSecretManager = tokenSecretManager; + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/ClientRequestInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/ClientRequestInterceptor.java index 3e3ffce5f4b..6e19cbadf9d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/ClientRequestInterceptor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/ClientRequestInterceptor.java @@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.router.clientrm; import org.apache.hadoop.conf.Configurable; import org.apache.hadoop.yarn.api.ApplicationClientProtocol; +import org.apache.hadoop.yarn.server.router.security.RouterDelegationTokenSecretManager; /** * Defines the contract to be implemented by the request interceptor classes, @@ -62,4 +63,18 @@ public interface ClientRequestInterceptor */ ClientRequestInterceptor getNextInterceptor(); + /** + * Set RouterDelegationTokenSecretManager for specific interceptor to support Token operations, + * including create Token, update Token, and delete Token. + * + * @param tokenSecretManager Router DelegationTokenSecretManager + */ + void setTokenSecretManager(RouterDelegationTokenSecretManager tokenSecretManager); + + /** + * Get RouterDelegationTokenSecretManager. + * + * @return Router DelegationTokenSecretManager. 
+ */ + RouterDelegationTokenSecretManager getTokenSecretManager(); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java index a027977e134..a50ea5bc423 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java @@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.router.clientrm; import org.apache.commons.lang3.StringUtils; import org.apache.commons.lang3.tuple.Pair; +import org.apache.hadoop.io.Text; import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; import java.io.IOException; import java.lang.reflect.Method; @@ -40,7 +41,6 @@ import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ThreadFactory; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; -import org.apache.commons.lang3.NotImplementedException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeys; import org.apache.hadoop.security.UserGroupInformation; @@ -116,14 +116,20 @@ import org.apache.hadoop.yarn.api.protocolrecords.UpdateApplicationPriorityRespo import org.apache.hadoop.yarn.api.protocolrecords.UpdateApplicationTimeoutsRequest; import org.apache.hadoop.yarn.api.protocolrecords.UpdateApplicationTimeoutsResponse; import org.apache.hadoop.yarn.api.records.ApplicationId; +import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext; import org.apache.hadoop.yarn.api.records.ReservationId; +import org.apache.hadoop.security.token.Token; +import org.apache.hadoop.yarn.server.utils.BuilderUtils; + import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.failover.FederationProxyProviderUtil; import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils; import org.apache.hadoop.yarn.server.federation.policies.RouterPolicyFacade; import org.apache.hadoop.yarn.server.federation.policies.exceptions.FederationPolicyInitializationException; +import org.apache.hadoop.yarn.server.federation.retry.FederationActionRetry; import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster; import org.apache.hadoop.yarn.server.federation.store.records.ReservationHomeSubCluster; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; @@ -134,6 +140,7 @@ import org.apache.hadoop.yarn.server.router.RouterMetrics; import org.apache.hadoop.yarn.server.router.RouterServerUtil; import org.apache.hadoop.yarn.util.Clock; import org.apache.hadoop.yarn.util.MonotonicClock; +import org.apache.hadoop.yarn.util.Records; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -176,6 +183,7 @@ public class FederationClientInterceptor private ThreadPoolExecutor executorService; private final Clock clock = new 
MonotonicClock(); private boolean returnPartialReport; + private long submitIntervalTime; @Override public void init(String userName) { @@ -184,15 +192,20 @@ public class FederationClientInterceptor federationFacade = FederationStateStoreFacade.getInstance(); rand = new Random(System.currentTimeMillis()); - int numThreads = getConf().getInt( - YarnConfiguration.ROUTER_USER_CLIENT_THREADS_SIZE, - YarnConfiguration.DEFAULT_ROUTER_USER_CLIENT_THREADS_SIZE); + int numMinThreads = getNumMinThreads(getConf()); + + int numMaxThreads = getNumMaxThreads(getConf()); + + long keepAliveTime = getConf().getTimeDuration( + YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_KEEP_ALIVE_TIME, + YarnConfiguration.DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_KEEP_ALIVE_TIME, TimeUnit.SECONDS); + ThreadFactory threadFactory = new ThreadFactoryBuilder() .setNameFormat("RPC Router Client-" + userName + "-%d ").build(); - BlockingQueue workQueue = new LinkedBlockingQueue<>(); - this.executorService = new ThreadPoolExecutor(numThreads, numThreads, - 0L, TimeUnit.MILLISECONDS, workQueue, threadFactory); + BlockingQueue workQueue = new LinkedBlockingQueue<>(); + this.executorService = new ThreadPoolExecutor(numMinThreads, numMaxThreads, + keepAliveTime, TimeUnit.MILLISECONDS, workQueue, threadFactory); final Configuration conf = this.getConf(); @@ -207,6 +220,10 @@ public class FederationClientInterceptor YarnConfiguration.ROUTER_CLIENTRM_SUBMIT_RETRY, YarnConfiguration.DEFAULT_ROUTER_CLIENTRM_SUBMIT_RETRY); + submitIntervalTime = conf.getTimeDuration( + YarnConfiguration.ROUTER_CLIENTRM_SUBMIT_INTERVAL_TIME, + YarnConfiguration.DEFAULT_CLIENTRM_SUBMIT_INTERVAL_TIME, TimeUnit.MILLISECONDS); + clientRMProxies = new ConcurrentHashMap<>(); routerMetrics = RouterMetrics.getMetrics(); @@ -293,25 +310,25 @@ public class FederationClientInterceptor Map subClustersActive = federationFacade.getSubClusters(true); - for (int i = 0; i < numSubmitRetries; ++i) { - SubClusterId subClusterId = getRandomActiveSubCluster(subClustersActive); - LOG.info("getNewApplication try #{} on SubCluster {}.", i, subClusterId); - ApplicationClientProtocol clientRMProxy = getClientRMProxyForSubCluster(subClusterId); - GetNewApplicationResponse response = null; - try { - response = clientRMProxy.getNewApplication(request); - } catch (Exception e) { - LOG.warn("Unable to create a new ApplicationId in SubCluster {}.", subClusterId.getId(), e); - subClustersActive.remove(subClusterId); - } + // Try calling the getNewApplication method + List blacklist = new ArrayList<>(); + int activeSubClustersCount = federationFacade.getActiveSubClustersCount(); + int actualRetryNums = Math.min(activeSubClustersCount, numSubmitRetries); + + try { + GetNewApplicationResponse response = + ((FederationActionRetry) (retryCount) -> + invokeGetNewApplication(subClustersActive, blacklist, request, retryCount)). + runWithRetries(actualRetryNums, submitIntervalTime); if (response != null) { long stopTime = clock.getTime(); routerMetrics.succeededAppsCreated(stopTime - startTime); - RouterAuditLogger.logSuccess(user.getShortUserName(), GET_NEW_APP, - TARGET_CLIENT_RM_SERVICE, response.getApplicationId()); return response; } + } catch (Exception e) { + routerMetrics.incrAppsFailedCreated(); + RouterServerUtil.logAndThrowException(e.getMessage(), e); } routerMetrics.incrAppsFailedCreated(); @@ -321,6 +338,46 @@ public class FederationClientInterceptor throw new YarnException(errMsg); } + /** + * Invoke GetNewApplication to different subClusters. 
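+   * A random active SubCluster outside the blacklist is chosen for each attempt; when a call
+   * fails, that SubCluster is added to the blacklist so the next retry targets a different one.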
+ * + * @param subClustersActive Active SubClusters + * @param blackList Blacklist avoid repeated calls to unavailable subCluster. + * @param request getNewApplicationRequest. + * @param retryCount number of retries. + * @return Get NewApplicationResponse response, If the response is empty, the request fails, + * if the response is not empty, the request is successful. + * @throws YarnException yarn exception. + * @throws IOException io error. + */ + private GetNewApplicationResponse invokeGetNewApplication( + Map subClustersActive, + List blackList, GetNewApplicationRequest request, int retryCount) + throws YarnException, IOException { + SubClusterId subClusterId = + federationFacade.getRandomActiveSubCluster(subClustersActive, blackList); + LOG.info("getNewApplication try #{} on SubCluster {}.", retryCount, subClusterId); + ApplicationClientProtocol clientRMProxy = getClientRMProxyForSubCluster(subClusterId); + try { + GetNewApplicationResponse response = clientRMProxy.getNewApplication(request); + if (response != null) { + RouterAuditLogger.logSuccess(user.getShortUserName(), GET_NEW_APP, + TARGET_CLIENT_RM_SERVICE, response.getApplicationId(), subClusterId); + return response; + } + } catch (Exception e) { + RouterAuditLogger.logFailure(user.getShortUserName(), GET_NEW_APP, UNKNOWN, + TARGET_CLIENT_RM_SERVICE, e.getMessage(), subClusterId); + LOG.warn("Unable to create a new ApplicationId in SubCluster {}.", subClusterId.getId(), e); + blackList.add(subClusterId); + throw e; + } + // If SubmitApplicationResponse is empty, the request fails. + String msg = String.format("Unable to create a new ApplicationId in SubCluster %s.", + subClusterId.getId()); + throw new YarnException(msg); + } + /** * Today, in YARN there are no checks of any applicationId submitted. * @@ -400,88 +457,35 @@ public class FederationClientInterceptor RouterServerUtil.logAndThrowException(errMsg, null); } - SubmitApplicationResponse response = null; - long startTime = clock.getTime(); - ApplicationId applicationId = request.getApplicationSubmissionContext().getApplicationId(); - List blacklist = new ArrayList<>(); - for (int i = 0; i < numSubmitRetries; ++i) { + try { - SubClusterId subClusterId = policyFacade.getHomeSubcluster( - request.getApplicationSubmissionContext(), blacklist); + // We need to handle this situation, + // the user will provide us with an expected submitRetries, + // but if the number of Active SubClusters is less than this number at this time, + // we should provide a high number of retry according to the number of Active SubClusters. 
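+      // For example, if numSubmitRetries is 3 but only 2 SubClusters are active, at most 2 submit
+      // attempts will be made.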
+ int activeSubClustersCount = federationFacade.getActiveSubClustersCount(); + int actualRetryNums = Math.min(activeSubClustersCount, numSubmitRetries); - LOG.info("submitApplication appId {} try #{} on SubCluster {}.", - applicationId, i, subClusterId); - - ApplicationHomeSubCluster appHomeSubCluster = - ApplicationHomeSubCluster.newInstance(applicationId, subClusterId); - - if (i == 0) { - try { - // persist the mapping of applicationId and the subClusterId which has - // been selected as its home - subClusterId = - federationFacade.addApplicationHomeSubCluster(appHomeSubCluster); - } catch (YarnException e) { - routerMetrics.incrAppsFailedSubmitted(); - String message = - String.format("Unable to insert the ApplicationId %s into the FederationStateStore.", - applicationId); - RouterAuditLogger.logFailure(user.getShortUserName(), SUBMIT_NEW_APP, UNKNOWN, - TARGET_CLIENT_RM_SERVICE, message, applicationId, subClusterId); - RouterServerUtil.logAndThrowException(message, e); - } - } else { - try { - // update the mapping of applicationId and the home subClusterId to - // the new subClusterId we have selected - federationFacade.updateApplicationHomeSubCluster(appHomeSubCluster); - } catch (YarnException e) { - String message = - String.format("Unable to update the ApplicationId %s into the FederationStateStore.", - applicationId); - SubClusterId subClusterIdInStateStore = - federationFacade.getApplicationHomeSubCluster(applicationId); - if (subClusterId == subClusterIdInStateStore) { - LOG.info("Application {} already submitted on SubCluster {}.", - applicationId, subClusterId); - } else { - routerMetrics.incrAppsFailedSubmitted(); - RouterAuditLogger.logFailure(user.getShortUserName(), SUBMIT_NEW_APP, UNKNOWN, - TARGET_CLIENT_RM_SERVICE, message, applicationId, subClusterId); - RouterServerUtil.logAndThrowException(message, e); - } - } - } - - ApplicationClientProtocol clientRMProxy = - getClientRMProxyForSubCluster(subClusterId); - - try { - response = clientRMProxy.submitApplication(request); - } catch (Exception e) { - LOG.warn("Unable to submit the application {} to SubCluster {} error = {}.", - applicationId, subClusterId.getId(), e); - } + // Try calling the SubmitApplication method + SubmitApplicationResponse response = + ((FederationActionRetry) (retryCount) -> + invokeSubmitApplication(blacklist, request, retryCount)). + runWithRetries(actualRetryNums, submitIntervalTime); if (response != null) { - LOG.info("Application {} with appId {} submitted on {}.", - request.getApplicationSubmissionContext().getApplicationName(), - applicationId, subClusterId); long stopTime = clock.getTime(); routerMetrics.succeededAppsSubmitted(stopTime - startTime); - RouterAuditLogger.logSuccess(user.getShortUserName(), SUBMIT_NEW_APP, - TARGET_CLIENT_RM_SERVICE, applicationId, subClusterId); return response; - } else { - // Empty response from the ResourceManager. - // Blacklist this subcluster for this request. - blacklist.add(subClusterId); } + + } catch (Exception e) { + routerMetrics.incrAppsFailedSubmitted(); + RouterServerUtil.logAndThrowException(e.getMessage(), e); } routerMetrics.incrAppsFailedSubmitted(); @@ -492,6 +496,78 @@ public class FederationClientInterceptor throw new YarnException(msg); } + /** + * Invoke SubmitApplication to different subClusters. + * + * Step1. Select homeSubCluster for Application according to Policy. + * + * Step2. 
Query homeSubCluster according to ApplicationId, + * if homeSubCluster does not exist or first attempt(consider repeated submissions), write; + * if homeSubCluster exists, update. + * + * Step3. Find the clientRMProxy of the corresponding cluster according to homeSubCluster, + * and then call the SubmitApplication method. + * + * Step4. If SubmitApplicationResponse is empty, the request fails, + * if SubmitApplicationResponse is not empty, the request is successful. + * + * @param blackList Blacklist avoid repeated calls to unavailable subCluster. + * @param request submitApplicationRequest. + * @param retryCount number of retries. + * @return submitApplication response, If the response is empty, the request fails, + * if the response is not empty, the request is successful. + * @throws YarnException yarn exception. + */ + private SubmitApplicationResponse invokeSubmitApplication( + List blackList, SubmitApplicationRequest request, int retryCount) + throws YarnException, IOException { + + // The request is not checked here, + // because the request has been checked before the method is called. + // We get applicationId and subClusterId from context. + ApplicationSubmissionContext appSubmissionContext = request.getApplicationSubmissionContext(); + ApplicationId applicationId = appSubmissionContext.getApplicationId(); + SubClusterId subClusterId = null; + + try { + + // Step1. Select homeSubCluster for Application according to Policy. + subClusterId = policyFacade.getHomeSubcluster(appSubmissionContext, blackList); + LOG.info("submitApplication appId {} try #{} on SubCluster {}.", + applicationId, retryCount, subClusterId); + + // Step2. We Store the mapping relationship + // between Application and HomeSubCluster in stateStore. + federationFacade.addOrUpdateApplicationHomeSubCluster( + applicationId, subClusterId, retryCount); + + // Step3. SubmitApplication to the subCluster + ApplicationClientProtocol clientRMProxy = getClientRMProxyForSubCluster(subClusterId); + SubmitApplicationResponse response = clientRMProxy.submitApplication(request); + + // Step4. if SubmitApplicationResponse is not empty, the request is successful. + if (response != null) { + LOG.info("Application {} submitted on subCluster {}.", applicationId, subClusterId); + RouterAuditLogger.logSuccess(user.getShortUserName(), SUBMIT_NEW_APP, + TARGET_CLIENT_RM_SERVICE, applicationId, subClusterId); + return response; + } + } catch (Exception e) { + RouterAuditLogger.logFailure(user.getShortUserName(), SUBMIT_NEW_APP, UNKNOWN, + TARGET_CLIENT_RM_SERVICE, e.getMessage(), applicationId, subClusterId); + LOG.warn("Unable to submitApplication appId {} try #{} on SubCluster {}.", + applicationId, retryCount, subClusterId, e); + if (subClusterId != null) { + blackList.add(subClusterId); + } + throw e; + } + + // If SubmitApplicationResponse is empty, the request fails. + String msg = String.format("Application %s failed to be submitted.", applicationId); + throw new YarnException(msg); + } + /** * The YARN Router will forward to the respective YARN RM in which the AM is * running. 
@@ -855,7 +931,7 @@ public class FederationClientInterceptor try { response = clientRMProxy.moveApplicationAcrossQueues(request); } catch (Exception e) { - routerMetrics.incrAppAttemptsFailedRetrieved(); + routerMetrics.incrMoveApplicationAcrossQueuesFailedRetrieved(); RouterServerUtil.logAndThrowException("Unable to moveApplicationAcrossQueues for " + applicationId + " to SubCluster " + subClusterId.getId(), e); } @@ -1174,7 +1250,7 @@ public class FederationClientInterceptor try { response = clientRMProxy.getApplicationAttemptReport(request); } catch (Exception e) { - routerMetrics.incrAppAttemptsFailedRetrieved(); + routerMetrics.incrAppAttemptReportFailedRetrieved(); String msg = String.format( "Unable to get the applicationAttempt report for %s to SubCluster %s.", request.getApplicationAttemptId(), subClusterId.getId()); @@ -1237,7 +1313,7 @@ public class FederationClientInterceptor public GetContainerReportResponse getContainerReport( GetContainerReportRequest request) throws YarnException, IOException { if(request == null || request.getContainerId() == null){ - routerMetrics.incrContainerReportFailedRetrieved(); + routerMetrics.incrGetContainerReportFailedRetrieved(); RouterServerUtil.logAndThrowException("Missing getContainerReport request " + "or containerId", null); } @@ -1249,7 +1325,7 @@ public class FederationClientInterceptor try { subClusterId = getApplicationHomeSubCluster(applicationId); } catch (YarnException ex) { - routerMetrics.incrContainerReportFailedRetrieved(); + routerMetrics.incrGetContainerReportFailedRetrieved(); RouterServerUtil.logAndThrowException("Application " + applicationId + " does not exist in FederationStateStore.", ex); } @@ -1260,7 +1336,7 @@ public class FederationClientInterceptor try { response = clientRMProxy.getContainerReport(request); } catch (Exception ex) { - routerMetrics.incrContainerReportFailedRetrieved(); + routerMetrics.incrGetContainerReportFailedRetrieved(); LOG.error("Unable to get the container report for {} from SubCluster {}.", applicationId, subClusterId.getId(), ex); } @@ -1280,7 +1356,7 @@ public class FederationClientInterceptor public GetContainersResponse getContainers(GetContainersRequest request) throws YarnException, IOException { if (request == null || request.getApplicationAttemptId() == null) { - routerMetrics.incrContainerFailedRetrieved(); + routerMetrics.incrGetContainersFailedRetrieved(); RouterServerUtil.logAndThrowException( "Missing getContainers request or ApplicationAttemptId.", null); } @@ -1291,7 +1367,7 @@ public class FederationClientInterceptor try { subClusterId = getApplicationHomeSubCluster(applicationId); } catch (YarnException ex) { - routerMetrics.incrContainerFailedRetrieved(); + routerMetrics.incrGetContainersFailedRetrieved(); RouterServerUtil.logAndThrowException("Application " + applicationId + " does not exist in FederationStateStore.", ex); } @@ -1302,7 +1378,7 @@ public class FederationClientInterceptor try { response = clientRMProxy.getContainers(request); } catch (Exception ex) { - routerMetrics.incrContainerFailedRetrieved(); + routerMetrics.incrGetContainersFailedRetrieved(); RouterServerUtil.logAndThrowException("Unable to get the containers for " + applicationId + " from SubCluster " + subClusterId.getId(), ex); } @@ -1321,19 +1397,103 @@ public class FederationClientInterceptor @Override public GetDelegationTokenResponse getDelegationToken( GetDelegationTokenRequest request) throws YarnException, IOException { - throw new NotImplementedException("Code is not implemented"); + + if 
(request == null || request.getRenewer() == null) { + routerMetrics.incrGetDelegationTokenFailedRetrieved(); + RouterServerUtil.logAndThrowException( + "Missing getDelegationToken request or Renewer.", null); + } + + try { + // Verify that the connection is kerberos authenticated + if (!RouterServerUtil.isAllowedDelegationTokenOp()) { + routerMetrics.incrGetDelegationTokenFailedRetrieved(); + throw new IOException( + "Delegation Token can be issued only with kerberos authentication."); + } + + long startTime = clock.getTime(); + UserGroupInformation ugi = UserGroupInformation.getCurrentUser(); + Text owner = new Text(ugi.getUserName()); + Text realUser = null; + if (ugi.getRealUser() != null) { + realUser = new Text(ugi.getRealUser().getUserName()); + } + + RMDelegationTokenIdentifier tokenIdentifier = + new RMDelegationTokenIdentifier(owner, new Text(request.getRenewer()), realUser); + Token realRMDToken = + new Token<>(tokenIdentifier, this.getTokenSecretManager()); + + org.apache.hadoop.yarn.api.records.Token routerRMDTToken = + BuilderUtils.newDelegationToken(realRMDToken.getIdentifier(), + realRMDToken.getKind().toString(), + realRMDToken.getPassword(), realRMDToken.getService().toString()); + + long stopTime = clock.getTime(); + routerMetrics.succeededGetDelegationTokenRetrieved((stopTime - startTime)); + return GetDelegationTokenResponse.newInstance(routerRMDTToken); + } catch(IOException e) { + routerMetrics.incrGetDelegationTokenFailedRetrieved(); + throw new YarnException(e); + } } @Override public RenewDelegationTokenResponse renewDelegationToken( RenewDelegationTokenRequest request) throws YarnException, IOException { - throw new NotImplementedException("Code is not implemented"); + try { + + if (!RouterServerUtil.isAllowedDelegationTokenOp()) { + routerMetrics.incrRenewDelegationTokenFailedRetrieved(); + throw new IOException( + "Delegation Token can be renewed only with kerberos authentication"); + } + + long startTime = clock.getTime(); + org.apache.hadoop.yarn.api.records.Token protoToken = request.getDelegationToken(); + Token token = new Token<>( + protoToken.getIdentifier().array(), protoToken.getPassword().array(), + new Text(protoToken.getKind()), new Text(protoToken.getService())); + String user = RouterServerUtil.getRenewerForToken(token); + long nextExpTime = this.getTokenSecretManager().renewToken(token, user); + RenewDelegationTokenResponse renewResponse = + Records.newRecord(RenewDelegationTokenResponse.class); + renewResponse.setNextExpirationTime(nextExpTime); + long stopTime = clock.getTime(); + routerMetrics.succeededRenewDelegationTokenRetrieved((stopTime - startTime)); + return renewResponse; + + } catch (IOException e) { + routerMetrics.incrRenewDelegationTokenFailedRetrieved(); + throw new YarnException(e); + } } @Override public CancelDelegationTokenResponse cancelDelegationToken( CancelDelegationTokenRequest request) throws YarnException, IOException { - throw new NotImplementedException("Code is not implemented"); + try { + if (!RouterServerUtil.isAllowedDelegationTokenOp()) { + routerMetrics.incrCancelDelegationTokenFailedRetrieved(); + throw new IOException( + "Delegation Token can be cancelled only with kerberos authentication"); + } + + long startTime = clock.getTime(); + org.apache.hadoop.yarn.api.records.Token protoToken = request.getDelegationToken(); + Token token = new Token<>( + protoToken.getIdentifier().array(), protoToken.getPassword().array(), + new Text(protoToken.getKind()), new Text(protoToken.getService())); + String user = 
UserGroupInformation.getCurrentUser().getUserName(); + this.getTokenSecretManager().cancelToken(token, user); + long stopTime = clock.getTime(); + routerMetrics.succeededCancelDelegationTokenRetrieved((stopTime - startTime)); + return Records.newRecord(CancelDelegationTokenResponse.class); + } catch (IOException e) { + routerMetrics.incrCancelDelegationTokenFailedRetrieved(); + throw new YarnException(e); + } } @Override @@ -1800,4 +1960,49 @@ public class FederationClientInterceptor } } } + + protected int getNumMinThreads(Configuration conf) { + + String threadSize = conf.get(YarnConfiguration.ROUTER_USER_CLIENT_THREADS_SIZE); + + // If the user configures YarnConfiguration.ROUTER_USER_CLIENT_THREADS_SIZE, + // we will still get the number of threads from this configuration. + if (StringUtils.isNotBlank(threadSize)) { + LOG.warn("{} is a deprecated property, " + + "please remove it, use {} to configure the minimum number of thread pool.", + YarnConfiguration.ROUTER_USER_CLIENT_THREADS_SIZE, + YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE); + return Integer.parseInt(threadSize); + } + + int numMinThreads = conf.getInt( + YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE, + YarnConfiguration.DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE); + return numMinThreads; + } + + protected int getNumMaxThreads(Configuration conf) { + + String threadSize = conf.get(YarnConfiguration.ROUTER_USER_CLIENT_THREADS_SIZE); + + // If the user configures YarnConfiguration.ROUTER_USER_CLIENT_THREADS_SIZE, + // we will still get the number of threads from this configuration. + if (StringUtils.isNotBlank(threadSize)) { + LOG.warn("{} is a deprecated property, " + + "please remove it, use {} to configure the maximum number of thread pool.", + YarnConfiguration.ROUTER_USER_CLIENT_THREADS_SIZE, + YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE); + return Integer.parseInt(threadSize); + } + + int numMaxThreads = conf.getInt( + YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE, + YarnConfiguration.DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE); + return numMaxThreads; + } + + @VisibleForTesting + public void setNumSubmitRetries(int numSubmitRetries) { + this.numSubmitRetries = numSubmitRetries; + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterClientRMService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterClientRMService.java index b60a267746e..e3e84079b71 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterClientRMService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterClientRMService.java @@ -22,6 +22,7 @@ import java.io.IOException; import java.net.InetSocketAddress; import java.util.Collections; import java.util.Map; +import java.util.concurrent.TimeUnit; import org.apache.hadoop.classification.InterfaceAudience.Private; import org.apache.hadoop.conf.Configuration; @@ -105,6 +106,7 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.ipc.YarnRPC; import org.apache.hadoop.yarn.server.router.RouterServerUtil; 
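The three delegation-token RPCs implemented above are what a secure client exercises when it asks the Router for an RM token. A minimal client-side sketch (assuming yarn.resourcemanager.address on the client resolves to the Router's client RM service and the connection is Kerberos-authenticated):

  YarnConfiguration conf = new YarnConfiguration();
  YarnClient yarnClient = YarnClient.createYarnClient();
  yarnClient.init(conf);
  yarnClient.start();
  try {
    // Served by FederationClientInterceptor#getDelegationToken; the token is
    // signed by the RouterDelegationTokenSecretManager that is wired into the
    // RPC server below.
    org.apache.hadoop.yarn.api.records.Token rmToken =
        yarnClient.getRMDelegationToken(new org.apache.hadoop.io.Text("job-renewer"));
    // Later renew/cancel calls from the renewer are handled by the
    // renewDelegationToken/cancelDelegationToken methods of the same interceptor.
  } finally {
    yarnClient.stop();
  }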
+import org.apache.hadoop.yarn.server.router.security.RouterDelegationTokenSecretManager; import org.apache.hadoop.yarn.server.router.security.authorize.RouterPolicyProvider; import org.apache.hadoop.yarn.util.LRUCacheHashMap; import org.slf4j.Logger; @@ -136,6 +138,8 @@ public class RouterClientRMService extends AbstractService // and remove the oldest used ones. private Map userPipelineMap; + private RouterDelegationTokenSecretManager routerDTSecretManager; + public RouterClientRMService() { super(RouterClientRMService.class.getName()); } @@ -164,8 +168,12 @@ public class RouterClientRMService extends AbstractService serverConf.getInt(YarnConfiguration.RM_CLIENT_THREAD_COUNT, YarnConfiguration.DEFAULT_RM_CLIENT_THREAD_COUNT); + // Initialize RouterRMDelegationTokenSecretManager. + routerDTSecretManager = createRouterRMDelegationTokenSecretManager(conf); + routerDTSecretManager.startThreads(); + this.server = rpc.getServer(ApplicationClientProtocol.class, this, - listenerEndpoint, serverConf, null, numWorkerThreads); + listenerEndpoint, serverConf, routerDTSecretManager, numWorkerThreads); // Enable service authorization? if (conf.getBoolean( @@ -508,6 +516,13 @@ public class RouterClientRMService extends AbstractService ClientRequestInterceptor interceptorChain = this.createRequestInterceptorChain(); interceptorChain.init(user); + + // We set the RouterDelegationTokenSecretManager instance to the interceptorChain + // and let the interceptor use it. + if (routerDTSecretManager != null) { + interceptorChain.setTokenSecretManager(routerDTSecretManager); + } + chainWrapper.init(interceptorChain); } catch (Exception e) { LOG.error("Init ClientRequestInterceptor error for user: {}.", user, e); @@ -558,4 +573,54 @@ public class RouterClientRMService extends AbstractService public Map getUserPipelineMap() { return userPipelineMap; } + + /** + * Create RouterRMDelegationTokenSecretManager. + * In the YARN federation, the Router will replace the RM to + * manage the RMDelegationToken (generate, update, cancel), + * so the relevant configuration parameters still obtain the configuration parameters of the RM. + * + * @param conf Configuration + * @return RouterDelegationTokenSecretManager. 
+ */ + protected RouterDelegationTokenSecretManager createRouterRMDelegationTokenSecretManager( + Configuration conf) { + + long secretKeyInterval = conf.getLong( + YarnConfiguration.RM_DELEGATION_KEY_UPDATE_INTERVAL_KEY, + YarnConfiguration.RM_DELEGATION_KEY_UPDATE_INTERVAL_DEFAULT); + + long tokenMaxLifetime = conf.getLong( + YarnConfiguration.RM_DELEGATION_TOKEN_MAX_LIFETIME_KEY, + YarnConfiguration.RM_DELEGATION_TOKEN_MAX_LIFETIME_DEFAULT); + + long tokenRenewInterval = conf.getLong( + YarnConfiguration.RM_DELEGATION_TOKEN_RENEW_INTERVAL_KEY, + YarnConfiguration.RM_DELEGATION_TOKEN_RENEW_INTERVAL_DEFAULT); + + long removeScanInterval = conf.getTimeDuration( + YarnConfiguration.RM_DELEGATION_TOKEN_REMOVE_SCAN_INTERVAL_KEY, + YarnConfiguration.RM_DELEGATION_TOKEN_REMOVE_SCAN_INTERVAL_DEFAULT, + TimeUnit.MILLISECONDS); + + return new RouterDelegationTokenSecretManager(secretKeyInterval, + tokenMaxLifetime, tokenRenewInterval, removeScanInterval); + } + + @VisibleForTesting + public RouterDelegationTokenSecretManager getRouterDTSecretManager() { + return routerDTSecretManager; + } + + @VisibleForTesting + public void setRouterDTSecretManager(RouterDelegationTokenSecretManager routerDTSecretManager) { + this.routerDTSecretManager = routerDTSecretManager; + } + + @VisibleForTesting + public void initUserPipelineMap(Configuration conf) { + int maxCacheSize = conf.getInt(YarnConfiguration.ROUTER_PIPELINE_CACHE_MAX_SIZE, + YarnConfiguration.DEFAULT_ROUTER_PIPELINE_CACHE_MAX_SIZE); + this.userPipelineMap = Collections.synchronizedMap(new LRUCacheHashMap<>(maxCacheSize, true)); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java index e70d5521ffc..481005b478d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java @@ -80,6 +80,9 @@ public final class RouterYarnClientUtils { tmp.getNumNodeManagers() + metrics.getNumNodeManagers()); tmp.setNumActiveNodeManagers( tmp.getNumActiveNodeManagers() + metrics.getNumActiveNodeManagers()); + tmp.setNumDecommissioningNodeManagers( + tmp.getNumDecommissioningNodeManagers() + metrics + .getNumDecommissioningNodeManagers()); tmp.setNumDecommissionedNodeManagers( tmp.getNumDecommissionedNodeManagers() + metrics .getNumDecommissionedNodeManagers()); @@ -90,6 +93,9 @@ public final class RouterYarnClientUtils { tmp.setNumUnhealthyNodeManagers( tmp.getNumUnhealthyNodeManagers() + metrics .getNumUnhealthyNodeManagers()); + tmp.setNumShutdownNodeManagers( + tmp.getNumShutdownNodeManagers() + metrics + .getNumShutdownNodeManagers()); } return GetClusterMetricsResponse.newInstance(tmp); } @@ -526,4 +532,3 @@ public final class RouterYarnClientUtils { return GetNodesToAttributesResponse.newInstance(attributesMap); } } - diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/AbstractRMAdminRequestInterceptor.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/AbstractRMAdminRequestInterceptor.java index f789aa2b47e..8b09d699719 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/AbstractRMAdminRequestInterceptor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/AbstractRMAdminRequestInterceptor.java @@ -19,6 +19,8 @@ package org.apache.hadoop.yarn.server.router.rmadmin; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.yarn.server.router.RouterServerUtil; /** * Implements the {@link RMAdminRequestInterceptor} interface and provides @@ -31,6 +33,9 @@ public abstract class AbstractRMAdminRequestInterceptor private Configuration conf; private RMAdminRequestInterceptor nextInterceptor; + @SuppressWarnings("checkstyle:visibilitymodifier") + protected UserGroupInformation user = null; + /** * Sets the {@link RMAdminRequestInterceptor} in the chain. */ @@ -63,9 +68,10 @@ public abstract class AbstractRMAdminRequestInterceptor * Initializes the {@link RMAdminRequestInterceptor}. */ @Override - public void init(String user) { + public void init(String userName) { + this.user = RouterServerUtil.setupUser(userName); if (this.nextInterceptor != null) { - this.nextInterceptor.init(user); + this.nextInterceptor.init(userName); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/FederationRMAdminInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/FederationRMAdminInterceptor.java new file mode 100644 index 00000000000..41d87c3f588 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/FederationRMAdminInterceptor.java @@ -0,0 +1,451 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.server.router.rmadmin; + +import org.apache.commons.collections.CollectionUtils; +import org.apache.commons.lang3.NotImplementedException; +import org.apache.hadoop.classification.VisibleForTesting; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.CommonConfigurationKeys; +import org.apache.hadoop.ipc.StandbyException; +import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; +import org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocol; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshQueuesRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshQueuesResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshNodesRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshNodesResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshSuperUserGroupsConfigurationRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshSuperUserGroupsConfigurationResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshUserToGroupsMappingsRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshUserToGroupsMappingsResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshAdminAclsRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshAdminAclsResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshServiceAclsRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshServiceAclsResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.UpdateNodeResourceRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.UpdateNodeResourceResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshNodesResourcesRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshNodesResourcesResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.AddToClusterNodeLabelsRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.AddToClusterNodeLabelsResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.RemoveFromClusterNodeLabelsRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RemoveFromClusterNodeLabelsResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.ReplaceLabelsOnNodeRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.ReplaceLabelsOnNodeResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.CheckForDecommissioningNodesRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.CheckForDecommissioningNodesResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshClusterMaxPriorityRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshClusterMaxPriorityResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.NodesToAttributesMappingRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.NodesToAttributesMappingResponse; +import org.apache.hadoop.yarn.server.federation.failover.FederationProxyProviderUtil; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; +import 
org.apache.hadoop.yarn.server.router.RouterMetrics; +import org.apache.hadoop.yarn.server.router.RouterServerUtil; +import org.apache.hadoop.yarn.util.Clock; +import org.apache.hadoop.yarn.util.MonotonicClock; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.IOException; +import java.util.Map; +import java.util.Collection; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.ThreadFactory; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.LinkedBlockingQueue; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.ConcurrentHashMap; + +public class FederationRMAdminInterceptor extends AbstractRMAdminRequestInterceptor { + + private static final Logger LOG = + LoggerFactory.getLogger(FederationRMAdminInterceptor.class); + + private Map adminRMProxies; + private FederationStateStoreFacade federationFacade; + private final Clock clock = new MonotonicClock(); + private RouterMetrics routerMetrics; + private ThreadPoolExecutor executorService; + private Configuration conf; + + @Override + public void init(String userName) { + super.init(userName); + + int numThreads = getConf().getInt( + YarnConfiguration.ROUTER_USER_CLIENT_THREADS_SIZE, + YarnConfiguration.DEFAULT_ROUTER_USER_CLIENT_THREADS_SIZE); + ThreadFactory threadFactory = new ThreadFactoryBuilder() + .setNameFormat("RPC Router RMAdminClient-" + userName + "-%d ").build(); + + BlockingQueue workQueue = new LinkedBlockingQueue(); + this.executorService = new ThreadPoolExecutor(numThreads, numThreads, + 0L, TimeUnit.MILLISECONDS, workQueue, threadFactory); + + federationFacade = FederationStateStoreFacade.getInstance(); + this.conf = this.getConf(); + this.adminRMProxies = new ConcurrentHashMap<>(); + routerMetrics = RouterMetrics.getMetrics(); + } + + @VisibleForTesting + protected ResourceManagerAdministrationProtocol getAdminRMProxyForSubCluster( + SubClusterId subClusterId) throws YarnException { + + if (adminRMProxies.containsKey(subClusterId)) { + return adminRMProxies.get(subClusterId); + } + + ResourceManagerAdministrationProtocol adminRMProxy = null; + try { + boolean serviceAuthEnabled = this.conf.getBoolean( + CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false); + UserGroupInformation realUser = user; + if (serviceAuthEnabled) { + realUser = UserGroupInformation.createProxyUser( + user.getShortUserName(), UserGroupInformation.getLoginUser()); + } + adminRMProxy = FederationProxyProviderUtil.createRMProxy(getConf(), + ResourceManagerAdministrationProtocol.class, subClusterId, realUser); + } catch (Exception e) { + RouterServerUtil.logAndThrowException(e, + "Unable to create the interface to reach the SubCluster %s", subClusterId); + } + adminRMProxies.put(subClusterId, adminRMProxy); + return adminRMProxy; + } + + @Override + public void setNextInterceptor(RMAdminRequestInterceptor next) { + throw new YarnRuntimeException("setNextInterceptor is being called on " + + "FederationRMAdminRequestInterceptor, which should be the last one " + + "in the chain. Check if the interceptor pipeline configuration " + + "is correct"); + } + + /** + * Refresh queue requests. + * + * The Router supports refreshing all SubCluster queues at once, + * and also supports refreshing queues by SubCluster. + * + * @param request RefreshQueuesRequest, If subClusterId is not empty, + * it means that we want to refresh the queue of the specified subClusterId. + * If subClusterId is empty, it means we want to refresh all queues. 
+ * + * @return RefreshQueuesResponse, There is no specific information in the response, + * as long as it is not empty, it means that the request is successful. + * + * @throws StandbyException exception thrown by non-active server. + * @throws YarnException indicates exceptions from yarn servers. + * @throws IOException io error occurs. + */ + @Override + public RefreshQueuesResponse refreshQueues(RefreshQueuesRequest request) + throws StandbyException, YarnException, IOException { + + // parameter verification. + if (request == null) { + routerMetrics.incrRefreshQueuesFailedRetrieved(); + RouterServerUtil.logAndThrowException("Missing RefreshQueues request.", null); + } + + // call refreshQueues of activeSubClusters. + try { + long startTime = clock.getTime(); + RMAdminProtocolMethod remoteMethod = new RMAdminProtocolMethod( + new Class[] {RefreshQueuesRequest.class}, new Object[] {request}); + + String subClusterId = request.getSubClusterId(); + Collection refreshQueueResps = + remoteMethod.invokeConcurrent(this, RefreshQueuesResponse.class, subClusterId); + + // If we get the return result from refreshQueueResps, + // it means that the call has been successful, + // and the RefreshQueuesResponse method can be reconstructed and returned. + if (CollectionUtils.isNotEmpty(refreshQueueResps)) { + long stopTime = clock.getTime(); + routerMetrics.succeededRefreshQueuesRetrieved(stopTime - startTime); + return RefreshQueuesResponse.newInstance(); + } + } catch (YarnException e) { + routerMetrics.incrRefreshQueuesFailedRetrieved(); + RouterServerUtil.logAndThrowException(e, + "Unable to refreshQueue due to exception. " + e.getMessage()); + } + + routerMetrics.incrRefreshQueuesFailedRetrieved(); + throw new YarnException("Unable to refreshQueue."); + } + + /** + * Refresh node requests. + * + * The Router supports refreshing all SubCluster nodes at once, + * and also supports refreshing node by SubCluster. + * + * @param request RefreshNodesRequest, If subClusterId is not empty, + * it means that we want to refresh the node of the specified subClusterId. + * If subClusterId is empty, it means we want to refresh all nodes. + * + * @return RefreshNodesResponse, There is no specific information in the response, + * as long as it is not empty, it means that the request is successful. + * + * @throws StandbyException exception thrown by non-active server. + * @throws YarnException indicates exceptions from yarn servers. + * @throws IOException io error occurs. + */ + @Override + public RefreshNodesResponse refreshNodes(RefreshNodesRequest request) + throws StandbyException, YarnException, IOException { + + // parameter verification. + // We will not check whether the DecommissionType is empty, + // because this parameter has a default value at the proto level. + if (request == null) { + routerMetrics.incrRefreshNodesFailedRetrieved(); + RouterServerUtil.logAndThrowException("Missing RefreshNodes request.", null); + } + + // call refreshNodes of activeSubClusters. 
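      // Illustrative call patterns (assuming a ResourceManagerAdministrationProtocol
      // proxy bound to the Router's admin address):
      //   proxy.refreshNodes(RefreshNodesRequest.newInstance());  // no subClusterId: all active SubClusters
      // whereas a request that carries a subClusterId such as "SC-1" is forwarded
      // to that SubCluster's RM only.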
+ try { + long startTime = clock.getTime(); + RMAdminProtocolMethod remoteMethod = new RMAdminProtocolMethod( + new Class[] {RefreshNodesRequest.class}, new Object[] {request}); + + String subClusterId = request.getSubClusterId(); + Collection refreshNodesResps = + remoteMethod.invokeConcurrent(this, RefreshNodesResponse.class, subClusterId); + + if (CollectionUtils.isNotEmpty(refreshNodesResps)) { + long stopTime = clock.getTime(); + routerMetrics.succeededRefreshNodesRetrieved(stopTime - startTime); + return RefreshNodesResponse.newInstance(); + } + } catch (YarnException e) { + routerMetrics.incrRefreshNodesFailedRetrieved(); + RouterServerUtil.logAndThrowException(e, + "Unable to refreshNodes due to exception. " + e.getMessage()); + } + + routerMetrics.incrRefreshNodesFailedRetrieved(); + throw new YarnException("Unable to refreshNodes due to exception."); + } + + /** + * Refresh SuperUserGroupsConfiguration requests. + * + * The Router supports refreshing all subCluster SuperUserGroupsConfiguration at once, + * and also supports refreshing SuperUserGroupsConfiguration by SubCluster. + * + * @param request RefreshSuperUserGroupsConfigurationRequest, + * If subClusterId is not empty, it means that we want to + * refresh the superuser groups configuration of the specified subClusterId. + * If subClusterId is empty, it means we want to + * refresh all subCluster superuser groups configuration. + * + * @return RefreshSuperUserGroupsConfigurationResponse, + * There is no specific information in the response, as long as it is not empty, + * it means that the request is successful. + * + * @throws StandbyException exception thrown by non-active server. + * @throws YarnException indicates exceptions from yarn servers. + * @throws IOException io error occurs. + */ + @Override + public RefreshSuperUserGroupsConfigurationResponse refreshSuperUserGroupsConfiguration( + RefreshSuperUserGroupsConfigurationRequest request) + throws StandbyException, YarnException, IOException { + + // parameter verification. + if (request == null) { + routerMetrics.incrRefreshSuperUserGroupsConfigurationFailedRetrieved(); + RouterServerUtil.logAndThrowException("Missing RefreshSuperUserGroupsConfiguration request.", + null); + } + + // call refreshSuperUserGroupsConfiguration of activeSubClusters. + try { + long startTime = clock.getTime(); + RMAdminProtocolMethod remoteMethod = new RMAdminProtocolMethod( + new Class[] {RefreshSuperUserGroupsConfigurationRequest.class}, new Object[] {request}); + + String subClusterId = request.getSubClusterId(); + Collection refreshSuperUserGroupsConfResps = + remoteMethod.invokeConcurrent(this, RefreshSuperUserGroupsConfigurationResponse.class, + subClusterId); + + if (CollectionUtils.isNotEmpty(refreshSuperUserGroupsConfResps)) { + long stopTime = clock.getTime(); + routerMetrics.succeededRefreshSuperUserGroupsConfRetrieved(stopTime - startTime); + return RefreshSuperUserGroupsConfigurationResponse.newInstance(); + } + } catch (YarnException e) { + routerMetrics.incrRefreshSuperUserGroupsConfigurationFailedRetrieved(); + RouterServerUtil.logAndThrowException(e, + "Unable to refreshSuperUserGroupsConfiguration due to exception. " + e.getMessage()); + } + + routerMetrics.incrRefreshSuperUserGroupsConfigurationFailedRetrieved(); + throw new YarnException("Unable to refreshSuperUserGroupsConfiguration."); + } + + /** + * Refresh UserToGroupsMappings requests. 
+ * + * The Router supports refreshing all subCluster UserToGroupsMappings at once, + * and also supports refreshing UserToGroupsMappings by subCluster. + * + * @param request RefreshUserToGroupsMappingsRequest, + * If subClusterId is not empty, it means that we want to + * refresh the user groups mapping of the specified subClusterId. + * If subClusterId is empty, it means we want to + * refresh all subCluster user groups mapping. + * + * @return RefreshUserToGroupsMappingsResponse, + * There is no specific information in the response, as long as it is not empty, + * it means that the request is successful. + * + * @throws StandbyException exception thrown by non-active server. + * @throws YarnException indicates exceptions from yarn servers. + * @throws IOException io error occurs. + */ + @Override + public RefreshUserToGroupsMappingsResponse refreshUserToGroupsMappings( + RefreshUserToGroupsMappingsRequest request) throws StandbyException, YarnException, + IOException { + + // parameter verification. + if (request == null) { + routerMetrics.incrRefreshUserToGroupsMappingsFailedRetrieved(); + RouterServerUtil.logAndThrowException("Missing RefreshUserToGroupsMappings request.", null); + } + + // call refreshUserToGroupsMappings of activeSubClusters. + try { + long startTime = clock.getTime(); + RMAdminProtocolMethod remoteMethod = new RMAdminProtocolMethod( + new Class[] {RefreshUserToGroupsMappingsRequest.class}, new Object[] {request}); + + String subClusterId = request.getSubClusterId(); + Collection refreshUserToGroupsMappingsResps = + remoteMethod.invokeConcurrent(this, RefreshUserToGroupsMappingsResponse.class, + subClusterId); + + if (CollectionUtils.isNotEmpty(refreshUserToGroupsMappingsResps)) { + long stopTime = clock.getTime(); + routerMetrics.succeededRefreshUserToGroupsMappingsRetrieved(stopTime - startTime); + return RefreshUserToGroupsMappingsResponse.newInstance(); + } + } catch (YarnException e) { + routerMetrics.incrRefreshUserToGroupsMappingsFailedRetrieved(); + RouterServerUtil.logAndThrowException(e, + "Unable to refreshUserToGroupsMappings due to exception. 
" + e.getMessage()); + } + + routerMetrics.incrRefreshUserToGroupsMappingsFailedRetrieved(); + throw new YarnException("Unable to refreshUserToGroupsMappings."); + } + + @Override + public RefreshAdminAclsResponse refreshAdminAcls(RefreshAdminAclsRequest request) + throws YarnException, IOException { + throw new NotImplementedException(); + } + + @Override + public RefreshServiceAclsResponse refreshServiceAcls(RefreshServiceAclsRequest request) + throws YarnException, IOException { + throw new NotImplementedException(); + } + + @Override + public UpdateNodeResourceResponse updateNodeResource(UpdateNodeResourceRequest request) + throws YarnException, IOException { + throw new NotImplementedException(); + } + + @Override + public RefreshNodesResourcesResponse refreshNodesResources(RefreshNodesResourcesRequest request) + throws YarnException, IOException { + throw new NotImplementedException(); + } + + @Override + public AddToClusterNodeLabelsResponse addToClusterNodeLabels( + AddToClusterNodeLabelsRequest request) + throws YarnException, IOException { + throw new NotImplementedException(); + } + + @Override + public RemoveFromClusterNodeLabelsResponse removeFromClusterNodeLabels( + RemoveFromClusterNodeLabelsRequest request) + throws YarnException, IOException { + throw new NotImplementedException(); + } + + @Override + public ReplaceLabelsOnNodeResponse replaceLabelsOnNode(ReplaceLabelsOnNodeRequest request) + throws YarnException, IOException { + throw new NotImplementedException(); + } + + @Override + public CheckForDecommissioningNodesResponse checkForDecommissioningNodes( + CheckForDecommissioningNodesRequest checkForDecommissioningNodesRequest) + throws YarnException, IOException { + throw new NotImplementedException(); + } + + @Override + public RefreshClusterMaxPriorityResponse refreshClusterMaxPriority( + RefreshClusterMaxPriorityRequest request) + throws YarnException, IOException { + throw new NotImplementedException(); + } + + @Override + public NodesToAttributesMappingResponse mapAttributesToNodes( + NodesToAttributesMappingRequest request) + throws YarnException, IOException { + throw new NotImplementedException(); + } + + @Override + public String[] getGroupsForUser(String user) throws IOException { + return new String[0]; + } + + @VisibleForTesting + public FederationStateStoreFacade getFederationFacade() { + return federationFacade; + } + + @VisibleForTesting + public ThreadPoolExecutor getExecutorService() { + return executorService; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/RMAdminProtocolMethod.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/RMAdminProtocolMethod.java new file mode 100644 index 00000000000..1a5b038f19c --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/RMAdminProtocolMethod.java @@ -0,0 +1,186 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.router.rmadmin; + +import org.apache.commons.lang3.StringUtils; +import org.apache.commons.lang3.tuple.Pair; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocol; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; +import org.apache.hadoop.yarn.server.federation.utils.FederationMethodWrapper; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.IOException; +import java.lang.reflect.Method; +import java.util.Collection; +import java.util.Map; +import java.util.Set; +import java.util.TreeMap; +import java.util.List; +import java.util.ArrayList; +import java.util.Collections; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.Future; +import java.util.concurrent.ThreadPoolExecutor; + +/** + * Class to define admin method, params and arguments. + */ +public class RMAdminProtocolMethod extends FederationMethodWrapper { + + private static final Logger LOG = + LoggerFactory.getLogger(RMAdminProtocolMethod.class); + + private FederationStateStoreFacade federationFacade; + private FederationRMAdminInterceptor rmAdminInterceptor; + private Configuration configuration; + + public RMAdminProtocolMethod(Class[] pTypes, Object... 
pParams) + throws IOException { + super(pTypes, pParams); + } + + public Collection invokeConcurrent(FederationRMAdminInterceptor interceptor, + Class clazz, String subClusterId) throws YarnException { + this.rmAdminInterceptor = interceptor; + this.federationFacade = FederationStateStoreFacade.getInstance(); + this.configuration = interceptor.getConf(); + if (StringUtils.isNotBlank(subClusterId)) { + return invoke(clazz, subClusterId); + } else { + return invokeConcurrent(clazz); + } + } + + @Override + protected Collection invokeConcurrent(Class clazz) throws YarnException { + String methodName = Thread.currentThread().getStackTrace()[3].getMethodName(); + this.setMethodName(methodName); + + ThreadPoolExecutor executorService = rmAdminInterceptor.getExecutorService(); + + // Get Active SubClusters + Map subClusterInfo = + federationFacade.getSubClusters(true); + Collection subClusterIds = subClusterInfo.keySet(); + + List>> callables = new ArrayList<>(); + List>> futures = new ArrayList<>(); + Map exceptions = new TreeMap<>(); + + // Generate parallel Callable tasks + for (SubClusterId subClusterId : subClusterIds) { + callables.add(() -> { + ResourceManagerAdministrationProtocol protocol = + rmAdminInterceptor.getAdminRMProxyForSubCluster(subClusterId); + Class[] types = this.getTypes(); + Object[] params = this.getParams(); + Method method = ResourceManagerAdministrationProtocol.class.getMethod(methodName, types); + Object result = method.invoke(protocol, params); + return Pair.of(subClusterId, result); + }); + } + + // Get results from multiple threads + Map results = new TreeMap<>(); + try { + futures.addAll(executorService.invokeAll(callables)); + futures.stream().forEach(future -> { + SubClusterId subClusterId = null; + try { + Pair pair = future.get(); + subClusterId = pair.getKey(); + Object result = pair.getValue(); + if (result != null) { + R rResult = clazz.cast(result); + results.put(subClusterId, rResult); + } + } catch (InterruptedException | ExecutionException e) { + Throwable cause = e.getCause(); + LOG.error("Cannot execute {} on {}: {}", methodName, subClusterId, cause.getMessage()); + exceptions.put(subClusterId, e); + } + }); + } catch (InterruptedException e) { + throw new YarnException("invokeConcurrent Failed.", e); + } + + // All sub-clusters return results to be considered successful, + // otherwise an exception will be thrown. + if (exceptions != null && !exceptions.isEmpty()) { + Set subClusterIdSets = exceptions.keySet(); + throw new YarnException("invokeConcurrent Failed, An exception occurred in subClusterIds = " + + StringUtils.join(subClusterIdSets, ",")); + } + + // return result + return results.values(); + } + + /** + * Call the method in the protocol according to the subClusterId. + * + * @param clazz return type + * @param subClusterId subCluster Id + * @param Generic R + * @return response collection. + * @throws YarnException yarn exception. + */ + protected Collection invoke(Class clazz, String subClusterId) throws YarnException { + + // Get the method name to call + String methodName = Thread.currentThread().getStackTrace()[3].getMethodName(); + this.setMethodName(methodName); + + // Get Active SubClusters + Map subClusterInfoMap = + federationFacade.getSubClusters(true); + + // According to subCluster of string type, convert to SubClusterId type + SubClusterId subClusterIdKey = SubClusterId.newInstance(subClusterId); + + // If the provided subCluster is not Active or does not exist, + // an exception will be returned directly. 
+ if (!subClusterInfoMap.containsKey(subClusterIdKey)) { + throw new YarnException("subClusterId = " + subClusterId + " is not an active subCluster."); + } + + // Call the method in the protocol and convert it according to clazz. + try { + ResourceManagerAdministrationProtocol protocol = + rmAdminInterceptor.getAdminRMProxyForSubCluster(subClusterIdKey); + Class[] types = this.getTypes(); + Object[] params = this.getParams(); + Method method = ResourceManagerAdministrationProtocol.class.getMethod(methodName, types); + Object result = method.invoke(protocol, params); + if (result != null) { + return Collections.singletonList(clazz.cast(result)); + } + } catch (Exception e) { + throw new YarnException("invoke Failed, An exception occurred in subClusterId = " + + subClusterId, e); + } + throw new YarnException("invoke Failed, An exception occurred in subClusterId = " + + subClusterId); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/security/RouterDelegationTokenSecretManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/security/RouterDelegationTokenSecretManager.java new file mode 100644 index 00000000000..918bf16e4f4 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/security/RouterDelegationTokenSecretManager.java @@ -0,0 +1,289 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.yarn.server.router.security; + +import org.apache.hadoop.classification.InterfaceAudience.Public; +import org.apache.hadoop.classification.VisibleForTesting; +import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager; +import org.apache.hadoop.security.token.delegation.DelegationKey; +import org.apache.hadoop.util.ExitUtil; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; +import org.apache.hadoop.yarn.security.client.YARNDelegationTokenIdentifier; +import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKey; +import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyResponse; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; + +/** + * A Router specific delegation token secret manager. + * The secret manager is responsible for generating and accepting the password + * for each token. + */ +public class RouterDelegationTokenSecretManager + extends AbstractDelegationTokenSecretManager { + + private static final Logger LOG = LoggerFactory + .getLogger(RouterDelegationTokenSecretManager.class); + + private FederationStateStoreFacade federationFacade; + + /** + * Create a Router Secret manager. + * + * @param delegationKeyUpdateInterval the number of milliseconds for rolling + * new secret keys. + * @param delegationTokenMaxLifetime the maximum lifetime of the delegation + * tokens in milliseconds + * @param delegationTokenRenewInterval how often the tokens must be renewed + * in milliseconds + * @param delegationTokenRemoverScanInterval how often the tokens are scanned + */ + public RouterDelegationTokenSecretManager(long delegationKeyUpdateInterval, + long delegationTokenMaxLifetime, long delegationTokenRenewInterval, + long delegationTokenRemoverScanInterval) { + super(delegationKeyUpdateInterval, delegationTokenMaxLifetime, + delegationTokenRenewInterval, delegationTokenRemoverScanInterval); + this.federationFacade = FederationStateStoreFacade.getInstance(); + } + + @Override + public RMDelegationTokenIdentifier createIdentifier() { + return new RMDelegationTokenIdentifier(); + } + + private boolean shouldIgnoreException(Exception e) { + return !running && e.getCause() instanceof InterruptedException; + } + + /** + * The Router Supports Store the New Master Key. + * During this Process, Facade will call the specific StateStore to store the MasterKey. + * + * @param newKey DelegationKey + */ + @Override + public void storeNewMasterKey(DelegationKey newKey) { + try { + federationFacade.storeNewMasterKey(newKey); + } catch (Exception e) { + if (!shouldIgnoreException(e)) { + LOG.error("Error in storing master key with KeyID: {}.", newKey.getKeyId()); + ExitUtil.terminate(1, e); + } + } + } + + /** + * The Router Supports Remove the master key. + * During this Process, Facade will call the specific StateStore to remove the MasterKey. 
+ * + * @param delegationKey DelegationKey + */ + @Override + public void removeStoredMasterKey(DelegationKey delegationKey) { + try { + federationFacade.removeStoredMasterKey(delegationKey); + } catch (Exception e) { + if (!shouldIgnoreException(e)) { + LOG.error("Error in removing master key with KeyID: {}.", delegationKey.getKeyId()); + ExitUtil.terminate(1, e); + } + } + } + + /** + * The Router Supports Store new Token. + * + * @param identifier RMDelegationToken + * @param renewDate renewDate + * @throws IOException IO exception occurred. + */ + @Override + public void storeNewToken(RMDelegationTokenIdentifier identifier, + long renewDate) throws IOException { + try { + federationFacade.storeNewToken(identifier, renewDate); + } catch (Exception e) { + if (!shouldIgnoreException(e)) { + LOG.error("Error in storing RMDelegationToken with sequence number: {}.", + identifier.getSequenceNumber()); + ExitUtil.terminate(1, e); + } + } + } + + /** + * The Router Supports Update Token. + * + * @param id RMDelegationToken + * @param renewDate renewDate + * @throws IOException IO exception occurred + */ + @Override + public void updateStoredToken(RMDelegationTokenIdentifier id, long renewDate) throws IOException { + try { + federationFacade.updateStoredToken(id, renewDate); + } catch (Exception e) { + if (!shouldIgnoreException(e)) { + LOG.error("Error in updating persisted RMDelegationToken with sequence number: {}.", + id.getSequenceNumber()); + ExitUtil.terminate(1, e); + } + } + } + + /** + * The Router Supports Remove Token. + * + * @param identifier Delegation Token + * @throws IOException IO exception occurred. + */ + @Override + public void removeStoredToken(RMDelegationTokenIdentifier identifier) throws IOException { + try { + federationFacade.removeStoredToken(identifier); + } catch (Exception e) { + if (!shouldIgnoreException(e)) { + LOG.error("Error in removing RMDelegationToken with sequence number: {}", + identifier.getSequenceNumber()); + ExitUtil.terminate(1, e); + } + } + } + + /** + * The Router supports obtaining the DelegationKey stored in the Router StateStote + * according to the DelegationKey. + * + * @param key Param DelegationKey + * @return Delegation Token + * @throws YarnException An internal conversion error occurred when getting the Token + * @throws IOException IO exception occurred + */ + public DelegationKey getMasterKeyByDelegationKey(DelegationKey key) + throws YarnException, IOException { + try { + RouterMasterKeyResponse response = federationFacade.getMasterKeyByDelegationKey(key); + RouterMasterKey masterKey = response.getRouterMasterKey(); + ByteBuffer keyByteBuf = masterKey.getKeyBytes(); + byte[] keyBytes = new byte[keyByteBuf.remaining()]; + keyByteBuf.get(keyBytes); + DelegationKey delegationKey = + new DelegationKey(masterKey.getKeyId(), masterKey.getExpiryDate(), keyBytes); + return delegationKey; + } catch (IOException ex) { + throw new IOException(ex); + } catch (YarnException ex) { + throw new YarnException(ex); + } + } + + /** + * Get RMDelegationTokenIdentifier according to RouterStoreToken. 
+ * + * @param identifier RMDelegationTokenIdentifier + * @return RMDelegationTokenIdentifier + * @throws YarnException An internal conversion error occurred when getting the Token + * @throws IOException IO exception occurred + */ + public RMDelegationTokenIdentifier getTokenByRouterStoreToken( + RMDelegationTokenIdentifier identifier) throws YarnException, IOException { + try { + RouterRMTokenResponse response = federationFacade.getTokenByRouterStoreToken(identifier); + YARNDelegationTokenIdentifier responseIdentifier = + response.getRouterStoreToken().getTokenIdentifier(); + return (RMDelegationTokenIdentifier) responseIdentifier; + } catch (Exception ex) { + throw new YarnException(ex); + } + } + + public void setFederationFacade(FederationStateStoreFacade federationFacade) { + this.federationFacade = federationFacade; + } + + @Public + @VisibleForTesting + public int getLatestDTSequenceNumber() { + return delegationTokenSequenceNumber; + } + + @Public + @VisibleForTesting + public synchronized Set getAllMasterKeys() { + return new HashSet<>(allKeys.values()); + } + + @Public + @VisibleForTesting + public synchronized Map getAllTokens() { + Map allTokens = new HashMap<>(); + for (Map.Entry entry : currentTokens.entrySet()) { + RMDelegationTokenIdentifier keyIdentifier = entry.getKey(); + DelegationTokenInformation tokenInformation = entry.getValue(); + allTokens.put(keyIdentifier, tokenInformation.getRenewDate()); + } + return allTokens; + } + + public long getRenewDate(RMDelegationTokenIdentifier ident) + throws InvalidToken { + DelegationTokenInformation info = currentTokens.get(ident); + if (info == null) { + throw new InvalidToken("token (" + ident.toString() + + ") can't be found in cache"); + } + return info.getRenewDate(); + } + + @Override + protected synchronized int incrementDelegationTokenSeqNum() { + return federationFacade.incrementDelegationTokenSeqNum(); + } + + @Override + protected synchronized int getDelegationTokenSeqNum() { + return federationFacade.getDelegationTokenSeqNum(); + } + + @Override + protected synchronized void setDelegationTokenSeqNum(int seqNum) { + federationFacade.setDelegationTokenSeqNum(seqNum); + } + + @Override + protected synchronized int getCurrentKeyId() { + return federationFacade.getCurrentKeyId(); + } + + @Override + protected synchronized int incrementCurrentKeyId() { + return federationFacade.incrementCurrentKeyId(); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/security/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/security/package-info.java new file mode 100644 index 00000000000..16a7488c071 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/security/package-info.java @@ -0,0 +1,19 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.router.security; \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AboutBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AboutBlock.java index a8a6e6bbac9..878ac75d1c7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AboutBlock.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AboutBlock.java @@ -18,15 +18,11 @@ package org.apache.hadoop.yarn.server.router.webapp; -import com.sun.jersey.api.client.Client; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.util.StringUtils; -import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts; -import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo; +import org.apache.commons.lang3.time.DateFormatUtils; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; import org.apache.hadoop.yarn.server.router.Router; -import org.apache.hadoop.yarn.webapp.util.WebAppUtils; -import org.apache.hadoop.yarn.webapp.view.HtmlBlock; +import org.apache.hadoop.yarn.server.router.webapp.dao.RouterInfo; import org.apache.hadoop.yarn.webapp.view.InfoBlock; import com.google.inject.Inject; @@ -34,62 +30,57 @@ import com.google.inject.Inject; /** * About block for the Router Web UI. */ -public class AboutBlock extends HtmlBlock { - - private static final long BYTES_IN_MB = 1024 * 1024; +public class AboutBlock extends RouterBlock { private final Router router; @Inject AboutBlock(Router router, ViewContext ctx) { - super(ctx); + super(router, ctx); this.router = router; } @Override protected void render(Block html) { - Configuration conf = this.router.getConfig(); - String webAppAddress = WebAppUtils.getRouterWebAppURLWithScheme(conf); - Client client = RouterWebServiceUtil.createJerseyClient(conf); - ClusterMetricsInfo metrics = RouterWebServiceUtil - .genericForward(webAppAddress, null, ClusterMetricsInfo.class, - HTTPMethods.GET, - RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.METRICS, null, null, - conf, client); - boolean isEnabled = conf.getBoolean( - YarnConfiguration.FEDERATION_ENABLED, - YarnConfiguration.DEFAULT_FEDERATION_ENABLED); - info("Cluster Status"). - __("Federation Enabled", isEnabled). - __("Applications Submitted", metrics.getAppsSubmitted()). - __("Applications Pending", metrics.getAppsPending()). - __("Applications Running", metrics.getAppsRunning()). - __("Applications Failed", metrics.getAppsFailed()). - __("Applications Killed", metrics.getAppsKilled()). - __("Applications Completed", metrics.getAppsCompleted()). - __("Containers Allocated", metrics.getContainersAllocated()). 
- __("Containers Reserved", metrics.getReservedContainers()). - __("Containers Pending", metrics.getPendingContainers()). - __("Available Memory", - StringUtils.byteDesc(metrics.getAvailableMB() * BYTES_IN_MB)). - __("Allocated Memory", - StringUtils.byteDesc(metrics.getAllocatedMB() * BYTES_IN_MB)). - __("Reserved Memory", - StringUtils.byteDesc(metrics.getReservedMB() * BYTES_IN_MB)). - __("Total Memory", - StringUtils.byteDesc(metrics.getTotalMB() * BYTES_IN_MB)). - __("Available VirtualCores", metrics.getAvailableVirtualCores()). - __("Allocated VirtualCores", metrics.getAllocatedVirtualCores()). - __("Reserved VirtualCores", metrics.getReservedVirtualCores()). - __("Total VirtualCores", metrics.getTotalVirtualCores()). - __("Active Nodes", metrics.getActiveNodes()). - __("Lost Nodes", metrics.getLostNodes()). - __("Available Nodes", metrics.getDecommissionedNodes()). - __("Unhealthy Nodes", metrics.getUnhealthyNodes()). - __("Rebooted Nodes", metrics.getRebootedNodes()). - __("Total Nodes", metrics.getTotalNodes()); + boolean isEnabled = isYarnFederationEnabled(); + // If Yarn Federation is not enabled, the user needs to be prompted. + initUserHelpInformationDiv(html, isEnabled); + + // Metrics Overview Table + html.__(MetricsOverviewTable.class); + + // Init Yarn Router Basic Information + initYarnRouterBasicInformation(isEnabled); + + // InfoBlock html.__(InfoBlock.class); } + + /** + * Init Yarn Router Basic Infomation. + * @param isEnabled true, federation is enabled; false, federation is not enabled. + */ + private void initYarnRouterBasicInformation(boolean isEnabled) { + FederationStateStoreFacade facade = FederationStateStoreFacade.getInstance(); + RouterInfo routerInfo = new RouterInfo(router); + String lastStartTime = + DateFormatUtils.format(routerInfo.getStartedOn(), DATE_PATTERN); + try { + info("Yarn Router Overview"). + __("Federation Enabled:", String.valueOf(isEnabled)). + __("Router ID:", routerInfo.getClusterId()). + __("Router state:", routerInfo.getState()). + __("Router SubCluster Count:", facade.getSubClusters(true).size()). + __("Router RMStateStore:", routerInfo.getRouterStateStore()). + __("Router started on:", lastStartTime). + __("Router version:", routerInfo.getRouterBuildVersion() + + " on " + routerInfo.getRouterVersionBuiltOn()). 
+ __("Hadoop version:", routerInfo.getHadoopBuildVersion() + + " on " + routerInfo.getHadoopVersionBuiltOn()); + } catch (YarnException e) { + LOG.error("initYarnRouterBasicInformation error.", e); + } + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AbstractRESTRequestInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AbstractRESTRequestInterceptor.java index f1919c2e5a1..ad79addfca4 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AbstractRESTRequestInterceptor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AbstractRESTRequestInterceptor.java @@ -19,6 +19,10 @@ package org.apache.hadoop.yarn.server.router.webapp; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService; + +import org.apache.hadoop.yarn.server.router.RouterServerUtil; /** * Extends the RequestInterceptor class and provides common functionality which @@ -29,6 +33,8 @@ public abstract class AbstractRESTRequestInterceptor private Configuration conf; private RESTRequestInterceptor nextInterceptor; + private UserGroupInformation user = null; + private RouterClientRMService routerClientRMService = null; /** * Sets the {@link RESTRequestInterceptor} in the chain. @@ -62,9 +68,10 @@ public abstract class AbstractRESTRequestInterceptor * Initializes the {@link RESTRequestInterceptor}. 
*/ @Override - public void init(String user) { + public void init(String userName) { + this.user = RouterServerUtil.setupUser(userName); if (this.nextInterceptor != null) { - this.nextInterceptor.init(user); + this.nextInterceptor.init(userName); } } @@ -86,4 +93,17 @@ public abstract class AbstractRESTRequestInterceptor return this.nextInterceptor; } + public UserGroupInformation getUser() { + return user; + } + + @Override + public RouterClientRMService getRouterClientRMService() { + return routerClientRMService; + } + + @Override + public void setRouterClientRMService(RouterClientRMService routerClientRMService) { + this.routerClientRMService = routerClientRMService; + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java index 87f20c81bb8..7f277ae3ae7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java @@ -21,11 +21,18 @@ package org.apache.hadoop.yarn.server.router.webapp; import static org.apache.commons.text.StringEscapeUtils.escapeHtml4; import static org.apache.commons.text.StringEscapeUtils.escapeEcmaScript; import static org.apache.hadoop.yarn.util.StringHelper.join; +import static org.apache.hadoop.yarn.webapp.YarnWebParams.APP_SC; +import static org.apache.hadoop.yarn.webapp.YarnWebParams.APP_STATE; import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR; import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR_VALUE; import com.sun.jersey.api.client.Client; +import org.apache.commons.collections.CollectionUtils; +import org.apache.commons.lang3.StringUtils; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppsInfo; @@ -34,34 +41,102 @@ import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet; import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet.TABLE; import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet.TBODY; import org.apache.hadoop.yarn.webapp.util.WebAppUtils; -import org.apache.hadoop.yarn.webapp.view.HtmlBlock; import com.google.inject.Inject; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.stream.Collectors; + /** * Applications block for the Router Web UI. 
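 * When federation is enabled, the block renders the Router-aggregated application
 * list; if the request carries a SubCluster name ({@code APP_SC}) the list is
 * fetched from that SubCluster's RM web service instead, optionally filtered by
 * an application state ({@code APP_STATE}).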
*/ -public class AppsBlock extends HtmlBlock { +public class AppsBlock extends RouterBlock { + private final Router router; + private final Configuration conf; @Inject AppsBlock(Router router, ViewContext ctx) { - super(ctx); + super(router, ctx); this.router = router; + this.conf = this.router.getConfig(); } @Override protected void render(Block html) { - // Get the applications from the Resource Managers - Configuration conf = this.router.getConfig(); - Client client = RouterWebServiceUtil.createJerseyClient(conf); - String webAppAddress = WebAppUtils.getRouterWebAppURLWithScheme(conf); - AppsInfo apps = RouterWebServiceUtil - .genericForward(webAppAddress, null, AppsInfo.class, HTTPMethods.GET, - RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.APPS, null, null, conf, - client); - setTitle("Applications"); + boolean isEnabled = isYarnFederationEnabled(); + + // Get subClusterName + String subClusterName = $(APP_SC); + String reqState = $(APP_STATE); + + // We will try to get the subClusterName. + // If the subClusterName is not empty, + // it means that we need to get the Node list of a subCluster. + AppsInfo appsInfo = null; + if (subClusterName != null && !subClusterName.isEmpty()) { + initSubClusterMetricsOverviewTable(html, subClusterName); + appsInfo = getSubClusterAppsInfo(subClusterName, reqState); + } else { + // Metrics Overview Table + html.__(MetricsOverviewTable.class); + appsInfo = getYarnFederationAppsInfo(isEnabled); + } + + initYarnFederationAppsOfCluster(appsInfo, html); + } + + private static String escape(String str) { + return escapeEcmaScript(escapeHtml4(str)); + } + + private AppsInfo getYarnFederationAppsInfo(boolean isEnabled) { + if (isEnabled) { + String webAddress = WebAppUtils.getRouterWebAppURLWithScheme(this.conf); + return getSubClusterAppsInfoByWebAddress(webAddress, StringUtils.EMPTY); + } + return null; + } + + private AppsInfo getSubClusterAppsInfo(String subCluster, String states) { + try { + SubClusterId subClusterId = SubClusterId.newInstance(subCluster); + FederationStateStoreFacade facade = FederationStateStoreFacade.getInstance(); + SubClusterInfo subClusterInfo = facade.getSubCluster(subClusterId); + + if (subClusterInfo != null) { + // Prepare webAddress + String webAddress = subClusterInfo.getRMWebServiceAddress(); + String herfWebAppAddress = ""; + if (webAddress != null && !webAddress.isEmpty()) { + herfWebAppAddress = WebAppUtils.getHttpSchemePrefix(conf) + webAddress; + return getSubClusterAppsInfoByWebAddress(herfWebAppAddress, states); + } + } + } catch (Exception e) { + LOG.error("get AppsInfo From SubCluster = {} error.", subCluster, e); + } + return null; + } + + private AppsInfo getSubClusterAppsInfoByWebAddress(String webAddress, String states) { + Client client = RouterWebServiceUtil.createJerseyClient(conf); + Map queryParams = new HashMap<>(); + if (StringUtils.isNotBlank(states)) { + queryParams.put("states", new String[]{states}); + } + AppsInfo apps = RouterWebServiceUtil + .genericForward(webAddress, null, AppsInfo.class, HTTPMethods.GET, + RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.APPS, null, queryParams, conf, + client); + client.destroy(); + return apps; + } + + private void initYarnFederationAppsOfCluster(AppsInfo appsInfo, Block html) { TBODY> tbody = html.table("#apps").thead() .tr() @@ -81,45 +156,18 @@ public class AppsBlock extends HtmlBlock { // Render the applications StringBuilder appsTableData = new StringBuilder("[\n"); - for (AppInfo app : apps.getApps()) { - try { - String percent = String.format("%.1f", 
app.getProgress() * 100.0F); - String trackingURL = - app.getTrackingUrl() == null ? "#" : app.getTrackingUrl(); - // AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js - appsTableData.append("[\"") - .append("") - .append(app.getAppId()).append("\",\"") - .append(escape(app.getUser())).append("\",\"") - .append(escape(app.getName())).append("\",\"") - .append(escape(app.getApplicationType())).append("\",\"") - .append(escape(app.getQueue())).append("\",\"") - .append(String.valueOf(app.getPriority())).append("\",\"") - .append(app.getStartTime()).append("\",\"") - .append(app.getFinishTime()).append("\",\"") - .append(app.getState()).append("\",\"") - .append(app.getFinalStatus()).append("\",\"") - // Progress bar - .append("
    ").append("
    ") - // History link - .append("\",\"") - .append("History").append(""); - appsTableData.append("\"],\n"); + if (appsInfo != null && CollectionUtils.isNotEmpty(appsInfo.getApps())) { - } catch (Exception e) { - LOG.info( - "Cannot add application {}: {}", app.getAppId(), e.getMessage()); + List appInfoList = + appsInfo.getApps().stream().map(this::parseAppInfoData).collect(Collectors.toList()); + + if (CollectionUtils.isNotEmpty(appInfoList)) { + String formattedAppInfo = StringUtils.join(appInfoList, ","); + appsTableData.append(formattedAppInfo); } } - if (appsTableData.charAt(appsTableData.length() - 2) == ',') { - appsTableData.delete(appsTableData.length() - 2, - appsTableData.length() - 1); - } + appsTableData.append("]"); html.script().$type("text/javascript") .__("var appsTableData=" + appsTableData).__(); @@ -127,7 +175,39 @@ public class AppsBlock extends HtmlBlock { tbody.__().__(); } - private static String escape(String str) { - return escapeEcmaScript(escapeHtml4(str)); + private String parseAppInfoData(AppInfo app) { + StringBuilder appsDataBuilder = new StringBuilder(); + try { + String percent = String.format("%.1f", app.getProgress() * 100.0F); + String trackingURL = app.getTrackingUrl() == null ? "#" : app.getTrackingUrl(); + + // AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js + appsDataBuilder.append("[\"") + .append("") + .append(app.getAppId()).append("\",\"") + .append(escape(app.getUser())).append("\",\"") + .append(escape(app.getName())).append("\",\"") + .append(escape(app.getApplicationType())).append("\",\"") + .append(escape(app.getQueue())).append("\",\"") + .append(app.getPriority()).append("\",\"") + .append(app.getStartTime()).append("\",\"") + .append(app.getFinishTime()).append("\",\"") + .append(app.getState()).append("\",\"") + .append(app.getFinalStatus()).append("\",\"") + // Progress bar + .append("
    ").append("
    ") + // History link + .append("\",\"") + .append("History").append(""); + appsDataBuilder.append("\"]\n"); + + } catch (Exception e) { + LOG.warn("Cannot add application {}: {}", app.getAppId(), e.getMessage()); + } + return appsDataBuilder.toString(); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsPage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsPage.java index 12d0b5b4ede..f820aed05f2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsPage.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsPage.java @@ -20,11 +20,13 @@ package org.apache.hadoop.yarn.server.router.webapp; import static org.apache.hadoop.yarn.util.StringHelper.sjoin; import static org.apache.hadoop.yarn.webapp.YarnWebParams.APP_STATE; +import static org.apache.hadoop.yarn.webapp.YarnWebParams.APP_SC; import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES; import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES_ID; import static org.apache.hadoop.yarn.webapp.view.JQueryUI.initID; import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit; +import org.apache.commons.lang3.StringUtils; import org.apache.hadoop.yarn.webapp.SubView; class AppsPage extends RouterView { @@ -37,9 +39,14 @@ class AppsPage extends RouterView { setTableStyles(html, "apps", ".queue {width:6em}", ".ui {width:8em}"); // Set the correct title. + String subClusterName = $(APP_SC); String reqState = $(APP_STATE); - reqState = (reqState == null || reqState.isEmpty() ? "All" : reqState); - setTitle(sjoin(reqState, "Applications")); + + if(StringUtils.isBlank(subClusterName)){ + subClusterName = "Federation "; + } + reqState = (StringUtils.isBlank(reqState) ? 
"All" : reqState); + setTitle(sjoin(subClusterName, reqState, "Applications")); } private String appsTableInit() { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java index c07056ce8a1..918865da467 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java @@ -28,6 +28,7 @@ import javax.servlet.http.HttpServletResponse; import javax.ws.rs.core.Response; import com.sun.jersey.api.client.Client; +import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.security.authorize.AuthorizationException; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; @@ -526,10 +527,10 @@ public class DefaultRequestInterceptorREST @Override public RMQueueAclInfo checkUserAccessToQueue(String queue, String username, - String queueAclType, HttpServletRequest hsr) { + String queueAclType, HttpServletRequest hsr) throws AuthorizationException { return RouterWebServiceUtil.genericForward(webAppAddress, hsr, RMQueueAclInfo.class, HTTPMethods.GET, - RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.QUEUES + "/" + queue + RMWSConsts.RM_WEB_SERVICE_PATH + "/" + RMWSConsts.QUEUES + "/" + queue + "/access", null, null, getConf(), client); } @@ -602,4 +603,17 @@ public class DefaultRequestInterceptorREST + containerId + "/" + RMWSConsts.SIGNAL + "/" + command, null, null, getConf(), client); } + + @VisibleForTesting + public Client getClient() { + return client; + } + + @Override + public NodeLabelsInfo getRMNodeLabels(HttpServletRequest hsr) { + return RouterWebServiceUtil.genericForward(webAppAddress, hsr, + NodeLabelsInfo.class, HTTPMethods.GET, + RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.GET_RM_NODE_LABELS, + null, null, getConf(), client); + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationBlock.java index 70976d6221d..44f8d7407bf 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationBlock.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationBlock.java @@ -20,50 +20,42 @@ package org.apache.hadoop.yarn.server.router.webapp; import java.io.StringReader; import java.util.ArrayList; -import java.util.Collections; -import java.util.Comparator; import java.util.List; import java.util.Map; import java.util.HashMap; +import java.util.Date; import com.google.gson.Gson; -import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.util.StringUtils; -import org.apache.commons.lang3.time.DateFormatUtils; -import 
org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; -import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo; import org.apache.hadoop.yarn.server.router.Router; import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet; import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet.TABLE; import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet.TBODY; import org.apache.hadoop.yarn.webapp.util.WebAppUtils; -import org.apache.hadoop.yarn.webapp.view.HtmlBlock; import com.google.inject.Inject; import com.sun.jersey.api.json.JSONConfiguration; import com.sun.jersey.api.json.JSONJAXBContext; import com.sun.jersey.api.json.JSONUnmarshaller; -class FederationBlock extends HtmlBlock { +class FederationBlock extends RouterBlock { private final Router router; @Inject FederationBlock(ViewContext ctx, Router router) { - super(ctx); + super(router, ctx); this.router = router; } @Override public void render(Block html) { - Configuration conf = this.router.getConfig(); - boolean isEnabled = conf.getBoolean( - YarnConfiguration.FEDERATION_ENABLED, - YarnConfiguration.DEFAULT_FEDERATION_ENABLED); + boolean isEnabled = isYarnFederationEnabled(); // init Html Page Federation initHtmlPageFederation(html, isEnabled); @@ -121,23 +113,6 @@ class FederationBlock extends HtmlBlock { private void initHtmlPageFederation(Block html, boolean isEnabled) { List> lists = new ArrayList<>(); - // If Yarn Federation is not enabled, the user needs to be prompted. - if (!isEnabled) { - html.style(".alert {padding: 15px; margin-bottom: 20px; " + - " border: 1px solid transparent; border-radius: 4px;}"); - html.style(".alert-dismissable {padding-right: 35px;}"); - html.style(".alert-info {color: #856404;background-color: #fff3cd;border-color: #ffeeba;}"); - - Hamlet.DIV div = html.div("#div_id").$class("alert alert-dismissable alert-info"); - div.p().$style("color:red").__("Federation is not Enabled.").__() - .p().__() - .p().__("We can refer to the following documents to configure Yarn Federation. ").__() - .p().__() - .a("https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/Federation.html", - "Hadoop: YARN Federation"). 
- __(); - } - // Table header TBODY> tbody = html.table("#rms").$class("cell-border").$style("width:100%").thead().tr() @@ -150,16 +125,9 @@ class FederationBlock extends HtmlBlock { .__().__().tbody(); try { - // Binding to the FederationStateStore - FederationStateStoreFacade facade = FederationStateStoreFacade.getInstance(); - - Map subClustersInfo = facade.getSubClusters(true); // Sort the SubClusters - List subclusters = new ArrayList<>(); - subclusters.addAll(subClustersInfo.values()); - Comparator cmp = Comparator.comparing(o -> o.getSubClusterId()); - Collections.sort(subclusters, cmp); + List subclusters = getSubClusterInfoList(); for (SubClusterInfo subcluster : subclusters) { @@ -182,32 +150,51 @@ class FederationBlock extends HtmlBlock { ClusterMetricsInfo subClusterInfo = getClusterMetricsInfo(capability); // Prepare LastStartTime & LastHeartBeat - String lastStartTime = - DateFormatUtils.format(subcluster.getLastStartTime(), DATE_PATTERN); - String lastHeartBeat = - DateFormatUtils.format(subcluster.getLastHeartBeat(), DATE_PATTERN); + Date lastStartTime = new Date(subcluster.getLastStartTime()); + Date lastHeartBeat = new Date(subcluster.getLastHeartBeat()); // Prepare Resource long totalMB = subClusterInfo.getTotalMB(); String totalMBDesc = StringUtils.byteDesc(totalMB * BYTES_IN_MB); long totalVirtualCores = subClusterInfo.getTotalVirtualCores(); - String resources = String.format("", totalMBDesc, totalVirtualCores); + String resources = String.format("", totalMBDesc, totalVirtualCores); // Prepare Node long totalNodes = subClusterInfo.getTotalNodes(); long activeNodes = subClusterInfo.getActiveNodes(); - String nodes = String.format("", totalNodes, activeNodes); + String nodes = String.format("", totalNodes, activeNodes); // Prepare HTML Table + String stateStyle = "color:#dc3545;font-weight:bolder"; + SubClusterState state = subcluster.getState(); + if (SubClusterState.SC_RUNNING == state) { + stateStyle = "color:#28a745;font-weight:bolder"; + } + tbody.tr().$id(subClusterIdText) .td().$class("details-control").a(herfWebAppAddress, subClusterIdText).__() - .td(subcluster.getState().name()) - .td(lastStartTime) - .td(lastHeartBeat) + .td().$style(stateStyle).__(state.name()).__() + .td().__(lastStartTime).__() + .td().__(lastHeartBeat).__() .td(resources) .td(nodes) .__(); + // Formatted memory information + long allocatedMB = subClusterInfo.getAllocatedMB(); + String allocatedMBDesc = StringUtils.byteDesc(allocatedMB * BYTES_IN_MB); + long availableMB = subClusterInfo.getAvailableMB(); + String availableMBDesc = StringUtils.byteDesc(availableMB * BYTES_IN_MB); + long pendingMB = subClusterInfo.getPendingMB(); + String pendingMBDesc = StringUtils.byteDesc(pendingMB * BYTES_IN_MB); + long reservedMB = subClusterInfo.getReservedMB(); + String reservedMBDesc = StringUtils.byteDesc(reservedMB * BYTES_IN_MB); + + subclusterMap.put("totalmemory", totalMBDesc); + subclusterMap.put("allocatedmemory", allocatedMBDesc); + subclusterMap.put("availablememory", availableMBDesc); + subclusterMap.put("pendingmemory", pendingMBDesc); + subclusterMap.put("reservedmemory", reservedMBDesc); subclusterMap.put("subcluster", subClusterId.getId()); subclusterMap.put("capability", capability); lists.add(subclusterMap); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java index b8c55cf6ee9..69dba5b07e6 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java @@ -21,20 +21,20 @@ package org.apache.hadoop.yarn.server.router.webapp; import java.io.IOException; import java.lang.reflect.Method; import java.security.Principal; +import java.security.PrivilegedExceptionAction; import java.util.ArrayList; import java.util.Collection; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Map.Entry; -import java.util.Random; import java.util.Set; import java.util.concurrent.Callable; import java.util.concurrent.CompletionService; import java.util.concurrent.ExecutorCompletionService; import java.util.concurrent.ExecutorService; import java.util.concurrent.Future; -import java.util.stream.Collectors; +import java.util.concurrent.TimeUnit; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletRequestWrapper; @@ -44,28 +44,45 @@ import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.Status; import org.apache.commons.lang3.NotImplementedException; +import org.apache.commons.lang3.StringUtils; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.impl.prefetch.Validate; +import org.apache.hadoop.io.Text; +import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.authorize.AuthorizationException; +import org.apache.hadoop.security.token.Token; import org.apache.hadoop.util.ReflectionUtils; import org.apache.hadoop.util.Sets; +import org.apache.hadoop.util.Time; import org.apache.hadoop.util.concurrent.HadoopExecutors; +import org.apache.hadoop.yarn.api.protocolrecords.GetDelegationTokenRequest; +import org.apache.hadoop.yarn.api.protocolrecords.GetDelegationTokenResponse; +import org.apache.hadoop.yarn.api.protocolrecords.RenewDelegationTokenRequest; +import org.apache.hadoop.yarn.api.protocolrecords.RenewDelegationTokenResponse; +import org.apache.hadoop.yarn.api.protocolrecords.CancelDelegationTokenRequest; +import org.apache.hadoop.yarn.api.protocolrecords.CancelDelegationTokenResponse; +import org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionRequest; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext; import org.apache.hadoop.yarn.api.records.ContainerId; import org.apache.hadoop.yarn.api.records.NodeLabel; import org.apache.hadoop.yarn.api.records.ReservationId; +import org.apache.hadoop.yarn.api.records.ReservationDefinition; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils; import org.apache.hadoop.yarn.server.federation.policies.RouterPolicyFacade; +import org.apache.hadoop.yarn.server.federation.policies.exceptions.FederationPolicyException; import 
org.apache.hadoop.yarn.server.federation.policies.exceptions.FederationPolicyInitializationException; import org.apache.hadoop.yarn.server.federation.resolver.SubClusterResolver; -import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster; +import org.apache.hadoop.yarn.server.federation.retry.FederationActionRetry; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; import org.apache.hadoop.yarn.server.resourcemanager.webapp.NodeIDsInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts; import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppUtil; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ActivitiesInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppActivitiesInfo; @@ -98,10 +115,18 @@ import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceOptionIn import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.BulkActivitiesInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDefinitionInfo; import org.apache.hadoop.yarn.server.router.RouterMetrics; import org.apache.hadoop.yarn.server.router.RouterServerUtil; import org.apache.hadoop.yarn.server.router.clientrm.ClientMethod; +import org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService; +import org.apache.hadoop.yarn.server.router.security.RouterDelegationTokenSecretManager; import org.apache.hadoop.yarn.server.router.webapp.cache.RouterAppInfoCacheKey; +import org.apache.hadoop.yarn.server.router.webapp.dao.FederationBulkActivitiesInfo; +import org.apache.hadoop.yarn.server.router.webapp.dao.FederationRMQueueAclInfo; +import org.apache.hadoop.yarn.server.router.webapp.dao.SubClusterResult; +import org.apache.hadoop.yarn.server.router.webapp.dao.FederationSchedulerTypeInfo; +import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.server.webapp.dao.AppAttemptInfo; import org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo; import org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo; @@ -117,6 +142,9 @@ import org.slf4j.LoggerFactory; import org.apache.hadoop.classification.VisibleForTesting; import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; +import static org.apache.hadoop.yarn.server.router.webapp.RouterWebServiceUtil.extractToken; +import static org.apache.hadoop.yarn.server.router.webapp.RouterWebServiceUtil.getKerberosUserGroupInformation; + /** * Extends the {@code AbstractRESTRequestInterceptor} class and provides an * implementation for federation of YARN RM and scaling an application across @@ -130,13 +158,14 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { private int numSubmitRetries; private FederationStateStoreFacade federationFacade; - private Random rand; private RouterPolicyFacade policyFacade; private RouterMetrics routerMetrics; private final Clock clock = new MonotonicClock(); private boolean returnPartialReport; private boolean appInfosCacheEnabled; private int appInfosCacheCount; + private boolean allowPartialResult; + private long submitIntervalTime; private Map interceptors; private LRUCacheHashMap 
appInfosCaches; @@ -148,8 +177,10 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { @Override public void init(String user) { + + super.init(user); + federationFacade = FederationStateStoreFacade.getInstance(); - rand = new Random(); final Configuration conf = this.getConf(); @@ -187,24 +218,14 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { YarnConfiguration.DEFAULT_ROUTER_APPSINFO_CACHED_COUNT); appInfosCaches = new LRUCacheHashMap<>(appInfosCacheCount, true); } - } - private SubClusterId getRandomActiveSubCluster( - Map activeSubclusters, - List blackListSubClusters) throws YarnException { + allowPartialResult = conf.getBoolean( + YarnConfiguration.ROUTER_INTERCEPTOR_ALLOW_PARTIAL_RESULT_ENABLED, + YarnConfiguration.DEFAULT_ROUTER_INTERCEPTOR_ALLOW_PARTIAL_RESULT_ENABLED); - if (activeSubclusters == null || activeSubclusters.size() < 1) { - RouterServerUtil.logAndThrowException( - FederationPolicyUtils.NO_ACTIVE_SUBCLUSTER_AVAILABLE, null); - } - Collection keySet = activeSubclusters.keySet(); - FederationPolicyUtils.validateSubClusterAvailability(keySet, blackListSubClusters); - if (blackListSubClusters != null) { - keySet.removeAll(blackListSubClusters); - } - - List list = keySet.stream().collect(Collectors.toList()); - return list.get(rand.nextInt(list.size())); + submitIntervalTime = conf.getTimeDuration( + YarnConfiguration.ROUTER_CLIENTRM_SUBMIT_INTERVAL_TIME, + YarnConfiguration.DEFAULT_CLIENTRM_SUBMIT_INTERVAL_TIME, TimeUnit.MILLISECONDS); } @VisibleForTesting @@ -235,7 +256,8 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { .isAssignableFrom(interceptorClass)) { interceptorInstance = (DefaultRequestInterceptorREST) ReflectionUtils .newInstance(interceptorClass, conf); - + String userName = getUser().getUserName(); + interceptorInstance.init(userName); } else { throw new YarnRuntimeException( "Class: " + interceptorClassName + " not instance of " @@ -293,62 +315,79 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { long startTime = clock.getTime(); - Map subClustersActive; try { - subClustersActive = federationFacade.getSubClusters(true); - } catch (YarnException e) { - routerMetrics.incrAppsFailedCreated(); - return Response.status(Status.INTERNAL_SERVER_ERROR) - .entity(e.getLocalizedMessage()).build(); - } + Map subClustersActive = + federationFacade.getSubClusters(true); - List blacklist = new ArrayList<>(); - - for (int i = 0; i < numSubmitRetries; ++i) { - - SubClusterId subClusterId; - try { - subClusterId = getRandomActiveSubCluster(subClustersActive, blacklist); - } catch (YarnException e) { - routerMetrics.incrAppsFailedCreated(); - return Response.status(Status.SERVICE_UNAVAILABLE) - .entity(e.getLocalizedMessage()).build(); - } - - LOG.debug("getNewApplication try #{} on SubCluster {}.", i, subClusterId); - - DefaultRequestInterceptorREST interceptor = - getOrCreateInterceptorForSubCluster(subClusterId, - subClustersActive.get(subClusterId).getRMWebServiceAddress()); - Response response = null; - try { - response = interceptor.createNewApplication(hsr); - } catch (Exception e) { - LOG.warn("Unable to create a new ApplicationId in SubCluster {}.", - subClusterId.getId(), e); - } - - if (response != null && - response.getStatus() == HttpServletResponse.SC_OK) { + // We declare blackList and retries. 
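+ // Failed subClusters are added to the blackList inside invokeGetNewApplication, so later retry attempts avoid subClusters that have already failed.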
+ List blackList = new ArrayList<>(); + int actualRetryNums = federationFacade.getRetryNumbers(numSubmitRetries); + Response response = ((FederationActionRetry) (retryCount) -> + invokeGetNewApplication(subClustersActive, blackList, hsr, retryCount)). + runWithRetries(actualRetryNums, submitIntervalTime); + // If the response is not empty and the status is SC_OK, + // this request can be returned directly. + if (response != null && response.getStatus() == HttpServletResponse.SC_OK) { long stopTime = clock.getTime(); routerMetrics.succeededAppsCreated(stopTime - startTime); - return response; - } else { - // Empty response from the ResourceManager. - // Blacklist this subcluster for this request. - blacklist.add(subClusterId); } + } catch (FederationPolicyException e) { + // If a FederationPolicyException is thrown, the service is unavailable. + routerMetrics.incrAppsFailedCreated(); + return Response.status(Status.SERVICE_UNAVAILABLE).entity(e.getLocalizedMessage()).build(); + } catch (Exception e) { + routerMetrics.incrAppsFailedCreated(); + return Response.status(Status.INTERNAL_SERVER_ERROR).entity(e.getLocalizedMessage()).build(); } + // return error message directly. String errMsg = "Fail to create a new application."; LOG.error(errMsg); routerMetrics.incrAppsFailedCreated(); - return Response - .status(Status.INTERNAL_SERVER_ERROR) - .entity(errMsg) - .build(); + return Response.status(Status.INTERNAL_SERVER_ERROR).entity(errMsg).build(); + } + + /** + * Invoke GetNewApplication to different subClusters. + * + * @param subClustersActive Active SubClusters. + * @param blackList Blacklist avoid repeated calls to unavailable subCluster. + * @param hsr HttpServletRequest. + * @param retryCount number of retries. + * @return Get response, If the response is empty or status not equal SC_OK, the request fails, + * if the response is not empty and status equal SC_OK, the request is successful. + * @throws YarnException yarn exception. + * @throws IOException io error. + * @throws InterruptedException interrupted exception. + */ + private Response invokeGetNewApplication(Map subClustersActive, + List blackList, HttpServletRequest hsr, int retryCount) + throws YarnException, IOException, InterruptedException { + + SubClusterId subClusterId = + federationFacade.getRandomActiveSubCluster(subClustersActive, blackList); + + LOG.info("getNewApplication try #{} on SubCluster {}.", retryCount, subClusterId); + + DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster(subClusterId, + subClustersActive.get(subClusterId).getRMWebServiceAddress()); + + try { + Response response = interceptor.createNewApplication(hsr); + if (response != null && response.getStatus() == HttpServletResponse.SC_OK) { + return response; + } + } catch (Exception e) { + blackList.add(subClusterId); + RouterServerUtil.logAndThrowException(e.getMessage(), e); + } + + // We need to throw the exception directly. + String msg = String.format("Unable to create a new ApplicationId in SubCluster %s.", + subClusterId.getId()); + throw new YarnException(msg); } /** @@ -423,142 +462,106 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { long startTime = clock.getTime(); + // We verify the parameters to ensure that newApp is not empty and + // that the format of applicationId is correct. 
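+ // Invalid requests are rejected with HTTP 400 (BAD_REQUEST) before any subCluster is contacted.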
if (newApp == null || newApp.getApplicationId() == null) { routerMetrics.incrAppsFailedSubmitted(); String errMsg = "Missing ApplicationSubmissionContextInfo or " + "applicationSubmissionContext information."; - return Response - .status(Status.BAD_REQUEST) - .entity(errMsg) - .build(); + return Response.status(Status.BAD_REQUEST).entity(errMsg).build(); } - ApplicationId applicationId = null; try { - applicationId = ApplicationId.fromString(newApp.getApplicationId()); + String applicationId = newApp.getApplicationId(); + RouterServerUtil.validateApplicationId(applicationId); } catch (IllegalArgumentException e) { routerMetrics.incrAppsFailedSubmitted(); - return Response - .status(Status.BAD_REQUEST) - .entity(e.getLocalizedMessage()) - .build(); + return Response.status(Status.BAD_REQUEST).entity(e.getLocalizedMessage()).build(); } - List blacklist = new ArrayList<>(); - - for (int i = 0; i < numSubmitRetries; ++i) { - - ApplicationSubmissionContext context = - RMWebAppUtil.createAppSubmissionContext(newApp, this.getConf()); - - SubClusterId subClusterId = null; - try { - subClusterId = policyFacade.getHomeSubcluster(context, blacklist); - } catch (YarnException e) { - routerMetrics.incrAppsFailedSubmitted(); - return Response - .status(Status.SERVICE_UNAVAILABLE) - .entity(e.getLocalizedMessage()) - .build(); - } - LOG.info("submitApplication appId {} try #{} on SubCluster {}.", - applicationId, i, subClusterId); - - ApplicationHomeSubCluster appHomeSubCluster = - ApplicationHomeSubCluster.newInstance(applicationId, subClusterId); - - if (i == 0) { - try { - // persist the mapping of applicationId and the subClusterId which has - // been selected as its home - subClusterId = - federationFacade.addApplicationHomeSubCluster(appHomeSubCluster); - } catch (YarnException e) { - routerMetrics.incrAppsFailedSubmitted(); - String errMsg = "Unable to insert the ApplicationId " + applicationId - + " into the FederationStateStore"; - return Response - .status(Status.SERVICE_UNAVAILABLE) - .entity(errMsg + " " + e.getLocalizedMessage()) - .build(); - } - } else { - try { - // update the mapping of applicationId and the home subClusterId to - // the new subClusterId we have selected - federationFacade.updateApplicationHomeSubCluster(appHomeSubCluster); - } catch (YarnException e) { - String errMsg = "Unable to update the ApplicationId " + applicationId - + " into the FederationStateStore"; - SubClusterId subClusterIdInStateStore; - try { - subClusterIdInStateStore = - federationFacade.getApplicationHomeSubCluster(applicationId); - } catch (YarnException e1) { - routerMetrics.incrAppsFailedSubmitted(); - return Response - .status(Status.SERVICE_UNAVAILABLE) - .entity(e1.getLocalizedMessage()) - .build(); - } - if (subClusterId == subClusterIdInStateStore) { - LOG.info("Application {} already submitted on SubCluster {}.", - applicationId, subClusterId); - } else { - routerMetrics.incrAppsFailedSubmitted(); - return Response - .status(Status.SERVICE_UNAVAILABLE) - .entity(errMsg) - .build(); - } - } - } - - SubClusterInfo subClusterInfo; - try { - subClusterInfo = federationFacade.getSubCluster(subClusterId); - } catch (YarnException e) { - routerMetrics.incrAppsFailedSubmitted(); - return Response - .status(Status.SERVICE_UNAVAILABLE) - .entity(e.getLocalizedMessage()) - .build(); - } - - Response response = null; - try { - response = getOrCreateInterceptorForSubCluster(subClusterId, - subClusterInfo.getRMWebServiceAddress()).submitApplication(newApp, - hsr); - } catch (Exception e) { - 
LOG.warn("Unable to submit the application {} to SubCluster {}", - applicationId, subClusterId.getId(), e); - } - - if (response != null && - response.getStatus() == HttpServletResponse.SC_ACCEPTED) { - LOG.info("Application {} with appId {} submitted on {}", - context.getApplicationName(), applicationId, subClusterId); - + List blackList = new ArrayList<>(); + try { + int activeSubClustersCount = federationFacade.getActiveSubClustersCount(); + int actualRetryNums = Math.min(activeSubClustersCount, numSubmitRetries); + Response response = ((FederationActionRetry) (retryCount) -> + invokeSubmitApplication(newApp, blackList, hsr, retryCount)). + runWithRetries(actualRetryNums, submitIntervalTime); + if (response != null) { long stopTime = clock.getTime(); routerMetrics.succeededAppsSubmitted(stopTime - startTime); - return response; - } else { - // Empty response from the ResourceManager. - // Blacklist this subcluster for this request. - blacklist.add(subClusterId); } + } catch (Exception e) { + routerMetrics.incrAppsFailedSubmitted(); + return Response.status(Status.SERVICE_UNAVAILABLE).entity(e.getLocalizedMessage()).build(); } routerMetrics.incrAppsFailedSubmitted(); - String errMsg = "Application " + newApp.getApplicationName() - + " with appId " + applicationId + " failed to be submitted."; + String errMsg = String.format("Application %s with appId %s failed to be submitted.", + newApp.getApplicationName(), newApp.getApplicationId()); LOG.error(errMsg); - return Response - .status(Status.SERVICE_UNAVAILABLE) - .entity(errMsg) - .build(); + return Response.status(Status.SERVICE_UNAVAILABLE).entity(errMsg).build(); + } + + /** + * Invoke SubmitApplication to different subClusters. + * + * @param submissionContext application submission context. + * @param blackList Blacklist avoid repeated calls to unavailable subCluster. + * @param hsr HttpServletRequest. + * @param retryCount number of retries. + * @return Get response, If the response is empty or status not equal SC_ACCEPTED, + * the request fails, if the response is not empty and status equal SC_OK, + * the request is successful. + * @throws YarnException yarn exception. + * @throws IOException io error. + */ + private Response invokeSubmitApplication(ApplicationSubmissionContextInfo submissionContext, + List blackList, HttpServletRequest hsr, int retryCount) + throws YarnException, IOException, InterruptedException { + + // Step1. We convert ApplicationSubmissionContextInfo to ApplicationSubmissionContext + // and Prepare parameters. + ApplicationSubmissionContext context = + RMWebAppUtil.createAppSubmissionContext(submissionContext, this.getConf()); + ApplicationId applicationId = ApplicationId.fromString(submissionContext.getApplicationId()); + SubClusterId subClusterId = null; + + try { + // Get subClusterId from policy. + subClusterId = policyFacade.getHomeSubcluster(context, blackList); + + // Print the log of submitting the submitApplication. + LOG.info("submitApplication appId {} try #{} on SubCluster {}.", + applicationId, retryCount, subClusterId); + + // Step2. We Store the mapping relationship + // between Application and HomeSubCluster in stateStore. + federationFacade.addOrUpdateApplicationHomeSubCluster( + applicationId, subClusterId, retryCount); + + // Step3. We get subClusterInfo based on subClusterId. + SubClusterInfo subClusterInfo = federationFacade.getSubCluster(subClusterId); + + // Step4. 
Submit the request, if the response is HttpServletResponse.SC_ACCEPTED, + // We return the response, otherwise we throw an exception. + Response response = getOrCreateInterceptorForSubCluster(subClusterId, + subClusterInfo.getRMWebServiceAddress()).submitApplication(submissionContext, hsr); + if (response != null && response.getStatus() == HttpServletResponse.SC_ACCEPTED) { + LOG.info("Application {} with appId {} submitted on {}.", + context.getApplicationName(), applicationId, subClusterId); + return response; + } + String msg = String.format("application %s failed to be submitted.", applicationId); + throw new YarnException(msg); + } catch (Exception e) { + LOG.warn("Unable to submit the application {} to SubCluster {}.", applicationId, + subClusterId, e); + if (subClusterId != null) { + blackList.add(subClusterId); + } + throw e; + } } /** @@ -1000,11 +1003,14 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { nodes.addAll(nodesInfo.getNodes()); }); } catch (NotFoundException e) { - LOG.error("Get all active sub cluster(s) error.", e); + LOG.error("get all active sub cluster(s) error.", e); + throw e; } catch (YarnException e) { - LOG.error("getNodes error.", e); + LOG.error("getNodes by states = {} error.", states, e); + throw new YarnRuntimeException(e); } catch (IOException e) { - LOG.error("getNodes error with io error.", e); + LOG.error("getNodes by states = {} error with io error.", states, e); + throw new YarnRuntimeException(e); } // Delete duplicate from all the node reports got from all the available @@ -1138,9 +1144,43 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { throw new NotImplementedException("Code is not implemented"); } + /** + * This method retrieves the current scheduler status, and it is reachable by + * using {@link RMWSConsts#SCHEDULER}. + * + * For the federation mode, the SchedulerType information of the cluster + * cannot be integrated and displayed, and the specific cluster information needs to be marked. 
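+ * Each subCluster's SchedulerTypeInfo is therefore tagged with its SubClusterId and returned inside a FederationSchedulerTypeInfo.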
+ * + * @return the current scheduler status + */ @Override public SchedulerTypeInfo getSchedulerInfo() { - throw new NotImplementedException("Code is not implemented"); + try { + long startTime = Time.now(); + Map subClustersActive = getActiveSubclusters(); + Class[] argsClasses = new Class[]{}; + Object[] args = new Object[]{}; + ClientMethod remoteMethod = new ClientMethod("getSchedulerInfo", argsClasses, args); + Map subClusterInfoMap = + invokeConcurrent(subClustersActive.values(), remoteMethod, SchedulerTypeInfo.class); + FederationSchedulerTypeInfo federationSchedulerTypeInfo = new FederationSchedulerTypeInfo(); + subClusterInfoMap.forEach((subClusterInfo, schedulerTypeInfo) -> { + SubClusterId subClusterId = subClusterInfo.getSubClusterId(); + schedulerTypeInfo.setSubClusterId(subClusterId.getId()); + federationSchedulerTypeInfo.getList().add(schedulerTypeInfo); + }); + long stopTime = Time.now(); + routerMetrics.succeededGetSchedulerInfoRetrieved(stopTime - startTime); + return federationSchedulerTypeInfo; + } catch (NotFoundException e) { + routerMetrics.incrGetSchedulerInfoFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("Get all active sub cluster(s) error.", e); + } catch (YarnException | IOException e) { + routerMetrics.incrGetSchedulerInfoFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("getSchedulerInfo error.", e); + } + routerMetrics.incrGetSchedulerInfoFailedRetrieved(); + throw new RuntimeException("getSchedulerInfo error."); } @Override @@ -1149,16 +1189,110 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { throw new NotImplementedException("Code is not implemented"); } + /** + * This method retrieve all the activities in a specific node, and it is + * reachable by using {@link RMWSConsts#SCHEDULER_ACTIVITIES}. + * + * @param hsr the servlet request + * @param nodeId the node we want to retrieve the activities. It is a + * QueryParam. + * @param groupBy the groupBy type by which the activities should be + * aggregated. It is a QueryParam. + * @return all the activities in the specific node + */ @Override public ActivitiesInfo getActivities(HttpServletRequest hsr, String nodeId, String groupBy) { - throw new NotImplementedException("Code is not implemented"); + try { + // Check the parameters to ensure that the parameters are not empty + Validate.checkNotNullAndNotEmpty(nodeId, "nodeId"); + Validate.checkNotNullAndNotEmpty(groupBy, "groupBy"); + + // Query SubClusterInfo according to id, + // if the nodeId cannot get SubClusterInfo, an exception will be thrown directly. + SubClusterInfo subClusterInfo = getNodeSubcluster(nodeId); + + // Call the corresponding subCluster to get ActivitiesInfo. 
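+ // A successful call records its latency in RouterMetrics before the ActivitiesInfo is returned.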
+ long startTime = clock.getTime(); + DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( + subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); + final HttpServletRequest hsrCopy = clone(hsr); + ActivitiesInfo activitiesInfo = interceptor.getActivities(hsrCopy, nodeId, groupBy); + if (activitiesInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetActivitiesLatencyRetrieved(stopTime - startTime); + return activitiesInfo; + } + } catch (IllegalArgumentException e) { + routerMetrics.incrGetActivitiesFailedRetrieved(); + throw e; + } catch (NotFoundException e) { + routerMetrics.incrGetActivitiesFailedRetrieved(); + throw e; + } + + routerMetrics.incrGetActivitiesFailedRetrieved(); + throw new RuntimeException("getActivities Failed."); } + /** + * This method retrieve the last n activities inside scheduler, and it is + * reachable by using {@link RMWSConsts#SCHEDULER_BULK_ACTIVITIES}. + * + * @param hsr the servlet request + * @param groupBy the groupBy type by which the activities should be + * aggregated. It is a QueryParam. + * @param activitiesCount number of activities + * @return last n activities + */ @Override public BulkActivitiesInfo getBulkActivities(HttpServletRequest hsr, String groupBy, int activitiesCount) throws InterruptedException { - throw new NotImplementedException("Code is not implemented"); + try { + // Step1. Check the parameters to ensure that the parameters are not empty + Validate.checkNotNullAndNotEmpty(groupBy, "groupBy"); + Validate.checkNotNegative(activitiesCount, "activitiesCount"); + + // Step2. Call the interface of subCluster concurrently and get the returned result. + Map subClustersActive = getActiveSubclusters(); + final HttpServletRequest hsrCopy = clone(hsr); + Class[] argsClasses = new Class[]{HttpServletRequest.class, String.class, int.class}; + Object[] args = new Object[]{hsrCopy, groupBy, activitiesCount}; + ClientMethod remoteMethod = new ClientMethod("getBulkActivities", argsClasses, args); + Map appStatisticsMap = invokeConcurrent( + subClustersActive.values(), remoteMethod, BulkActivitiesInfo.class); + + // Step3. Generate Federation objects and set subCluster information. 
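+ // Each subCluster's BulkActivitiesInfo is tagged with its SubClusterId before being added to the FederationBulkActivitiesInfo.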
+ long startTime = clock.getTime(); + FederationBulkActivitiesInfo fedBulkActivitiesInfo = new FederationBulkActivitiesInfo(); + appStatisticsMap.forEach((subClusterInfo, bulkActivitiesInfo) -> { + SubClusterId subClusterId = subClusterInfo.getSubClusterId(); + bulkActivitiesInfo.setSubClusterId(subClusterId.getId()); + fedBulkActivitiesInfo.getList().add(bulkActivitiesInfo); + }); + long stopTime = clock.getTime(); + routerMetrics.succeededGetBulkActivitiesRetrieved(stopTime - startTime); + return fedBulkActivitiesInfo; + } catch (IllegalArgumentException e) { + routerMetrics.incrGetBulkActivitiesFailedRetrieved(); + throw e; + } catch (NotFoundException e) { + routerMetrics.incrGetBulkActivitiesFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("get all active sub cluster(s) error.", e); + } catch (IOException e) { + routerMetrics.incrGetBulkActivitiesFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException(e, + "getBulkActivities by groupBy = %s, activitiesCount = %s with io error.", + groupBy, String.valueOf(activitiesCount)); + } catch (YarnException e) { + routerMetrics.incrGetBulkActivitiesFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException(e, + "getBulkActivities by groupBy = %s, activitiesCount = %s with yarn error.", + groupBy, String.valueOf(activitiesCount)); + } + + routerMetrics.incrGetBulkActivitiesFailedRetrieved(); + throw new RuntimeException("getBulkActivities Failed."); } @Override @@ -1170,32 +1304,45 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { // Only verify the app_id, // because the specific subCluster needs to be found according to the app_id, // and other verifications are directly handed over to the corresponding subCluster RM - if (appId == null || appId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appId is empty or null."); + // Check that the appId format is accurate + try { + RouterServerUtil.validateApplicationId(appId); + } catch (IllegalArgumentException e) { + routerMetrics.incrGetAppActivitiesFailedRetrieved(); + throw e; } try { + long startTime = clock.getTime(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - final HttpServletRequest hsrCopy = clone(hsr); - return interceptor.getAppActivities(hsrCopy, appId, time, requestPriorities, - allocationRequestIds, groupBy, limit, actions, summarize); + AppActivitiesInfo appActivitiesInfo = interceptor.getAppActivities(hsrCopy, appId, time, + requestPriorities, allocationRequestIds, groupBy, limit, actions, summarize); + if (appActivitiesInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetAppActivitiesRetrieved(stopTime - startTime); + return appActivitiesInfo; + } } catch (IllegalArgumentException e) { - RouterServerUtil.logAndThrowRunTimeException(e, "Unable to get subCluster by appId: %s.", - appId); + routerMetrics.incrGetAppActivitiesFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException(e, + "Unable to get subCluster by appId: %s.", appId); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException("getAppActivities Failed.", e); + routerMetrics.incrGetAppActivitiesFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException(e, + "getAppActivities by appId = %s error .", appId); } - - return null; + routerMetrics.incrGetAppActivitiesFailedRetrieved(); + throw new 
RuntimeException("getAppActivities Failed."); } @Override public ApplicationStatisticsInfo getAppStatistics(HttpServletRequest hsr, Set stateQueries, Set typeQueries) { try { + long startTime = clock.getTime(); Map subClustersActive = getActiveSubclusters(); final HttpServletRequest hsrCopy = clone(hsr); Class[] argsClasses = new Class[]{HttpServletRequest.class, Set.class, Set.class}; @@ -1203,19 +1350,38 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { ClientMethod remoteMethod = new ClientMethod("getAppStatistics", argsClasses, args); Map appStatisticsMap = invokeConcurrent( subClustersActive.values(), remoteMethod, ApplicationStatisticsInfo.class); - return RouterWebServiceUtil.mergeApplicationStatisticsInfo(appStatisticsMap.values()); + ApplicationStatisticsInfo applicationStatisticsInfo = + RouterWebServiceUtil.mergeApplicationStatisticsInfo(appStatisticsMap.values()); + if (applicationStatisticsInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetAppStatisticsRetrieved(stopTime - startTime); + return applicationStatisticsInfo; + } + } catch (NotFoundException e) { + routerMetrics.incrGetAppStatisticsFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("get all active sub cluster(s) error.", e); } catch (IOException e) { - RouterServerUtil.logAndThrowRunTimeException(e, "Get all active sub cluster(s) error."); + routerMetrics.incrGetAppStatisticsFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException(e, + "getAppStatistics error by stateQueries = %s, typeQueries = %s with io error.", + StringUtils.join(stateQueries, ","), StringUtils.join(typeQueries, ",")); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException(e, "getAppStatistics error."); + routerMetrics.incrGetAppStatisticsFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException(e, + "getAppStatistics by stateQueries = %s, typeQueries = %s with yarn error.", + StringUtils.join(stateQueries, ","), StringUtils.join(typeQueries, ",")); } - return null; + routerMetrics.incrGetAppStatisticsFailedRetrieved(); + throw RouterServerUtil.logAndReturnRunTimeException( + "getAppStatistics by stateQueries = %s, typeQueries = %s Failed.", + StringUtils.join(stateQueries, ","), StringUtils.join(typeQueries, ",")); } @Override public NodeToLabelsInfo getNodeToLabels(HttpServletRequest hsr) throws IOException { try { + long startTime = clock.getTime(); Map subClustersActive = getActiveSubclusters(); final HttpServletRequest hsrCopy = clone(hsr); Class[] argsClasses = new Class[]{HttpServletRequest.class}; @@ -1223,27 +1389,64 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { ClientMethod remoteMethod = new ClientMethod("getNodeToLabels", argsClasses, args); Map nodeToLabelsInfoMap = invokeConcurrent(subClustersActive.values(), remoteMethod, NodeToLabelsInfo.class); - return RouterWebServiceUtil.mergeNodeToLabels(nodeToLabelsInfoMap); + NodeToLabelsInfo nodeToLabelsInfo = + RouterWebServiceUtil.mergeNodeToLabels(nodeToLabelsInfoMap); + if (nodeToLabelsInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetNodeToLabelsRetrieved(stopTime - startTime); + return nodeToLabelsInfo; + } } catch (NotFoundException e) { - LOG.error("Get all active sub cluster(s) error.", e); - throw new IOException("Get all active sub cluster(s) error.", e); + routerMetrics.incrNodeToLabelsFailedRetrieved(); + RouterServerUtil.logAndThrowIOException("get all active sub cluster(s) error.", e); } catch 
(YarnException e) { - LOG.error("getNodeToLabels error.", e); - throw new IOException("getNodeToLabels error.", e); + routerMetrics.incrNodeToLabelsFailedRetrieved(); + RouterServerUtil.logAndThrowIOException("getNodeToLabels error.", e); } + routerMetrics.incrNodeToLabelsFailedRetrieved(); + throw new RuntimeException("getNodeToLabels Failed."); + } + + @Override + public NodeLabelsInfo getRMNodeLabels(HttpServletRequest hsr) throws IOException { + try { + long startTime = clock.getTime(); + Map subClustersActive = getActiveSubclusters(); + final HttpServletRequest hsrCopy = clone(hsr); + Class[] argsClasses = new Class[]{HttpServletRequest.class}; + Object[] args = new Object[]{hsrCopy}; + ClientMethod remoteMethod = new ClientMethod("getRMNodeLabels", argsClasses, args); + Map nodeToLabelsInfoMap = + invokeConcurrent(subClustersActive.values(), remoteMethod, NodeLabelsInfo.class); + NodeLabelsInfo nodeToLabelsInfo = + RouterWebServiceUtil.mergeNodeLabelsInfo(nodeToLabelsInfoMap); + if (nodeToLabelsInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetRMNodeLabelsRetrieved(stopTime - startTime); + return nodeToLabelsInfo; + } + } catch (NotFoundException e) { + routerMetrics.incrGetRMNodeLabelsFailedRetrieved(); + RouterServerUtil.logAndThrowIOException("get all active sub cluster(s) error.", e); + } catch (YarnException e) { + routerMetrics.incrGetRMNodeLabelsFailedRetrieved(); + RouterServerUtil.logAndThrowIOException("getRMNodeLabels error.", e); + } + routerMetrics.incrGetRMNodeLabelsFailedRetrieved(); + throw new RuntimeException("getRMNodeLabels Failed."); } @Override public LabelsToNodesInfo getLabelsToNodes(Set labels) throws IOException { try { + long startTime = clock.getTime(); Map subClustersActive = getActiveSubclusters(); Class[] argsClasses = new Class[]{Set.class}; Object[] args = new Object[]{labels}; ClientMethod remoteMethod = new ClientMethod("getLabelsToNodes", argsClasses, args); Map labelsToNodesInfoMap = invokeConcurrent(subClustersActive.values(), remoteMethod, LabelsToNodesInfo.class); - Map labelToNodesMap = new HashMap<>(); labelsToNodesInfoMap.values().forEach(labelsToNode -> { Map values = labelsToNode.getLabelsToNodes(); @@ -1255,13 +1458,23 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { labelToNodesMap.put(key, newValue); } }); - return new LabelsToNodesInfo(labelToNodesMap); + LabelsToNodesInfo labelsToNodesInfo = new LabelsToNodesInfo(labelToNodesMap); + if (labelsToNodesInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetLabelsToNodesRetrieved(stopTime - startTime); + return labelsToNodesInfo; + } } catch (NotFoundException e) { - RouterServerUtil.logAndThrowIOException("Get all active sub cluster(s) error.", e); + routerMetrics.incrLabelsToNodesFailedRetrieved(); + RouterServerUtil.logAndThrowIOException("get all active sub cluster(s) error.", e); } catch (YarnException e) { - RouterServerUtil.logAndThrowIOException("getLabelsToNodes error.", e); + routerMetrics.incrLabelsToNodesFailedRetrieved(); + RouterServerUtil.logAndThrowIOException( + e, "getLabelsToNodes by labels = %s with yarn error.", StringUtils.join(labels, ",")); } - return null; + routerMetrics.incrLabelsToNodesFailedRetrieved(); + throw RouterServerUtil.logAndReturnRunTimeException( + "getLabelsToNodes by labels = %s Failed.", StringUtils.join(labels, ",")); } @Override @@ -1280,6 +1493,7 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { public NodeLabelsInfo 
getClusterNodeLabels(HttpServletRequest hsr) throws IOException { try { + long startTime = clock.getTime(); Map subClustersActive = getActiveSubclusters(); final HttpServletRequest hsrCopy = clone(hsr); Class[] argsClasses = new Class[]{HttpServletRequest.class}; @@ -1289,13 +1503,21 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { invokeConcurrent(subClustersActive.values(), remoteMethod, NodeLabelsInfo.class); Set hashSets = Sets.newHashSet(); nodeToLabelsInfoMap.values().forEach(item -> hashSets.addAll(item.getNodeLabels())); - return new NodeLabelsInfo(hashSets); + NodeLabelsInfo nodeLabelsInfo = new NodeLabelsInfo(hashSets); + if (nodeLabelsInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetClusterNodeLabelsRetrieved(stopTime - startTime); + return nodeLabelsInfo; + } } catch (NotFoundException e) { - RouterServerUtil.logAndThrowIOException("Get all active sub cluster(s) error.", e); + routerMetrics.incrClusterNodeLabelsFailedRetrieved(); + RouterServerUtil.logAndThrowIOException("get all active sub cluster(s) error.", e); } catch (YarnException e) { - RouterServerUtil.logAndThrowIOException("getClusterNodeLabels error.", e); + routerMetrics.incrClusterNodeLabelsFailedRetrieved(); + RouterServerUtil.logAndThrowIOException("getClusterNodeLabels with yarn error.", e); } - return null; + routerMetrics.incrClusterNodeLabelsFailedRetrieved(); + throw new RuntimeException("getClusterNodeLabels Failed."); } @Override @@ -1314,45 +1536,68 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { public NodeLabelsInfo getLabelsOnNode(HttpServletRequest hsr, String nodeId) throws IOException { try { + long startTime = clock.getTime(); Map subClustersActive = getActiveSubclusters(); final HttpServletRequest hsrCopy = clone(hsr); Class[] argsClasses = new Class[]{HttpServletRequest.class, String.class}; Object[] args = new Object[]{hsrCopy, nodeId}; ClientMethod remoteMethod = new ClientMethod("getLabelsOnNode", argsClasses, args); Map nodeToLabelsInfoMap = - invokeConcurrent(subClustersActive.values(), remoteMethod, NodeLabelsInfo.class); + invokeConcurrent(subClustersActive.values(), remoteMethod, NodeLabelsInfo.class); Set hashSets = Sets.newHashSet(); nodeToLabelsInfoMap.values().forEach(item -> hashSets.addAll(item.getNodeLabels())); - return new NodeLabelsInfo(hashSets); + NodeLabelsInfo nodeLabelsInfo = new NodeLabelsInfo(hashSets); + if (nodeLabelsInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetLabelsToNodesRetrieved(stopTime - startTime); + return nodeLabelsInfo; + } } catch (NotFoundException e) { - RouterServerUtil.logAndThrowIOException("Get all active sub cluster(s) error.", e); + routerMetrics.incrLabelsToNodesFailedRetrieved(); + RouterServerUtil.logAndThrowIOException("get all active sub cluster(s) error.", e); } catch (YarnException e) { - RouterServerUtil.logAndThrowIOException("getClusterNodeLabels error.", e); + routerMetrics.incrLabelsToNodesFailedRetrieved(); + RouterServerUtil.logAndThrowIOException( + e, "getLabelsOnNode nodeId = %s with yarn error.", nodeId); } - return null; + routerMetrics.incrLabelsToNodesFailedRetrieved(); + throw RouterServerUtil.logAndReturnRunTimeException( + "getLabelsOnNode by nodeId = %s Failed.", nodeId); } @Override public AppPriority getAppPriority(HttpServletRequest hsr, String appId) throws AuthorizationException { - if (appId == null || appId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appId is 
empty or null."); + // Check that the appId format is accurate + try { + RouterServerUtil.validateApplicationId(appId); + } catch (IllegalArgumentException e) { + routerMetrics.incrGetAppPriorityFailedRetrieved(); + throw e; } try { + long startTime = clock.getTime(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.getAppPriority(hsr, appId); + AppPriority appPriority = interceptor.getAppPriority(hsr, appId); + if (appPriority != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetAppPriorityRetrieved(stopTime - startTime); + return appPriority; + } } catch (IllegalArgumentException e) { + routerMetrics.incrGetAppPriorityFailedRetrieved(); RouterServerUtil.logAndThrowRunTimeException(e, "Unable to get the getAppPriority appId: %s.", appId); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException("getAppPriority Failed.", e); + routerMetrics.incrGetAppPriorityFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("getAppPriority error.", e); } - - return null; + routerMetrics.incrGetAppPriorityFailedRetrieved(); + throw new RuntimeException("getAppPriority Failed."); } @Override @@ -1360,50 +1605,74 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { HttpServletRequest hsr, String appId) throws AuthorizationException, YarnException, InterruptedException, IOException { - if (appId == null || appId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appId is empty or null."); + // Check that the appId format is accurate + try { + RouterServerUtil.validateApplicationId(appId); + } catch (IllegalArgumentException e) { + routerMetrics.incrUpdateAppPriorityFailedRetrieved(); + throw e; } if (targetPriority == null) { + routerMetrics.incrUpdateAppPriorityFailedRetrieved(); throw new IllegalArgumentException("Parameter error, the targetPriority is empty or null."); } try { + long startTime = clock.getTime(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.updateApplicationPriority(targetPriority, hsr, appId); + Response response = interceptor.updateApplicationPriority(targetPriority, hsr, appId); + if (response != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededUpdateAppPriorityRetrieved(stopTime - startTime); + return response; + } } catch (IllegalArgumentException e) { + routerMetrics.incrUpdateAppPriorityFailedRetrieved(); RouterServerUtil.logAndThrowRunTimeException(e, "Unable to get the updateApplicationPriority appId: %s.", appId); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException("updateApplicationPriority Failed.", e); + routerMetrics.incrUpdateAppPriorityFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("updateApplicationPriority error.", e); } - - return null; + routerMetrics.incrUpdateAppPriorityFailedRetrieved(); + throw new RuntimeException("updateApplicationPriority Failed."); } @Override public AppQueue getAppQueue(HttpServletRequest hsr, String appId) throws AuthorizationException { - if (appId == null || appId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appId is empty or null."); + // Check that the appId 
format is accurate + try { + RouterServerUtil.validateApplicationId(appId); + } catch (IllegalArgumentException e) { + routerMetrics.incrGetAppQueueFailedRetrieved(); + throw e; } try { + long startTime = clock.getTime(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.getAppQueue(hsr, appId); + AppQueue queue = interceptor.getAppQueue(hsr, appId); + if (queue != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetAppQueueRetrieved((stopTime - startTime)); + return queue; + } } catch (IllegalArgumentException e) { - RouterServerUtil.logAndThrowRunTimeException(e, - "Unable to get queue by appId: %s.", appId); + routerMetrics.incrGetAppQueueFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException(e, "Unable to get queue by appId: %s.", appId); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException("getAppQueue Failed.", e); + routerMetrics.incrGetAppQueueFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("getAppQueue error.", e); } - - return null; + routerMetrics.incrGetAppQueueFailedRetrieved(); + throw new RuntimeException("getAppQueue Failed."); } @Override @@ -1411,75 +1680,483 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { String appId) throws AuthorizationException, YarnException, InterruptedException, IOException { - if (appId == null || appId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appId is empty or null."); + // Check that the appId format is accurate + try { + RouterServerUtil.validateApplicationId(appId); + } catch (IllegalArgumentException e) { + routerMetrics.incrUpdateAppQueueFailedRetrieved(); + throw e; } if (targetQueue == null) { + routerMetrics.incrUpdateAppQueueFailedRetrieved(); throw new IllegalArgumentException("Parameter error, the targetQueue is null."); } try { + long startTime = clock.getTime(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.updateAppQueue(targetQueue, hsr, appId); + Response response = interceptor.updateAppQueue(targetQueue, hsr, appId); + if (response != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededUpdateAppQueueRetrieved(stopTime - startTime); + return response; + } } catch (IllegalArgumentException e) { + routerMetrics.incrUpdateAppQueueFailedRetrieved(); RouterServerUtil.logAndThrowRunTimeException(e, "Unable to update app queue by appId: %s.", appId); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException("updateAppQueue Failed.", e); + routerMetrics.incrUpdateAppQueueFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("updateAppQueue error.", e); + } + routerMetrics.incrUpdateAppQueueFailedRetrieved(); + throw new RuntimeException("updateAppQueue Failed."); + } + + /** + * This method posts a delegation token from the client. + * + * @param tokenData the token to delegate. It is a content param. + * @param hsr the servlet request. + * @return Response containing the status code. + * @throws AuthorizationException if Kerberos auth failed. + * @throws IOException if the delegation failed. + * @throws InterruptedException if interrupted. 
+ * @throws Exception in case of bad request. + */ + @Override + public Response postDelegationToken(DelegationToken tokenData, HttpServletRequest hsr) + throws AuthorizationException, IOException, InterruptedException, Exception { + + if (tokenData == null || hsr == null) { + throw new IllegalArgumentException("Parameter error, the tokenData or hsr is null."); } - return null; + try { + // get Caller UserGroupInformation + Configuration conf = federationFacade.getConf(); + UserGroupInformation callerUGI = getKerberosUserGroupInformation(conf, hsr); + + // create a delegation token + return createDelegationToken(tokenData, callerUGI); + } catch (YarnException e) { + LOG.error("Create delegation token request failed.", e); + return Response.status(Status.FORBIDDEN).entity(e.getMessage()).build(); + } } - @Override - public Response postDelegationToken(DelegationToken tokenData, - HttpServletRequest hsr) throws AuthorizationException, IOException, - InterruptedException, Exception { - throw new NotImplementedException("Code is not implemented"); + /** + * Create DelegationToken. + * + * @param dtoken DelegationToken Data. + * @param callerUGI UserGroupInformation. + * @return Response. + * @throws Exception An exception occurred when creating a delegationToken. + */ + private Response createDelegationToken(DelegationToken dtoken, UserGroupInformation callerUGI) + throws IOException, InterruptedException { + + String renewer = dtoken.getRenewer(); + + GetDelegationTokenResponse resp = callerUGI.doAs( + (PrivilegedExceptionAction) () -> { + GetDelegationTokenRequest createReq = GetDelegationTokenRequest.newInstance(renewer); + return this.getRouterClientRMService().getDelegationToken(createReq); + }); + + DelegationToken respToken = getDelegationToken(renewer, resp); + return Response.status(Status.OK).entity(respToken).build(); } + /** + * Get DelegationToken. + * + * @param renewer renewer. + * @param resp GetDelegationTokenResponse. + * @return DelegationToken. + * @throws IOException if there are I/O errors. + */ + private DelegationToken getDelegationToken(String renewer, GetDelegationTokenResponse resp) + throws IOException { + // Step1. Parse token from GetDelegationTokenResponse. + Token tk = getToken(resp); + String tokenKind = tk.getKind().toString(); + RMDelegationTokenIdentifier tokenIdentifier = tk.decodeIdentifier(); + String owner = tokenIdentifier.getOwner().toString(); + long maxDate = tokenIdentifier.getMaxDate(); + + // Step2. Call the interface to get the expiration time of Token. + RouterClientRMService clientRMService = this.getRouterClientRMService(); + RouterDelegationTokenSecretManager tokenSecretManager = + clientRMService.getRouterDTSecretManager(); + long currentExpiration = tokenSecretManager.getRenewDate(tokenIdentifier); + + // Step3. Generate Delegation token. + DelegationToken delegationToken = new DelegationToken(tk.encodeToUrlString(), + renewer, owner, tokenKind, currentExpiration, maxDate); + + return delegationToken; + } + + /** + * GetToken. + * We convert RMDelegationToken in GetDelegationTokenResponse to Token. + * + * @param resp GetDelegationTokenResponse. + * @return Token. 
+ */ + private static Token getToken(GetDelegationTokenResponse resp) { + org.apache.hadoop.yarn.api.records.Token token = resp.getRMDelegationToken(); + byte[] identifier = token.getIdentifier().array(); + byte[] password = token.getPassword().array(); + Text kind = new Text(token.getKind()); + Text service = new Text(token.getService()); + Token tk = new Token<>(identifier, password, kind, service); + return tk; + } + + /** + * This method updates the expiration for a delegation token from the client. + * + * @param hsr the servlet request + * @return Response containing the status code. + * @throws AuthorizationException if Kerberos auth failed. + * @throws IOException if the delegation failed. + * @throws InterruptedException if interrupted. + * @throws Exception in case of bad request. + */ @Override public Response postDelegationTokenExpiration(HttpServletRequest hsr) - throws AuthorizationException, IOException, InterruptedException, - Exception { - throw new NotImplementedException("Code is not implemented"); + throws AuthorizationException, IOException, InterruptedException, Exception { + + if (hsr == null) { + throw new IllegalArgumentException("Parameter error, the hsr is null."); + } + + try { + // get Caller UserGroupInformation + Configuration conf = federationFacade.getConf(); + UserGroupInformation callerUGI = getKerberosUserGroupInformation(conf, hsr); + return renewDelegationToken(hsr, callerUGI); + } catch (YarnException e) { + LOG.error("Renew delegation token request failed.", e); + return Response.status(Status.FORBIDDEN).entity(e.getMessage()).build(); + } } + /** + * Renew DelegationToken. + * + * @param hsr HttpServletRequest. + * @param callerUGI UserGroupInformation. + * @return Response + * @throws IOException if there are I/O errors. + * @throws InterruptedException if any thread has interrupted. + */ + private Response renewDelegationToken(HttpServletRequest hsr, UserGroupInformation callerUGI) + throws IOException, InterruptedException { + + // renew Delegation Token + DelegationToken tokenData = new DelegationToken(); + String encodeToken = extractToken(hsr).encodeToUrlString(); + tokenData.setToken(encodeToken); + + // Parse token data + Token token = extractToken(tokenData.getToken()); + org.apache.hadoop.yarn.api.records.Token dToken = + BuilderUtils.newDelegationToken(token.getIdentifier(), token.getKind().toString(), + token.getPassword(), token.getService().toString()); + + // Renew token + RenewDelegationTokenResponse resp = callerUGI.doAs( + (PrivilegedExceptionAction) () -> { + RenewDelegationTokenRequest req = RenewDelegationTokenRequest.newInstance(dToken); + return this.getRouterClientRMService().renewDelegationToken(req); + }); + + // return DelegationToken + long renewTime = resp.getNextExpirationTime(); + DelegationToken respToken = new DelegationToken(); + respToken.setNextExpirationTime(renewTime); + return Response.status(Status.OK).entity(respToken).build(); + } + + /** + * Cancel DelegationToken. + * + * @param hsr the servlet request + * @return Response containing the status code. + * @throws AuthorizationException if Kerberos auth failed. + * @throws IOException if the delegation failed. + * @throws InterruptedException if interrupted. + * @throws Exception in case of bad request. 
+ */ @Override public Response cancelDelegationToken(HttpServletRequest hsr) - throws AuthorizationException, IOException, InterruptedException, - Exception { - throw new NotImplementedException("Code is not implemented"); + throws AuthorizationException, IOException, InterruptedException, Exception { + try { + // get Caller UserGroupInformation + Configuration conf = federationFacade.getConf(); + UserGroupInformation callerUGI = getKerberosUserGroupInformation(conf, hsr); + + // parse Token Data + Token token = extractToken(hsr); + org.apache.hadoop.yarn.api.records.Token dToken = BuilderUtils + .newDelegationToken(token.getIdentifier(), token.getKind().toString(), + token.getPassword(), token.getService().toString()); + + // cancelDelegationToken + callerUGI.doAs((PrivilegedExceptionAction) () -> { + CancelDelegationTokenRequest req = CancelDelegationTokenRequest.newInstance(dToken); + return this.getRouterClientRMService().cancelDelegationToken(req); + }); + + return Response.status(Status.OK).build(); + } catch (YarnException e) { + LOG.error("Cancel delegation token request failed.", e); + return Response.status(Status.FORBIDDEN).entity(e.getMessage()).build(); + } } @Override public Response createNewReservation(HttpServletRequest hsr) throws AuthorizationException, IOException, InterruptedException { - throw new NotImplementedException("Code is not implemented"); + long startTime = clock.getTime(); + try { + Map subClustersActive = + federationFacade.getSubClusters(true); + // We declare blackList and retries. + List blackList = new ArrayList<>(); + int actualRetryNums = federationFacade.getRetryNumbers(numSubmitRetries); + Response response = ((FederationActionRetry) (retryCount) -> + invokeCreateNewReservation(subClustersActive, blackList, hsr, retryCount)). + runWithRetries(actualRetryNums, submitIntervalTime); + // If the response is not empty and the status is SC_OK, + // this request can be returned directly. + if (response != null && response.getStatus() == HttpServletResponse.SC_OK) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetNewReservationRetrieved(stopTime - startTime); + return response; + } + } catch (FederationPolicyException e) { + // If a FederationPolicyException is thrown, the service is unavailable. + routerMetrics.incrGetNewReservationFailedRetrieved(); + return Response.status(Status.SERVICE_UNAVAILABLE).entity(e.getLocalizedMessage()).build(); + } catch (Exception e) { + routerMetrics.incrGetNewReservationFailedRetrieved(); + return Response.status(Status.INTERNAL_SERVER_ERROR).entity(e.getLocalizedMessage()).build(); + } + + // return error message directly. 
+ String errMsg = "Fail to create a new reservation."; + LOG.error(errMsg); + routerMetrics.incrGetNewReservationFailedRetrieved(); + return Response.status(Status.INTERNAL_SERVER_ERROR).entity(errMsg).build(); + } + + private Response invokeCreateNewReservation(Map subClustersActive, + List blackList, HttpServletRequest hsr, int retryCount) + throws YarnException, IOException, InterruptedException { + SubClusterId subClusterId = + federationFacade.getRandomActiveSubCluster(subClustersActive, blackList); + LOG.info("createNewReservation try #{} on SubCluster {}.", retryCount, subClusterId); + SubClusterInfo subClusterInfo = subClustersActive.get(subClusterId); + DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( + subClusterId, subClusterInfo.getRMWebServiceAddress()); + try { + Response response = interceptor.createNewReservation(hsr); + if (response != null && response.getStatus() == HttpServletResponse.SC_OK) { + return response; + } + } catch (Exception e) { + blackList.add(subClusterId); + RouterServerUtil.logAndThrowException(e.getMessage(), e); + } + // We need to throw the exception directly. + String msg = String.format("Unable to create a new ReservationId in SubCluster %s.", + subClusterId.getId()); + throw new YarnException(msg); } @Override public Response submitReservation(ReservationSubmissionRequestInfo resContext, - HttpServletRequest hsr) - throws AuthorizationException, IOException, InterruptedException { - throw new NotImplementedException("Code is not implemented"); + HttpServletRequest hsr) throws AuthorizationException, IOException, InterruptedException { + long startTime = clock.getTime(); + if (resContext == null || resContext.getReservationId() == null + || resContext.getReservationDefinition() == null || resContext.getQueue() == null) { + routerMetrics.incrSubmitReservationFailedRetrieved(); + String errMsg = "Missing submitReservation resContext or reservationId " + + "or reservation definition or queue."; + return Response.status(Status.BAD_REQUEST).entity(errMsg).build(); + } + + // Check that the resId format is accurate + String resId = resContext.getReservationId(); + try { + RouterServerUtil.validateReservationId(resId); + } catch (IllegalArgumentException e) { + routerMetrics.incrSubmitReservationFailedRetrieved(); + throw e; + } + + List blackList = new ArrayList<>(); + try { + int activeSubClustersCount = federationFacade.getActiveSubClustersCount(); + int actualRetryNums = Math.min(activeSubClustersCount, numSubmitRetries); + Response response = ((FederationActionRetry) (retryCount) -> + invokeSubmitReservation(resContext, blackList, hsr, retryCount)). 
+ runWithRetries(actualRetryNums, submitIntervalTime); + if (response != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededSubmitReservationRetrieved(stopTime - startTime); + return response; + } + } catch (Exception e) { + routerMetrics.incrSubmitReservationFailedRetrieved(); + return Response.status(Status.SERVICE_UNAVAILABLE).entity(e.getLocalizedMessage()).build(); + } + + routerMetrics.incrSubmitReservationFailedRetrieved(); + String msg = String.format("Reservation %s failed to be submitted.", resId); + return Response.status(Status.SERVICE_UNAVAILABLE).entity(msg).build(); + } + + private Response invokeSubmitReservation(ReservationSubmissionRequestInfo requestContext, + List blackList, HttpServletRequest hsr, int retryCount) + throws YarnException, IOException, InterruptedException { + String resId = requestContext.getReservationId(); + ReservationId reservationId = ReservationId.parseReservationId(resId); + ReservationDefinitionInfo definitionInfo = requestContext.getReservationDefinition(); + ReservationDefinition definition = + RouterServerUtil.convertReservationDefinition(definitionInfo); + + // First, Get SubClusterId according to specific strategy. + ReservationSubmissionRequest request = ReservationSubmissionRequest.newInstance( + definition, requestContext.getQueue(), reservationId); + SubClusterId subClusterId = null; + + try { + // Get subClusterId from policy. + subClusterId = policyFacade.getReservationHomeSubCluster(request); + + // Print the log of submitting the submitApplication. + LOG.info("submitReservation ReservationId {} try #{} on SubCluster {}.", reservationId, + retryCount, subClusterId); + + // Step2. We Store the mapping relationship + // between Application and HomeSubCluster in stateStore. + federationFacade.addOrUpdateReservationHomeSubCluster(reservationId, + subClusterId, retryCount); + + // Step3. We get subClusterInfo based on subClusterId. 
+ SubClusterInfo subClusterInfo = federationFacade.getSubCluster(subClusterId); + + DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( + subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); + HttpServletRequest hsrCopy = clone(hsr); + Response response = interceptor.submitReservation(requestContext, hsrCopy); + if (response != null && response.getStatus() == HttpServletResponse.SC_ACCEPTED) { + LOG.info("Reservation {} submitted on subCluster {}.", reservationId, subClusterId); + return response; + } + String msg = String.format("application %s failed to be submitted.", resId); + throw new YarnException(msg); + } catch (Exception e) { + LOG.warn("Unable to submit the reservation {} to SubCluster {}.", resId, + subClusterId, e); + if (subClusterId != null) { + blackList.add(subClusterId); + } + throw e; + } } @Override public Response updateReservation(ReservationUpdateRequestInfo resContext, - HttpServletRequest hsr) - throws AuthorizationException, IOException, InterruptedException { - throw new NotImplementedException("Code is not implemented"); + HttpServletRequest hsr) throws AuthorizationException, IOException, InterruptedException { + + // parameter verification + if (resContext == null || resContext.getReservationId() == null + || resContext.getReservationDefinition() == null) { + routerMetrics.incrUpdateReservationFailedRetrieved(); + String errMsg = "Missing updateReservation resContext or reservationId " + + "or reservation definition."; + return Response.status(Status.BAD_REQUEST).entity(errMsg).build(); + } + + // get reservationId + String reservationId = resContext.getReservationId(); + + // Check that the reservationId format is accurate + try { + RouterServerUtil.validateReservationId(reservationId); + } catch (IllegalArgumentException e) { + routerMetrics.incrUpdateReservationFailedRetrieved(); + throw e; + } + + try { + SubClusterInfo subClusterInfo = getHomeSubClusterInfoByReservationId(reservationId); + DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( + subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); + HttpServletRequest hsrCopy = clone(hsr); + Response response = interceptor.updateReservation(resContext, hsrCopy); + if (response != null) { + return response; + } + } catch (Exception e) { + routerMetrics.incrUpdateReservationFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("updateReservation Failed.", e); + } + + // throw an exception + routerMetrics.incrUpdateReservationFailedRetrieved(); + throw new YarnRuntimeException("updateReservation Failed, reservationId = " + reservationId); } @Override public Response deleteReservation(ReservationDeleteRequestInfo resContext, HttpServletRequest hsr) throws AuthorizationException, IOException, InterruptedException { - throw new NotImplementedException("Code is not implemented"); + + // parameter verification + if (resContext == null || resContext.getReservationId() == null) { + routerMetrics.incrDeleteReservationFailedRetrieved(); + String errMsg = "Missing deleteReservation request or reservationId."; + return Response.status(Status.BAD_REQUEST).entity(errMsg).build(); + } + + // get ReservationId + String reservationId = resContext.getReservationId(); + + // Check that the reservationId format is accurate + try { + RouterServerUtil.validateReservationId(reservationId); + } catch (IllegalArgumentException e) { + routerMetrics.incrDeleteReservationFailedRetrieved(); + throw e; + } + + try { + 
SubClusterInfo subClusterInfo = getHomeSubClusterInfoByReservationId(reservationId); + DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( + subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); + HttpServletRequest hsrCopy = clone(hsr); + Response response = interceptor.deleteReservation(resContext, hsrCopy); + if (response != null) { + return response; + } + } catch (Exception e) { + routerMetrics.incrDeleteReservationFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("deleteReservation Failed.", e); + } + + // throw an exception + routerMetrics.incrDeleteReservationFailedRetrieved(); + throw new YarnRuntimeException("deleteReservation Failed, reservationId = " + reservationId); } @Override @@ -1497,7 +2174,16 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { throw new IllegalArgumentException("Parameter error, the reservationId is empty or null."); } + // Check that the reservationId format is accurate try { + RouterServerUtil.validateReservationId(reservationId); + } catch (IllegalArgumentException e) { + routerMetrics.incrListReservationFailedRetrieved(); + throw e; + } + + try { + long startTime1 = clock.getTime(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByReservationId(reservationId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); @@ -1505,11 +2191,13 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { Response response = interceptor.listReservation(queue, reservationId, startTime, endTime, includeResourceAllocations, hsrCopy); if (response != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededListReservationRetrieved(stopTime - startTime1); return response; } } catch (YarnException e) { routerMetrics.incrListReservationFailedRetrieved(); - RouterServerUtil.logAndThrowRunTimeException("listReservation Failed.", e); + RouterServerUtil.logAndThrowRunTimeException("listReservation error.", e); } routerMetrics.incrListReservationFailedRetrieved(); @@ -1521,47 +2209,80 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { String type) throws AuthorizationException { if (appId == null || appId.isEmpty()) { + routerMetrics.incrGetAppTimeoutFailedRetrieved(); throw new IllegalArgumentException("Parameter error, the appId is empty or null."); } + // Check that the appId format is accurate + try { + ApplicationId.fromString(appId); + } catch (IllegalArgumentException e) { + routerMetrics.incrGetAppTimeoutFailedRetrieved(); + throw e; + } + if (type == null || type.isEmpty()) { + routerMetrics.incrGetAppTimeoutFailedRetrieved(); throw new IllegalArgumentException("Parameter error, the type is empty or null."); } try { + long startTime = clock.getTime(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.getAppTimeout(hsr, appId, type); + AppTimeoutInfo appTimeoutInfo = interceptor.getAppTimeout(hsr, appId, type); + if (appTimeoutInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetAppTimeoutRetrieved((stopTime - startTime)); + return appTimeoutInfo; + } } catch (IllegalArgumentException e) { + routerMetrics.incrGetAppTimeoutFailedRetrieved(); 
RouterServerUtil.logAndThrowRunTimeException(e, "Unable to get the getAppTimeout appId: %s.", appId); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException("getAppTimeout Failed.", e); + routerMetrics.incrGetAppTimeoutFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("getAppTimeout error.", e); } - return null; + routerMetrics.incrGetAppTimeoutFailedRetrieved(); + throw new RuntimeException("getAppTimeout Failed."); } @Override public AppTimeoutsInfo getAppTimeouts(HttpServletRequest hsr, String appId) throws AuthorizationException { - if (appId == null || appId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appId is empty or null."); + // Check that the appId format is accurate + try { + RouterServerUtil.validateApplicationId(appId); + } catch (IllegalArgumentException e) { + routerMetrics.incrGetAppTimeoutsFailedRetrieved(); + throw e; } try { + long startTime = clock.getTime(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.getAppTimeouts(hsr, appId); + AppTimeoutsInfo appTimeoutsInfo = interceptor.getAppTimeouts(hsr, appId); + if (appTimeoutsInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetAppTimeoutsRetrieved((stopTime - startTime)); + return appTimeoutsInfo; + } } catch (IllegalArgumentException e) { + routerMetrics.incrGetAppTimeoutsFailedRetrieved(); RouterServerUtil.logAndThrowRunTimeException(e, "Unable to get the getAppTimeouts appId: %s.", appId); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException("getAppTimeouts Failed.", e); + routerMetrics.incrGetAppTimeoutsFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("getAppTimeouts error.", e); } - return null; + + routerMetrics.incrGetAppTimeoutsFailedRetrieved(); + throw new RuntimeException("getAppTimeouts Failed."); } @Override @@ -1569,112 +2290,215 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { HttpServletRequest hsr, String appId) throws AuthorizationException, YarnException, InterruptedException, IOException { - if (appId == null || appId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appId is empty or null."); + // Check that the appId format is accurate + try { + RouterServerUtil.validateApplicationId(appId); + } catch (IllegalArgumentException e) { + routerMetrics.incrUpdateApplicationTimeoutsRetrieved(); + throw e; } if (appTimeout == null) { + routerMetrics.incrUpdateApplicationTimeoutsRetrieved(); throw new IllegalArgumentException("Parameter error, the appTimeout is null."); } try { + long startTime = Time.now(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.updateApplicationTimeout(appTimeout, hsr, appId); + Response response = interceptor.updateApplicationTimeout(appTimeout, hsr, appId); + if (response != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededUpdateAppTimeoutsRetrieved((stopTime - startTime)); + return response; + } } catch (IllegalArgumentException e) { + routerMetrics.incrUpdateApplicationTimeoutsRetrieved(); RouterServerUtil.logAndThrowRunTimeException(e, "Unable to get the updateApplicationTimeout 
appId: %s.", appId); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException("updateApplicationTimeout Failed.", e); + routerMetrics.incrUpdateApplicationTimeoutsRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("updateApplicationTimeout error.", e); } - return null; + + routerMetrics.incrUpdateApplicationTimeoutsRetrieved(); + throw new RuntimeException("updateApplicationTimeout Failed."); } @Override public AppAttemptsInfo getAppAttempts(HttpServletRequest hsr, String appId) { - if (appId == null || appId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appId is empty or null."); + // Check that the appId format is accurate + try { + RouterServerUtil.validateApplicationId(appId); + } catch (IllegalArgumentException e) { + routerMetrics.incrAppAttemptsFailedRetrieved(); + throw e; } try { + long startTime = Time.now(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.getAppAttempts(hsr, appId); + AppAttemptsInfo appAttemptsInfo = interceptor.getAppAttempts(hsr, appId); + if (appAttemptsInfo != null) { + long stopTime = Time.now(); + routerMetrics.succeededAppAttemptsRetrieved(stopTime - startTime); + return appAttemptsInfo; + } } catch (IllegalArgumentException e) { + routerMetrics.incrAppAttemptsFailedRetrieved(); RouterServerUtil.logAndThrowRunTimeException(e, "Unable to get the AppAttempt appId: %s.", appId); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException("getAppAttempts Failed.", e); + routerMetrics.incrAppAttemptsFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("getAppAttempts error.", e); } - return null; + + routerMetrics.incrAppAttemptsFailedRetrieved(); + throw new RuntimeException("getAppAttempts Failed."); } @Override public RMQueueAclInfo checkUserAccessToQueue(String queue, String username, - String queueAclType, HttpServletRequest hsr) { - throw new NotImplementedException("Code is not implemented"); + String queueAclType, HttpServletRequest hsr) throws AuthorizationException { + + // Parameter Verification + if (queue == null || queue.isEmpty()) { + routerMetrics.incrCheckUserAccessToQueueFailedRetrieved(); + throw new IllegalArgumentException("Parameter error, the queue is empty or null."); + } + + if (username == null || username.isEmpty()) { + routerMetrics.incrCheckUserAccessToQueueFailedRetrieved(); + throw new IllegalArgumentException("Parameter error, the username is empty or null."); + } + + if (queueAclType == null || queueAclType.isEmpty()) { + routerMetrics.incrCheckUserAccessToQueueFailedRetrieved(); + throw new IllegalArgumentException("Parameter error, the queueAclType is empty or null."); + } + + // Traverse SubCluster and call checkUserAccessToQueue Api + try { + long startTime = Time.now(); + Map subClustersActive = getActiveSubclusters(); + final HttpServletRequest hsrCopy = clone(hsr); + Class[] argsClasses = new Class[]{String.class, String.class, String.class, + HttpServletRequest.class}; + Object[] args = new Object[]{queue, username, queueAclType, hsrCopy}; + ClientMethod remoteMethod = new ClientMethod("checkUserAccessToQueue", argsClasses, args); + Map rmQueueAclInfoMap = + invokeConcurrent(subClustersActive.values(), remoteMethod, RMQueueAclInfo.class); + FederationRMQueueAclInfo aclInfo = new FederationRMQueueAclInfo(); + 
rmQueueAclInfoMap.forEach((subClusterInfo, rMQueueAclInfo) -> { + SubClusterId subClusterId = subClusterInfo.getSubClusterId(); + rMQueueAclInfo.setSubClusterId(subClusterId.getId()); + aclInfo.getList().add(rMQueueAclInfo); + }); + long stopTime = Time.now(); + routerMetrics.succeededCheckUserAccessToQueueRetrieved(stopTime - startTime); + return aclInfo; + } catch (NotFoundException e) { + routerMetrics.incrCheckUserAccessToQueueFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("Get all active sub cluster(s) error.", e); + } catch (YarnException | IOException e) { + routerMetrics.incrCheckUserAccessToQueueFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException("checkUserAccessToQueue error.", e); + } + + routerMetrics.incrCheckUserAccessToQueueFailedRetrieved(); + throw new RuntimeException("checkUserAccessToQueue error."); } @Override public AppAttemptInfo getAppAttempt(HttpServletRequest req, HttpServletResponse res, String appId, String appAttemptId) { - if (appId == null || appId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appId is empty or null."); - } - if (appAttemptId == null || appAttemptId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appAttemptId is empty or null."); - } - + // Check that the appId/appAttemptId format is accurate try { - SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); + RouterServerUtil.validateApplicationId(appId); + RouterServerUtil.validateApplicationAttemptId(appAttemptId); + } catch (IllegalArgumentException e) { + routerMetrics.incrAppAttemptReportFailedRetrieved(); + throw e; + } + // Call the getAppAttempt method + try { + long startTime = Time.now(); + SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.getAppAttempt(req, res, appId, appAttemptId); + AppAttemptInfo appAttemptInfo = interceptor.getAppAttempt(req, res, appId, appAttemptId); + if (appAttemptInfo != null) { + long stopTime = Time.now(); + routerMetrics.succeededAppAttemptReportRetrieved(stopTime - startTime); + return appAttemptInfo; + } } catch (IllegalArgumentException e) { + routerMetrics.incrAppAttemptReportFailedRetrieved(); RouterServerUtil.logAndThrowRunTimeException(e, - "Unable to get the AppAttempt appId: %s, appAttemptId: %s.", appId, appAttemptId); + "Unable to getAppAttempt by appId: %s, appAttemptId: %s.", appId, appAttemptId); } catch (YarnException e) { - RouterServerUtil.logAndThrowRunTimeException("getContainer Failed.", e); + routerMetrics.incrAppAttemptReportFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException(e, + "getAppAttempt error, appId: %s, appAttemptId: %s.", appId, appAttemptId); } - return null; + routerMetrics.incrAppAttemptReportFailedRetrieved(); + throw RouterServerUtil.logAndReturnRunTimeException( + "getAppAttempt failed, appId: %s, appAttemptId: %s.", appId, appAttemptId); } @Override public ContainersInfo getContainers(HttpServletRequest req, HttpServletResponse res, String appId, String appAttemptId) { - ContainersInfo containersInfo = new ContainersInfo(); - - Map subClustersActive; + // Check that the appId/appAttemptId format is accurate try { - subClustersActive = getActiveSubclusters(); - } catch (NotFoundException e) { - LOG.error("Get all active sub cluster(s) error.", e); - return containersInfo; + 
RouterServerUtil.validateApplicationId(appId); + RouterServerUtil.validateApplicationAttemptId(appAttemptId); + } catch (IllegalArgumentException e) { + routerMetrics.incrGetContainersFailedRetrieved(); + throw e; } try { + long startTime = clock.getTime(); + ContainersInfo containersInfo = new ContainersInfo(); + Map subClustersActive = getActiveSubclusters(); Class[] argsClasses = new Class[]{ HttpServletRequest.class, HttpServletResponse.class, String.class, String.class}; Object[] args = new Object[]{req, res, appId, appAttemptId}; ClientMethod remoteMethod = new ClientMethod("getContainers", argsClasses, args); Map containersInfoMap = invokeConcurrent(subClustersActive.values(), remoteMethod, ContainersInfo.class); - if (containersInfoMap != null) { + if (containersInfoMap != null && !containersInfoMap.isEmpty()) { containersInfoMap.values().forEach(containers -> containersInfo.addAll(containers.getContainers())); } - } catch (Exception ex) { - LOG.error("Failed to return GetContainers.", ex); + if (containersInfo != null) { + long stopTime = clock.getTime(); + routerMetrics.succeededGetContainersRetrieved(stopTime - startTime); + return containersInfo; + } + } catch (NotFoundException e) { + routerMetrics.incrGetContainersFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException(e, "getContainers error, appId = %s, " + + " appAttemptId = %s, Probably getActiveSubclusters error.", appId, appAttemptId); + } catch (IOException | YarnException e) { + routerMetrics.incrGetContainersFailedRetrieved(); + RouterServerUtil.logAndThrowRunTimeException(e, "getContainers error, appId = %s, " + + " appAttemptId = %s.", appId, appAttemptId); } - return containersInfo; + routerMetrics.incrGetContainersFailedRetrieved(); + throw RouterServerUtil.logAndReturnRunTimeException( + "getContainers failed, appId: %s, appAttemptId: %s.", appId, appAttemptId); } @Override @@ -1682,32 +2506,45 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { HttpServletResponse res, String appId, String appAttemptId, String containerId) { - if (appId == null || appId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appId is empty or null."); - } - if (appAttemptId == null || appAttemptId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the appAttemptId is empty or null."); - } - if (containerId == null || containerId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the containerId is empty or null."); + // FederationInterceptorREST#getContainer is logically + // the same as FederationClientInterceptor#getContainerReport, + // so use the same Metric. 
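Most per-application REST methods in this interceptor follow the same validate-then-route shape: the id is parsed up front so malformed input fails fast (incrementing the corresponding failed-retrieved metric), the embedded ApplicationId determines the home sub-cluster, and the call is then delegated to that sub-cluster's DefaultRequestInterceptorREST while the elapsed time is recorded on success. Below is a small self-contained sketch of that shape for a container id, with the routing lookup replaced by a hypothetical stand-in.

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ContainerId;

public class ValidateThenRouteSketch {

  public static String resolveHomeSubCluster(String containerId) {
    // Fail fast on malformed ids, mirroring the validate* checks used above.
    final ContainerId containerIdObj;
    try {
      containerIdObj = ContainerId.fromString(containerId);
    } catch (IllegalArgumentException e) {
      // A real interceptor would also bump a "failed retrieved" metric here.
      throw new IllegalArgumentException(
          "Parameter error, invalid containerId: " + containerId, e);
    }
    // The ApplicationId embedded in the container id selects the home sub-cluster.
    ApplicationId appId = containerIdObj.getApplicationAttemptId().getApplicationId();
    return lookupHomeSubCluster(appId);
  }

  // Hypothetical stand-in for the state-store lookup done by the Router.
  private static String lookupHomeSubCluster(ApplicationId appId) {
    return (appId.getId() % 2 == 0) ? "SC-1" : "SC-2";
  }

  public static void main(String[] args) {
    System.out.println(resolveHomeSubCluster("container_1666666666666_0001_01_000001"));
  }
}

signalToContainer further below derives the ApplicationId from the ContainerId in exactly this way before resolving the home sub-cluster and forwarding the request.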
+ + // Check that the appId/appAttemptId/containerId format is accurate + try { + RouterServerUtil.validateApplicationId(appId); + RouterServerUtil.validateApplicationAttemptId(appAttemptId); + RouterServerUtil.validateContainerId(containerId); + } catch (IllegalArgumentException e) { + routerMetrics.incrGetContainerReportFailedRetrieved(); + throw e; } try { + long startTime = Time.now(); SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(appId); - DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.getContainer(req, res, appId, appAttemptId, containerId); + ContainerInfo containerInfo = + interceptor.getContainer(req, res, appId, appAttemptId, containerId); + if (containerInfo != null) { + long stopTime = Time.now(); + routerMetrics.succeededGetContainerReportRetrieved(stopTime - startTime); + return containerInfo; + } } catch (IllegalArgumentException e) { String msg = String.format( "Unable to get the AppAttempt appId: %s, appAttemptId: %s, containerId: %s.", appId, appAttemptId, containerId); + routerMetrics.incrGetContainerReportFailedRetrieved(); RouterServerUtil.logAndThrowRunTimeException(msg, e); } catch (YarnException e) { + routerMetrics.incrGetContainerReportFailedRetrieved(); RouterServerUtil.logAndThrowRunTimeException("getContainer Failed.", e); } - return null; + routerMetrics.incrGetContainerReportFailedRetrieved(); + throw new RuntimeException("getContainer Failed."); } @Override @@ -1735,31 +2572,45 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { public Response signalToContainer(String containerId, String command, HttpServletRequest req) { - if (containerId == null || containerId.isEmpty()) { - throw new IllegalArgumentException("Parameter error, the containerId is empty or null."); + // Check if containerId is empty or null + try { + RouterServerUtil.validateContainerId(containerId); + } catch (IllegalArgumentException e) { + routerMetrics.incrSignalToContainerFailedRetrieved(); + throw e; } + // Check if command is empty or null if (command == null || command.isEmpty()) { + routerMetrics.incrSignalToContainerFailedRetrieved(); throw new IllegalArgumentException("Parameter error, the command is empty or null."); } try { + long startTime = Time.now(); + ContainerId containerIdObj = ContainerId.fromString(containerId); ApplicationId applicationId = containerIdObj.getApplicationAttemptId().getApplicationId(); - SubClusterInfo subClusterInfo = getHomeSubClusterInfoByAppId(applicationId.toString()); - DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( subClusterInfo.getSubClusterId(), subClusterInfo.getRMWebServiceAddress()); - return interceptor.signalToContainer(containerId, command, req); + Response response = interceptor.signalToContainer(containerId, command, req); + if (response != null) { + long stopTime = Time.now(); + routerMetrics.succeededSignalToContainerRetrieved(stopTime - startTime); + return response; + } } catch (YarnException e) { + routerMetrics.incrSignalToContainerFailedRetrieved(); RouterServerUtil.logAndThrowRunTimeException("signalToContainer Failed.", e); } catch (AuthorizationException e) { + routerMetrics.incrSignalToContainerFailedRetrieved(); RouterServerUtil.logAndThrowRunTimeException("signalToContainer Author Failed.", e); } - return null; + routerMetrics.incrSignalToContainerFailedRetrieved(); + throw new RuntimeException("signalToContainer 
Failed."); } @Override @@ -1774,9 +2625,16 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { Map results = new HashMap<>(); - // Send the requests in parallel - CompletionService compSvc = new ExecutorCompletionService<>(this.threadpool); + // If there is a sub-cluster access error, + // we should choose whether to throw exception information according to user configuration. + // Send the requests in parallel. + CompletionService> compSvc = new ExecutorCompletionService<>(threadpool); + // This part of the code should be able to expose the accessed Exception information. + // We use Pair to store related information. The left value of the Pair is the response, + // and the right value is the exception. + // If the request is normal, the response is not empty and the exception is empty; + // if the request is abnormal, the response is empty and the exception is not empty. for (final SubClusterInfo info : clusterIds) { compSvc.submit(() -> { DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster( @@ -1786,29 +2644,42 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { getMethod(request.getMethodName(), request.getTypes()); Object retObj = method.invoke(interceptor, request.getParams()); R ret = clazz.cast(retObj); - return ret; + return new SubClusterResult<>(info, ret, null); } catch (Exception e) { LOG.error("SubCluster {} failed to call {} method.", info.getSubClusterId(), request.getMethodName(), e); - return null; + return new SubClusterResult<>(info, null, e); } }); } - clusterIds.stream().forEach(clusterId -> { + for (int i = 0; i < clusterIds.size(); i++) { + SubClusterInfo subClusterInfo = null; try { - Future future = compSvc.take(); - R response = future.get(); + Future> future = compSvc.take(); + SubClusterResult result = future.get(); + subClusterInfo = result.getSubClusterInfo(); + + R response = result.getResponse(); if (response != null) { - results.put(clusterId, response); + results.put(subClusterInfo, response); + } + + Exception exception = result.getException(); + + // If allowPartialResult=false, it means that if an exception occurs in a subCluster, + // an exception will be thrown directly. + if (!allowPartialResult && exception != null) { + throw exception; } } catch (Throwable e) { - String msg = String.format("SubCluster %s failed to %s report.", - clusterId, request.getMethodName()); - LOG.warn(msg, e); - throw new YarnRuntimeException(msg, e); + String subClusterId = subClusterInfo != null ? 
+ subClusterInfo.getSubClusterId().getId() : "UNKNOWN"; + LOG.error("SubCluster {} failed to {} report.", subClusterId, request.getMethodName(), e); + throw new YarnRuntimeException(e.getCause().getMessage(), e); } - }); + } + return results; } @@ -1831,6 +2702,8 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { } subClusterInfo = federationFacade.getSubCluster(subClusterId); return subClusterInfo; + } catch (IllegalArgumentException e){ + throw new IllegalArgumentException(e); } catch (YarnException e) { RouterServerUtil.logAndThrowException(e, "Get HomeSubClusterInfo by applicationId %s failed.", appId); @@ -1867,4 +2740,25 @@ public class FederationInterceptorREST extends AbstractRESTRequestInterceptor { public LRUCacheHashMap getAppInfosCaches() { return appInfosCaches; } + + @VisibleForTesting + public Map getInterceptors() { + return interceptors; + } + + public void setAllowPartialResult(boolean allowPartialResult) { + this.allowPartialResult = allowPartialResult; + } + + @VisibleForTesting + public Map invokeConcurrentGetNodeLabel() + throws IOException, YarnException { + Map subClustersActive = getActiveSubclusters(); + Class[] argsClasses = new Class[]{String.class}; + Object[] args = new Object[]{null}; + ClientMethod remoteMethod = new ClientMethod("getNodes", argsClasses, args); + Map nodesMap = + invokeConcurrent(subClustersActive.values(), remoteMethod, NodesInfo.class); + return nodesMap; + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/MetricsOverviewTable.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/MetricsOverviewTable.java new file mode 100644 index 00000000000..1a157a10ce0 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/MetricsOverviewTable.java @@ -0,0 +1,264 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
    + * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.router.webapp; + +import com.google.inject.Inject; +import com.sun.jersey.api.client.Client; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerOverviewInfo; +import org.apache.hadoop.yarn.server.router.Router; +import org.apache.hadoop.yarn.server.router.webapp.dao.RouterClusterMetrics; +import org.apache.hadoop.yarn.server.router.webapp.dao.RouterSchedulerMetrics; +import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet; +import org.apache.hadoop.yarn.webapp.util.WebAppUtils; + +import java.io.IOException; +import java.util.Collection; +import java.util.List; + +public class MetricsOverviewTable extends RouterBlock { + + private final Router router; + + @Inject + MetricsOverviewTable(Router router, ViewContext ctx) { + super(router, ctx); + this.router = router; + } + + @Override + protected void render(Block html) { + // Initialize page styles + html.style(".metrics {margin-bottom:5px}"); + + // get routerClusterMetrics Info + ClusterMetricsInfo routerClusterMetricsInfo = getRouterClusterMetricsInfo(); + RouterClusterMetrics routerClusterMetrics = new RouterClusterMetrics(routerClusterMetricsInfo); + + // metrics div + Hamlet.DIV div = html.div().$class("metrics"); + try { + initFederationClusterAppsMetrics(div, routerClusterMetrics); + initFederationClusterNodesMetrics(div, routerClusterMetrics); + List subClusters = getSubClusterInfoList(); + initFederationClusterSchedulersMetrics(div, routerClusterMetrics, subClusters); + } catch (Exception e) { + LOG.error("MetricsOverviewTable init error.", e); + } + div.__(); + } + + protected void render(Block html, String subClusterId) { + // Initialize page styles + html.style(".metrics {margin-bottom:5px}"); + + // get subClusterId ClusterMetrics Info + ClusterMetricsInfo clusterMetricsInfo = + getClusterMetricsInfoBySubClusterId(subClusterId); + RouterClusterMetrics routerClusterMetrics = + new RouterClusterMetrics(clusterMetricsInfo, subClusterId); + + // metrics div + Hamlet.DIV div = html.div().$class("metrics"); + try { + initFederationClusterAppsMetrics(div, routerClusterMetrics); + initFederationClusterNodesMetrics(div, routerClusterMetrics); + Collection subClusters = getSubClusterInfoList(subClusterId); + initFederationClusterSchedulersMetrics(div, routerClusterMetrics, subClusters); + } catch (Exception e) { + LOG.error("MetricsOverviewTable init error.", e); + } + div.__(); + } + + /** + * Init Federation Cluster Apps Metrics. + * Contains App information, resource usage information. + * + * @param div data display div. + * @param metrics data metric information. + */ + private void initFederationClusterAppsMetrics(Hamlet.DIV div, + RouterClusterMetrics metrics) { + div.h3(metrics.getWebPageTitlePrefix() + " Cluster Metrics"). + table("#metricsoverview"). + thead().$class("ui-widget-header"). 
+ // Initialize table header information + tr(). + th().$class("ui-state-default").__("Apps Submitted").__(). + th().$class("ui-state-default").__("Apps Pending").__(). + th().$class("ui-state-default").__("Apps Running").__(). + th().$class("ui-state-default").__("Apps Completed").__(). + th().$class("ui-state-default").__("Containers Running").__(). + th().$class("ui-state-default").__("Used Resources").__(). + th().$class("ui-state-default").__("Total Resources").__(). + th().$class("ui-state-default").__("Reserved Resources").__(). + th().$class("ui-state-default").__("Physical Mem Used %").__(). + th().$class("ui-state-default").__("Physical VCores Used %").__(). + __(). + __(). + // Initialize table data information + tbody().$class("ui-widget-content"). + tr(). + td(metrics.getAppsSubmitted()). + td(metrics.getAppsPending()). + td(String.valueOf(metrics.getAppsRunning())). + td(metrics.getAppsCompleted()). + td(metrics.getAllocatedContainers()). + td(metrics.getUsedResources()). + td(metrics.getTotalResources()). + td(metrics.getReservedResources()). + td(metrics.getUtilizedMBPercent()). + td(metrics.getUtilizedVirtualCoresPercent()). + __(). + __().__(); + } + + /** + * Init Federation Cluster Nodes Metrics. + * + * @param div data display div. + * @param metrics data metric information. + */ + private void initFederationClusterNodesMetrics(Hamlet.DIV div, + RouterClusterMetrics metrics) { + div.h3(metrics.getWebPageTitlePrefix() + " Cluster Nodes Metrics"). + table("#nodemetricsoverview"). + thead().$class("ui-widget-header"). + // Initialize table header information + tr(). + th().$class("ui-state-default").__("Active Nodes").__(). + th().$class("ui-state-default").__("Decommissioning Nodes").__(). + th().$class("ui-state-default").__("Decommissioned Nodes").__(). + th().$class("ui-state-default").__("Lost Nodes").__(). + th().$class("ui-state-default").__("Unhealthy Nodes").__(). + th().$class("ui-state-default").__("Rebooted Nodes").__(). + th().$class("ui-state-default").__("Shutdown Nodes").__(). + __(). + __(). + // Initialize table data information + tbody().$class("ui-widget-content"). + tr(). + td(String.valueOf(metrics.getActiveNodes())). + td(String.valueOf(metrics.getDecommissioningNodes())). + td(String.valueOf(metrics.getDecommissionedNodes())). + td(String.valueOf(metrics.getLostNodes())). + td(String.valueOf(metrics.getUnhealthyNodes())). + td(String.valueOf(metrics.getRebootedNodes())). + td(String.valueOf(metrics.getShutdownNodes())). + __(). + __().__(); + } + + /** + * Init Federation Cluster SchedulersMetrics. + * + * @param div data display div. + * @param metrics data metric information. + * @param subclusters active subcluster List. + * @throws YarnException yarn error. + * @throws IOException io error. + * @throws InterruptedException interrupt error. + */ + private void initFederationClusterSchedulersMetrics(Hamlet.DIV div, + RouterClusterMetrics metrics, Collection subclusters) + throws YarnException, IOException, InterruptedException { + + Hamlet.TBODY>> fsMetricsScheduleTr = + div.h3(metrics.getWebPageTitlePrefix() + " Scheduler Metrics"). + table("#schedulermetricsoverview"). + thead().$class("ui-widget-header"). + tr(). + th().$class("ui-state-default").__("SubCluster").__(). + th().$class("ui-state-default").__("Scheduler Type").__(). + th().$class("ui-state-default").__("Scheduling Resource Type").__(). + th().$class("ui-state-default").__("Minimum Allocation").__(). + th().$class("ui-state-default").__("Maximum Allocation").__(). 
+ th().$class("ui-state-default").__("Maximum Cluster Application Priority").__(). + th().$class("ui-state-default").__("Scheduler Busy %").__(). + th().$class("ui-state-default").__("RM Dispatcher EventQueue Size").__(). + th().$class("ui-state-default") + .__("Scheduler Dispatcher EventQueue Size").__(). + __(). + __(). + tbody().$class("ui-widget-content"); + + boolean isEnabled = isYarnFederationEnabled(); + + // If Federation mode is not enabled or there is currently no SubCluster available, + // each column in the list should be displayed as N/A + if (!isEnabled || subclusters == null || subclusters.isEmpty()) { + fsMetricsScheduleTr.tr(). + td(UNAVAILABLE). + td(UNAVAILABLE). + td(UNAVAILABLE). + td(UNAVAILABLE). + td(UNAVAILABLE). + td(UNAVAILABLE). + td(UNAVAILABLE). + td(UNAVAILABLE). + td(UNAVAILABLE) + .__(); + } else { + initSubClusterOverViewTable(metrics, fsMetricsScheduleTr, subclusters); + } + + fsMetricsScheduleTr.__().__(); + } + + private void initSubClusterOverViewTable(RouterClusterMetrics metrics, + Hamlet.TBODY>> fsMetricsScheduleTr, + Collection subclusters) { + + // configuration + Configuration config = this.router.getConfig(); + + Client client = RouterWebServiceUtil.createJerseyClient(config); + + // Traverse all SubClusters to get cluster information. + for (SubClusterInfo subcluster : subclusters) { + + // Call the RM interface to obtain schedule information + String webAppAddress = WebAppUtils.getHttpSchemePrefix(config) + + subcluster.getRMWebServiceAddress(); + + SchedulerOverviewInfo typeInfo = RouterWebServiceUtil + .genericForward(webAppAddress, null, SchedulerOverviewInfo.class, HTTPMethods.GET, + RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.SCHEDULER_OVERVIEW, null, null, + config, client); + RouterSchedulerMetrics rsMetrics = new RouterSchedulerMetrics(subcluster, metrics, typeInfo); + + // Basic information + fsMetricsScheduleTr.tr(). + td(rsMetrics.getSubCluster()). + td(rsMetrics.getSchedulerType()). + td(rsMetrics.getSchedulingResourceType()). + td(rsMetrics.getMinimumAllocation()). + td(rsMetrics.getMaximumAllocation()). + td(rsMetrics.getApplicationPriority()). + td(rsMetrics.getSchedulerBusy()). + td(rsMetrics.getRmDispatcherEventQueueSize()). + td(rsMetrics.getSchedulerDispatcherEventQueueSize()). + __(); + } + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NavBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NavBlock.java index 9c39bb7b7a2..2266370ee95 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NavBlock.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NavBlock.java @@ -18,29 +18,54 @@ package org.apache.hadoop.yarn.server.router.webapp; -import org.apache.hadoop.yarn.webapp.view.HtmlBlock; +import com.google.inject.Inject; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.yarn.server.router.Router; +import org.apache.hadoop.yarn.server.webapp.WebPageUtils; +import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet; +import java.util.List; /** * Navigation block for the Router Web UI. 
*/ -public class NavBlock extends HtmlBlock { +public class NavBlock extends RouterBlock { + + private Router router; + + @Inject + public NavBlock(Router router, ViewContext ctx) { + super(router, ctx); + this.router = router; + } @Override public void render(Block html) { - html. - div("#nav"). + Hamlet.UL> mainList = html.div("#nav"). h3("Cluster"). ul(). - li().a(url(""), "About").__(). - li().a(url("federation"), "Federation").__(). - li().a(url("nodes"), "Nodes").__(). - li().a(url("apps"), "Applications").__(). - __(). - h3("Tools"). - ul(). - li().a("/conf", "Configuration").__(). - li().a("/logs", "Local logs").__(). - li().a("/stacks", "Server stacks").__(). - li().a("/jmx?qry=Hadoop:*", "Server metrics").__().__().__(); + li().a(url(""), "About").__(). + li().a(url("federation"), "Federation").__(); + + List subClusterIds = getActiveSubClusterIds(); + + // ### nodes info + initNodesMenu(mainList, subClusterIds); + + // ### nodelabels info + initNodeLabelsMenu(mainList, subClusterIds); + + // ### applications info + initApplicationsMenu(mainList, subClusterIds); + + // ### tools + Hamlet.DIV sectionBefore = mainList.__(); + Configuration conf = new Configuration(); + Hamlet.UL> tools = WebPageUtils.appendToolSection(sectionBefore, conf); + + if (tools == null) { + return; + } + + tools.__().__(); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodeLabelsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodeLabelsBlock.java new file mode 100644 index 00000000000..62e2b5d8537 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodeLabelsBlock.java @@ -0,0 +1,143 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.yarn.server.router.webapp; + +import com.google.inject.Inject; +import com.sun.jersey.api.client.Client; +import org.apache.commons.lang3.StringUtils; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.yarn.api.records.NodeLabel; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo; +import org.apache.hadoop.yarn.server.router.Router; +import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet; +import org.apache.hadoop.yarn.webapp.util.WebAppUtils; + +import static org.apache.hadoop.yarn.webapp.YarnWebParams.NODE_SC; + +/** + * Navigation block for the Router Web UI. + */ +public class NodeLabelsBlock extends RouterBlock { + + private Router router; + + @Inject + public NodeLabelsBlock(Router router, ViewContext ctx) { + super(router, ctx); + this.router = router; + } + + @Override + protected void render(Block html) { + boolean isEnabled = isYarnFederationEnabled(); + + // Get subClusterName + String subClusterName = $(NODE_SC); + + NodeLabelsInfo nodeLabelsInfo = null; + if (StringUtils.isNotEmpty(subClusterName)) { + nodeLabelsInfo = getSubClusterNodeLabelsInfo(subClusterName); + } else { + nodeLabelsInfo = getYarnFederationNodeLabelsInfo(isEnabled); + } + + initYarnFederationNodeLabelsOfCluster(nodeLabelsInfo, html); + } + + /** + * Get NodeLabels Info based on SubCluster. + * @return NodeLabelsInfo. 
+ */ + private NodeLabelsInfo getSubClusterNodeLabelsInfo(String subCluster) { + try { + SubClusterId subClusterId = SubClusterId.newInstance(subCluster); + FederationStateStoreFacade facade = FederationStateStoreFacade.getInstance(); + SubClusterInfo subClusterInfo = facade.getSubCluster(subClusterId); + + if (subClusterInfo != null) { + // Prepare webAddress + String webAddress = subClusterInfo.getRMWebServiceAddress(); + String herfWebAppAddress = ""; + if (webAddress != null && !webAddress.isEmpty()) { + herfWebAppAddress = + WebAppUtils.getHttpSchemePrefix(this.router.getConfig()) + webAddress; + return getSubClusterNodeLabelsByWebAddress(herfWebAppAddress); + } + } + } catch (Exception e) { + LOG.error("get NodeLabelsInfo From SubCluster = {} error.", subCluster, e); + } + return null; + } + + private NodeLabelsInfo getYarnFederationNodeLabelsInfo(boolean isEnabled) { + if (isEnabled) { + String webAddress = WebAppUtils.getRouterWebAppURLWithScheme(this.router.getConfig()); + return getSubClusterNodeLabelsByWebAddress(webAddress); + } + return null; + } + + private NodeLabelsInfo getSubClusterNodeLabelsByWebAddress(String webAddress) { + Configuration conf = this.router.getConfig(); + Client client = RouterWebServiceUtil.createJerseyClient(conf); + NodeLabelsInfo nodes = RouterWebServiceUtil + .genericForward(webAddress, null, NodeLabelsInfo.class, HTTPMethods.GET, + RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.GET_RM_NODE_LABELS, null, null, conf, + client); + client.destroy(); + return nodes; + } + + private void initYarnFederationNodeLabelsOfCluster(NodeLabelsInfo nodeLabelsInfo, Block html) { + + Hamlet.TBODY> tbody = html.table("#nodelabels"). + thead(). + tr(). + th(".name", "Label Name"). + th(".type", "Label Type"). + th(".numOfActiveNMs", "Num Of Active NMs"). + th(".totalResource", "Total Resource"). + __().__(). + tbody(); + + if (nodeLabelsInfo != null) { + for (NodeLabelInfo info : nodeLabelsInfo.getNodeLabelsInfo()) { + Hamlet.TR>> row = + tbody.tr().td(info.getName().isEmpty() ? + NodeLabel.DEFAULT_NODE_LABEL_PARTITION : info.getName()); + String type = (info.getExclusivity()) ? "Exclusive Partition" : "Non Exclusive Partition"; + row = row.td(type); + int nActiveNMs = info.getActiveNMs(); + row = row.td(String.valueOf(nActiveNMs)); + PartitionInfo partitionInfo = info.getPartitionInfo(); + ResourceInfo available = partitionInfo.getResourceAvailable(); + row.td(available.toFormattedString()).__(); + } + } + + tbody.__().__(); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodeLabelsPage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodeLabelsPage.java new file mode 100644 index 00000000000..9b3cea46817 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodeLabelsPage.java @@ -0,0 +1,49 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.router.webapp; + +import org.apache.hadoop.yarn.webapp.SubView; +import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet; + +import static org.apache.hadoop.yarn.webapp.YarnWebParams.NODE_SC; +import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES_ID; + +/** + * Renders a block for the nodelabels with metrics information. + */ +public class NodeLabelsPage extends RouterView { + + @Override + protected void preHead(Hamlet.HTML<__> html) { + commonPreHead(html); + String type = $(NODE_SC); + String title = "Node labels of the cluster"; + if (type != null && !type.isEmpty()) { + title = title + " (" + type + ")"; + } + setTitle(title); + set(DATATABLES_ID, "nodelabels"); + setTableStyles(html, "nodelabels", ".healthStatus {width:10em}", ".healthReport {width:10em}"); + } + + @Override + protected Class content() { + return NodeLabelsBlock.class; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodesBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodesBlock.java index 4734cf6bbf3..7e92506e0d6 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodesBlock.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodesBlock.java @@ -19,50 +19,108 @@ package org.apache.hadoop.yarn.server.router.webapp; import com.sun.jersey.api.client.Client; +import org.apache.commons.collections.CollectionUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.util.StringUtils; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodesInfo; import org.apache.hadoop.yarn.server.router.Router; -import org.apache.hadoop.yarn.util.Times; import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet; import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet.TABLE; import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet.TBODY; import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet.TR; import org.apache.hadoop.yarn.webapp.util.WebAppUtils; -import org.apache.hadoop.yarn.webapp.view.HtmlBlock; import com.google.inject.Inject; +import java.util.Date; + +import static org.apache.hadoop.yarn.webapp.YarnWebParams.NODE_SC; + /** * Nodes block for the Router Web UI. 
*/ -public class NodesBlock extends HtmlBlock { - - private static final long BYTES_IN_MB = 1024 * 1024; +public class NodesBlock extends RouterBlock { private final Router router; @Inject NodesBlock(Router router, ViewContext ctx) { - super(ctx); + super(router, ctx); this.router = router; } @Override protected void render(Block html) { - // Get the node info from the federation + + boolean isEnabled = isYarnFederationEnabled(); + + // Get subClusterName + String subClusterName = $(NODE_SC); + + // We will try to get the subClusterName. + // If the subClusterName is not empty, + // it means that we need to get the Node list of a subCluster. + NodesInfo nodesInfo = null; + if (subClusterName != null && !subClusterName.isEmpty()) { + initSubClusterMetricsOverviewTable(html, subClusterName); + nodesInfo = getSubClusterNodesInfo(subClusterName); + } else { + // Metrics Overview Table + html.__(MetricsOverviewTable.class); + nodesInfo = getYarnFederationNodesInfo(isEnabled); + } + + // Initialize NodeInfo List + initYarnFederationNodesOfCluster(nodesInfo, html); + } + + private NodesInfo getYarnFederationNodesInfo(boolean isEnabled) { + if (isEnabled) { + String webAddress = WebAppUtils.getRouterWebAppURLWithScheme(this.router.getConfig()); + return getSubClusterNodesInfoByWebAddress(webAddress); + } + return null; + } + + private NodesInfo getSubClusterNodesInfo(String subCluster) { + try { + SubClusterId subClusterId = SubClusterId.newInstance(subCluster); + FederationStateStoreFacade facade = FederationStateStoreFacade.getInstance(); + SubClusterInfo subClusterInfo = facade.getSubCluster(subClusterId); + + if (subClusterInfo != null) { + // Prepare webAddress + String webAddress = subClusterInfo.getRMWebServiceAddress(); + String herfWebAppAddress = ""; + if (webAddress != null && !webAddress.isEmpty()) { + herfWebAppAddress = + WebAppUtils.getHttpSchemePrefix(this.router.getConfig()) + webAddress; + return getSubClusterNodesInfoByWebAddress(herfWebAppAddress); + } + } + } catch (Exception e) { + LOG.error("get NodesInfo From SubCluster = {} error.", subCluster, e); + } + return null; + } + + private NodesInfo getSubClusterNodesInfoByWebAddress(String webAddress) { Configuration conf = this.router.getConfig(); Client client = RouterWebServiceUtil.createJerseyClient(conf); - String webAppAddress = WebAppUtils.getRouterWebAppURLWithScheme(conf); NodesInfo nodes = RouterWebServiceUtil - .genericForward(webAppAddress, null, NodesInfo.class, HTTPMethods.GET, - RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.NODES, null, null, conf, - client); - - setTitle("Nodes"); + .genericForward(webAddress, null, NodesInfo.class, HTTPMethods.GET, + RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.NODES, null, null, conf, + client); + client.destroy(); + return nodes; + } + private void initYarnFederationNodesOfCluster(NodesInfo nodesInfo, Block html) { TBODY> tbody = html.table("#nodes").thead().tr() .th(".nodelabels", "Node Labels") .th(".rack", "Rack") @@ -79,34 +137,42 @@ public class NodesBlock extends HtmlBlock { .th(".nodeManagerVersion", "Version") .__().__().tbody(); - // Add nodes to the web UI - for (NodeInfo info : nodes.getNodes()) { - int usedMemory = (int) info.getUsedMemory(); - int availableMemory = (int) info.getAvailableMemory(); - TR>> row = tbody.tr(); - row.td().__(StringUtils.join(",", info.getNodeLabels())).__(); - row.td().__(info.getRack()).__(); - row.td().__(info.getState()).__(); - row.td().__(info.getNodeId()).__(); - boolean isInactive = false; - if (isInactive) { - 
row.td().__("N/A").__(); - } else { - String httpAddress = info.getNodeHTTPAddress(); - row.td().a("//" + httpAddress, httpAddress).__(); + if (nodesInfo != null && CollectionUtils.isNotEmpty(nodesInfo.getNodes())) { + for (NodeInfo info : nodesInfo.getNodes()) { + int usedMemory = (int) info.getUsedMemory(); + int availableMemory = (int) info.getAvailableMemory(); + TR>> row = tbody.tr(); + row.td().__(StringUtils.join(",", info.getNodeLabels())).__(); + row.td().__(info.getRack()).__(); + row.td().__(info.getState()).__(); + row.td().__(info.getNodeId()).__(); + boolean isInactive = false; + if (isInactive) { + row.td().__(UNAVAILABLE).__(); + } else { + String httpAddress = info.getNodeHTTPAddress(); + String herfWebAppAddress = ""; + if (httpAddress != null && !httpAddress.isEmpty()) { + herfWebAppAddress = + WebAppUtils.getHttpSchemePrefix(this.router.getConfig()) + httpAddress; + } + row.td().a(herfWebAppAddress, httpAddress).__(); + } + + row.td().br().$title(String.valueOf(info.getLastHealthUpdate())).__() + .__(new Date(info.getLastHealthUpdate())).__() + .td(info.getHealthReport()) + .td(String.valueOf(info.getNumContainers())).td().br() + .$title(String.valueOf(usedMemory)).__() + .__(StringUtils.byteDesc(usedMemory * BYTES_IN_MB)).__().td().br() + .$title(String.valueOf(availableMemory)).__() + .__(StringUtils.byteDesc(availableMemory * BYTES_IN_MB)).__() + .td(String.valueOf(info.getUsedVirtualCores())) + .td(String.valueOf(info.getAvailableVirtualCores())) + .td(info.getVersion()).__(); } - row.td().br().$title(String.valueOf(info.getLastHealthUpdate())).__() - .__(Times.format(info.getLastHealthUpdate())).__() - .td(info.getHealthReport()) - .td(String.valueOf(info.getNumContainers())).td().br() - .$title(String.valueOf(usedMemory)).__() - .__(StringUtils.byteDesc(usedMemory * BYTES_IN_MB)).__().td().br() - .$title(String.valueOf(availableMemory)).__() - .__(StringUtils.byteDesc(availableMemory * BYTES_IN_MB)).__() - .td(String.valueOf(info.getUsedVirtualCores())) - .td(String.valueOf(info.getAvailableVirtualCores())) - .td(info.getVersion()).__(); } + tbody.__().__(); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodesPage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodesPage.java index 7b2a3da7650..0723cff792d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodesPage.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodesPage.java @@ -18,7 +18,7 @@ package org.apache.hadoop.yarn.server.router.webapp; -import static org.apache.hadoop.yarn.webapp.YarnWebParams.NODE_STATE; +import static org.apache.hadoop.yarn.webapp.YarnWebParams.NODE_SC; import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES; import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES_ID; import static org.apache.hadoop.yarn.webapp.view.JQueryUI.initID; @@ -31,7 +31,7 @@ class NodesPage extends RouterView { @Override protected void preHead(Page.HTML<__> html) { commonPreHead(html); - String type = $(NODE_STATE); + String type = $(NODE_SC); String title = "Nodes of the cluster"; if (type != null && !type.isEmpty()) { title = title + " (" + type + ")"; diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RESTRequestInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RESTRequestInterceptor.java index 917809ad6cd..2724cdd5ad2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RESTRequestInterceptor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RESTRequestInterceptor.java @@ -23,6 +23,7 @@ import javax.servlet.http.HttpServletResponse; import org.apache.hadoop.conf.Configurable; import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServiceProtocol; +import org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService; import org.apache.hadoop.yarn.server.webapp.WebServices; import org.apache.hadoop.yarn.server.webapp.dao.AppAttemptInfo; import org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo; @@ -122,4 +123,18 @@ public interface RESTRequestInterceptor */ ContainerInfo getContainer(HttpServletRequest req, HttpServletResponse res, String appId, String appAttemptId, String containerId); + + /** + * Set RouterClientRMService. + * + * @param routerClientRMService routerClientRMService. + */ + void setRouterClientRMService(RouterClientRMService routerClientRMService); + + /** + * Get RouterClientRMService. + * + * @return RouterClientRMService + */ + RouterClientRMService getRouterClientRMService(); } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterBlock.java new file mode 100644 index 00000000000..31ab83daaaf --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterBlock.java @@ -0,0 +1,260 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.router.webapp; + +import com.sun.jersey.api.client.Client; +import org.apache.commons.collections.CollectionUtils; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.yarn.api.records.YarnApplicationState; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo; +import org.apache.hadoop.yarn.server.router.Router; +import org.apache.hadoop.yarn.webapp.hamlet2.Hamlet; +import org.apache.hadoop.yarn.webapp.util.WebAppUtils; +import org.apache.hadoop.yarn.webapp.view.HtmlBlock; + +import java.util.List; +import java.util.ArrayList; +import java.util.Map; +import java.util.Collection; +import java.util.Collections; +import java.util.Comparator; + +public abstract class RouterBlock extends HtmlBlock { + + private final Router router; + private final ViewContext ctx; + private final FederationStateStoreFacade facade; + private final Configuration conf; + + public RouterBlock(Router router, ViewContext ctx) { + super(ctx); + this.ctx = ctx; + this.router = router; + this.facade = FederationStateStoreFacade.getInstance(); + this.conf = this.router.getConfig(); + } + + /** + * Get RouterClusterMetrics Info. + * + * @return Router ClusterMetricsInfo. + */ + protected ClusterMetricsInfo getRouterClusterMetricsInfo() { + boolean isEnabled = isYarnFederationEnabled(); + if(isEnabled) { + String webAppAddress = WebAppUtils.getRouterWebAppURLWithScheme(conf); + Client client = RouterWebServiceUtil.createJerseyClient(conf); + ClusterMetricsInfo metrics = RouterWebServiceUtil + .genericForward(webAppAddress, null, ClusterMetricsInfo.class, HTTPMethods.GET, + RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.METRICS, null, null, + conf, client); + client.destroy(); + return metrics; + } + return null; + } + + /** + * Get a list of subclusters. + * + * @return subcluster List. + * @throws YarnException if the call to the getSubClusters is unsuccessful. + */ + protected List getSubClusterInfoList() throws YarnException { + + Map subClustersInfo = facade.getSubClusters(true); + + // Sort the SubClusters. + List subclusters = new ArrayList<>(); + subclusters.addAll(subClustersInfo.values()); + Comparator cmp = Comparator.comparing(o -> o.getSubClusterId()); + Collections.sort(subclusters, cmp); + + return subclusters; + } + + /** + * Whether Yarn Federation is enabled. + * + * @return true, enable yarn federation; false, not enable yarn federation; + */ + protected boolean isYarnFederationEnabled() { + boolean isEnabled = conf.getBoolean( + YarnConfiguration.FEDERATION_ENABLED, + YarnConfiguration.DEFAULT_FEDERATION_ENABLED); + return isEnabled; + } + + /** + * Get a list of SubClusterIds for ActiveSubClusters. + * + * @return list of SubClusterIds. 
+ */ + protected List getActiveSubClusterIds() { + List result = new ArrayList<>(); + try { + Map subClustersInfo = facade.getSubClusters(true); + subClustersInfo.values().stream().forEach(subClusterInfo -> { + result.add(subClusterInfo.getSubClusterId().getId()); + }); + } catch (Exception e) { + LOG.error("getActiveSubClusters error.", e); + } + return result; + } + + /** + * init SubCluster MetricsOverviewTable. + * + * @param html HTML Object. + * @param subclusterId subClusterId + */ + protected void initSubClusterMetricsOverviewTable(Block html, String subclusterId) { + MetricsOverviewTable metricsOverviewTable = new MetricsOverviewTable(this.router, this.ctx); + metricsOverviewTable.render(html, subclusterId); + } + + /** + * Get ClusterMetricsInfo By SubClusterId. + * + * @param subclusterId subClusterId + * @return SubCluster RM ClusterMetricsInfo + */ + protected ClusterMetricsInfo getClusterMetricsInfoBySubClusterId(String subclusterId) { + try { + SubClusterId subClusterId = SubClusterId.newInstance(subclusterId); + SubClusterInfo subClusterInfo = facade.getSubCluster(subClusterId); + if (subClusterInfo != null) { + Client client = RouterWebServiceUtil.createJerseyClient(this.conf); + // Call the RM interface to obtain schedule information + String webAppAddress = WebAppUtils.getHttpSchemePrefix(this.conf) + + subClusterInfo.getRMWebServiceAddress(); + ClusterMetricsInfo metrics = RouterWebServiceUtil + .genericForward(webAppAddress, null, ClusterMetricsInfo.class, HTTPMethods.GET, + RMWSConsts.RM_WEB_SERVICE_PATH + RMWSConsts.METRICS, null, null, + conf, client); + client.destroy(); + return metrics; + } + } catch (Exception e) { + LOG.error("getClusterMetricsInfoBySubClusterId subClusterId = {} error.", subclusterId, e); + } + return null; + } + + protected Collection getSubClusterInfoList(String subclusterId) { + try { + SubClusterId subClusterId = SubClusterId.newInstance(subclusterId); + SubClusterInfo subClusterInfo = facade.getSubCluster(subClusterId); + return Collections.singletonList(subClusterInfo); + } catch (Exception e) { + LOG.error("getSubClusterInfoList subClusterId = {} error.", subclusterId, e); + } + return null; + } + + public FederationStateStoreFacade getFacade() { + return facade; + } + + /** + * Initialize the Nodes menu. + * + * @param mainList HTML Object. + * @param subClusterIds subCluster List. + */ + protected void initNodesMenu(Hamlet.UL> mainList, + List subClusterIds) { + if (CollectionUtils.isNotEmpty(subClusterIds)) { + Hamlet.UL>>> nodesList = + mainList.li().a(url("nodes"), "Nodes").ul(). + $style("padding:0.3em 1em 0.1em 2em"); + + // ### nodes info + nodesList.li().__(); + for (String subClusterId : subClusterIds) { + nodesList.li().a(url("nodes", subClusterId), subClusterId).__(); + } + nodesList.__().__(); + } else { + mainList.li().a(url("nodes"), "Nodes").__(); + } + } + + /** + * Initialize the Applications menu. + * + * @param mainList HTML Object. + * @param subClusterIds subCluster List. + */ + protected void initApplicationsMenu(Hamlet.UL> mainList, + List subClusterIds) { + if (CollectionUtils.isNotEmpty(subClusterIds)) { + Hamlet.UL>>> apps = + mainList.li().a(url("apps"), "Applications").ul(); + apps.li().__(); + for (String subClusterId : subClusterIds) { + Hamlet.LI>>>> subClusterList = apps. 
+ li().a(url("apps", subClusterId), subClusterId); + Hamlet.UL>>>>> subAppStates = + subClusterList.ul().$style("padding:0.3em 1em 0.1em 2em"); + subAppStates.li().__(); + for (YarnApplicationState state : YarnApplicationState.values()) { + subAppStates. + li().a(url("apps", subClusterId, state.toString()), state.toString()).__(); + } + subAppStates.li().__().__(); + subClusterList.__(); + } + apps.__().__(); + } else { + mainList.li().a(url("apps"), "Applications").__(); + } + } + + /** + * Initialize the NodeLabels menu. + * + * @param mainList HTML Object. + * @param subClusterIds subCluster List. + */ + protected void initNodeLabelsMenu(Hamlet.UL> mainList, + List subClusterIds) { + + if (CollectionUtils.isNotEmpty(subClusterIds)) { + Hamlet.UL>>> nodesList = + mainList.li().a(url("nodelabels"), "Node Labels").ul(). + $style("padding:0.3em 1em 0.1em 2em"); + + // ### nodelabels info + nodesList.li().__(); + for (String subClusterId : subClusterIds) { + nodesList.li().a(url("nodelabels", subClusterId), subClusterId).__(); + } + nodesList.__().__(); + } else { + mainList.li().a(url("nodelabels"), "Node Labels").__(); + } + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterController.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterController.java index 38df0e7886c..7d7165f7cad 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterController.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterController.java @@ -56,4 +56,9 @@ public class RouterController extends Controller { setTitle("Nodes"); render(NodesPage.class); } + + public void nodeLabels() { + setTitle("Node Labels"); + render(NodeLabelsPage.class); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebApp.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebApp.java index ba07a1afba4..989a3d43b43 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebApp.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebApp.java @@ -24,6 +24,8 @@ import org.apache.hadoop.yarn.webapp.GenericExceptionHandler; import org.apache.hadoop.yarn.webapp.WebApp; import org.apache.hadoop.yarn.webapp.YarnWebParams; +import static org.apache.hadoop.yarn.util.StringHelper.pajoin; + /** * The Router webapp. 
*/ @@ -47,8 +49,9 @@ public class RouterWebApp extends WebApp implements YarnWebParams { route("/", RouterController.class); route("/cluster", RouterController.class, "about"); route("/about", RouterController.class, "about"); - route("/apps", RouterController.class, "apps"); - route("/nodes", RouterController.class, "nodes"); + route(pajoin("/apps", APP_SC, APP_STATE), RouterController.class, "apps"); + route(pajoin("/nodes", NODE_SC), RouterController.class, "nodes"); route("/federation", RouterController.class, "federation"); + route(pajoin("/nodelabels", NODE_SC), RouterController.class, "nodeLabels"); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebServiceUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebServiceUtil.java index 7423c8c907b..e33ce155079 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebServiceUtil.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebServiceUtil.java @@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.router.webapp; import static javax.servlet.http.HttpServletResponse.SC_NO_CONTENT; import static javax.servlet.http.HttpServletResponse.SC_OK; +import static org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices.DELEGATION_TOKEN_HEADER; import java.io.IOException; import java.net.InetSocketAddress; @@ -43,11 +44,18 @@ import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.ResponseBuilder; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.CommonConfigurationKeys; import org.apache.hadoop.net.NetUtils; import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler; +import org.apache.hadoop.security.authorize.AuthorizationException; +import org.apache.hadoop.security.token.Token; +import org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler; import org.apache.hadoop.yarn.api.records.YarnApplicationState; import org.apache.hadoop.yarn.api.records.NodeLabel; import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppUtil; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppInfo; @@ -59,6 +67,8 @@ import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeToLabelsInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationStatisticsInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.StatisticsItemInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionInfo; import org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager; import org.apache.hadoop.yarn.webapp.BadRequestException; import org.apache.hadoop.yarn.webapp.ForbiddenException; @@ -576,4 +586,146 @@ public 
final class RouterWebServiceUtil { return result; } + + public static NodeLabelsInfo mergeNodeLabelsInfo(Map paramMap) { + Map resultMap = new HashMap<>(); + paramMap.values().stream() + .flatMap(nodeLabelsInfo -> nodeLabelsInfo.getNodeLabelsInfo().stream()) + .forEach(nodeLabelInfo -> { + String keyLabelName = nodeLabelInfo.getName(); + if (resultMap.containsKey(keyLabelName)) { + NodeLabelInfo mapNodeLabelInfo = resultMap.get(keyLabelName); + mapNodeLabelInfo = mergeNodeLabelInfo(mapNodeLabelInfo, nodeLabelInfo); + resultMap.put(keyLabelName, mapNodeLabelInfo); + } else { + resultMap.put(keyLabelName, nodeLabelInfo); + } + }); + NodeLabelsInfo nodeLabelsInfo = new NodeLabelsInfo(); + nodeLabelsInfo.getNodeLabelsInfo().addAll(resultMap.values()); + return nodeLabelsInfo; + } + + private static NodeLabelInfo mergeNodeLabelInfo(NodeLabelInfo left, NodeLabelInfo right) { + NodeLabelInfo resultNodeLabelInfo = new NodeLabelInfo(); + resultNodeLabelInfo.setName(left.getName()); + + int newActiveNMs = left.getActiveNMs() + right.getActiveNMs(); + resultNodeLabelInfo.setActiveNMs(newActiveNMs); + + boolean newExclusivity = left.getExclusivity() && right.getExclusivity(); + resultNodeLabelInfo.setExclusivity(newExclusivity); + + PartitionInfo leftPartition = left.getPartitionInfo(); + PartitionInfo rightPartition = right.getPartitionInfo(); + PartitionInfo newPartitionInfo = PartitionInfo.addTo(leftPartition, rightPartition); + resultNodeLabelInfo.setPartitionInfo(newPartitionInfo); + return resultNodeLabelInfo; + } + + /** + * initForWritableEndpoints does the init and acls verification for all + * writable REST end points. + * + * @param conf Configuration. + * @param callerUGI remote caller who initiated the request. + * @throws AuthorizationException in case of no access to perfom this op. + */ + public static void initForWritableEndpoints(Configuration conf, UserGroupInformation callerUGI) + throws AuthorizationException { + if (callerUGI == null) { + String msg = "Unable to obtain user name, user not authenticated"; + throw new AuthorizationException(msg); + } + + if (UserGroupInformation.isSecurityEnabled() && isStaticUser(conf, callerUGI)) { + String msg = "The default static user cannot carry out this operation."; + throw new ForbiddenException(msg); + } + } + + /** + * Determine whether the user is a static user. + * + * @param conf Configuration. + * @param callerUGI remote caller who initiated the request. + * @return true, static user; false, not static user; + */ + private static boolean isStaticUser(Configuration conf, UserGroupInformation callerUGI) { + String staticUser = conf.get(CommonConfigurationKeys.HADOOP_HTTP_STATIC_USER, + CommonConfigurationKeys.DEFAULT_HADOOP_HTTP_STATIC_USER); + return staticUser.equals(callerUGI.getUserName()); + } + + public static void createKerberosUserGroupInformation(HttpServletRequest hsr) + throws YarnException { + String authType = hsr.getAuthType(); + + if (!KerberosAuthenticationHandler.TYPE.equalsIgnoreCase(authType)) { + String msg = "Delegation token operations can only be carried out on a " + + "Kerberos authenticated channel. 
Expected auth type is " + + KerberosAuthenticationHandler.TYPE + ", got type " + authType; + throw new YarnException(msg); + } + + Object ugiAttr = + hsr.getAttribute(DelegationTokenAuthenticationHandler.DELEGATION_TOKEN_UGI_ATTRIBUTE); + if (ugiAttr != null) { + String msg = "Delegation token operations cannot be carried out using " + + "delegation token authentication."; + throw new YarnException(msg); + } + } + + /** + * Parse Token data. + * + * @param encodedToken tokenData + * @return RMDelegationTokenIdentifier. + */ + public static Token extractToken(String encodedToken) { + Token token = new Token<>(); + try { + token.decodeFromUrlString(encodedToken); + } catch (Exception ie) { + throw new BadRequestException("Could not decode encoded token"); + } + return token; + } + + public static Token extractToken(HttpServletRequest request) { + String encodedToken = request.getHeader(DELEGATION_TOKEN_HEADER); + if (encodedToken == null) { + String msg = "Header '" + DELEGATION_TOKEN_HEADER + + "' containing encoded token not found"; + throw new BadRequestException(msg); + } + return extractToken(encodedToken); + } + + /** + * Get Kerberos UserGroupInformation. + * + * Parse ugi from hsr and set kerberos authentication attributes. + * + * @param conf Configuration. + * @param request the servlet request. + * @return UserGroupInformation. + * @throws AuthorizationException if Kerberos auth failed. + * @throws YarnException If Authentication Type verification fails. + */ + public static UserGroupInformation getKerberosUserGroupInformation(Configuration conf, + HttpServletRequest request) throws AuthorizationException, YarnException { + // Parse ugi from hsr And Check ugi as expected. + // If ugi is empty or user is a static user, an exception will be thrown. + UserGroupInformation callerUGI = RMWebAppUtil.getCallerUserGroupInformation(request, true); + initForWritableEndpoints(conf, callerUGI); + + // Set AuthenticationMethod Kerberos for ugi. 
+ createKerberosUserGroupInformation(request); + callerUGI.setAuthenticationMethod(UserGroupInformation.AuthenticationMethod.KERBEROS); + + // return caller UGI + return callerUGI; + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebServices.java index b1dc8635b3f..c9c56c46c7c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebServices.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebServices.java @@ -81,6 +81,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.BulkActivitiesIn import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo; import org.apache.hadoop.yarn.server.router.Router; import org.apache.hadoop.yarn.server.router.RouterServerUtil; +import org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService; import org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo; import org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo; import org.apache.hadoop.yarn.util.LRUCacheHashMap; @@ -208,6 +209,8 @@ public class RouterWebServices implements RMWebServiceProtocol { RESTRequestInterceptor interceptorChain = this.createRequestInterceptorChain(); interceptorChain.init(user); + RouterClientRMService routerClientRMService = router.getClientRMProxyService(); + interceptorChain.setRouterClientRMService(routerClientRMService); chainWrapper.init(interceptorChain); } catch (Exception e) { LOG.error("Init RESTRequestInterceptor error for user: {}", user, e); @@ -943,4 +946,19 @@ public class RouterWebServices implements RMWebServiceProtocol { return pipeline.getRootInterceptor() .signalToContainer(containerId, command, req); } + + @GET + @Path(RMWSConsts.GET_RM_NODE_LABELS) + @Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, + MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 }) + public NodeLabelsInfo getRMNodeLabels(@Context HttpServletRequest hsr) + throws IOException { + init(); + RequestInterceptorChainWrapper pipeline = getInterceptorChain(hsr); + return pipeline.getRootInterceptor().getRMNodeLabels(hsr); + } + + public Router getRouter() { + return router; + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/FederationBulkActivitiesInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/FederationBulkActivitiesInfo.java new file mode 100644 index 00000000000..87d11ad0feb --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/FederationBulkActivitiesInfo.java @@ -0,0 +1,49 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.router.webapp.dao; + +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.BulkActivitiesInfo; + +import javax.xml.bind.annotation.XmlAccessType; +import javax.xml.bind.annotation.XmlAccessorType; +import javax.xml.bind.annotation.XmlElement; +import javax.xml.bind.annotation.XmlRootElement; +import java.util.ArrayList; + +@XmlRootElement +@XmlAccessorType(XmlAccessType.FIELD) +public class FederationBulkActivitiesInfo extends BulkActivitiesInfo { + + @XmlElement(name = "subCluster") + private ArrayList list = new ArrayList<>(); + + public FederationBulkActivitiesInfo() { + } // JAXB needs this + + public FederationBulkActivitiesInfo(ArrayList list) { + this.list = list; + } + + public ArrayList getList() { + return list; + } + + public void setList(ArrayList list) { + this.list = list; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/FederationRMQueueAclInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/FederationRMQueueAclInfo.java new file mode 100644 index 00000000000..4e61fd772ee --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/FederationRMQueueAclInfo.java @@ -0,0 +1,50 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.router.webapp.dao; + +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.RMQueueAclInfo; + +import javax.xml.bind.annotation.XmlAccessType; +import javax.xml.bind.annotation.XmlAccessorType; +import javax.xml.bind.annotation.XmlElement; +import javax.xml.bind.annotation.XmlRootElement; +import java.util.ArrayList; +import java.util.List; + +@XmlRootElement +@XmlAccessorType(XmlAccessType.FIELD) +public class FederationRMQueueAclInfo extends RMQueueAclInfo { + + @XmlElement(name = "subCluster") + private List list = new ArrayList<>(); + + public FederationRMQueueAclInfo() { + } // JAXB needs this + + public FederationRMQueueAclInfo(ArrayList list) { + this.list = list; + } + + public List getList() { + return list; + } + + public void setList(List list) { + this.list = list; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/FederationSchedulerTypeInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/FederationSchedulerTypeInfo.java new file mode 100644 index 00000000000..733af0ce8e2 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/FederationSchedulerTypeInfo.java @@ -0,0 +1,49 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.router.webapp.dao; + +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo; + +import javax.xml.bind.annotation.XmlAccessType; +import javax.xml.bind.annotation.XmlAccessorType; +import javax.xml.bind.annotation.XmlElement; +import javax.xml.bind.annotation.XmlRootElement; +import java.util.ArrayList; +import java.util.List; + +@XmlRootElement +@XmlAccessorType(XmlAccessType.FIELD) +public class FederationSchedulerTypeInfo extends SchedulerTypeInfo { + @XmlElement(name = "subCluster") + private List list = new ArrayList<>(); + + public FederationSchedulerTypeInfo() { + } // JAXB needs this + + public FederationSchedulerTypeInfo(ArrayList list) { + this.list = list; + } + + public List getList() { + return list; + } + + public void setList(List list) { + this.list = list; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/RouterClusterMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/RouterClusterMetrics.java new file mode 100644 index 00000000000..f06f85574db --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/RouterClusterMetrics.java @@ -0,0 +1,323 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.router.webapp.dao; + +import org.apache.hadoop.util.StringUtils; +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo; +import org.apache.hadoop.yarn.util.resource.Resources; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.xml.bind.annotation.XmlAccessType; +import javax.xml.bind.annotation.XmlAccessorType; +import javax.xml.bind.annotation.XmlRootElement; + +@XmlRootElement +@XmlAccessorType(XmlAccessType.FIELD) +public class RouterClusterMetrics { + + protected static final long BYTES_IN_MB = 1024 * 1024; + private static final Logger LOG = LoggerFactory.getLogger(RouterClusterMetrics.class); + + // webPageTitlePrefix + private String webPageTitlePrefix = "Federation"; + + // Application Information. + private String appsSubmitted = "N/A"; + private String appsCompleted = "N/A"; + private String appsPending = "N/A"; + private String appsRunning = "N/A"; + private String appsFailed = "N/A"; + private String appsKilled = "N/A"; + + // Memory Information. + private String totalMemory = "N/A"; + private String reservedMemory = "N/A"; + private String availableMemory = "N/A"; + private String allocatedMemory = "N/A"; + private String pendingMemory = "N/A"; + + // VirtualCores Information. + private String reservedVirtualCores = "N/A"; + private String availableVirtualCores = "N/A"; + private String allocatedVirtualCores = "N/A"; + private String pendingVirtualCores = "N/A"; + private String totalVirtualCores = "N/A"; + + // Resources Information. + private String usedResources = "N/A"; + private String totalResources = "N/A"; + private String reservedResources = "N/A"; + private String allocatedContainers = "N/A"; + + // Resource Percent Information. + private String utilizedMBPercent = "N/A"; + private String utilizedVirtualCoresPercent = "N/A"; + + // Node Information. + private String activeNodes = "N/A"; + private String decommissioningNodes = "N/A"; + private String decommissionedNodes = "N/A"; + private String lostNodes = "N/A"; + private String unhealthyNodes = "N/A"; + private String rebootedNodes = "N/A"; + private String shutdownNodes = "N/A"; + + public RouterClusterMetrics() { + + } + + public RouterClusterMetrics(ClusterMetricsInfo metrics) { + if (metrics != null) { + // Application Information Conversion. + conversionApplicationInformation(metrics); + + // Memory Information Conversion. + conversionMemoryInformation(metrics); + + // Resources Information Conversion. + conversionResourcesInformation(metrics); + + // Percent Information Conversion. + conversionResourcesPercent(metrics); + + // Node Information Conversion. 
+ conversionNodeInformation(metrics); + } + } + + public RouterClusterMetrics(ClusterMetricsInfo metrics, + String webPageTitlePrefix) { + this(metrics); + this.webPageTitlePrefix = webPageTitlePrefix; + } + + // Get Key Metric Information + public String getAppsSubmitted() { + return appsSubmitted; + } + + public String getAppsCompleted() { + return appsCompleted; + } + + public String getAppsPending() { + return appsPending; + } + + public String getAppsRunning() { + return appsRunning; + } + + public String getAppsFailed() { + return appsFailed; + } + + public String getAppsKilled() { + return appsKilled; + } + + public String getTotalMemory() { + return totalMemory; + } + + public String getReservedMemory() { + return reservedMemory; + } + + public String getAvailableMemory() { + return availableMemory; + } + + public String getAllocatedMemory() { + return allocatedMemory; + } + + public String getPendingMemory() { + return pendingMemory; + } + + public String getReservedVirtualCores() { + return reservedVirtualCores; + } + + public String getAvailableVirtualCores() { + return availableVirtualCores; + } + + public String getAllocatedVirtualCores() { + return allocatedVirtualCores; + } + + public String getPendingVirtualCores() { + return pendingVirtualCores; + } + + public String getTotalVirtualCores() { + return totalVirtualCores; + } + + public String getUsedResources() { + return usedResources; + } + + public String getTotalResources() { + return totalResources; + } + + public String getReservedResources() { + return reservedResources; + } + + public String getAllocatedContainers() { + return allocatedContainers; + } + + public String getUtilizedMBPercent() { + return utilizedMBPercent; + } + + public String getUtilizedVirtualCoresPercent() { + return utilizedVirtualCoresPercent; + } + + public String getActiveNodes() { + return activeNodes; + } + + public String getDecommissioningNodes() { + return decommissioningNodes; + } + + public String getDecommissionedNodes() { + return decommissionedNodes; + } + + public String getLostNodes() { + return lostNodes; + } + + public String getUnhealthyNodes() { + return unhealthyNodes; + } + + public String getRebootedNodes() { + return rebootedNodes; + } + + public String getShutdownNodes() { + return shutdownNodes; + } + + // Metric Information Conversion + public void conversionApplicationInformation(ClusterMetricsInfo metrics) { + try { + // Application Information. + this.appsSubmitted = String.valueOf(metrics.getAppsSubmitted()); + this.appsCompleted = String.valueOf(metrics.getAppsCompleted() + + metrics.getAppsFailed() + metrics.getAppsKilled()); + this.appsPending = String.valueOf(metrics.getAppsPending()); + this.appsRunning = String.valueOf(metrics.getAppsRunning()); + this.appsFailed = String.valueOf(metrics.getAppsFailed()); + this.appsKilled = String.valueOf(metrics.getAppsKilled()); + } catch (Exception e) { + LOG.error("conversionApplicationInformation error.", e); + } + } + + // Metric Memory Information + public void conversionMemoryInformation(ClusterMetricsInfo metrics) { + try { + // Memory Information. 
+ this.totalMemory = StringUtils.byteDesc(metrics.getTotalMB() * BYTES_IN_MB); + this.reservedMemory = StringUtils.byteDesc(metrics.getReservedMB() * BYTES_IN_MB); + this.availableMemory = StringUtils.byteDesc(metrics.getAvailableMB() * BYTES_IN_MB); + this.allocatedMemory = StringUtils.byteDesc(metrics.getAllocatedMB() * BYTES_IN_MB); + this.pendingMemory = StringUtils.byteDesc(metrics.getPendingMB() * BYTES_IN_MB); + } catch (Exception e) { + LOG.error("conversionMemoryInformation error.", e); + } + } + + // ResourcesInformation Conversion + public void conversionResourcesInformation(ClusterMetricsInfo metrics) { + try { + // Parse resource information from metrics. + Resource metricUsedResources; + Resource metricTotalResources; + Resource metricReservedResources; + + int metricAllocatedContainers; + if (metrics.getCrossPartitionMetricsAvailable()) { + metricAllocatedContainers = metrics.getTotalAllocatedContainersAcrossPartition(); + metricUsedResources = metrics.getTotalUsedResourcesAcrossPartition().getResource(); + metricTotalResources = metrics.getTotalClusterResourcesAcrossPartition().getResource(); + metricReservedResources = metrics.getTotalReservedResourcesAcrossPartition().getResource(); + // getTotalUsedResourcesAcrossPartition includes reserved resources. + Resources.subtractFrom(metricUsedResources, metricReservedResources); + } else { + metricAllocatedContainers = metrics.getContainersAllocated(); + metricUsedResources = Resource.newInstance(metrics.getAllocatedMB(), + (int) metrics.getAllocatedVirtualCores()); + metricTotalResources = Resource.newInstance(metrics.getTotalMB(), + (int) metrics.getTotalVirtualCores()); + metricReservedResources = Resource.newInstance(metrics.getReservedMB(), + (int) metrics.getReservedVirtualCores()); + } + + // Convert to standard format. 
+ usedResources = metricUsedResources.getFormattedString(); + totalResources = metricTotalResources.getFormattedString(); + reservedResources = metricReservedResources.getFormattedString(); + allocatedContainers = String.valueOf(metricAllocatedContainers); + + } catch (Exception e) { + LOG.error("conversionResourcesInformation error.", e); + } + } + + // ResourcesPercent Conversion + public void conversionResourcesPercent(ClusterMetricsInfo metrics) { + try { + this.utilizedMBPercent = String.valueOf(metrics.getUtilizedMBPercent()); + this.utilizedVirtualCoresPercent = String.valueOf(metrics.getUtilizedVirtualCoresPercent()); + } catch (Exception e) { + LOG.error("conversionResourcesPercent error.", e); + } + } + + // NodeInformation Conversion + public void conversionNodeInformation(ClusterMetricsInfo metrics) { + try { + this.activeNodes = String.valueOf(metrics.getActiveNodes()); + this.decommissioningNodes = String.valueOf(metrics.getDecommissioningNodes()); + this.decommissionedNodes = String.valueOf(metrics.getDecommissionedNodes()); + this.lostNodes = String.valueOf(metrics.getLostNodes()); + this.unhealthyNodes = String.valueOf(metrics.getUnhealthyNodes()); + this.rebootedNodes = String.valueOf(metrics.getRebootedNodes()); + this.shutdownNodes = String.valueOf(metrics.getShutdownNodes()); + } catch (Exception e) { + LOG.error("conversionNodeInformation error.", e); + } + } + + public String getWebPageTitlePrefix() { + return webPageTitlePrefix; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/RouterInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/RouterInfo.java new file mode 100644 index 00000000000..7cedd0c8e1f --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/RouterInfo.java @@ -0,0 +1,104 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.router.webapp.dao; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.service.Service; +import org.apache.hadoop.util.VersionInfo; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.server.router.Router; +import org.apache.hadoop.yarn.util.YarnVersionInfo; + +import javax.xml.bind.annotation.XmlAccessType; +import javax.xml.bind.annotation.XmlAccessorType; +import javax.xml.bind.annotation.XmlRootElement; + +@XmlRootElement +@XmlAccessorType(XmlAccessType.FIELD) +public class RouterInfo { + private long id; + private long startedOn; + private Service.STATE state; + private String routerStateStoreName; + private String routerVersion; + private String routerBuildVersion; + private String routerVersionBuiltOn; + private String hadoopVersion; + private String hadoopBuildVersion; + private String hadoopVersionBuiltOn; + + public RouterInfo() { + } // JAXB needs this + + public RouterInfo(Router router) { + long ts = Router.getClusterTimeStamp(); + this.id = ts; + this.state = router.getServiceState(); + Configuration configuration = router.getConfig(); + this.routerStateStoreName = configuration.get( + YarnConfiguration.FEDERATION_STATESTORE_CLIENT_CLASS, + YarnConfiguration.DEFAULT_FEDERATION_STATESTORE_CLIENT_CLASS); + this.routerVersion = YarnVersionInfo.getVersion(); + this.routerBuildVersion = YarnVersionInfo.getBuildVersion(); + this.routerVersionBuiltOn = YarnVersionInfo.getDate(); + this.hadoopVersion = VersionInfo.getVersion(); + this.hadoopBuildVersion = VersionInfo.getBuildVersion(); + this.hadoopVersionBuiltOn = VersionInfo.getDate(); + this.startedOn = ts; + } + + public String getState() { + return this.state.toString(); + } + + public String getRouterStateStore() { + return this.routerStateStoreName; + } + + public String getRouterVersion() { + return this.routerVersion; + } + + public String getRouterBuildVersion() { + return this.routerBuildVersion; + } + + public String getRouterVersionBuiltOn() { + return this.routerVersionBuiltOn; + } + + public String getHadoopVersion() { + return this.hadoopVersion; + } + + public String getHadoopBuildVersion() { + return this.hadoopBuildVersion; + } + + public String getHadoopVersionBuiltOn() { + return this.hadoopVersionBuiltOn; + } + + public long getClusterId() { + return this.id; + } + + public long getStartedOn() { + return this.startedOn; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/RouterSchedulerMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/RouterSchedulerMetrics.java new file mode 100644 index 00000000000..4a3af1ba43d --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/RouterSchedulerMetrics.java @@ -0,0 +1,109 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.router.webapp.dao; + + +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerOverviewInfo; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.xml.bind.annotation.XmlAccessType; +import javax.xml.bind.annotation.XmlAccessorType; +import javax.xml.bind.annotation.XmlRootElement; + +@XmlRootElement +@XmlAccessorType(XmlAccessType.FIELD) +public class RouterSchedulerMetrics { + + // Metrics Log. + private static final Logger LOG = LoggerFactory.getLogger(RouterSchedulerMetrics.class); + + // Scheduler Information. + private String subCluster = "N/A"; + private String schedulerType = "N/A"; + private String schedulingResourceType = "N/A"; + private String minimumAllocation = "N/A"; + private String maximumAllocation = "N/A"; + private String applicationPriority = "N/A"; + private String schedulerBusy = "N/A"; + private String rmDispatcherEventQueueSize = "N/A"; + private String schedulerDispatcherEventQueueSize = "N/A"; + + public RouterSchedulerMetrics() { + + } + + public RouterSchedulerMetrics(SubClusterInfo subClusterInfo, RouterClusterMetrics metrics, + SchedulerOverviewInfo overview) { + try { + // Parse Scheduler Information. + this.subCluster = subClusterInfo.getSubClusterId().getId(); + this.schedulerType = overview.getSchedulerType(); + this.schedulingResourceType = overview.getSchedulingResourceType(); + this.minimumAllocation = overview.getMinimumAllocation().toString(); + this.maximumAllocation = overview.getMaximumAllocation().toString(); + this.applicationPriority = String.valueOf(overview.getApplicationPriority()); + if (overview.getSchedulerBusy() != -1) { + this.schedulerBusy = String.valueOf(overview.getSchedulerBusy()); + } + this.rmDispatcherEventQueueSize = + String.valueOf(overview.getRmDispatcherEventQueueSize()); + this.schedulerDispatcherEventQueueSize = + String.valueOf(overview.getSchedulerDispatcherEventQueueSize()); + } catch (Exception ex) { + LOG.error("RouterSchedulerMetrics Error.", ex); + } + } + + public String getSubCluster() { + return subCluster; + } + + public String getSchedulerType() { + return schedulerType; + } + + public String getSchedulingResourceType() { + return schedulingResourceType; + } + + public String getMinimumAllocation() { + return minimumAllocation; + } + + public String getMaximumAllocation() { + return maximumAllocation; + } + + public String getApplicationPriority() { + return applicationPriority; + } + + public String getRmDispatcherEventQueueSize() { + return rmDispatcherEventQueueSize; + } + + public String getSchedulerDispatcherEventQueueSize() { + return schedulerDispatcherEventQueueSize; + } + + public String getSchedulerBusy() { + return schedulerBusy; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/SubClusterResult.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/SubClusterResult.java new file mode 100644 index 
00000000000..2a527c28d10 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/SubClusterResult.java @@ -0,0 +1,59 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
    + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.router.webapp.dao; + +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; + +public class SubClusterResult { + private SubClusterInfo subClusterInfo; + private R response; + private Exception exception; + + public SubClusterResult() { + } + + public SubClusterResult(SubClusterInfo subCluster, R res, Exception ex) { + this.subClusterInfo = subCluster; + this.response = res; + this.exception = ex; + } + + public SubClusterInfo getSubClusterInfo() { + return subClusterInfo; + } + + public void setSubClusterInfo(SubClusterInfo subClusterInfo) { + this.subClusterInfo = subClusterInfo; + } + + public Exception getException() { + return exception; + } + + public void setException(Exception exception) { + this.exception = exception; + } + + public R getResponse() { + return response; + } + + public void setResponse(R response) { + this.response = response; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/package-info.java new file mode 100644 index 00000000000..27f43ad1ff4 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/dao/package-info.java @@ -0,0 +1,20 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** Router Web Dao package. 
**/ +package org.apache.hadoop.yarn.server.router.webapp.dao; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouter.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouter.java index 9f0b4c72aac..d5501d74440 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouter.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouter.java @@ -22,11 +22,27 @@ import static org.junit.Assert.fail; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; +import org.apache.hadoop.http.HttpServer2; +import org.apache.hadoop.security.HttpCrossOriginFilterInitializer; import org.apache.hadoop.security.authorize.AccessControlList; import org.apache.hadoop.security.authorize.ServiceAuthorizationManager; +import org.apache.hadoop.security.http.CrossOriginFilter; import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.webapp.WebApp; +import org.eclipse.jetty.servlet.FilterHolder; +import org.eclipse.jetty.servlet.ServletHandler; +import org.eclipse.jetty.webapp.WebAppContext; +import org.glassfish.grizzly.servlet.HttpServletResponseImpl; import org.junit.Assert; import org.junit.Test; +import org.mockito.Mockito; + +import javax.servlet.FilterChain; +import javax.servlet.ServletException; +import javax.servlet.http.HttpServletRequest; +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; /** * Tests {@link Router}. @@ -87,4 +103,97 @@ public class TestRouter { } } + @Test + public void testRouterSupportCrossOrigin() throws ServletException, IOException { + + // We design test cases like this + // We start the Router and enable the Router to support Cross-origin. + // In the configuration, we allow example.com to access. + // 1. We simulate example.com and get the correct response + // 2. We simulate example.org and cannot get a response + + // Initialize RouterWeb's CrossOrigin capability + Configuration conf = new Configuration(); + conf.setBoolean(YarnConfiguration.ROUTER_WEBAPP_ENABLE_CORS_FILTER, true); + conf.set("hadoop.http.filter.initializers", HttpCrossOriginFilterInitializer.class.getName()); + conf.set(HttpCrossOriginFilterInitializer.PREFIX + CrossOriginFilter.ALLOWED_ORIGINS, + "example.com"); + conf.set(HttpCrossOriginFilterInitializer.PREFIX + CrossOriginFilter.ALLOWED_HEADERS, + "X-Requested-With,Accept"); + conf.set(HttpCrossOriginFilterInitializer.PREFIX + CrossOriginFilter.ALLOWED_METHODS, + "GET,POST"); + + // Start the router + Router router = new Router(); + router.init(conf); + router.start(); + router.getServices(); + + // Get assigned to Filter. + // The name of the filter is "Cross Origin Filter", + // which is specified in HttpCrossOriginFilterInitializer. + WebApp webApp = router.getWebapp(); + HttpServer2 httpServer2 = webApp.getHttpServer(); + WebAppContext webAppContext = httpServer2.getWebAppContext(); + ServletHandler servletHandler = webAppContext.getServletHandler(); + FilterHolder holder = servletHandler.getFilter("Cross Origin Filter"); + CrossOriginFilter filter = CrossOriginFilter.class.cast(holder.getFilter()); + + // 1. 
Simulate [example.com] for access + HttpServletRequest mockReq = Mockito.mock(HttpServletRequest.class); + Mockito.when(mockReq.getHeader("Origin")).thenReturn("example.com"); + Mockito.when(mockReq.getHeader("Access-Control-Request-Method")).thenReturn("GET"); + Mockito.when(mockReq.getHeader("Access-Control-Request-Headers")) + .thenReturn("X-Requested-With"); + + // Objects to verify interactions based on request + HttpServletResponseForRouterTest mockRes = new HttpServletResponseForRouterTest(); + FilterChain mockChain = Mockito.mock(FilterChain.class); + + // Object under test + filter.doFilter(mockReq, mockRes, mockChain); + + // Why is 5, because when Filter passes, + // CrossOriginFilter will set 5 values to Map + Assert.assertEquals(5, mockRes.getHeaders().size()); + String allowResult = mockRes.getHeader("Access-Control-Allow-Credentials"); + Assert.assertEquals("true", allowResult); + + // 2. Simulate [example.org] for access + HttpServletRequest mockReq2 = Mockito.mock(HttpServletRequest.class); + Mockito.when(mockReq2.getHeader("Origin")).thenReturn("example.org"); + Mockito.when(mockReq2.getHeader("Access-Control-Request-Method")).thenReturn("GET"); + Mockito.when(mockReq2.getHeader("Access-Control-Request-Headers")) + .thenReturn("X-Requested-With"); + + // Objects to verify interactions based on request + HttpServletResponseForRouterTest mockRes2 = new HttpServletResponseForRouterTest(); + FilterChain mockChain2 = Mockito.mock(FilterChain.class); + + // Object under test + filter.doFilter(mockReq2, mockRes2, mockChain2); + + // Why is 0, because when the Filter fails, + // CrossOriginFilter will not set any value + Assert.assertEquals(0, mockRes2.getHeaders().size()); + + router.stop(); + } + + private class HttpServletResponseForRouterTest extends HttpServletResponseImpl { + private final Map headers = new HashMap<>(1); + @Override + public void setHeader(String name, String value) { + headers.put(name, value); + } + + public String getHeader(String name) { + return headers.get(name); + } + + public Map getHeaders() { + return headers; + } + } + } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouterMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouterMetrics.java index cc36ca2a490..c26df63c954 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouterMetrics.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouterMetrics.java @@ -386,12 +386,12 @@ public class TestRouterMetrics { public void getContainerReport() { LOG.info("Mocked: failed getContainerReport call"); - metrics.incrContainerReportFailedRetrieved(); + metrics.incrGetContainerReportFailedRetrieved(); } - public void getContainer() { + public void getContainers() { LOG.info("Mocked: failed getContainer call"); - metrics.incrContainerFailedRetrieved(); + metrics.incrGetContainersFailedRetrieved(); } public void getResourceTypeInfo() { @@ -478,6 +478,71 @@ public class TestRouterMetrics { LOG.info("Mocked: failed getListReservationFailed call"); metrics.incrListReservationFailedRetrieved(); } + + public void getAppActivitiesFailed() { + LOG.info("Mocked: failed getAppActivitiesFailed call"); + 
metrics.incrGetAppActivitiesFailedRetrieved(); + } + + public void getAppStatisticsFailed() { + LOG.info("Mocked: failed getAppStatisticsFailed call"); + metrics.incrGetAppStatisticsFailedRetrieved(); + } + + public void getAppPriorityFailed() { + LOG.info("Mocked: failed getAppPriorityFailed call"); + metrics.incrGetAppPriorityFailedRetrieved(); + } + + public void getAppQueueFailed() { + LOG.info("Mocked: failed getAppQueueFailed call"); + metrics.incrGetAppQueueFailedRetrieved(); + } + + public void getUpdateQueueFailed() { + LOG.info("Mocked: failed getUpdateQueueFailed call"); + metrics.incrUpdateAppQueueFailedRetrieved(); + } + + public void getAppTimeoutFailed() { + LOG.info("Mocked: failed getAppTimeoutFailed call"); + metrics.incrGetAppTimeoutFailedRetrieved(); + } + + public void getAppTimeoutsFailed() { + LOG.info("Mocked: failed getAppTimeoutsFailed call"); + metrics.incrGetAppTimeoutsFailedRetrieved(); + } + + public void getRMNodeLabelsFailed() { + LOG.info("Mocked: failed getRMNodeLabelsFailed call"); + metrics.incrGetRMNodeLabelsFailedRetrieved(); + } + + public void getCheckUserAccessToQueueFailed() { + LOG.info("Mocked: failed checkUserAccessToQueue call"); + metrics.incrCheckUserAccessToQueueFailedRetrieved(); + } + + public void getDelegationTokenFailed() { + LOG.info("Mocked: failed getDelegationToken call"); + metrics.incrGetDelegationTokenFailedRetrieved(); + } + + public void getRenewDelegationTokenFailed() { + LOG.info("Mocked: failed renewDelegationToken call"); + metrics.incrRenewDelegationTokenFailedRetrieved(); + } + + public void getActivitiesFailed() { + LOG.info("Mocked: failed getBulkActivitie call"); + metrics.incrGetActivitiesFailedRetrieved(); + } + + public void getBulkActivitiesFailed() { + LOG.info("Mocked: failed getBulkActivitie call"); + metrics.incrGetBulkActivitiesFailedRetrieved(); + } } // Records successes for all calls @@ -564,7 +629,7 @@ public class TestRouterMetrics { metrics.succeededGetContainerReportRetrieved(duration); } - public void getContainer(long duration) { + public void getContainers(long duration) { LOG.info("Mocked: successful getContainer call with duration {}", duration); metrics.succeededGetContainersRetrieved(duration); } @@ -653,6 +718,71 @@ public class TestRouterMetrics { LOG.info("Mocked: successful getListReservation call with duration {}", duration); metrics.succeededListReservationRetrieved(duration); } + + public void getAppActivitiesRetrieved(long duration) { + LOG.info("Mocked: successful getAppActivities call with duration {}", duration); + metrics.succeededGetAppActivitiesRetrieved(duration); + } + + public void getAppStatisticsRetrieved(long duration) { + LOG.info("Mocked: successful getAppStatistics call with duration {}", duration); + metrics.succeededGetAppStatisticsRetrieved(duration); + } + + public void getAppPriorityRetrieved(long duration) { + LOG.info("Mocked: successful getAppPriority call with duration {}", duration); + metrics.succeededGetAppPriorityRetrieved(duration); + } + + public void getAppQueueRetrieved(long duration) { + LOG.info("Mocked: successful getAppQueue call with duration {}", duration); + metrics.succeededGetAppQueueRetrieved(duration); + } + + public void getUpdateQueueRetrieved(long duration) { + LOG.info("Mocked: successful getUpdateQueue call with duration {}", duration); + metrics.succeededUpdateAppQueueRetrieved(duration); + } + + public void getAppTimeoutRetrieved(long duration) { + LOG.info("Mocked: successful getAppTimeout call with duration {}", duration); + 
metrics.succeededGetAppTimeoutRetrieved(duration); + } + + public void getAppTimeoutsRetrieved(long duration) { + LOG.info("Mocked: successful getAppTimeouts call with duration {}", duration); + metrics.succeededGetAppTimeoutsRetrieved(duration); + } + + public void getRMNodeLabelsRetrieved(long duration) { + LOG.info("Mocked: successful getRMNodeLabels call with duration {}", duration); + metrics.succeededGetRMNodeLabelsRetrieved(duration); + } + + public void getCheckUserAccessToQueueRetrieved(long duration) { + LOG.info("Mocked: successful CheckUserAccessToQueue call with duration {}", duration); + metrics.succeededCheckUserAccessToQueueRetrieved(duration); + } + + public void getGetDelegationTokenRetrieved(long duration) { + LOG.info("Mocked: successful GetDelegationToken call with duration {}", duration); + metrics.succeededGetDelegationTokenRetrieved(duration); + } + + public void getRenewDelegationTokenRetrieved(long duration) { + LOG.info("Mocked: successful RenewDelegationToken call with duration {}", duration); + metrics.succeededRenewDelegationTokenRetrieved(duration); + } + + public void getActivitiesRetrieved(long duration) { + LOG.info("Mocked: successful GetActivities call with duration {}", duration); + metrics.succeededGetActivitiesLatencyRetrieved(duration); + } + + public void getBulkActivitiesRetrieved(long duration) { + LOG.info("Mocked: successful GetBulkActivities call with duration {}", duration); + metrics.succeededGetBulkActivitiesRetrieved(duration); + } } @Test @@ -827,12 +957,12 @@ public class TestRouterMetrics { @Test public void testSucceededGetContainers() { long totalGoodBefore = metrics.getNumSucceededGetContainersRetrieved(); - goodSubCluster.getContainer(150); + goodSubCluster.getContainers(150); Assert.assertEquals(totalGoodBefore + 1, metrics.getNumSucceededGetContainersRetrieved()); Assert.assertEquals(150, metrics.getLatencySucceededGetContainersRetrieved(), ASSERT_DOUBLE_DELTA); - goodSubCluster.getContainer(300); + goodSubCluster.getContainers(300); Assert.assertEquals(totalGoodBefore + 2, metrics.getNumSucceededGetContainersRetrieved()); Assert.assertEquals(225, metrics.getLatencySucceededGetContainersRetrieved(), @@ -840,9 +970,9 @@ public class TestRouterMetrics { } @Test - public void testGetContainerFailed() { + public void testGetContainersFailed() { long totalBadBefore = metrics.getContainersFailedRetrieved(); - badSubCluster.getContainer(); + badSubCluster.getContainers(); Assert.assertEquals(totalBadBefore + 1, metrics.getContainersFailedRetrieved()); } @@ -1234,4 +1364,303 @@ public class TestRouterMetrics { Assert.assertEquals(totalBadBefore + 1, metrics.getListReservationFailedRetrieved()); } -} + + @Test + public void testGetAppActivitiesRetrieved() { + long totalGoodBefore = metrics.getNumSucceededGetAppActivitiesRetrieved(); + goodSubCluster.getAppActivitiesRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededGetAppActivitiesRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededGetAppActivitiesRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getAppActivitiesRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededGetAppActivitiesRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededGetAppActivitiesRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testGetAppActivitiesRetrievedFailed() { + long totalBadBefore = metrics.getAppActivitiesFailedRetrieved(); + badSubCluster.getAppActivitiesFailed(); + Assert.assertEquals(totalBadBefore 
+ 1, + metrics.getAppActivitiesFailedRetrieved()); + } + + @Test + public void testGetAppStatisticsLatencyRetrieved() { + long totalGoodBefore = metrics.getNumSucceededGetAppStatisticsRetrieved(); + goodSubCluster.getAppStatisticsRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededGetAppStatisticsRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededGetAppStatisticsRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getAppStatisticsRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededGetAppStatisticsRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededGetAppStatisticsRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testGetAppStatisticsRetrievedFailed() { + long totalBadBefore = metrics.getAppStatisticsFailedRetrieved(); + badSubCluster.getAppStatisticsFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getAppStatisticsFailedRetrieved()); + } + + @Test + public void testGetAppPriorityLatencyRetrieved() { + long totalGoodBefore = metrics.getNumSucceededGetAppPriorityRetrieved(); + goodSubCluster.getAppPriorityRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededGetAppPriorityRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededGetAppPriorityRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getAppPriorityRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededGetAppPriorityRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededGetAppPriorityRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testGetAppPriorityRetrievedFailed() { + long totalBadBefore = metrics.getAppPriorityFailedRetrieved(); + badSubCluster.getAppPriorityFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getAppPriorityFailedRetrieved()); + } + + @Test + public void testGetAppQueueLatencyRetrieved() { + long totalGoodBefore = metrics.getNumSucceededGetAppQueueRetrieved(); + goodSubCluster.getAppQueueRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededGetAppQueueRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededGetAppQueueRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getAppQueueRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededGetAppQueueRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededGetAppQueueRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testGetAppQueueRetrievedFailed() { + long totalBadBefore = metrics.getAppQueueFailedRetrieved(); + badSubCluster.getAppQueueFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getAppQueueFailedRetrieved()); + } + + @Test + public void testUpdateAppQueueLatencyRetrieved() { + long totalGoodBefore = metrics.getNumSucceededUpdateAppQueueRetrieved(); + goodSubCluster.getUpdateQueueRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededUpdateAppQueueRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededUpdateAppQueueRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getUpdateQueueRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededUpdateAppQueueRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededUpdateAppQueueRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testUpdateAppQueueRetrievedFailed() { + long totalBadBefore = metrics.getUpdateAppQueueFailedRetrieved(); + badSubCluster.getUpdateQueueFailed(); + 
Assert.assertEquals(totalBadBefore + 1, + metrics.getUpdateAppQueueFailedRetrieved()); + } + + @Test + public void testGetAppTimeoutLatencyRetrieved() { + long totalGoodBefore = metrics.getNumSucceededGetAppTimeoutRetrieved(); + goodSubCluster.getAppTimeoutRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededGetAppTimeoutRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededGetAppTimeoutRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getAppTimeoutRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededGetAppTimeoutRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededGetAppTimeoutRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testGetAppTimeoutRetrievedFailed() { + long totalBadBefore = metrics.getAppTimeoutFailedRetrieved(); + badSubCluster.getAppTimeoutFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getAppTimeoutFailedRetrieved()); + } + + @Test + public void testGetAppTimeoutsLatencyRetrieved() { + long totalGoodBefore = metrics.getNumSucceededGetAppTimeoutsRetrieved(); + goodSubCluster.getAppTimeoutsRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededGetAppTimeoutsRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededGetAppTimeoutsRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getAppTimeoutsRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededGetAppTimeoutsRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededGetAppTimeoutsRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testGetAppTimeoutsRetrievedFailed() { + long totalBadBefore = metrics.getAppTimeoutsFailedRetrieved(); + badSubCluster.getAppTimeoutsFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getAppTimeoutsFailedRetrieved()); + } + + @Test + public void testGetRMNodeLabelsRetrieved() { + long totalGoodBefore = metrics.getNumSucceededGetRMNodeLabelsRetrieved(); + goodSubCluster.getRMNodeLabelsRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededGetRMNodeLabelsRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededGetRMNodeLabelsRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getRMNodeLabelsRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededGetRMNodeLabelsRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededGetRMNodeLabelsRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testGetRMNodeLabelsRetrievedFailed() { + long totalBadBefore = metrics.getRMNodeLabelsFailedRetrieved(); + badSubCluster.getRMNodeLabelsFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getRMNodeLabelsFailedRetrieved()); + } + + @Test + public void testCheckUserAccessToQueueRetrieved() { + long totalGoodBefore = metrics.getNumSucceededCheckUserAccessToQueueRetrieved(); + goodSubCluster.getCheckUserAccessToQueueRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededCheckUserAccessToQueueRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededCheckUserAccessToQueueRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getCheckUserAccessToQueueRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededCheckUserAccessToQueueRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededCheckUserAccessToQueueRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testCheckUserAccessToQueueRetrievedFailed() { + long 
totalBadBefore = metrics.getCheckUserAccessToQueueFailedRetrieved(); + badSubCluster.getCheckUserAccessToQueueFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getCheckUserAccessToQueueFailedRetrieved()); + } + + @Test + public void testGetDelegationTokenRetrieved() { + long totalGoodBefore = metrics.getNumSucceededGetDelegationTokenRetrieved(); + goodSubCluster.getGetDelegationTokenRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededGetDelegationTokenRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededGetDelegationTokenRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getGetDelegationTokenRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededGetDelegationTokenRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededGetDelegationTokenRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testGetDelegationTokenRetrievedFailed() { + long totalBadBefore = metrics.getDelegationTokenFailedRetrieved(); + badSubCluster.getDelegationTokenFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getDelegationTokenFailedRetrieved()); + } + + @Test + public void testRenewDelegationTokenRetrieved() { + long totalGoodBefore = metrics.getNumSucceededRenewDelegationTokenRetrieved(); + goodSubCluster.getRenewDelegationTokenRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededRenewDelegationTokenRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededRenewDelegationTokenRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getRenewDelegationTokenRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededRenewDelegationTokenRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededRenewDelegationTokenRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testRenewDelegationTokenRetrievedFailed() { + long totalBadBefore = metrics.getRenewDelegationTokenFailedRetrieved(); + badSubCluster.getRenewDelegationTokenFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getRenewDelegationTokenFailedRetrieved()); + } + + @Test + public void testGetActivitiesRetrieved() { + long totalGoodBefore = metrics.getNumSucceededGetActivitiesRetrieved(); + goodSubCluster.getActivitiesRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededGetActivitiesRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededGetActivitiesRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getActivitiesRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + metrics.getNumSucceededGetActivitiesRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededGetActivitiesRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testGetActivitiesRetrievedFailed() { + long totalBadBefore = metrics.getActivitiesFailedRetrieved(); + badSubCluster.getActivitiesFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getActivitiesFailedRetrieved()); + } + + @Test + public void testGetBulkActivitiesRetrieved() { + long totalGoodBefore = metrics.getNumSucceededGetBulkActivitiesRetrieved(); + goodSubCluster.getBulkActivitiesRetrieved(150); + Assert.assertEquals(totalGoodBefore + 1, + metrics.getNumSucceededGetBulkActivitiesRetrieved()); + Assert.assertEquals(150, + metrics.getLatencySucceededGetBulkActivitiesRetrieved(), ASSERT_DOUBLE_DELTA); + goodSubCluster.getBulkActivitiesRetrieved(300); + Assert.assertEquals(totalGoodBefore + 2, + 
metrics.getNumSucceededGetBulkActivitiesRetrieved()); + Assert.assertEquals(225, + metrics.getLatencySucceededGetBulkActivitiesRetrieved(), ASSERT_DOUBLE_DELTA); + } + + @Test + public void testGetBulkActivitiesRetrievedFailed() { + long totalBadBefore = metrics.getBulkActivitiesFailedRetrieved(); + badSubCluster.getBulkActivitiesFailed(); + Assert.assertEquals(totalBadBefore + 1, + metrics.getBulkActivitiesFailedRetrieved()); + } +} \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouterServerUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouterServerUtil.java new file mode 100644 index 00000000000..e82f67d12d5 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouterServerUtil.java @@ -0,0 +1,125 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */package org.apache.hadoop.yarn.server.router; + +import org.apache.hadoop.test.LambdaTestUtils; +import org.apache.hadoop.util.Time; +import org.apache.hadoop.yarn.api.records.Priority; +import org.apache.hadoop.yarn.api.records.ReservationDefinition; +import org.apache.hadoop.yarn.api.records.ReservationId; +import org.apache.hadoop.yarn.api.records.ReservationRequests; +import org.apache.hadoop.yarn.api.records.ReservationRequest; +import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDefinitionInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationRequestInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationRequestsInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationSubmissionRequestInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo; +import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.List; + +import static org.apache.hadoop.yarn.server.router.webapp.TestFederationInterceptorREST.getReservationSubmissionRequestInfo; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotNull; + +public class TestRouterServerUtil { + + public static final Logger LOG = LoggerFactory.getLogger(TestRouterServerUtil.class); + + @Test + public void testConvertReservationDefinition() { + // Prepare parameters + ReservationId reservationId = ReservationId.newInstance(Time.now(), 1); + ReservationSubmissionRequestInfo requestInfo = + getReservationSubmissionRequestInfo(reservationId); + ReservationDefinitionInfo expectDefinitionInfo = requestInfo.getReservationDefinition(); + + // ReservationDefinitionInfo conversion ReservationDefinition + ReservationDefinition convertDefinition = + RouterServerUtil.convertReservationDefinition(expectDefinitionInfo); + + // reservationDefinition is not null + assertNotNull(convertDefinition); + assertEquals(expectDefinitionInfo.getArrival(), convertDefinition.getArrival()); + assertEquals(expectDefinitionInfo.getDeadline(), convertDefinition.getDeadline()); + + Priority priority = convertDefinition.getPriority(); + assertNotNull(priority); + assertEquals(expectDefinitionInfo.getPriority(), priority.getPriority()); + assertEquals(expectDefinitionInfo.getRecurrenceExpression(), + convertDefinition.getRecurrenceExpression()); + assertEquals(expectDefinitionInfo.getReservationName(), convertDefinition.getReservationName()); + + ReservationRequestsInfo expectRequestsInfo = expectDefinitionInfo.getReservationRequests(); + List expectRequestsInfoList = + expectRequestsInfo.getReservationRequest(); + + ReservationRequests convertReservationRequests = + convertDefinition.getReservationRequests(); + assertNotNull(convertReservationRequests); + + List convertRequestList = + convertReservationRequests.getReservationResources(); + assertNotNull(convertRequestList); + assertEquals(1, convertRequestList.size()); + + ReservationRequestInfo expectResRequestInfo = expectRequestsInfoList.get(0); + ReservationRequest convertResRequest = convertRequestList.get(0); + assertNotNull(convertResRequest); + assertEquals(expectResRequestInfo.getNumContainers(), convertResRequest.getNumContainers()); + assertEquals(expectResRequestInfo.getDuration(), convertResRequest.getDuration()); + + ResourceInfo expectResourceInfo = expectResRequestInfo.getCapability(); + Resource convertResource = convertResRequest.getCapability(); + 
assertNotNull(expectResourceInfo); + assertEquals(expectResourceInfo.getMemorySize(), convertResource.getMemorySize()); + assertEquals(expectResourceInfo.getvCores(), convertResource.getVirtualCores()); + } + + @Test + public void testConvertReservationDefinitionEmpty() throws Exception { + + // param ReservationDefinitionInfo is Null + ReservationDefinitionInfo definitionInfo = null; + + // null request1 + LambdaTestUtils.intercept(RuntimeException.class, + "definitionInfo Or ReservationRequests is Null.", + () -> RouterServerUtil.convertReservationDefinition(definitionInfo)); + + // param ReservationRequests is Null + ReservationDefinitionInfo definitionInfo2 = new ReservationDefinitionInfo(); + + // null request2 + LambdaTestUtils.intercept(RuntimeException.class, + "definitionInfo Or ReservationRequests is Null.", + () -> RouterServerUtil.convertReservationDefinition(definitionInfo2)); + + // param ReservationRequests is Null + ReservationDefinitionInfo definitionInfo3 = new ReservationDefinitionInfo(); + ReservationRequestsInfo requestsInfo = new ReservationRequestsInfo(); + definitionInfo3.setReservationRequests(requestsInfo); + + // null request3 + LambdaTestUtils.intercept(RuntimeException.class, + "definitionInfo Or ReservationRequests is Null.", + () -> RouterServerUtil.convertReservationDefinition(definitionInfo3)); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java index 1fc1e920330..2488fc73b07 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java @@ -19,6 +19,7 @@ package org.apache.hadoop.yarn.server.router.clientrm; import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; import java.io.IOException; import java.util.ArrayList; @@ -32,6 +33,7 @@ import java.util.stream.Collectors; import java.util.Arrays; import java.util.Collection; +import org.apache.hadoop.io.Text; import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; import org.apache.hadoop.test.LambdaTestUtils; import org.apache.hadoop.util.Time; @@ -100,6 +102,12 @@ import org.apache.hadoop.yarn.api.protocolrecords.ReservationUpdateRequest; import org.apache.hadoop.yarn.api.protocolrecords.ReservationUpdateResponse; import org.apache.hadoop.yarn.api.protocolrecords.ReservationDeleteRequest; import org.apache.hadoop.yarn.api.protocolrecords.ReservationDeleteResponse; +import org.apache.hadoop.yarn.api.protocolrecords.GetDelegationTokenRequest; +import org.apache.hadoop.yarn.api.protocolrecords.GetDelegationTokenResponse; +import org.apache.hadoop.yarn.api.protocolrecords.RenewDelegationTokenRequest; +import org.apache.hadoop.yarn.api.protocolrecords.RenewDelegationTokenResponse; +import org.apache.hadoop.yarn.api.protocolrecords.CancelDelegationTokenRequest; +import org.apache.hadoop.yarn.api.protocolrecords.CancelDelegationTokenResponse; import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; import org.apache.hadoop.yarn.api.records.ApplicationId; import 
org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext; @@ -123,10 +131,13 @@ import org.apache.hadoop.yarn.api.records.ReservationRequest; import org.apache.hadoop.yarn.api.records.ReservationDefinition; import org.apache.hadoop.yarn.api.records.ReservationRequestInterpreter; import org.apache.hadoop.yarn.api.records.ReservationRequests; +import org.apache.hadoop.yarn.api.records.Token; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier; import org.apache.hadoop.yarn.server.federation.policies.manager.UniformBroadcastPolicyManager; import org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore; +import org.apache.hadoop.yarn.server.federation.store.records.RouterRMDTSecretManagerState; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; @@ -138,6 +149,9 @@ import org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSyst import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp; import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState; import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptState; +import org.apache.hadoop.yarn.server.router.security.RouterDelegationTokenSecretManager; +import org.apache.hadoop.yarn.util.ConverterUtils; +import org.apache.hadoop.yarn.util.Records; import org.apache.hadoop.yarn.util.Times; import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; @@ -170,7 +184,7 @@ public class TestFederationClientInterceptor extends BaseRouterClientRMTest { private final static long DEFAULT_DURATION = 10 * 60 * 1000; @Override - public void setUp() { + public void setUp() throws IOException { super.setUpConfig(); interceptor = new TestableFederationClientInterceptor(); @@ -181,6 +195,11 @@ public class TestFederationClientInterceptor extends BaseRouterClientRMTest { interceptor.setConf(this.getConf()); interceptor.init(user); + RouterDelegationTokenSecretManager tokenSecretManager = + interceptor.createRouterRMDelegationTokenSecretManager(this.getConf()); + + tokenSecretManager.startThreads(); + interceptor.setTokenSecretManager(tokenSecretManager); subClusters = new ArrayList<>(); @@ -230,6 +249,7 @@ public class TestFederationClientInterceptor extends BaseRouterClientRMTest { conf.setInt("yarn.scheduler.maximum-allocation-mb", 100 * 1024); conf.setInt("yarn.scheduler.maximum-allocation-vcores", 100); + conf.setBoolean("hadoop.security.authentication", true); return conf; } @@ -1520,4 +1540,168 @@ public class TestFederationClientInterceptor extends BaseRouterClientRMTest { return ReservationDefinition.newInstance(arrival, deadline, requests, username, "0", Priority.UNDEFINED); } + + @Test + public void testGetNumMinThreads() { + // If we don't configure YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE, + // we expect to get 5 threads + int minThreads = interceptor.getNumMinThreads(this.getConf()); + Assert.assertEquals(5, minThreads); + + // If we configure YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE, + // we expect to get 3 threads + this.getConf().unset(YarnConfiguration.ROUTER_USER_CLIENT_THREADS_SIZE); + this.getConf().setInt(YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE, 3); + int 
minThreads2 = interceptor.getNumMinThreads(this.getConf()); + Assert.assertEquals(3, minThreads2); + } + + @Test + public void testGetNumMaxThreads() { + // If we don't configure YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE, + // we expect to get 5 threads + int minThreads = interceptor.getNumMaxThreads(this.getConf()); + Assert.assertEquals(5, minThreads); + + // If we configure YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE, + // we expect to get 8 threads + this.getConf().unset(YarnConfiguration.ROUTER_USER_CLIENT_THREADS_SIZE); + this.getConf().setInt(YarnConfiguration.ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE, 8); + int minThreads2 = interceptor.getNumMaxThreads(this.getConf()); + Assert.assertEquals(8, minThreads2); + } + + @Test + public void testGetDelegationToken() throws IOException, YarnException { + + // We design such a unit test to check + // that the execution of the GetDelegationToken method is as expected. + // + // 1. Apply for a DelegationToken for renewer1, + // the Router returns the DelegationToken of the user, and the KIND of the token is + // RM_DELEGATION_TOKEN + // + // 2. We maintain the compatibility with RMDelegationTokenIdentifier, + // we can serialize the token into RMDelegationTokenIdentifier. + // + // 3. We can get the issueDate, and compare the data in the StateStore, + // the data should be consistent. + + // Step1. We apply for DelegationToken for renewer1 + // Both response & delegationToken cannot be empty + GetDelegationTokenRequest request = mock(GetDelegationTokenRequest.class); + when(request.getRenewer()).thenReturn("renewer1"); + GetDelegationTokenResponse response = interceptor.getDelegationToken(request); + Assert.assertNotNull(response); + Token delegationToken = response.getRMDelegationToken(); + Assert.assertNotNull(delegationToken); + Assert.assertEquals("RM_DELEGATION_TOKEN", delegationToken.getKind()); + + // Step2. Serialize the returned Token as RMDelegationTokenIdentifier. + org.apache.hadoop.security.token.Token token = + ConverterUtils.convertFromYarn(delegationToken, (Text) null); + RMDelegationTokenIdentifier rMDelegationTokenIdentifier = token.decodeIdentifier(); + Assert.assertNotNull(rMDelegationTokenIdentifier); + + // Step3. Verify the returned data of the token. 
+ String renewer = rMDelegationTokenIdentifier.getRenewer().toString(); + long issueDate = rMDelegationTokenIdentifier.getIssueDate(); + long maxDate = rMDelegationTokenIdentifier.getMaxDate(); + Assert.assertEquals("renewer1", renewer); + + long tokenMaxLifetime = this.getConf().getLong( + YarnConfiguration.RM_DELEGATION_TOKEN_MAX_LIFETIME_KEY, + YarnConfiguration.RM_DELEGATION_TOKEN_MAX_LIFETIME_DEFAULT); + Assert.assertEquals(issueDate + tokenMaxLifetime, maxDate); + + RouterRMDTSecretManagerState managerState = stateStore.getRouterRMSecretManagerState(); + Assert.assertNotNull(managerState); + + Map delegationTokenState = managerState.getTokenState(); + Assert.assertNotNull(delegationTokenState); + Assert.assertTrue(delegationTokenState.containsKey(rMDelegationTokenIdentifier)); + + long tokenRenewInterval = this.getConf().getLong( + YarnConfiguration.RM_DELEGATION_TOKEN_RENEW_INTERVAL_KEY, + YarnConfiguration.RM_DELEGATION_TOKEN_RENEW_INTERVAL_DEFAULT); + long renewDate = delegationTokenState.get(rMDelegationTokenIdentifier); + Assert.assertEquals(issueDate + tokenRenewInterval, renewDate); + } + + @Test + public void testRenewDelegationToken() throws IOException, YarnException { + + // We design such a unit test to check + // that the execution of the GetDelegationToken method is as expected + // 1. Call GetDelegationToken to apply for delegationToken. + // 2. Call renewDelegationToken to refresh delegationToken. + // By looking at the code of AbstractDelegationTokenSecretManager#renewToken, + // we know that renewTime is calculated as Math.min(id.getMaxDate(), now + tokenRenewInterval) + // so renewTime will be less than or equal to maxDate. + // 3. We will compare whether the expirationTime returned to the + // client is consistent with the renewDate in the stateStore. + + // Step1. Call GetDelegationToken to apply for delegationToken. + GetDelegationTokenRequest request = mock(GetDelegationTokenRequest.class); + when(request.getRenewer()).thenReturn("renewer2"); + GetDelegationTokenResponse response = interceptor.getDelegationToken(request); + Assert.assertNotNull(response); + Token delegationToken = response.getRMDelegationToken(); + + org.apache.hadoop.security.token.Token token = + ConverterUtils.convertFromYarn(delegationToken, (Text) null); + RMDelegationTokenIdentifier rMDelegationTokenIdentifier = token.decodeIdentifier(); + String renewer = rMDelegationTokenIdentifier.getRenewer().toString(); + long maxDate = rMDelegationTokenIdentifier.getMaxDate(); + Assert.assertEquals("renewer2", renewer); + + // Step2. Call renewDelegationToken to refresh delegationToken. + RenewDelegationTokenRequest renewRequest = Records.newRecord(RenewDelegationTokenRequest.class); + renewRequest.setDelegationToken(delegationToken); + RenewDelegationTokenResponse renewResponse = interceptor.renewDelegationToken(renewRequest); + Assert.assertNotNull(renewResponse); + + long expDate = renewResponse.getNextExpirationTime(); + Assert.assertTrue(expDate <= maxDate); + + // Step3. 
Compare whether the expirationTime returned to + // the client is consistent with the renewDate in the stateStore + RouterRMDTSecretManagerState managerState = stateStore.getRouterRMSecretManagerState(); + Map delegationTokenState = managerState.getTokenState(); + Assert.assertNotNull(delegationTokenState); + Assert.assertTrue(delegationTokenState.containsKey(rMDelegationTokenIdentifier)); + long renewDate = delegationTokenState.get(rMDelegationTokenIdentifier); + Assert.assertEquals(expDate, renewDate); + } + + @Test + public void testCancelDelegationToken() throws IOException, YarnException { + + // We design such a unit test to check + // that the execution of the CancelDelegationToken method is as expected + // 1. Call GetDelegationToken to apply for delegationToken. + // 2. Call CancelDelegationToken to cancel delegationToken. + // 3. Query the data in the StateStore and confirm that the Delegation has been deleted. + + // Step1. Call GetDelegationToken to apply for delegationToken. + GetDelegationTokenRequest request = mock(GetDelegationTokenRequest.class); + when(request.getRenewer()).thenReturn("renewer3"); + GetDelegationTokenResponse response = interceptor.getDelegationToken(request); + Assert.assertNotNull(response); + Token delegationToken = response.getRMDelegationToken(); + + // Step2. Call CancelDelegationToken to cancel delegationToken. + CancelDelegationTokenRequest cancelTokenRequest = + CancelDelegationTokenRequest.newInstance(delegationToken); + CancelDelegationTokenResponse cancelTokenResponse = + interceptor.cancelDelegationToken(cancelTokenRequest); + Assert.assertNotNull(cancelTokenResponse); + + // Step3. Query the data in the StateStore and confirm that the Delegation has been deleted. + // At this point, the size of delegationTokenState should be 0. 
+ RouterRMDTSecretManagerState managerState = stateStore.getRouterRMSecretManagerState(); + Map delegationTokenState = managerState.getTokenState(); + Assert.assertNotNull(delegationTokenState); + Assert.assertEquals(0, delegationTokenState.size()); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptorRetry.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptorRetry.java index 096fa063907..2d0bc6b3507 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptorRetry.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptorRetry.java @@ -18,37 +18,49 @@ package org.apache.hadoop.yarn.server.router.clientrm; +import static org.apache.hadoop.yarn.conf.YarnConfiguration.FEDERATION_POLICY_MANAGER; +import static org.hamcrest.CoreMatchers.is; import static org.mockito.Mockito.mock; import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; +import java.util.Collection; import java.util.List; +import org.apache.hadoop.test.LambdaTestUtils; import org.apache.hadoop.yarn.MockApps; import org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationRequest; import org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse; import org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationRequest; +import org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationResponse; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext; import org.apache.hadoop.yarn.api.records.ContainerLaunchContext; import org.apache.hadoop.yarn.api.records.Priority; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; -import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils; import org.apache.hadoop.yarn.server.federation.policies.manager.UniformBroadcastPolicyManager; import org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore; +import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster; import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterRequest; +import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterResponse; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreTestUtil; import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager; import org.apache.hadoop.yarn.util.resource.Resources; import org.junit.Assert; +import org.junit.Assume; import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; +import org.junit.runners.Parameterized.Parameters; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import static org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils.NO_ACTIVE_SUBCLUSTER_AVAILABLE; + /** * Extends the {@code BaseRouterClientRMTest} and overrides methods in order to 
* use the {@code RouterClientRMService} pipeline test cases for testing the @@ -59,14 +71,22 @@ import org.slf4j.LoggerFactory; * It tests the case with SubClusters down and the Router logic of retries. We * have 1 good SubCluster and 2 bad ones for all the tests. */ +@RunWith(Parameterized.class) public class TestFederationClientInterceptorRetry extends BaseRouterClientRMTest { private static final Logger LOG = LoggerFactory.getLogger(TestFederationClientInterceptorRetry.class); + @Parameters + public static Collection getParameters() { + return Arrays.asList(new String[][] {{UniformBroadcastPolicyManager.class.getName()}, + {TestSequentialBroadcastPolicyManager.class.getName()}}); + } + private TestableFederationClientInterceptor interceptor; private MemoryFederationStateStore stateStore; private FederationStateStoreTestUtil stateStoreUtil; + private String routerPolicyManagerName; private String user = "test-user"; @@ -77,7 +97,11 @@ public class TestFederationClientInterceptorRetry private static SubClusterId bad1; private static SubClusterId bad2; - private static List scs = new ArrayList(); + private static List scs = new ArrayList<>(); + + public TestFederationClientInterceptorRetry(String policyManagerName) { + this.routerPolicyManagerName = policyManagerName; + } @Override public void setUp() throws IOException { @@ -114,8 +138,7 @@ public class TestFederationClientInterceptorRetry super.tearDown(); } - private void setupCluster(List scsToRegister) - throws YarnException { + private void setupCluster(List scsToRegister) throws YarnException { try { // Clean up the StateStore before every test @@ -132,6 +155,7 @@ public class TestFederationClientInterceptorRetry @Override protected YarnConfiguration createConfiguration() { + YarnConfiguration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true); String mockPassThroughInterceptorClass = @@ -145,8 +169,7 @@ public class TestFederationClientInterceptorRetry mockPassThroughInterceptorClass + "," + mockPassThroughInterceptorClass + "," + TestableFederationClientInterceptor.class.getName()); - conf.set(YarnConfiguration.FEDERATION_POLICY_MANAGER, - UniformBroadcastPolicyManager.class.getName()); + conf.set(FEDERATION_POLICY_MANAGER, this.routerPolicyManagerName); // Disable StateStoreFacade cache conf.setInt(YarnConfiguration.FEDERATION_CACHE_TIME_TO_LIVE_SECS, 0); @@ -159,20 +182,14 @@ public class TestFederationClientInterceptorRetry * cluster is composed of only 1 bad SubCluster. */ @Test - public void testGetNewApplicationOneBadSC() - throws YarnException, IOException, InterruptedException { + public void testGetNewApplicationOneBadSC() throws Exception { - System.out.println("Test getNewApplication with one bad SubCluster"); + LOG.info("Test getNewApplication with one bad SubCluster"); setupCluster(Arrays.asList(bad2)); - try { - interceptor.getNewApplication(GetNewApplicationRequest.newInstance()); - Assert.fail(); - } catch (Exception e) { - System.out.println(e.toString()); - Assert.assertTrue(e.getMessage() - .equals(FederationPolicyUtils.NO_ACTIVE_SUBCLUSTER_AVAILABLE)); - } + GetNewApplicationRequest request = GetNewApplicationRequest.newInstance(); + LambdaTestUtils.intercept(YarnException.class, NO_ACTIVE_SUBCLUSTER_AVAILABLE, + () -> interceptor.getNewApplication(request)); } /** @@ -180,19 +197,14 @@ public class TestFederationClientInterceptorRetry * cluster is composed of only 2 bad SubClusters. 
*/ @Test - public void testGetNewApplicationTwoBadSCs() - throws YarnException, IOException, InterruptedException { - System.out.println("Test getNewApplication with two bad SubClusters"); + public void testGetNewApplicationTwoBadSCs() throws Exception { + + LOG.info("Test getNewApplication with two bad SubClusters"); setupCluster(Arrays.asList(bad1, bad2)); - try { - interceptor.getNewApplication(GetNewApplicationRequest.newInstance()); - Assert.fail(); - } catch (Exception e) { - System.out.println(e.toString()); - Assert.assertTrue(e.getMessage() - .equals(FederationPolicyUtils.NO_ACTIVE_SUBCLUSTER_AVAILABLE)); - } + GetNewApplicationRequest request = GetNewApplicationRequest.newInstance(); + LambdaTestUtils.intercept(YarnException.class, NO_ACTIVE_SUBCLUSTER_AVAILABLE, + () -> interceptor.getNewApplication(request)); } /** @@ -200,17 +212,14 @@ public class TestFederationClientInterceptorRetry * cluster is composed of only 1 bad SubCluster and 1 good one. */ @Test - public void testGetNewApplicationOneBadOneGood() - throws YarnException, IOException, InterruptedException { - System.out.println("Test getNewApplication with one bad, one good SC"); + public void testGetNewApplicationOneBadOneGood() throws YarnException, IOException { + + LOG.info("Test getNewApplication with one bad, one good SC"); setupCluster(Arrays.asList(good, bad2)); - GetNewApplicationResponse response = null; - try { - response = - interceptor.getNewApplication(GetNewApplicationRequest.newInstance()); - } catch (Exception e) { - Assert.fail(); - } + GetNewApplicationRequest request = GetNewApplicationRequest.newInstance(); + GetNewApplicationResponse response = interceptor.getNewApplication(request); + + Assert.assertNotNull(response); Assert.assertEquals(ResourceManager.getClusterTimeStamp(), response.getApplicationId().getClusterTimestamp()); } @@ -220,38 +229,27 @@ public class TestFederationClientInterceptorRetry * cluster is composed of only 1 bad SubCluster. 
*/ @Test - public void testSubmitApplicationOneBadSC() - throws YarnException, IOException, InterruptedException { + public void testSubmitApplicationOneBadSC() throws Exception { - System.out.println("Test submitApplication with one bad SubCluster"); + LOG.info("Test submitApplication with one bad SubCluster"); setupCluster(Arrays.asList(bad2)); final ApplicationId appId = ApplicationId.newInstance(System.currentTimeMillis(), 1); - final SubmitApplicationRequest request = mockSubmitApplicationRequest( - appId); - try { - interceptor.submitApplication(request); - Assert.fail(); - } catch (Exception e) { - System.out.println(e); - Assert.assertTrue(e.getMessage() - .equals(FederationPolicyUtils.NO_ACTIVE_SUBCLUSTER_AVAILABLE)); - } + final SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); + LambdaTestUtils.intercept(YarnException.class, NO_ACTIVE_SUBCLUSTER_AVAILABLE, + () -> interceptor.submitApplication(request)); } - private SubmitApplicationRequest mockSubmitApplicationRequest( - ApplicationId appId) { + private SubmitApplicationRequest mockSubmitApplicationRequest(ApplicationId appId) { ContainerLaunchContext amContainerSpec = mock(ContainerLaunchContext.class); ApplicationSubmissionContext context = ApplicationSubmissionContext .newInstance(appId, MockApps.newAppName(), "q1", - Priority.newInstance(0), amContainerSpec, false, false, -1, - Resources.createResource( - YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB), - "MockApp"); - SubmitApplicationRequest request = SubmitApplicationRequest - .newInstance(context); + Priority.newInstance(0), amContainerSpec, false, false, -1, + Resources.createResource(YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB), + "MockApp"); + SubmitApplicationRequest request = SubmitApplicationRequest.newInstance(context); return request; } @@ -260,24 +258,17 @@ public class TestFederationClientInterceptorRetry * cluster is composed of only 2 bad SubClusters. 
*/ @Test - public void testSubmitApplicationTwoBadSCs() - throws YarnException, IOException, InterruptedException { - System.out.println("Test submitApplication with two bad SubClusters"); + public void testSubmitApplicationTwoBadSCs() throws Exception { + + LOG.info("Test submitApplication with two bad SubClusters."); setupCluster(Arrays.asList(bad1, bad2)); final ApplicationId appId = ApplicationId.newInstance(System.currentTimeMillis(), 1); - final SubmitApplicationRequest request = mockSubmitApplicationRequest( - appId); - try { - interceptor.submitApplication(request); - Assert.fail(); - } catch (Exception e) { - System.out.println(e.toString()); - Assert.assertTrue(e.getMessage() - .equals(FederationPolicyUtils.NO_ACTIVE_SUBCLUSTER_AVAILABLE)); - } + final SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); + LambdaTestUtils.intercept(YarnException.class, NO_ACTIVE_SUBCLUSTER_AVAILABLE, + () -> interceptor.submitApplication(request)); } /** @@ -287,24 +278,79 @@ public class TestFederationClientInterceptorRetry @Test public void testSubmitApplicationOneBadOneGood() throws YarnException, IOException, InterruptedException { - System.out.println("Test submitApplication with one bad, one good SC"); + + LOG.info("Test submitApplication with one bad, one good SC."); setupCluster(Arrays.asList(good, bad2)); final ApplicationId appId = ApplicationId.newInstance(System.currentTimeMillis(), 1); - final SubmitApplicationRequest request = mockSubmitApplicationRequest( - appId); - try { - interceptor.submitApplication(request); - } catch (Exception e) { - Assert.fail(); - } - Assert.assertEquals(good, - stateStore - .getApplicationHomeSubCluster( - GetApplicationHomeSubClusterRequest.newInstance(appId)) - .getApplicationHomeSubCluster().getHomeSubCluster()); + final SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); + SubmitApplicationResponse response = interceptor.submitApplication(request); + Assert.assertNotNull(response); + + GetApplicationHomeSubClusterRequest getAppRequest = + GetApplicationHomeSubClusterRequest.newInstance(appId); + GetApplicationHomeSubClusterResponse getAppResponse = + stateStore.getApplicationHomeSubCluster(getAppRequest); + Assert.assertNotNull(getAppResponse); + + ApplicationHomeSubCluster responseHomeSubCluster = + getAppResponse.getApplicationHomeSubCluster(); + Assert.assertNotNull(responseHomeSubCluster); + SubClusterId respSubClusterId = responseHomeSubCluster.getHomeSubCluster(); + Assert.assertEquals(good, respSubClusterId); } + @Test + public void testSubmitApplicationTwoBadOneGood() throws Exception { + + LOG.info("Test submitApplication with two bad, one good SC."); + + // This test must require the TestSequentialRouterPolicy policy + Assume.assumeThat(routerPolicyManagerName, + is(TestSequentialBroadcastPolicyManager.class.getName())); + + setupCluster(Arrays.asList(bad1, bad2, good)); + final ApplicationId appId = + ApplicationId.newInstance(System.currentTimeMillis(), 1); + + // Use the TestSequentialRouterPolicy strategy, + // which will sort the SubClusterId because good=0, bad1=1, bad2=2 + // We will get 2, 1, 0 [bad2, bad1, good] + // Set the retryNum to 1 + // 1st time will use bad2, 2nd time will use bad1 + // bad1 is updated to stateStore + interceptor.setNumSubmitRetries(1); + final SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); + LambdaTestUtils.intercept(YarnException.class, "RM is stopped", + () -> interceptor.submitApplication(request)); + + // We will get bad1 + 
checkSubmitSubCluster(appId, bad1); + + // Set the retryNum to 2 + // 1st time will use bad2, 2nd time will use bad1, 3rd good + interceptor.setNumSubmitRetries(2); + SubmitApplicationResponse submitAppResponse = interceptor.submitApplication(request); + Assert.assertNotNull(submitAppResponse); + + // We will get good + checkSubmitSubCluster(appId, good); + } + + private void checkSubmitSubCluster(ApplicationId appId, SubClusterId expectSubCluster) + throws YarnException { + GetApplicationHomeSubClusterRequest getAppRequest = + GetApplicationHomeSubClusterRequest.newInstance(appId); + GetApplicationHomeSubClusterResponse getAppResponse = + stateStore.getApplicationHomeSubCluster(getAppRequest); + Assert.assertNotNull(getAppResponse); + Assert.assertNotNull(getAppResponse); + ApplicationHomeSubCluster responseHomeSubCluster = + getAppResponse.getApplicationHomeSubCluster(); + Assert.assertNotNull(responseHomeSubCluster); + SubClusterId respSubClusterId = responseHomeSubCluster.getHomeSubCluster(); + Assert.assertEquals(expectSubCluster, respSubClusterId); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestRouterYarnClientUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestRouterYarnClientUtils.java index 33cae612300..346e9e87841 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestRouterYarnClientUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestRouterYarnClientUtils.java @@ -82,10 +82,12 @@ public class TestRouterYarnClientUtils { YarnClusterMetrics resultMetrics = result.getClusterMetrics(); Assert.assertEquals(3, resultMetrics.getNumNodeManagers()); Assert.assertEquals(3, resultMetrics.getNumActiveNodeManagers()); + Assert.assertEquals(3, resultMetrics.getNumDecommissioningNodeManagers()); Assert.assertEquals(3, resultMetrics.getNumDecommissionedNodeManagers()); Assert.assertEquals(3, resultMetrics.getNumLostNodeManagers()); Assert.assertEquals(3, resultMetrics.getNumRebootedNodeManagers()); Assert.assertEquals(3, resultMetrics.getNumUnhealthyNodeManagers()); + Assert.assertEquals(3, resultMetrics.getNumShutdownNodeManagers()); } public GetClusterMetricsResponse getClusterMetricsResponse(int value) { @@ -93,9 +95,11 @@ public class TestRouterYarnClientUtils { metrics.setNumUnhealthyNodeManagers(value); metrics.setNumRebootedNodeManagers(value); metrics.setNumLostNodeManagers(value); + metrics.setNumDecommissioningNodeManagers(value); metrics.setNumDecommissionedNodeManagers(value); metrics.setNumActiveNodeManagers(value); metrics.setNumNodeManagers(value); + metrics.setNumShutdownNodeManagers(value); return GetClusterMetricsResponse.newInstance(metrics); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestSequentialBroadcastPolicyManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestSequentialBroadcastPolicyManager.java new file mode 100644 index 00000000000..dfa8c7136d7 --- /dev/null +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestSequentialBroadcastPolicyManager.java @@ -0,0 +1,39 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.router.clientrm; + +import org.apache.hadoop.yarn.server.federation.policies.amrmproxy.BroadcastAMRMProxyPolicy; +import org.apache.hadoop.yarn.server.federation.policies.manager.AbstractPolicyManager; + +/** + * This PolicyManager is used for testing and will contain the + * {@link TestSequentialRouterPolicy} policy. + * + * When we test FederationClientInterceptor Retry, + * we hope that SubCluster can return in a certain order, not randomly. + * We can view the policy description by linking to TestSequentialRouterPolicy. + */ +public class TestSequentialBroadcastPolicyManager extends AbstractPolicyManager { + public TestSequentialBroadcastPolicyManager() { + // this structurally hard-codes two compatible policies for Router and + // AMRMProxy. + routerFederationPolicy = TestSequentialRouterPolicy.class; + amrmProxyFederationPolicy = BroadcastAMRMProxyPolicy.class; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestSequentialRouterPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestSequentialRouterPolicy.java new file mode 100644 index 00000000000..e702b764fed --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestSequentialRouterPolicy.java @@ -0,0 +1,78 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.yarn.server.router.clientrm; + +import org.apache.commons.collections.CollectionUtils; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyInitializationContext; +import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyInitializationContextValidator; +import org.apache.hadoop.yarn.server.federation.policies.exceptions.FederationPolicyInitializationException; +import org.apache.hadoop.yarn.server.federation.policies.router.AbstractRouterPolicy; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; + +/** + * This is a test strategy, + * the purpose of this strategy is to return subClusters in descending order of subClusterId. + * + * This strategy is to verify the situation of Retry during the use of FederationClientInterceptor. + * The conditions of use are as follows: + * 1.We require subClusterId to be an integer. + * 2.The larger the subCluster, the sooner the representative is selected. + * + * We have 4 subClusters, 2 normal subClusters, 2 bad subClusters. + * We expect to select badSubClusters first and then goodSubClusters during testing. + * We can set the subCluster like this, good1 = [0], good2 = [1], bad1 = [2], bad2 = [3]. + * This strategy will return [3, 2, 1, 0], + * The selection order of subCluster is bad2, bad1, good2, good1. + */ +public class TestSequentialRouterPolicy extends AbstractRouterPolicy { + + @Override + public void reinitialize(FederationPolicyInitializationContext policyContext) + throws FederationPolicyInitializationException { + FederationPolicyInitializationContextValidator.validate(policyContext, + this.getClass().getCanonicalName()); + setPolicyContext(policyContext); + } + + @Override + protected SubClusterId chooseSubCluster(String queue, + Map preSelectSubClusters) throws YarnException { + /** + * This strategy is only suitable for testing. We need to obtain subClusters sequentially. + * We have 3 subClusters, 1 goodSubCluster and 2 badSubClusters. + * The sc-id of goodSubCluster is 0, and the sc-id of badSubCluster is 1 and 2. + * We hope Return in reverse order, that is, return 2, 1, 0 + * Return to badCluster first. 
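+     * For example, with SubCluster ids "0", "1" and "2", the comparator below
+     * (Integer.parseInt(o2.getId()) - Integer.parseInt(o1.getId())) orders the
+     * list as ["2", "1", "0"], so the bad SubClusters are chosen first.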
+ */ + List subClusterIds = new ArrayList<>(preSelectSubClusters.keySet()); + if (subClusterIds.size() > 1) { + subClusterIds.sort((o1, o2) -> Integer.parseInt(o2.getId()) - Integer.parseInt(o1.getId())); + } + if(CollectionUtils.isNotEmpty(subClusterIds)){ + return subClusterIds.get(0); + } + return null; + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestableFederationClientInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestableFederationClientInterceptor.java index 8279899e387..fa25bc4d0a5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestableFederationClientInterceptor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestableFederationClientInterceptor.java @@ -28,8 +28,10 @@ import java.util.Set; import java.util.Map; import java.util.HashMap; import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.test.GenericTestUtils; import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet; import org.apache.hadoop.yarn.api.ApplicationClientProtocol; @@ -38,6 +40,7 @@ import org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationResponse; import org.apache.hadoop.yarn.api.records.NodeAttribute; import org.apache.hadoop.yarn.api.records.NodeAttributeType; import org.apache.hadoop.yarn.api.records.Resource; +import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.nodelabels.NodeAttributesManager; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; @@ -51,6 +54,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSyst import org.apache.hadoop.yarn.server.resourcemanager.scheduler.YarnScheduler; import org.apache.hadoop.yarn.server.resourcemanager.security.QueueACLsManager; import org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager; +import org.apache.hadoop.yarn.server.router.security.RouterDelegationTokenSecretManager; import org.apache.hadoop.yarn.server.security.ApplicationACLsManager; import org.junit.Assert; import org.slf4j.Logger; @@ -216,4 +220,31 @@ public class TestableFederationClientInterceptor mockRMs.clear(); super.shutdown(); } + + public RouterDelegationTokenSecretManager createRouterRMDelegationTokenSecretManager( + Configuration conf) { + + long secretKeyInterval = conf.getTimeDuration( + YarnConfiguration.RM_DELEGATION_KEY_UPDATE_INTERVAL_KEY, + YarnConfiguration.RM_DELEGATION_KEY_UPDATE_INTERVAL_DEFAULT, + TimeUnit.MILLISECONDS); + + long tokenMaxLifetime = conf.getTimeDuration( + YarnConfiguration.RM_DELEGATION_TOKEN_MAX_LIFETIME_KEY, + YarnConfiguration.RM_DELEGATION_TOKEN_MAX_LIFETIME_DEFAULT, + TimeUnit.MILLISECONDS); + + long tokenRenewInterval = conf.getTimeDuration( + YarnConfiguration.RM_DELEGATION_TOKEN_RENEW_INTERVAL_KEY, + YarnConfiguration.RM_DELEGATION_TOKEN_RENEW_INTERVAL_DEFAULT, + TimeUnit.MILLISECONDS); + + long removeScanInterval = conf.getTimeDuration( + 
YarnConfiguration.RM_DELEGATION_TOKEN_REMOVE_SCAN_INTERVAL_KEY, + YarnConfiguration.RM_DELEGATION_TOKEN_REMOVE_SCAN_INTERVAL_DEFAULT, + TimeUnit.MILLISECONDS); + + return new RouterDelegationTokenSecretManager(secretKeyInterval, + tokenMaxLifetime, tokenRenewInterval, removeScanInterval); + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/BaseRouterRMAdminTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/BaseRouterRMAdminTest.java index a8d730fbe87..33cda8751db 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/BaseRouterRMAdminTest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/BaseRouterRMAdminTest.java @@ -88,27 +88,36 @@ public abstract class BaseRouterRMAdminTest { @Before public void setUp() { - this.conf = new YarnConfiguration(); + this.conf = createConfiguration(); + this.dispatcher = new AsyncDispatcher(); + this.dispatcher.init(conf); + this.dispatcher.start(); + this.rmAdminService = createAndStartRouterRMAdminService(); + DefaultMetricsSystem.setMiniClusterMode(true); + } + + protected Configuration getConf() { + return this.conf; + } + + public void setUpConfig() { + this.conf = createConfiguration(); + } + + protected Configuration createConfiguration() { + YarnConfiguration config = new YarnConfiguration(); String mockPassThroughInterceptorClass = PassThroughRMAdminRequestInterceptor.class.getName(); // Create a request interceptor pipeline for testing. The last one in the // chain will call the mock resource manager. 
The others in the chain will // simply forward it to the next one in the chain - this.conf.set(YarnConfiguration.ROUTER_RMADMIN_INTERCEPTOR_CLASS_PIPELINE, - mockPassThroughInterceptorClass + "," + mockPassThroughInterceptorClass - + "," + mockPassThroughInterceptorClass + "," - + MockRMAdminRequestInterceptor.class.getName()); + config.set(YarnConfiguration.ROUTER_RMADMIN_INTERCEPTOR_CLASS_PIPELINE, + mockPassThroughInterceptorClass + "," + mockPassThroughInterceptorClass + "," + + mockPassThroughInterceptorClass + "," + MockRMAdminRequestInterceptor.class.getName()); - this.conf.setInt(YarnConfiguration.ROUTER_PIPELINE_CACHE_MAX_SIZE, - TEST_MAX_CACHE_SIZE); - - this.dispatcher = new AsyncDispatcher(); - this.dispatcher.init(conf); - this.dispatcher.start(); - this.rmAdminService = createAndStartRouterRMAdminService(); - - DefaultMetricsSystem.setMiniClusterMode(true); + config.setInt(YarnConfiguration.ROUTER_PIPELINE_CACHE_MAX_SIZE, TEST_MAX_CACHE_SIZE); + return config; } @After @@ -142,194 +151,154 @@ public abstract class BaseRouterRMAdminTest { protected RefreshQueuesResponse refreshQueues(String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user) - .doAs(new PrivilegedExceptionAction() { - @Override - public RefreshQueuesResponse run() throws Exception { - RefreshQueuesRequest req = RefreshQueuesRequest.newInstance(); - RefreshQueuesResponse response = - getRouterRMAdminService().refreshQueues(req); - return response; - } + .doAs((PrivilegedExceptionAction) () -> { + RefreshQueuesRequest req = RefreshQueuesRequest.newInstance(); + RefreshQueuesResponse response = + getRouterRMAdminService().refreshQueues(req); + return response; }); } protected RefreshNodesResponse refreshNodes(String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user) - .doAs(new PrivilegedExceptionAction() { - @Override - public RefreshNodesResponse run() throws Exception { - RefreshNodesRequest req = RefreshNodesRequest.newInstance(); - RefreshNodesResponse response = - getRouterRMAdminService().refreshNodes(req); - return response; - } + .doAs((PrivilegedExceptionAction) () -> { + RefreshNodesRequest req = RefreshNodesRequest.newInstance(); + RefreshNodesResponse response = + getRouterRMAdminService().refreshNodes(req); + return response; }); } protected RefreshSuperUserGroupsConfigurationResponse refreshSuperUserGroupsConfiguration( String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user).doAs( - new PrivilegedExceptionAction() { - @Override - public RefreshSuperUserGroupsConfigurationResponse run() - throws Exception { - RefreshSuperUserGroupsConfigurationRequest req = - RefreshSuperUserGroupsConfigurationRequest.newInstance(); - RefreshSuperUserGroupsConfigurationResponse response = - getRouterRMAdminService() - .refreshSuperUserGroupsConfiguration(req); - return response; - } + (PrivilegedExceptionAction) () -> { + RefreshSuperUserGroupsConfigurationRequest req = + RefreshSuperUserGroupsConfigurationRequest.newInstance(); + RefreshSuperUserGroupsConfigurationResponse response = + getRouterRMAdminService() + .refreshSuperUserGroupsConfiguration(req); + return response; }); } protected RefreshUserToGroupsMappingsResponse refreshUserToGroupsMappings( String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user).doAs( - new PrivilegedExceptionAction() { - @Override - public RefreshUserToGroupsMappingsResponse run() throws 
Exception { - RefreshUserToGroupsMappingsRequest req = - RefreshUserToGroupsMappingsRequest.newInstance(); - RefreshUserToGroupsMappingsResponse response = - getRouterRMAdminService().refreshUserToGroupsMappings(req); - return response; - } + (PrivilegedExceptionAction) () -> { + RefreshUserToGroupsMappingsRequest req = + RefreshUserToGroupsMappingsRequest.newInstance(); + RefreshUserToGroupsMappingsResponse response = + getRouterRMAdminService().refreshUserToGroupsMappings(req); + return response; }); } protected RefreshAdminAclsResponse refreshAdminAcls(String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user) - .doAs(new PrivilegedExceptionAction() { - @Override - public RefreshAdminAclsResponse run() throws Exception { - RefreshAdminAclsRequest req = RefreshAdminAclsRequest.newInstance(); - RefreshAdminAclsResponse response = - getRouterRMAdminService().refreshAdminAcls(req); - return response; - } + .doAs((PrivilegedExceptionAction) () -> { + RefreshAdminAclsRequest req = RefreshAdminAclsRequest.newInstance(); + RefreshAdminAclsResponse response = + getRouterRMAdminService().refreshAdminAcls(req); + return response; }); } protected RefreshServiceAclsResponse refreshServiceAcls(String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user) - .doAs(new PrivilegedExceptionAction() { - @Override - public RefreshServiceAclsResponse run() throws Exception { - RefreshServiceAclsRequest req = - RefreshServiceAclsRequest.newInstance(); - RefreshServiceAclsResponse response = - getRouterRMAdminService().refreshServiceAcls(req); - return response; - } + .doAs((PrivilegedExceptionAction) () -> { + RefreshServiceAclsRequest req = + RefreshServiceAclsRequest.newInstance(); + RefreshServiceAclsResponse response = + getRouterRMAdminService().refreshServiceAcls(req); + return response; }); } protected UpdateNodeResourceResponse updateNodeResource(String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user) - .doAs(new PrivilegedExceptionAction() { - @Override - public UpdateNodeResourceResponse run() throws Exception { - UpdateNodeResourceRequest req = - UpdateNodeResourceRequest.newInstance(null); - UpdateNodeResourceResponse response = - getRouterRMAdminService().updateNodeResource(req); - return response; - } + .doAs((PrivilegedExceptionAction) () -> { + UpdateNodeResourceRequest req = + UpdateNodeResourceRequest.newInstance(null); + UpdateNodeResourceResponse response = + getRouterRMAdminService().updateNodeResource(req); + return response; }); } protected RefreshNodesResourcesResponse refreshNodesResources(String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user) - .doAs(new PrivilegedExceptionAction() { - @Override - public RefreshNodesResourcesResponse run() throws Exception { - RefreshNodesResourcesRequest req = - RefreshNodesResourcesRequest.newInstance(); - RefreshNodesResourcesResponse response = - getRouterRMAdminService().refreshNodesResources(req); - return response; - } + .doAs((PrivilegedExceptionAction) () -> { + RefreshNodesResourcesRequest req = + RefreshNodesResourcesRequest.newInstance(); + RefreshNodesResourcesResponse response = + getRouterRMAdminService().refreshNodesResources(req); + return response; }); } protected AddToClusterNodeLabelsResponse addToClusterNodeLabels(String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user) - .doAs(new 
PrivilegedExceptionAction() { - @Override - public AddToClusterNodeLabelsResponse run() throws Exception { - AddToClusterNodeLabelsRequest req = - AddToClusterNodeLabelsRequest.newInstance(null); - AddToClusterNodeLabelsResponse response = - getRouterRMAdminService().addToClusterNodeLabels(req); - return response; - } + .doAs((PrivilegedExceptionAction) () -> { + AddToClusterNodeLabelsRequest req = + AddToClusterNodeLabelsRequest.newInstance(null); + AddToClusterNodeLabelsResponse response = + getRouterRMAdminService().addToClusterNodeLabels(req); + return response; }); } protected RemoveFromClusterNodeLabelsResponse removeFromClusterNodeLabels( String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user).doAs( - new PrivilegedExceptionAction() { - @Override - public RemoveFromClusterNodeLabelsResponse run() throws Exception { - RemoveFromClusterNodeLabelsRequest req = - RemoveFromClusterNodeLabelsRequest.newInstance(null); - RemoveFromClusterNodeLabelsResponse response = - getRouterRMAdminService().removeFromClusterNodeLabels(req); - return response; - } + (PrivilegedExceptionAction) () -> { + RemoveFromClusterNodeLabelsRequest req = + RemoveFromClusterNodeLabelsRequest.newInstance(null); + RemoveFromClusterNodeLabelsResponse response = + getRouterRMAdminService().removeFromClusterNodeLabels(req); + return response; }); } protected ReplaceLabelsOnNodeResponse replaceLabelsOnNode(String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user) - .doAs(new PrivilegedExceptionAction() { - @Override - public ReplaceLabelsOnNodeResponse run() throws Exception { - ReplaceLabelsOnNodeRequest req = ReplaceLabelsOnNodeRequest - .newInstance(new HashMap>()); - ReplaceLabelsOnNodeResponse response = - getRouterRMAdminService().replaceLabelsOnNode(req); - return response; - } + .doAs((PrivilegedExceptionAction) () -> { + ReplaceLabelsOnNodeRequest req = ReplaceLabelsOnNodeRequest + .newInstance(new HashMap>()); + ReplaceLabelsOnNodeResponse response = + getRouterRMAdminService().replaceLabelsOnNode(req); + return response; }); } protected CheckForDecommissioningNodesResponse checkForDecommissioningNodes( String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user).doAs( - new PrivilegedExceptionAction() { - @Override - public CheckForDecommissioningNodesResponse run() throws Exception { - CheckForDecommissioningNodesRequest req = - CheckForDecommissioningNodesRequest.newInstance(); - CheckForDecommissioningNodesResponse response = - getRouterRMAdminService().checkForDecommissioningNodes(req); - return response; - } + (PrivilegedExceptionAction) () -> { + CheckForDecommissioningNodesRequest req = + CheckForDecommissioningNodesRequest.newInstance(); + CheckForDecommissioningNodesResponse response = + getRouterRMAdminService().checkForDecommissioningNodes(req); + return response; }); } protected RefreshClusterMaxPriorityResponse refreshClusterMaxPriority( String user) throws IOException, InterruptedException { return UserGroupInformation.createRemoteUser(user).doAs( - new PrivilegedExceptionAction() { - @Override - public RefreshClusterMaxPriorityResponse run() throws Exception { - RefreshClusterMaxPriorityRequest req = - RefreshClusterMaxPriorityRequest.newInstance(); - RefreshClusterMaxPriorityResponse response = - getRouterRMAdminService().refreshClusterMaxPriority(req); - return response; - } + (PrivilegedExceptionAction) () -> { + RefreshClusterMaxPriorityRequest req 
= + RefreshClusterMaxPriorityRequest.newInstance(); + RefreshClusterMaxPriorityResponse response = + getRouterRMAdminService().refreshClusterMaxPriority(req); + return response; }); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/TestFederationRMAdminInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/TestFederationRMAdminInterceptor.java new file mode 100644 index 00000000000..977f82dd3cd --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/TestFederationRMAdminInterceptor.java @@ -0,0 +1,262 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.router.rmadmin; + +import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; +import org.apache.hadoop.test.LambdaTestUtils; +import org.apache.hadoop.yarn.api.records.DecommissionType; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshNodesRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshQueuesRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshQueuesResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshSuperUserGroupsConfigurationRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshSuperUserGroupsConfigurationResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshUserToGroupsMappingsRequest; +import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshUserToGroupsMappingsResponse; +import org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore; +import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreTestUtil; +import org.junit.Assert; +import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.ArrayList; +import java.util.List; + +import static org.junit.Assert.assertNotNull; + +/** + * Extends the FederationRMAdminInterceptor and overrides methods to provide a + * testable implementation of FederationRMAdminInterceptor. 
+ */ +public class TestFederationRMAdminInterceptor extends BaseRouterRMAdminTest { + + private static final Logger LOG = + LoggerFactory.getLogger(TestFederationRMAdminInterceptor.class); + + //////////////////////////////// + // constant information + //////////////////////////////// + private final static String USER_NAME = "test-user"; + private final static int NUM_SUBCLUSTER = 4; + + private TestableFederationRMAdminInterceptor interceptor; + private FederationStateStoreFacade facade; + private MemoryFederationStateStore stateStore; + private FederationStateStoreTestUtil stateStoreUtil; + private List subClusters; + + @Override + public void setUp() { + + super.setUpConfig(); + + // Initialize facade & stateSore + stateStore = new MemoryFederationStateStore(); + stateStore.init(this.getConf()); + facade = FederationStateStoreFacade.getInstance(); + facade.reinitialize(stateStore, getConf()); + stateStoreUtil = new FederationStateStoreTestUtil(stateStore); + + // Initialize interceptor + interceptor = new TestableFederationRMAdminInterceptor(); + interceptor.setConf(this.getConf()); + interceptor.init(USER_NAME); + + // Storage SubClusters + subClusters = new ArrayList<>(); + try { + for (int i = 0; i < NUM_SUBCLUSTER; i++) { + SubClusterId sc = SubClusterId.newInstance("SC-" + i); + stateStoreUtil.registerSubCluster(sc); + subClusters.add(sc); + } + } catch (YarnException e) { + LOG.error(e.getMessage()); + Assert.fail(); + } + + DefaultMetricsSystem.setMiniClusterMode(true); + } + + @Override + protected YarnConfiguration createConfiguration() { + // Set Enable YarnFederation + YarnConfiguration config = new YarnConfiguration(); + config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true); + + String mockPassThroughInterceptorClass = + PassThroughRMAdminRequestInterceptor.class.getName(); + + // Create a request interceptor pipeline for testing. The last one in the + // chain will call the mock resource manager. The others in the chain will + // simply forward it to the next one in the chain + config.set(YarnConfiguration.ROUTER_RMADMIN_INTERCEPTOR_CLASS_PIPELINE, + mockPassThroughInterceptorClass + "," + mockPassThroughInterceptorClass + "," + + TestFederationRMAdminInterceptor.class.getName()); + return config; + } + + @Override + public void tearDown() { + interceptor.shutdown(); + super.tearDown(); + } + + @Test + public void testRefreshQueues() throws Exception { + // We will test 2 cases: + // case 1, request is null. + // case 2, normal request. + // If the request is null, a Missing RefreshQueues request exception will be thrown. + + // null request. + LambdaTestUtils.intercept(YarnException.class, "Missing RefreshQueues request.", + () -> interceptor.refreshQueues(null)); + + // normal request. + RefreshQueuesRequest request = RefreshQueuesRequest.newInstance(); + RefreshQueuesResponse response = interceptor.refreshQueues(request); + assertNotNull(response); + } + + @Test + public void testSC1RefreshQueues() throws Exception { + // We will test 2 cases: + // case 1, test the existing subCluster (SC-1). + // case 2, test the non-exist subCluster. 
+ + String existSubCluster = "SC-1"; + RefreshQueuesRequest request = RefreshQueuesRequest.newInstance(existSubCluster); + interceptor.refreshQueues(request); + + String notExistsSubCluster = "SC-NON"; + RefreshQueuesRequest request1 = RefreshQueuesRequest.newInstance(notExistsSubCluster); + LambdaTestUtils.intercept(YarnException.class, + "subClusterId = SC-NON is not an active subCluster.", + () -> interceptor.refreshQueues(request1)); + } + + @Test + public void testRefreshNodes() throws Exception { + // We will test 2 cases: + // case 1, request is null. + // case 2, normal request. + // If the request is null, a Missing RefreshNodes request exception will be thrown. + + // null request. + LambdaTestUtils.intercept(YarnException.class, + "Missing RefreshNodes request.", () -> interceptor.refreshNodes(null)); + + // normal request. + RefreshNodesRequest request = RefreshNodesRequest.newInstance(DecommissionType.NORMAL); + interceptor.refreshNodes(request); + } + + @Test + public void testSC1RefreshNodes() throws Exception { + + // We will test 2 cases: + // case 1, test the existing subCluster (SC-1). + // case 2, test the non-exist subCluster. + + RefreshNodesRequest request = + RefreshNodesRequest.newInstance(DecommissionType.NORMAL, 10, "SC-1"); + interceptor.refreshNodes(request); + + String notExistsSubCluster = "SC-NON"; + RefreshNodesRequest request1 = RefreshNodesRequest.newInstance( + DecommissionType.NORMAL, 10, notExistsSubCluster); + LambdaTestUtils.intercept(YarnException.class, + "subClusterId = SC-NON is not an active subCluster.", + () -> interceptor.refreshNodes(request1)); + } + + @Test + public void testRefreshSuperUserGroupsConfiguration() throws Exception { + // null request. + LambdaTestUtils.intercept(YarnException.class, + "Missing RefreshSuperUserGroupsConfiguration request.", + () -> interceptor.refreshSuperUserGroupsConfiguration(null)); + + // normal request. + // There is no return information defined in RefreshSuperUserGroupsConfigurationResponse, + // as long as it is not empty, it means that the command is successfully executed. + RefreshSuperUserGroupsConfigurationRequest request = + RefreshSuperUserGroupsConfigurationRequest.newInstance(); + RefreshSuperUserGroupsConfigurationResponse response = + interceptor.refreshSuperUserGroupsConfiguration(request); + assertNotNull(response); + } + + @Test + public void testSC1RefreshSuperUserGroupsConfiguration() throws Exception { + + // case 1, test the existing subCluster (SC-1). + String existSubCluster = "SC-1"; + RefreshSuperUserGroupsConfigurationRequest request = + RefreshSuperUserGroupsConfigurationRequest.newInstance(existSubCluster); + RefreshSuperUserGroupsConfigurationResponse response = + interceptor.refreshSuperUserGroupsConfiguration(request); + assertNotNull(response); + + // case 2, test the non-exist subCluster. + String notExistsSubCluster = "SC-NON"; + RefreshSuperUserGroupsConfigurationRequest request1 = + RefreshSuperUserGroupsConfigurationRequest.newInstance(notExistsSubCluster); + LambdaTestUtils.intercept(Exception.class, + "subClusterId = SC-NON is not an active subCluster.", + () -> interceptor.refreshSuperUserGroupsConfiguration(request1)); + } + + @Test + public void testRefreshUserToGroupsMappings() throws Exception { + // null request. + LambdaTestUtils.intercept(YarnException.class, + "Missing RefreshUserToGroupsMappings request.", + () -> interceptor.refreshUserToGroupsMappings(null)); + + // normal request. 
+ RefreshUserToGroupsMappingsRequest request = RefreshUserToGroupsMappingsRequest.newInstance(); + RefreshUserToGroupsMappingsResponse response = interceptor.refreshUserToGroupsMappings(request); + assertNotNull(response); + } + + @Test + public void testSC1RefreshUserToGroupsMappings() throws Exception { + // case 1, test the existing subCluster (SC-1). + String existSubCluster = "SC-1"; + RefreshUserToGroupsMappingsRequest request = + RefreshUserToGroupsMappingsRequest.newInstance(existSubCluster); + RefreshUserToGroupsMappingsResponse response = + interceptor.refreshUserToGroupsMappings(request); + assertNotNull(response); + + // case 2, test the non-exist subCluster. + String notExistsSubCluster = "SC-NON"; + RefreshUserToGroupsMappingsRequest request1 = + RefreshUserToGroupsMappingsRequest.newInstance(notExistsSubCluster); + LambdaTestUtils.intercept(Exception.class, + "subClusterId = SC-NON is not an active subCluster.", + () -> interceptor.refreshUserToGroupsMappings(request1)); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/TestableFederationRMAdminInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/TestableFederationRMAdminInterceptor.java new file mode 100644 index 00000000000..26f50f88b89 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/TestableFederationRMAdminInterceptor.java @@ -0,0 +1,99 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+package org.apache.hadoop.yarn.server.router.rmadmin;
+
+import org.apache.commons.collections.MapUtils;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocol;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.resourcemanager.AdminService;
+import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
+import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.ConnectException;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+public class TestableFederationRMAdminInterceptor extends FederationRMAdminInterceptor {
+
+  // Record log information
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestableFederationRMAdminInterceptor.class);
+
+  // Used to Store the relationship between SubClusterId and RM
+  private ConcurrentHashMap<SubClusterId, MockRM> mockRMs = new ConcurrentHashMap<>();
+
+  // Store Bad subCluster
+  private Set<SubClusterId> badSubCluster = new HashSet<>();
+
+  @Override
+  protected ResourceManagerAdministrationProtocol getAdminRMProxyForSubCluster(
+      SubClusterId subClusterId) throws YarnException {
+    MockRM mockRM;
+    synchronized (this) {
+      if (mockRMs.containsKey(subClusterId)) {
+        mockRM = mockRMs.get(subClusterId);
+      } else {
+        mockRM = new MockRM();
+        if (badSubCluster.contains(subClusterId)) {
+          return new MockRMAdminBadService(mockRM);
+        }
+        mockRM.init(super.getConf());
+        mockRM.start();
+        mockRMs.put(subClusterId, mockRM);
+      }
+      return mockRM.getAdminService();
+    }
+  }
+
+  // This represents an unserviceable SubCluster
+  private class MockRMAdminBadService extends AdminService {
+    MockRMAdminBadService(ResourceManager rm) {
+      super(rm);
+    }
+
+    @Override
+    public void refreshQueues() throws IOException, YarnException {
+      throw new ConnectException("RM is stopped");
+    }
+  }
+
+  @Override
+  public void shutdown() {
+    // if mockRMs is not null
+    if (MapUtils.isNotEmpty(mockRMs)) {
+      for (Map.Entry<SubClusterId, MockRM> item : mockRMs.entrySet()) {
+        SubClusterId subClusterId = item.getKey();
+        // close mockRM.
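+        // (Each MockRM created lazily in getAdminRMProxyForSubCluster is stopped
+        //  here so that individual test runs do not leak running ResourceManagers.)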
+ MockRM mockRM = item.getValue(); + if (mockRM != null) { + LOG.info("subClusterId = {} mockRM shutdown.", subClusterId); + mockRM.stop(); + } + } + } + mockRMs.clear(); + super.shutdown(); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/secure/AbstractSecureRouterTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/secure/AbstractSecureRouterTest.java index f9d1d047642..062d732e873 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/secure/AbstractSecureRouterTest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/secure/AbstractSecureRouterTest.java @@ -24,7 +24,9 @@ import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; import org.apache.hadoop.minikdc.MiniKdc; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; +import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; import org.apache.hadoop.yarn.server.resourcemanager.MockRM; import org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart; import org.apache.hadoop.yarn.server.router.Router; @@ -179,6 +181,9 @@ public abstract class AbstractSecureRouterTest { */ public synchronized void startSecureRouter() { assertNull("Router is already running", router); + MemoryFederationStateStore stateStore = new MemoryFederationStateStore(); + stateStore.init(getConf()); + FederationStateStoreFacade.getInstance().reinitialize(stateStore, getConf()); UserGroupInformation.setConfiguration(conf); router = new Router(); router.init(conf); @@ -238,4 +243,7 @@ public abstract class AbstractSecureRouterTest { return mockRMs; } + public static Configuration getConf() { + return conf; + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/secure/TestRouterDelegationTokenSecretManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/secure/TestRouterDelegationTokenSecretManager.java new file mode 100644 index 00000000000..eac2c5a03ba --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/secure/TestRouterDelegationTokenSecretManager.java @@ -0,0 +1,201 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.router.secure;
+
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.token.delegation.DelegationKey;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier;
+import org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService;
+import org.apache.hadoop.yarn.server.router.security.RouterDelegationTokenSecretManager;
+import org.junit.Assert;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertArrayEquals;
+
+public class TestRouterDelegationTokenSecretManager extends AbstractSecureRouterTest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestRouterDelegationTokenSecretManager.class);
+
+  @Test
+  public void testRouterStoreNewMasterKey() throws Exception {
+    LOG.info("Test RouterDelegationTokenSecretManager: StoreNewMasterKey.");
+
+    // Start the Router in Secure Mode
+    startSecureRouter();
+
+    // Store NewMasterKey
+    RouterClientRMService routerClientRMService = this.getRouter().getClientRMProxyService();
+    RouterDelegationTokenSecretManager secretManager =
+        routerClientRMService.getRouterDTSecretManager();
+    DelegationKey storeKey = new DelegationKey(1234, 4321, "keyBytes".getBytes());
+    secretManager.storeNewMasterKey(storeKey);
+
+    // Get DelegationKey
+    DelegationKey paramKey = new DelegationKey(1234, 4321, "keyBytes".getBytes());
+    DelegationKey responseKey = secretManager.getMasterKeyByDelegationKey(paramKey);
+
+    assertNotNull(responseKey);
+    assertEquals(storeKey.getExpiryDate(), responseKey.getExpiryDate());
+    assertEquals(storeKey.getKeyId(), responseKey.getKeyId());
+    assertArrayEquals(storeKey.getEncodedKey(), responseKey.getEncodedKey());
+    assertEquals(storeKey, responseKey);
+
+    stopSecureRouter();
+  }
+
+  @Test
+  public void testRouterRemoveStoredMasterKey() throws Exception {
+    LOG.info("Test RouterDelegationTokenSecretManager: RemoveStoredMasterKey.");
+
+    // Start the Router in Secure Mode
+    startSecureRouter();
+
+    // Store NewMasterKey
+    RouterClientRMService routerClientRMService = this.getRouter().getClientRMProxyService();
+    RouterDelegationTokenSecretManager secretManager =
+        routerClientRMService.getRouterDTSecretManager();
+    DelegationKey storeKey = new DelegationKey(1234, 4321, "keyBytes".getBytes());
+    secretManager.storeNewMasterKey(storeKey);
+
+    // Remove DelegationKey
+    secretManager.removeStoredMasterKey(storeKey);
+
+    // Get DelegationKey
+    DelegationKey paramKey = new DelegationKey(1234, 4321, "keyBytes".getBytes());
+    LambdaTestUtils.intercept(IOException.class,
+        "GetMasterKey with keyID: " + storeKey.getKeyId() + " does not exist.",
+        () -> secretManager.getMasterKeyByDelegationKey(paramKey));
+
+    stopSecureRouter();
+  }
+
+  @Test
+  public void testRouterStoreNewToken() throws Exception {
+    LOG.info("Test RouterDelegationTokenSecretManager: StoreNewToken.");
+
+    // Start the Router in Secure Mode
+    startSecureRouter();
+
+    // Store new rm-token
+    RouterClientRMService routerClientRMService = this.getRouter().getClientRMProxyService();
+    RouterDelegationTokenSecretManager secretManager =
routerClientRMService.getRouterDTSecretManager(); + RMDelegationTokenIdentifier dtId1 = new RMDelegationTokenIdentifier( + new Text("owner1"), new Text("renewer1"), new Text("realuser1")); + int sequenceNumber = 1; + dtId1.setSequenceNumber(sequenceNumber); + Long renewDate1 = Time.now(); + secretManager.storeNewToken(dtId1, renewDate1); + + // query rm-token + RMDelegationTokenIdentifier dtId2 = new RMDelegationTokenIdentifier( + new Text("owner1"), new Text("renewer1"), new Text("realuser1")); + dtId2.setSequenceNumber(sequenceNumber); + RMDelegationTokenIdentifier dtId3 = secretManager.getTokenByRouterStoreToken(dtId2); + Assert.assertEquals(dtId1, dtId3); + + // query rm-token2 not exists + sequenceNumber++; + dtId2.setSequenceNumber(2); + LambdaTestUtils.intercept(YarnException.class, + "RMDelegationToken: " + dtId2 + " does not exist.", + () -> secretManager.getTokenByRouterStoreToken(dtId2)); + + stopSecureRouter(); + } + + @Test + public void testRouterUpdateNewToken() throws Exception { + LOG.info("Test RouterDelegationTokenSecretManager: UpdateNewToken."); + + // Start the Router in Secure Mode + startSecureRouter(); + + // Store new rm-token + RouterClientRMService routerClientRMService = this.getRouter().getClientRMProxyService(); + RouterDelegationTokenSecretManager secretManager = + routerClientRMService.getRouterDTSecretManager(); + RMDelegationTokenIdentifier dtId1 = new RMDelegationTokenIdentifier( + new Text("owner1"), new Text("renewer1"), new Text("realuser1")); + int sequenceNumber = 1; + dtId1.setSequenceNumber(sequenceNumber); + Long renewDate1 = Time.now(); + secretManager.storeNewToken(dtId1, renewDate1); + + sequenceNumber++; + dtId1.setSequenceNumber(sequenceNumber); + secretManager.updateStoredToken(dtId1, renewDate1); + + // query rm-token + RMDelegationTokenIdentifier dtId2 = new RMDelegationTokenIdentifier( + new Text("owner1"), new Text("renewer1"), new Text("realuser1")); + dtId2.setSequenceNumber(sequenceNumber); + RMDelegationTokenIdentifier dtId3 = secretManager.getTokenByRouterStoreToken(dtId2); + assertNotNull(dtId3); + assertEquals(dtId1.getKind(), dtId3.getKind()); + assertEquals(dtId1.getOwner(), dtId3.getOwner()); + assertEquals(dtId1.getRealUser(), dtId3.getRealUser()); + assertEquals(dtId1.getRenewer(), dtId3.getRenewer()); + assertEquals(dtId1.getIssueDate(), dtId3.getIssueDate()); + assertEquals(dtId1.getMasterKeyId(), dtId3.getMasterKeyId()); + assertEquals(dtId1.getSequenceNumber(), dtId3.getSequenceNumber()); + assertEquals(sequenceNumber, dtId3.getSequenceNumber()); + assertEquals(dtId1, dtId3); + + stopSecureRouter(); + } + + @Test + public void testRouterRemoveToken() throws Exception { + LOG.info("Test RouterDelegationTokenSecretManager: RouterRemoveToken."); + + // Start the Router in Secure Mode + startSecureRouter(); + + // Store new rm-token + RouterClientRMService routerClientRMService = this.getRouter().getClientRMProxyService(); + RouterDelegationTokenSecretManager secretManager = + routerClientRMService.getRouterDTSecretManager(); + RMDelegationTokenIdentifier dtId1 = new RMDelegationTokenIdentifier( + new Text("owner1"), new Text("renewer1"), new Text("realuser1")); + int sequenceNumber = 1; + dtId1.setSequenceNumber(sequenceNumber); + Long renewDate1 = Time.now(); + secretManager.storeNewToken(dtId1, renewDate1); + + // Remove rm-token + secretManager.removeStoredToken(dtId1); + + // query rm-token + LambdaTestUtils.intercept(YarnException.class, + "RMDelegationToken: " + dtId1 + " does not exist.", + () -> 
secretManager.getTokenByRouterStoreToken(dtId1)); + + stopSecureRouter(); + } +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/BaseRouterWebServicesTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/BaseRouterWebServicesTest.java index a4294bc3610..423e0e5a38c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/BaseRouterWebServicesTest.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/BaseRouterWebServicesTest.java @@ -32,6 +32,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.security.authorize.AuthorizationException; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ActivitiesInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppActivitiesInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppAttemptsInfo; @@ -74,10 +75,18 @@ public abstract class BaseRouterWebServicesTest { private Router router; public final static int TEST_MAX_CACHE_SIZE = 10; + public static final String QUEUE_DEFAULT = "default"; + public static final String QUEUE_DEFAULT_FULL = CapacitySchedulerConfiguration.ROOT + + CapacitySchedulerConfiguration.DOT + QUEUE_DEFAULT; + public static final String QUEUE_DEDICATED = "dedicated"; + public static final String QUEUE_DEDICATED_FULL = CapacitySchedulerConfiguration.ROOT + + CapacitySchedulerConfiguration.DOT + QUEUE_DEDICATED; + private RouterWebServices routerWebService; @Before - public void setUp() { + public void setUp() throws YarnException, IOException { + this.conf = createConfiguration(); router = spy(new Router()); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockDefaultRequestInterceptorREST.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockDefaultRequestInterceptorREST.java index e0a3148659a..2e118d172c1 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockDefaultRequestInterceptorREST.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockDefaultRequestInterceptorREST.java @@ -20,28 +20,41 @@ package org.apache.hadoop.yarn.server.router.webapp; import java.io.IOException; import java.net.ConnectException; +import java.security.Principal; import java.util.ArrayList; import java.util.Set; import java.util.Map; import java.util.HashMap; import java.util.Collections; import java.util.Arrays; +import java.util.List; import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.ConcurrentHashMap; import java.util.stream.Collectors; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; +import 
javax.ws.rs.core.Context; import javax.ws.rs.core.HttpHeaders; import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.Status; import org.apache.commons.lang3.EnumUtils; import org.apache.commons.lang3.StringUtils; +import org.apache.hadoop.classification.VisibleForTesting; +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; +import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.authorize.AuthorizationException; +import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet; import org.apache.hadoop.util.Sets; import org.apache.hadoop.util.Time; +import org.apache.hadoop.yarn.api.protocolrecords.ReservationDeleteRequest; +import org.apache.hadoop.yarn.api.protocolrecords.ReservationListRequest; +import org.apache.hadoop.yarn.api.protocolrecords.ReservationListResponse; +import org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionRequest; +import org.apache.hadoop.yarn.api.protocolrecords.ReservationUpdateRequest; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ReservationId; import org.apache.hadoop.yarn.api.records.Resource; @@ -60,9 +73,11 @@ import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport; import org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState; import org.apache.hadoop.yarn.api.records.ApplicationTimeoutType; import org.apache.hadoop.yarn.api.records.ApplicationTimeout; -import org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionRequest; -import org.apache.hadoop.yarn.api.protocolrecords.ReservationListRequest; -import org.apache.hadoop.yarn.api.protocolrecords.ReservationListResponse; +import org.apache.hadoop.yarn.api.records.ReservationRequestInterpreter; +import org.apache.hadoop.yarn.api.records.ReservationRequest; +import org.apache.hadoop.yarn.api.records.ReservationRequests; +import org.apache.hadoop.yarn.api.records.ReservationDefinition; +import org.apache.hadoop.yarn.api.records.QueueACL; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException; import org.apache.hadoop.yarn.exceptions.YarnException; @@ -70,8 +85,8 @@ import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; import org.apache.hadoop.yarn.server.resourcemanager.ClientRMService; import org.apache.hadoop.yarn.server.resourcemanager.MockRM; import org.apache.hadoop.yarn.server.resourcemanager.RMContext; +import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager; import org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSystem; -import org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSystemTestUtil; import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler; @@ -85,6 +100,8 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.Capacity import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestUtils; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue; +import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerTestUtilities; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode; import org.apache.hadoop.yarn.server.resourcemanager.webapp.NodeIDsInfo; @@ -110,19 +127,46 @@ import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppQueue; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.StatisticsItemInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationStatisticsInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppActivitiesInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NewReservation; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationSubmissionRequestInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationUpdateRequestInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDeleteRequestInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationListInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDefinitionInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationRequestInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationRequestsInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationUpdateResponseInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDeleteResponseInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ActivitiesInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.BulkActivitiesInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeAllocationInfo; +import org.apache.hadoop.yarn.server.router.RouterServerUtil; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.RMQueueAclInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerInfo; import org.apache.hadoop.yarn.server.scheduler.SchedulerRequestKey; import org.apache.hadoop.yarn.server.webapp.dao.AppAttemptInfo; import org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo; import org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo; import org.apache.hadoop.yarn.util.SystemClock; import org.apache.hadoop.yarn.util.resource.Resources; +import org.apache.hadoop.yarn.webapp.BadRequestException; +import org.apache.hadoop.yarn.webapp.ForbiddenException; import org.apache.hadoop.yarn.webapp.NotFoundException; import org.mockito.Mockito; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + +import static org.apache.hadoop.yarn.server.router.webapp.BaseRouterWebServicesTest.QUEUE_DEFAULT; +import static org.apache.hadoop.yarn.server.router.webapp.BaseRouterWebServicesTest.QUEUE_DEFAULT_FULL; +import static org.apache.hadoop.yarn.server.router.webapp.BaseRouterWebServicesTest.QUEUE_DEDICATED; +import static org.apache.hadoop.yarn.server.router.webapp.BaseRouterWebServicesTest.QUEUE_DEDICATED_FULL; import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; /** * 
This class mocks the RESTRequestInterceptor. @@ -140,19 +184,16 @@ public class MockDefaultRequestInterceptorREST private Map applicationMap = new HashMap<>(); public static final String APP_STATE_RUNNING = "RUNNING"; - private static final String QUEUE_DEFAULT = "default"; - private static final String QUEUE_DEFAULT_FULL = CapacitySchedulerConfiguration.ROOT + - CapacitySchedulerConfiguration.DOT + QUEUE_DEFAULT; - private static final String QUEUE_DEDICATED = "dedicated"; - public static final String QUEUE_DEDICATED_FULL = CapacitySchedulerConfiguration.ROOT + - CapacitySchedulerConfiguration.DOT + QUEUE_DEDICATED; - // duration(milliseconds), 1mins public static final long DURATION = 60*1000; // Containers 4 public static final int NUM_CONTAINERS = 4; + private Map reservationMap = new HashMap<>(); + private AtomicLong resCounter = new AtomicLong(); + private MockRM mockRM = null; + private void validateRunning() throws ConnectException { if (!isRunning) { throw new ConnectException("RM is stopped"); @@ -343,6 +384,23 @@ public class MockDefaultRequestInterceptorREST throw new RuntimeException("RM is stopped"); } + // Try format conversion for app_id + ApplicationId applicationId = null; + try { + applicationId = ApplicationId.fromString(appId); + } catch (Exception e) { + throw new BadRequestException(e); + } + + // Try format conversion for app_attempt_id + ApplicationAttemptId applicationAttemptId = null; + try { + applicationAttemptId = + ApplicationAttemptId.fromString(appAttemptId); + } catch (Exception e) { + throw new BadRequestException(e); + } + // We avoid to check if the Application exists in the system because we need // to validate that each subCluster returns 1 container. ContainersInfo containers = new ContainersInfo(); @@ -453,8 +511,7 @@ public class MockDefaultRequestInterceptorREST throw new RuntimeException("RM is stopped"); } - ContainerId newContainerId = ContainerId.newContainerId( - ApplicationAttemptId.fromString(appAttemptId), Integer.valueOf(containerId)); + ContainerId newContainerId = ContainerId.fromString(containerId); Resource allocatedResource = Resource.newInstance(1024, 2); @@ -505,15 +562,15 @@ public class MockDefaultRequestInterceptorREST throw new NotFoundException("app with id: " + appId + " not found"); } + ApplicationAttemptId attemptId = ApplicationAttemptId.fromString(appAttemptId); + ApplicationReport newApplicationReport = ApplicationReport.newInstance( - applicationId, ApplicationAttemptId.newInstance(applicationId, Integer.parseInt(appAttemptId)), - "user", "queue", "appname", "host", 124, null, + applicationId, attemptId, "user", "queue", "appname", "host", 124, null, YarnApplicationState.RUNNING, "diagnostics", "url", 1, 2, 3, 4, FinalApplicationStatus.SUCCEEDED, null, "N/A", 0.53789f, "YARN", null); ApplicationAttemptReport attempt = ApplicationAttemptReport.newInstance( - ApplicationAttemptId.newInstance(applicationId, Integer.parseInt(appAttemptId)), - "host", 124, "url", "oUrl", "diagnostics", + attemptId, "host", 124, "url", "oUrl", "diagnostics", YarnApplicationAttemptState.FINISHED, ContainerId.newContainerId( newApplicationReport.getCurrentApplicationAttemptId(), 1)); @@ -830,44 +887,206 @@ public class MockDefaultRequestInterceptorREST " Please try again with a valid reservable queue."); } - MockRM mockRM = setupResourceManager(); + ReservationId reservationID = + ReservationId.parseReservationId(reservationId); - ReservationId reservationID = ReservationId.parseReservationId(reservationId); - ReservationSystem 
reservationSystem = mockRM.getReservationSystem(); - reservationSystem.synchronizePlan(QUEUE_DEDICATED_FULL, true); + if (!reservationMap.containsKey(reservationID)) { + throw new NotFoundException("reservationId with id: " + reservationId + " not found"); + } - // Generate reserved resources ClientRMService clientService = mockRM.getClientRMService(); - // arrival time from which the resource(s) can be allocated. - long arrival = Time.now(); - - // deadline by when the resource(s) must be allocated. - // The reason for choosing 1.05 is because this gives an integer - // DURATION * 0.05 = 3000(ms) - // deadline = arrival + 3000ms - long deadline = (long) (arrival + 1.05 * DURATION); - - // In this test of reserved resources, we will apply for 4 containers (1 core, 1GB memory) - // arrival = Time.now(), and make sure deadline - arrival > duration, - // the current setting is greater than 3000ms - ReservationSubmissionRequest submissionRequest = - ReservationSystemTestUtil.createSimpleReservationRequest( - reservationID, NUM_CONTAINERS, arrival, deadline, DURATION); - clientService.submitReservation(submissionRequest); - // listReservations ReservationListRequest request = ReservationListRequest.newInstance( - queue, reservationID.toString(), startTime, endTime, includeResourceAllocations); + queue, reservationId, startTime, endTime, includeResourceAllocations); ReservationListResponse resRespInfo = clientService.listReservations(request); ReservationListInfo resResponse = new ReservationListInfo(resRespInfo, includeResourceAllocations); - if (mockRM != null) { - mockRM.stop(); + return Response.status(Status.OK).entity(resResponse).build(); + } + + @Override + public Response createNewReservation(HttpServletRequest hsr) + throws AuthorizationException, IOException, InterruptedException { + + if (!isRunning) { + throw new RuntimeException("RM is stopped"); } - return Response.status(Status.OK).entity(resResponse).build(); + ReservationId resId = ReservationId.newInstance(Time.now(), resCounter.incrementAndGet()); + LOG.info("Allocated new reservationId: {}.", resId); + + NewReservation reservationId = new NewReservation(resId.toString()); + return Response.status(Status.OK).entity(reservationId).build(); + } + + @Override + public Response submitReservation(ReservationSubmissionRequestInfo resContext, + HttpServletRequest hsr) throws AuthorizationException, IOException, InterruptedException { + + if (!isRunning) { + throw new RuntimeException("RM is stopped"); + } + + ReservationId reservationId = ReservationId.parseReservationId(resContext.getReservationId()); + ReservationDefinitionInfo definitionInfo = resContext.getReservationDefinition(); + ReservationDefinition definition = + RouterServerUtil.convertReservationDefinition(definitionInfo); + ReservationSubmissionRequest request = ReservationSubmissionRequest.newInstance( + definition, resContext.getQueue(), reservationId); + submitReservation(request); + + LOG.info("Reservation submitted: {}.", reservationId); + + SubClusterId subClusterId = getSubClusterId(); + reservationMap.put(reservationId, subClusterId); + + return Response.status(Status.ACCEPTED).build(); + } + + private void submitReservation(ReservationSubmissionRequest request) { + try { + // synchronize plan + ReservationSystem reservationSystem = mockRM.getReservationSystem(); + reservationSystem.synchronizePlan(QUEUE_DEDICATED_FULL, true); + // Generate reserved resources + ClientRMService clientService = mockRM.getClientRMService(); + 
clientService.submitReservation(request); + } catch (IOException | YarnException e) { + throw new RuntimeException(e); + } + } + + @Override + public Response updateReservation(ReservationUpdateRequestInfo resContext, + HttpServletRequest hsr) throws AuthorizationException, IOException, InterruptedException { + + if (resContext == null || resContext.getReservationId() == null || + resContext.getReservationDefinition() == null) { + return Response.status(Status.BAD_REQUEST).build(); + } + + String resId = resContext.getReservationId(); + ReservationId reservationId = ReservationId.parseReservationId(resId); + + if (!reservationMap.containsKey(reservationId)) { + throw new NotFoundException("reservationId with id: " + reservationId + " not found"); + } + + // Generate reserved resources + updateReservation(resContext); + + ReservationUpdateResponseInfo resRespInfo = new ReservationUpdateResponseInfo(); + return Response.status(Status.OK).entity(resRespInfo).build(); + } + + private void updateReservation(ReservationUpdateRequestInfo resContext) throws IOException { + + if (resContext == null) { + throw new BadRequestException("Input ReservationSubmissionContext should not be null"); + } + + ReservationDefinitionInfo resInfo = resContext.getReservationDefinition(); + if (resInfo == null) { + throw new BadRequestException("Input ReservationDefinition should not be null"); + } + + ReservationRequestsInfo resReqsInfo = resInfo.getReservationRequests(); + if (resReqsInfo == null || resReqsInfo.getReservationRequest() == null + || resReqsInfo.getReservationRequest().isEmpty()) { + throw new BadRequestException("The ReservationDefinition should " + + "contain at least one ReservationRequest"); + } + + if (resContext.getReservationId() == null) { + throw new BadRequestException("Update operations must specify an existing ReservaitonId"); + } + + ReservationRequestInterpreter[] values = ReservationRequestInterpreter.values(); + ReservationRequestInterpreter requestInterpreter = + values[resReqsInfo.getReservationRequestsInterpreter()]; + List list = new ArrayList<>(); + + for (ReservationRequestInfo resReqInfo : resReqsInfo.getReservationRequest()) { + ResourceInfo rInfo = resReqInfo.getCapability(); + Resource capability = Resource.newInstance(rInfo.getMemorySize(), rInfo.getvCores()); + int numContainers = resReqInfo.getNumContainers(); + int minConcurrency = resReqInfo.getMinConcurrency(); + long duration = resReqInfo.getDuration(); + ReservationRequest rr = ReservationRequest.newInstance( + capability, numContainers, minConcurrency, duration); + list.add(rr); + } + + ReservationRequests reqs = ReservationRequests.newInstance(list, requestInterpreter); + ReservationDefinition rDef = ReservationDefinition.newInstance( + resInfo.getArrival(), resInfo.getDeadline(), reqs, + resInfo.getReservationName(), resInfo.getRecurrenceExpression(), + Priority.newInstance(resInfo.getPriority())); + ReservationUpdateRequest request = ReservationUpdateRequest.newInstance( + rDef, ReservationId.parseReservationId(resContext.getReservationId())); + + ClientRMService clientService = mockRM.getClientRMService(); + try { + clientService.updateReservation(request); + } catch (YarnException ex) { + throw new RuntimeException(ex); + } + } + + @Override + public Response deleteReservation(ReservationDeleteRequestInfo resContext, HttpServletRequest hsr) + throws AuthorizationException, IOException, InterruptedException { + if (!isRunning) { + throw new RuntimeException("RM is stopped"); + } + + try { + String resId = 
resContext.getReservationId(); + ReservationId reservationId = ReservationId.parseReservationId(resId); + + if (!reservationMap.containsKey(reservationId)) { + throw new NotFoundException("reservationId with id: " + reservationId + " not found"); + } + + ReservationDeleteRequest reservationDeleteRequest = + ReservationDeleteRequest.newInstance(reservationId); + ClientRMService clientService = mockRM.getClientRMService(); + clientService.deleteReservation(reservationDeleteRequest); + + ReservationDeleteResponseInfo resRespInfo = new ReservationDeleteResponseInfo(); + reservationMap.remove(reservationId); + + return Response.status(Status.OK).entity(resRespInfo).build(); + } catch (YarnException e) { + throw new RuntimeException(e); + } + } + + @VisibleForTesting + public MockRM getMockRM() { + return mockRM; + } + + @VisibleForTesting + public void setMockRM(MockRM mockResourceManager) { + this.mockRM = mockResourceManager; + } + + @Override + public NodeLabelsInfo getRMNodeLabels(HttpServletRequest hsr) { + + NodeLabelInfo nodeLabelInfo = new NodeLabelInfo(); + nodeLabelInfo.setExclusivity(true); + nodeLabelInfo.setName("Test-Label"); + nodeLabelInfo.setActiveNMs(10); + PartitionInfo partitionInfo = new PartitionInfo(); + + NodeLabelsInfo nodeLabelsInfo = new NodeLabelsInfo(); + nodeLabelsInfo.getNodeLabelsInfo().add(nodeLabelInfo); + + return nodeLabelsInfo; } private MockRM setupResourceManager() throws Exception { @@ -890,4 +1109,182 @@ public class MockDefaultRequestInterceptorREST rm.registerNode("127.0.0.1:5678", 100*1024, 100); return rm; } + + @Override + public RMQueueAclInfo checkUserAccessToQueue(String queue, String username, + String queueAclType, HttpServletRequest hsr) throws AuthorizationException { + + ResourceManager mockResourceManager = mock(ResourceManager.class); + Configuration conf = new YarnConfiguration(); + + ResourceScheduler mockScheduler = new CapacityScheduler() { + @Override + public synchronized boolean checkAccess(UserGroupInformation callerUGI, + QueueACL acl, String queueName) { + if (acl == QueueACL.ADMINISTER_QUEUE) { + if (callerUGI.getUserName().equals("admin")) { + return true; + } + } else { + if (ImmutableSet.of("admin", "yarn").contains(callerUGI.getUserName())) { + return true; + } + } + return false; + } + }; + + when(mockResourceManager.getResourceScheduler()).thenReturn(mockScheduler); + MockRMWebServices webSvc = new MockRMWebServices(mockResourceManager, conf, + mock(HttpServletResponse.class)); + return webSvc.checkUserAccessToQueue(queue, username, queueAclType, hsr); + } + + class MockRMWebServices { + + @Context + private HttpServletResponse httpServletResponse; + private ResourceManager resourceManager; + + private void initForReadableEndpoints() { + // clear content type + httpServletResponse.setContentType(null); + } + + MockRMWebServices(ResourceManager rm, Configuration conf, HttpServletResponse response) { + this.resourceManager = rm; + this.httpServletResponse = response; + } + + private UserGroupInformation getCallerUserGroupInformation( + HttpServletRequest hsr, boolean usePrincipal) { + + String remoteUser = hsr.getRemoteUser(); + + if (usePrincipal) { + Principal princ = hsr.getUserPrincipal(); + remoteUser = princ == null ? 
null : princ.getName(); + } + + UserGroupInformation callerUGI = null; + if (remoteUser != null) { + callerUGI = UserGroupInformation.createRemoteUser(remoteUser); + } + + return callerUGI; + } + + public RMQueueAclInfo checkUserAccessToQueue( + String queue, String username, String queueAclType, HttpServletRequest hsr) + throws AuthorizationException { + initForReadableEndpoints(); + + // For the user who invokes this REST call, he/she should have admin access + // to the queue. Otherwise we will reject the call. + UserGroupInformation callerUGI = getCallerUserGroupInformation(hsr, true); + if (callerUGI != null && !this.resourceManager.getResourceScheduler().checkAccess( + callerUGI, QueueACL.ADMINISTER_QUEUE, queue)) { + throw new ForbiddenException( + "User=" + callerUGI.getUserName() + " doesn't haven access to queue=" + + queue + " so it cannot check ACLs for other users."); + } + + // Create UGI for the to-be-checked user. + UserGroupInformation user = UserGroupInformation.createRemoteUser(username); + if (user == null) { + throw new ForbiddenException( + "Failed to retrieve UserGroupInformation for user=" + username); + } + + // Check if the specified queue acl is valid. + QueueACL queueACL; + try { + queueACL = QueueACL.valueOf(queueAclType); + } catch (IllegalArgumentException e) { + throw new BadRequestException("Specified queueAclType=" + queueAclType + + " is not a valid type, valid queue acl types={" + + "SUBMIT_APPLICATIONS/ADMINISTER_QUEUE}"); + } + + if (!this.resourceManager.getResourceScheduler().checkAccess(user, queueACL, queue)) { + return new RMQueueAclInfo(false, user.getUserName(), + "User=" + username + " doesn't have access to queue=" + queue + + " with acl-type=" + queueAclType); + } + + return new RMQueueAclInfo(true, user.getUserName(), ""); + } + } + + @Override + public ActivitiesInfo getActivities(HttpServletRequest hsr, String nodeId, String groupBy) { + if (!EnumUtils.isValidEnum(RMWSConsts.ActivitiesGroupBy.class, groupBy.toUpperCase())) { + String errMessage = "Got invalid groupBy: " + groupBy + ", valid groupBy types: " + + Arrays.asList(RMWSConsts.ActivitiesGroupBy.values()); + throw new IllegalArgumentException(errMessage); + } + + SubClusterId subClusterId = getSubClusterId(); + ActivitiesInfo activitiesInfo = mock(ActivitiesInfo.class); + Mockito.when(activitiesInfo.getNodeId()).thenReturn(nodeId); + Mockito.when(activitiesInfo.getTimestamp()).thenReturn(1673081972L); + Mockito.when(activitiesInfo.getDiagnostic()).thenReturn("Diagnostic:" + subClusterId.getId()); + + List allocationInfos = new ArrayList<>(); + NodeAllocationInfo nodeAllocationInfo = mock(NodeAllocationInfo.class); + Mockito.when(nodeAllocationInfo.getPartition()).thenReturn("p" + subClusterId.getId()); + Mockito.when(nodeAllocationInfo.getFinalAllocationState()).thenReturn("ALLOCATED"); + + allocationInfos.add(nodeAllocationInfo); + Mockito.when(activitiesInfo.getAllocations()).thenReturn(allocationInfos); + return activitiesInfo; + } + + @Override + public BulkActivitiesInfo getBulkActivities(HttpServletRequest hsr, + String groupBy, int activitiesCount) { + + if (activitiesCount <= 0) { + throw new IllegalArgumentException("activitiesCount needs to be greater than 0."); + } + + if (!EnumUtils.isValidEnum(RMWSConsts.ActivitiesGroupBy.class, groupBy.toUpperCase())) { + String errMessage = "Got invalid groupBy: " + groupBy + ", valid groupBy types: " + + Arrays.asList(RMWSConsts.ActivitiesGroupBy.values()); + throw new IllegalArgumentException(errMessage); + } + + BulkActivitiesInfo 
bulkActivitiesInfo = new BulkActivitiesInfo(); + + for (int i = 0; i < activitiesCount; i++) { + SubClusterId subClusterId = getSubClusterId(); + ActivitiesInfo activitiesInfo = mock(ActivitiesInfo.class); + Mockito.when(activitiesInfo.getNodeId()).thenReturn(subClusterId + "-nodeId-" + i); + Mockito.when(activitiesInfo.getTimestamp()).thenReturn(1673081972L); + Mockito.when(activitiesInfo.getDiagnostic()).thenReturn("Diagnostic:" + subClusterId.getId()); + + List allocationInfos = new ArrayList<>(); + NodeAllocationInfo nodeAllocationInfo = mock(NodeAllocationInfo.class); + Mockito.when(nodeAllocationInfo.getPartition()).thenReturn("p" + subClusterId.getId()); + Mockito.when(nodeAllocationInfo.getFinalAllocationState()).thenReturn("ALLOCATED"); + + allocationInfos.add(nodeAllocationInfo); + Mockito.when(activitiesInfo.getAllocations()).thenReturn(allocationInfos); + bulkActivitiesInfo.getActivities().add(activitiesInfo); + } + + return bulkActivitiesInfo; + } + + public SchedulerTypeInfo getSchedulerInfo() { + try { + ResourceManager resourceManager = CapacitySchedulerTestUtilities.createResourceManager(); + CapacityScheduler cs = (CapacityScheduler) resourceManager.getResourceScheduler(); + CSQueue root = cs.getRootQueue(); + SchedulerInfo schedulerInfo = new CapacitySchedulerInfo(root, cs); + return new SchedulerTypeInfo(schedulerInfo); + } catch (Exception e) { + throw new RuntimeException(e); + } + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockRESTRequestInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockRESTRequestInterceptor.java index 5951676a6d8..a09199b9e85 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockRESTRequestInterceptor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockRESTRequestInterceptor.java @@ -185,6 +185,11 @@ public class MockRESTRequestInterceptor extends AbstractRESTRequestInterceptor { return new NodeToLabelsInfo(); } + @Override + public NodeLabelsInfo getRMNodeLabels(HttpServletRequest hsr) throws IOException { + return new NodeLabelsInfo(); + } + @Override public LabelsToNodesInfo getLabelsToNodes(Set labels) throws IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/PassThroughRESTRequestInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/PassThroughRESTRequestInterceptor.java index 84a6de3205f..1bffd40db3c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/PassThroughRESTRequestInterceptor.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/PassThroughRESTRequestInterceptor.java @@ -217,6 +217,11 @@ public class PassThroughRESTRequestInterceptor return getNextInterceptor().getNodeToLabels(hsr); } + @Override + public NodeLabelsInfo getRMNodeLabels(HttpServletRequest hsr) throws IOException { + return 
getNextInterceptor().getRMNodeLabels(hsr); + } + @Override public LabelsToNodesInfo getLabelsToNodes(Set labels) throws IOException { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java index 596f2105288..edaa1e26e93 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java @@ -19,6 +19,7 @@ package org.apache.hadoop.yarn.server.router.webapp; import java.io.IOException; +import java.security.Principal; import java.util.ArrayList; import java.util.List; import java.util.HashMap; @@ -26,13 +27,21 @@ import java.util.Map; import java.util.Set; import java.util.HashSet; import java.util.Collections; +import java.util.stream.Collectors; +import java.util.concurrent.TimeUnit; +import javax.servlet.http.HttpServletRequest; import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.Status; +import org.apache.commons.lang3.StringUtils; +import org.apache.hadoop.security.authorize.AuthorizationException; +import org.apache.hadoop.test.LambdaTestUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.http.HttpConfig; +import org.apache.hadoop.util.Lists; import org.apache.hadoop.util.Time; +import org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionRequest; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ReservationId; import org.apache.hadoop.yarn.api.records.Resource; @@ -41,8 +50,16 @@ import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; import org.apache.hadoop.yarn.api.records.NodeLabel; import org.apache.hadoop.yarn.api.records.ApplicationTimeoutType; import org.apache.hadoop.yarn.api.records.YarnApplicationState; +import org.apache.hadoop.yarn.api.records.ReservationDefinition; +import org.apache.hadoop.yarn.api.records.Priority; +import org.apache.hadoop.yarn.api.records.ReservationRequest; +import org.apache.hadoop.yarn.api.records.ReservationRequests; +import org.apache.hadoop.yarn.api.records.ReservationRequestInterpreter; +import org.apache.hadoop.yarn.api.records.ContainerId; +import org.apache.hadoop.yarn.api.records.QueueACL; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import org.apache.hadoop.yarn.server.federation.policies.manager.UniformBroadcastPolicyManager; import org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; @@ -52,8 +69,6 @@ import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState; import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterRequest; import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterResponse; import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster; -import 
org.apache.hadoop.yarn.server.federation.store.records.ReservationHomeSubCluster; -import org.apache.hadoop.yarn.server.federation.store.records.AddReservationHomeSubClusterRequest; import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade; import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreTestUtil; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppInfo; @@ -79,25 +94,60 @@ import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppPriority; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppQueue; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationListInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationSubmissionRequestInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.RMQueueAclInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.DelegationToken; import org.apache.hadoop.yarn.server.resourcemanager.webapp.NodeIDsInfo; +import org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService; +import org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.RequestInterceptorChainWrapper; +import org.apache.hadoop.yarn.server.router.clientrm.TestableFederationClientInterceptor; +import org.apache.hadoop.yarn.server.router.security.RouterDelegationTokenSecretManager; import org.apache.hadoop.yarn.server.router.webapp.cache.RouterAppInfoCacheKey; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationStatisticsInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppActivitiesInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDefinitionInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationRequestsInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationRequestInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerQueueInfoList; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerQueueInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NewReservation; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationUpdateRequestInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDeleteRequestInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ActivitiesInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeAllocationInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.BulkActivitiesInfo; import org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo; import org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo; +import org.apache.hadoop.yarn.server.router.webapp.dao.FederationRMQueueAclInfo; +import org.apache.hadoop.yarn.server.router.webapp.dao.FederationBulkActivitiesInfo; +import org.apache.hadoop.yarn.server.router.webapp.dao.FederationSchedulerTypeInfo; import org.apache.hadoop.yarn.util.LRUCacheHashMap; import org.apache.hadoop.yarn.util.MonotonicClock; import org.apache.hadoop.yarn.util.Times; +import org.apache.hadoop.yarn.webapp.BadRequestException; import 
org.apache.hadoop.yarn.webapp.util.WebAppUtils; import org.junit.Assert; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import static org.apache.hadoop.yarn.server.router.webapp.MockDefaultRequestInterceptorREST.QUEUE_DEDICATED_FULL; + +import static org.apache.hadoop.yarn.conf.YarnConfiguration.RM_DELEGATION_KEY_UPDATE_INTERVAL_DEFAULT; +import static org.apache.hadoop.yarn.conf.YarnConfiguration.RM_DELEGATION_KEY_UPDATE_INTERVAL_KEY; +import static org.apache.hadoop.yarn.conf.YarnConfiguration.RM_DELEGATION_TOKEN_MAX_LIFETIME_DEFAULT; +import static org.apache.hadoop.yarn.conf.YarnConfiguration.RM_DELEGATION_TOKEN_MAX_LIFETIME_KEY; +import static org.apache.hadoop.yarn.conf.YarnConfiguration.RM_DELEGATION_TOKEN_RENEW_INTERVAL_DEFAULT; +import static org.apache.hadoop.yarn.conf.YarnConfiguration.RM_DELEGATION_TOKEN_RENEW_INTERVAL_KEY; +import static org.apache.hadoop.yarn.conf.YarnConfiguration.RM_DELEGATION_TOKEN_REMOVE_SCAN_INTERVAL_DEFAULT; +import static org.apache.hadoop.yarn.conf.YarnConfiguration.RM_DELEGATION_TOKEN_REMOVE_SCAN_INTERVAL_KEY; + +import static org.apache.hadoop.yarn.server.router.webapp.MockDefaultRequestInterceptorREST.DURATION; +import static org.apache.hadoop.yarn.server.router.webapp.MockDefaultRequestInterceptorREST.NUM_CONTAINERS; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; /** * Extends the {@code BaseRouterClientRMTest} and overrides methods in order to @@ -117,9 +167,10 @@ public class TestFederationInterceptorREST extends BaseRouterWebServicesTest { private MemoryFederationStateStore stateStore; private FederationStateStoreTestUtil stateStoreUtil; private List subClusters; + private static final String TEST_RENEWER = "test-renewer"; + + public void setUp() throws YarnException, IOException { - @Override - public void setUp() { super.setUpConfig(); interceptor = new TestableFederationInterceptorREST(); @@ -134,17 +185,46 @@ public class TestFederationInterceptorREST extends BaseRouterWebServicesTest { subClusters = new ArrayList<>(); - try { - for (int i = 0; i < NUM_SUBCLUSTER; i++) { - SubClusterId sc = SubClusterId.newInstance(Integer.toString(i)); - stateStoreUtil.registerSubCluster(sc); - subClusters.add(sc); - } - } catch (YarnException e) { - LOG.error(e.getMessage()); - Assert.fail(); + for (int i = 0; i < NUM_SUBCLUSTER; i++) { + SubClusterId sc = SubClusterId.newInstance(Integer.toString(i)); + stateStoreUtil.registerSubCluster(sc); + subClusters.add(sc); } + RouterClientRMService routerClientRMService = new RouterClientRMService(); + routerClientRMService.initUserPipelineMap(getConf()); + long secretKeyInterval = this.getConf().getLong( + RM_DELEGATION_KEY_UPDATE_INTERVAL_KEY, RM_DELEGATION_KEY_UPDATE_INTERVAL_DEFAULT); + long tokenMaxLifetime = this.getConf().getLong( + RM_DELEGATION_TOKEN_MAX_LIFETIME_KEY, RM_DELEGATION_TOKEN_MAX_LIFETIME_DEFAULT); + long tokenRenewInterval = this.getConf().getLong( + RM_DELEGATION_TOKEN_RENEW_INTERVAL_KEY, RM_DELEGATION_TOKEN_RENEW_INTERVAL_DEFAULT); + long removeScanInterval = this.getConf().getTimeDuration( + RM_DELEGATION_TOKEN_REMOVE_SCAN_INTERVAL_KEY, + RM_DELEGATION_TOKEN_REMOVE_SCAN_INTERVAL_DEFAULT, TimeUnit.MILLISECONDS); + RouterDelegationTokenSecretManager tokenSecretManager = new RouterDelegationTokenSecretManager( + secretKeyInterval, tokenMaxLifetime, tokenRenewInterval, removeScanInterval); + tokenSecretManager.startThreads(); + routerClientRMService.setRouterDTSecretManager(tokenSecretManager); + + 
TestableFederationClientInterceptor clientInterceptor = + new TestableFederationClientInterceptor(); + clientInterceptor.setConf(this.getConf()); + clientInterceptor.init(TEST_RENEWER); + clientInterceptor.setTokenSecretManager(tokenSecretManager); + RequestInterceptorChainWrapper wrapper = new RequestInterceptorChainWrapper(); + wrapper.init(clientInterceptor); + routerClientRMService.getUserPipelineMap().put(TEST_RENEWER, wrapper); + interceptor.setRouterClientRMService(routerClientRMService); + + for (SubClusterId subCluster : subClusters) { + SubClusterInfo subClusterInfo = stateStoreUtil.querySubClusterInfo(subCluster); + interceptor.getOrCreateInterceptorForSubCluster( + subCluster, subClusterInfo.getRMWebServiceAddress()); + } + + interceptor.setupResourceManager(); + } @Override @@ -634,23 +714,28 @@ public class TestFederationInterceptorREST extends BaseRouterWebServicesTest { } @Test - public void testGetContainersNotExists() { + public void testGetContainersNotExists() throws Exception { ApplicationId appId = ApplicationId.newInstance(Time.now(), 1); - ContainersInfo response = interceptor.getContainers(null, null, appId.toString(), null); - Assert.assertTrue(response.getContainers().isEmpty()); + LambdaTestUtils.intercept(IllegalArgumentException.class, + "Parameter error, the appAttemptId is empty or null.", + () -> interceptor.getContainers(null, null, appId.toString(), null)); } @Test - public void testGetContainersWrongFormat() { - ContainersInfo response = interceptor.getContainers(null, null, "Application_wrong_id", null); - - Assert.assertNotNull(response); - Assert.assertTrue(response.getContainers().isEmpty()); - + public void testGetContainersWrongFormat() throws Exception { ApplicationId appId = ApplicationId.newInstance(Time.now(), 1); - response = interceptor.getContainers(null, null, appId.toString(), "AppAttempt_wrong_id"); + ApplicationAttemptId appAttempt = ApplicationAttemptId.newInstance(appId, 1); - Assert.assertTrue(response.getContainers().isEmpty()); + // Test Case 1: appId is wrong format, appAttemptId is accurate. + LambdaTestUtils.intercept(IllegalArgumentException.class, + "Invalid ApplicationId prefix: Application_wrong_id. " + + "The valid ApplicationId should start with prefix application", + () -> interceptor.getContainers(null, null, "Application_wrong_id", appAttempt.toString())); + + // Test Case2: appId is accurate, appAttemptId is wrong format. 
+ LambdaTestUtils.intercept(IllegalArgumentException.class, + "Invalid AppAttemptId prefix: AppAttempt_wrong_id", + () -> interceptor.getContainers(null, null, appId.toString(), "AppAttempt_wrong_id")); } @Test @@ -733,26 +818,35 @@ public class TestFederationInterceptorREST extends BaseRouterWebServicesTest { Assert.assertTrue(nodeLabelsName.contains("y")); // null request + interceptor.setAllowPartialResult(false); NodeLabelsInfo nodeLabelsInfo2 = interceptor.getLabelsOnNode(null, "node2"); Assert.assertNotNull(nodeLabelsInfo2); Assert.assertEquals(0, nodeLabelsInfo2.getNodeLabelsName().size()); } @Test - public void testGetContainer() - throws IOException, InterruptedException, YarnException { - // Submit application to multiSubCluster + public void testGetContainer() throws Exception { + // ApplicationId appId = ApplicationId.newInstance(Time.now(), 1); - ApplicationSubmissionContextInfo context = new ApplicationSubmissionContextInfo(); - context.setApplicationId(appId.toString()); + ApplicationAttemptId appAttemptId = ApplicationAttemptId.newInstance(appId, 1); + ContainerId appContainerId = ContainerId.newContainerId(appAttemptId, 1); + String applicationId = appId.toString(); + String attemptId = appAttemptId.toString(); + String containerId = appContainerId.toString(); + // Submit application to multiSubCluster + ApplicationSubmissionContextInfo context = new ApplicationSubmissionContextInfo(); + context.setApplicationId(applicationId); Assert.assertNotNull(interceptor.submitApplication(context, null)); - ApplicationAttemptId appAttemptId = - ApplicationAttemptId.newInstance(appId, 1); + // Test Case1: Wrong ContainerId + LambdaTestUtils.intercept(IllegalArgumentException.class, "Invalid ContainerId prefix: 0", + () -> interceptor.getContainer(null, null, applicationId, attemptId, "0")); - ContainerInfo containerInfo = interceptor.getContainer(null, null, - appId.toString(), appAttemptId.toString(), "0"); + // Test Case2: Correct ContainerId + + ContainerInfo containerInfo = interceptor.getContainer(null, null, applicationId, + attemptId, containerId); Assert.assertNotNull(containerInfo); } @@ -800,9 +894,10 @@ public class TestFederationInterceptorREST extends BaseRouterWebServicesTest { // Generate ApplicationAttemptId information Assert.assertNotNull(interceptor.submitApplication(context, null)); ApplicationAttemptId expectAppAttemptId = ApplicationAttemptId.newInstance(appId, 1); + String appAttemptId = expectAppAttemptId.toString(); org.apache.hadoop.yarn.server.webapp.dao.AppAttemptInfo - appAttemptInfo = interceptor.getAppAttempt(null, null, appId.toString(), "1"); + appAttemptInfo = interceptor.getAppAttempt(null, null, appId.toString(), appAttemptId); Assert.assertNotNull(appAttemptInfo); Assert.assertEquals(expectAppAttemptId.toString(), appAttemptInfo.getAppAttemptId()); @@ -1074,14 +1169,9 @@ public class TestFederationInterceptorREST extends BaseRouterWebServicesTest { @Test public void testListReservation() throws Exception { - // Add ReservationId In stateStore + // submitReservation ReservationId reservationId = ReservationId.newInstance(Time.now(), 1); - SubClusterId homeSubClusterId = subClusters.get(0); - ReservationHomeSubCluster reservationHomeSubCluster = - ReservationHomeSubCluster.newInstance(reservationId, homeSubClusterId); - AddReservationHomeSubClusterRequest request = - AddReservationHomeSubClusterRequest.newInstance(reservationHomeSubCluster); - stateStore.addReservationHomeSubCluster(request); + submitReservation(reservationId); // Call the 
listReservation method String applyReservationId = reservationId.toString(); @@ -1131,6 +1221,199 @@ public class TestFederationInterceptorREST extends BaseRouterWebServicesTest { Assert.assertEquals(1024, memory); } + @Test + public void testCreateNewReservation() throws Exception { + Response response = interceptor.createNewReservation(null); + Assert.assertNotNull(response); + + Object entity = response.getEntity(); + Assert.assertNotNull(entity); + Assert.assertTrue(entity instanceof NewReservation); + + NewReservation newReservation = (NewReservation) entity; + Assert.assertNotNull(newReservation); + Assert.assertTrue(newReservation.getReservationId().contains("reservation")); + } + + @Test + public void testSubmitReservation() throws Exception { + + // submit reservation + ReservationId reservationId = ReservationId.newInstance(Time.now(), 2); + Response response = submitReservation(reservationId); + Assert.assertNotNull(response); + Assert.assertEquals(Status.ACCEPTED.getStatusCode(), response.getStatus()); + + String applyReservationId = reservationId.toString(); + Response reservationResponse = interceptor.listReservation( + QUEUE_DEDICATED_FULL, applyReservationId, -1, -1, false, null); + Assert.assertNotNull(reservationResponse); + + Object entity = reservationResponse.getEntity(); + Assert.assertNotNull(entity); + Assert.assertNotNull(entity instanceof ReservationListInfo); + + ReservationListInfo listInfo = (ReservationListInfo) entity; + Assert.assertNotNull(listInfo); + + List reservationInfos = listInfo.getReservations(); + Assert.assertNotNull(reservationInfos); + Assert.assertEquals(1, reservationInfos.size()); + + ReservationInfo reservationInfo = reservationInfos.get(0); + Assert.assertNotNull(reservationInfo); + Assert.assertEquals(reservationInfo.getReservationId(), applyReservationId); + } + + @Test + public void testUpdateReservation() throws Exception { + // submit reservation + ReservationId reservationId = ReservationId.newInstance(Time.now(), 3); + Response response = submitReservation(reservationId); + Assert.assertNotNull(response); + Assert.assertEquals(Status.ACCEPTED.getStatusCode(), response.getStatus()); + + // update reservation + ReservationSubmissionRequest resSubRequest = + getReservationSubmissionRequest(reservationId, 6, 2048, 2); + ReservationDefinition reservationDefinition = resSubRequest.getReservationDefinition(); + ReservationDefinitionInfo reservationDefinitionInfo = + new ReservationDefinitionInfo(reservationDefinition); + + ReservationUpdateRequestInfo updateRequestInfo = new ReservationUpdateRequestInfo(); + updateRequestInfo.setReservationId(reservationId.toString()); + updateRequestInfo.setReservationDefinition(reservationDefinitionInfo); + Response updateReservationResp = interceptor.updateReservation(updateRequestInfo, null); + Assert.assertNotNull(updateReservationResp); + Assert.assertEquals(Status.OK.getStatusCode(), updateReservationResp.getStatus()); + + String applyReservationId = reservationId.toString(); + Response reservationResponse = interceptor.listReservation( + QUEUE_DEDICATED_FULL, applyReservationId, -1, -1, false, null); + Assert.assertNotNull(reservationResponse); + + Object entity = reservationResponse.getEntity(); + Assert.assertNotNull(entity); + Assert.assertNotNull(entity instanceof ReservationListInfo); + + ReservationListInfo listInfo = (ReservationListInfo) entity; + Assert.assertNotNull(listInfo); + + List reservationInfos = listInfo.getReservations(); + Assert.assertNotNull(reservationInfos); + 
Assert.assertEquals(1, reservationInfos.size()); + + ReservationInfo reservationInfo = reservationInfos.get(0); + Assert.assertNotNull(reservationInfo); + Assert.assertEquals(reservationInfo.getReservationId(), applyReservationId); + + ReservationDefinitionInfo resDefinitionInfo = reservationInfo.getReservationDefinition(); + Assert.assertNotNull(resDefinitionInfo); + + ReservationRequestsInfo reservationRequestsInfo = resDefinitionInfo.getReservationRequests(); + Assert.assertNotNull(reservationRequestsInfo); + + ArrayList reservationRequestInfoList = + reservationRequestsInfo.getReservationRequest(); + Assert.assertNotNull(reservationRequestInfoList); + Assert.assertEquals(1, reservationRequestInfoList.size()); + + ReservationRequestInfo reservationRequestInfo = reservationRequestInfoList.get(0); + Assert.assertNotNull(reservationRequestInfo); + Assert.assertEquals(6, reservationRequestInfo.getNumContainers()); + + ResourceInfo resourceInfo = reservationRequestInfo.getCapability(); + Assert.assertNotNull(resourceInfo); + + int vCore = resourceInfo.getvCores(); + long memory = resourceInfo.getMemorySize(); + Assert.assertEquals(2, vCore); + Assert.assertEquals(2048, memory); + } + + @Test + public void testDeleteReservation() throws Exception { + // submit reservation + ReservationId reservationId = ReservationId.newInstance(Time.now(), 4); + Response response = submitReservation(reservationId); + Assert.assertNotNull(response); + Assert.assertEquals(Status.ACCEPTED.getStatusCode(), response.getStatus()); + + String applyResId = reservationId.toString(); + Response reservationResponse = interceptor.listReservation( + QUEUE_DEDICATED_FULL, applyResId, -1, -1, false, null); + Assert.assertNotNull(reservationResponse); + + ReservationDeleteRequestInfo deleteRequestInfo = + new ReservationDeleteRequestInfo(); + deleteRequestInfo.setReservationId(applyResId); + Response delResponse = interceptor.deleteReservation(deleteRequestInfo, null); + Assert.assertNotNull(delResponse); + + LambdaTestUtils.intercept(Exception.class, + "reservationId with id: " + reservationId + " not found", + () -> interceptor.listReservation(QUEUE_DEDICATED_FULL, applyResId, -1, -1, false, null)); + } + + private Response submitReservation(ReservationId reservationId) + throws IOException, InterruptedException { + ReservationSubmissionRequestInfo resSubmissionRequestInfo = + getReservationSubmissionRequestInfo(reservationId); + Response response = interceptor.submitReservation(resSubmissionRequestInfo, null); + return response; + } + + public static ReservationSubmissionRequestInfo getReservationSubmissionRequestInfo( + ReservationId reservationId) { + + ReservationSubmissionRequest resSubRequest = + getReservationSubmissionRequest(reservationId, NUM_CONTAINERS, 1024, 1); + ReservationDefinition reservationDefinition = resSubRequest.getReservationDefinition(); + + ReservationSubmissionRequestInfo resSubmissionRequestInfo = + new ReservationSubmissionRequestInfo(); + resSubmissionRequestInfo.setQueue(resSubRequest.getQueue()); + resSubmissionRequestInfo.setReservationId(reservationId.toString()); + ReservationDefinitionInfo reservationDefinitionInfo = + new ReservationDefinitionInfo(reservationDefinition); + resSubmissionRequestInfo.setReservationDefinition(reservationDefinitionInfo); + + return resSubmissionRequestInfo; + } + + public static ReservationSubmissionRequest getReservationSubmissionRequest( + ReservationId reservationId, int numContainers, int memory, int vcore) { + + // arrival time from which the 
resource(s) can be allocated. + long arrival = Time.now(); + + // deadline by when the resource(s) must be allocated. + // The reason for choosing 1.05 is because this gives an integer + // DURATION * 0.05 = 3000(ms) + // deadline = arrival + 3000ms + long deadline = (long) (arrival + 1.05 * DURATION); + + ReservationSubmissionRequest submissionRequest = createSimpleReservationRequest( + reservationId, numContainers, arrival, deadline, DURATION, memory, vcore); + + return submissionRequest; + } + + public static ReservationSubmissionRequest createSimpleReservationRequest( + ReservationId reservationId, int numContainers, long arrival, + long deadline, long duration, int memory, int vcore) { + // create a request with a single atomic ask + ReservationRequest r = ReservationRequest.newInstance( + Resource.newInstance(memory, vcore), numContainers, 1, duration); + ReservationRequests reqs = ReservationRequests.newInstance( + Collections.singletonList(r), ReservationRequestInterpreter.R_ALL); + ReservationDefinition rDef = ReservationDefinition.newInstance( + arrival, deadline, reqs, "testClientRMService#reservation", "0", Priority.UNDEFINED); + ReservationSubmissionRequest request = ReservationSubmissionRequest.newInstance( + rDef, QUEUE_DEDICATED_FULL, reservationId); + return request; + } + @Test public void testWebAddressWithScheme() { // The style of the web address reported by the subCluster in the heartbeat is 0.0.0.0:8000 @@ -1154,4 +1437,432 @@ public class TestFederationInterceptorREST extends BaseRouterWebServicesTest { WebAppUtils.getHttpSchemePrefix(this.getConf()) + webAppAddress; Assert.assertEquals(expectedHttpsWebAddress, webAppAddressWithScheme2); } -} + + @Test + public void testCheckUserAccessToQueue() throws Exception { + + interceptor.setAllowPartialResult(false); + + // Case 1: Only queue admin user can access other user's information + HttpServletRequest mockHsr = mockHttpServletRequestByUserName("non-admin"); + String errorMsg1 = "User=non-admin doesn't haven access to queue=queue " + + "so it cannot check ACLs for other users."; + LambdaTestUtils.intercept(YarnRuntimeException.class, errorMsg1, + () -> interceptor.checkUserAccessToQueue("queue", "jack", + QueueACL.SUBMIT_APPLICATIONS.name(), mockHsr)); + + // Case 2: request an unknown ACL causes BAD_REQUEST + HttpServletRequest mockHsr1 = mockHttpServletRequestByUserName("admin"); + String errorMsg2 = "Specified queueAclType=XYZ_ACL is not a valid type, " + + "valid queue acl types={SUBMIT_APPLICATIONS/ADMINISTER_QUEUE}"; + LambdaTestUtils.intercept(YarnRuntimeException.class, errorMsg2, + () -> interceptor.checkUserAccessToQueue("queue", "jack", "XYZ_ACL", mockHsr1)); + + // We design a test, admin user has ADMINISTER_QUEUE, SUBMIT_APPLICATIONS permissions, + // yarn user has SUBMIT_APPLICATIONS permissions, other users have no permissions + + // Case 3: get FORBIDDEN for rejected ACL + checkUserAccessToQueueFailed("queue", "jack", QueueACL.SUBMIT_APPLICATIONS, "admin"); + checkUserAccessToQueueFailed("queue", "jack", QueueACL.ADMINISTER_QUEUE, "admin"); + + // Case 4: get OK for listed ACLs + checkUserAccessToQueueSuccess("queue", "admin", QueueACL.ADMINISTER_QUEUE, "admin"); + checkUserAccessToQueueSuccess("queue", "admin", QueueACL.SUBMIT_APPLICATIONS, "admin"); + + // Case 5: get OK only for SUBMIT_APP acl for "yarn" user + checkUserAccessToQueueFailed("queue", "yarn", QueueACL.ADMINISTER_QUEUE, "admin"); + checkUserAccessToQueueSuccess("queue", "yarn", QueueACL.SUBMIT_APPLICATIONS, "admin"); + + 
interceptor.setAllowPartialResult(true); + } + + private void checkUserAccessToQueueSuccess(String queue, String userName, + QueueACL queueACL, String mockUser) throws AuthorizationException { + HttpServletRequest mockHsr = mockHttpServletRequestByUserName(mockUser); + RMQueueAclInfo aclInfo = + interceptor.checkUserAccessToQueue(queue, userName, queueACL.name(), mockHsr); + Assert.assertNotNull(aclInfo); + Assert.assertTrue(aclInfo instanceof FederationRMQueueAclInfo); + FederationRMQueueAclInfo fedAclInfo = FederationRMQueueAclInfo.class.cast(aclInfo); + List aclInfos = fedAclInfo.getList(); + Assert.assertNotNull(aclInfos); + Assert.assertEquals(4, aclInfos.size()); + for (RMQueueAclInfo rMQueueAclInfo : aclInfos) { + Assert.assertTrue(rMQueueAclInfo.isAllowed()); + } + } + + private void checkUserAccessToQueueFailed(String queue, String userName, + QueueACL queueACL, String mockUser) throws AuthorizationException { + HttpServletRequest mockHsr = mockHttpServletRequestByUserName(mockUser); + RMQueueAclInfo aclInfo = + interceptor.checkUserAccessToQueue(queue, userName, queueACL.name(), mockHsr); + Assert.assertNotNull(aclInfo); + Assert.assertTrue(aclInfo instanceof FederationRMQueueAclInfo); + FederationRMQueueAclInfo fedAclInfo = FederationRMQueueAclInfo.class.cast(aclInfo); + List aclInfos = fedAclInfo.getList(); + Assert.assertNotNull(aclInfos); + Assert.assertEquals(4, aclInfos.size()); + for (RMQueueAclInfo rMQueueAclInfo : aclInfos) { + Assert.assertFalse(rMQueueAclInfo.isAllowed()); + String expectDiagnostics = "User=" + userName + + " doesn't have access to queue=queue with acl-type=" + queueACL.name(); + Assert.assertEquals(expectDiagnostics, rMQueueAclInfo.getDiagnostics()); + } + } + + private HttpServletRequest mockHttpServletRequestByUserName(String username) { + HttpServletRequest mockHsr = mock(HttpServletRequest.class); + when(mockHsr.getRemoteUser()).thenReturn(username); + Principal principal = mock(Principal.class); + when(principal.getName()).thenReturn(username); + when(mockHsr.getUserPrincipal()).thenReturn(principal); + return mockHsr; + } + + @Test + public void testCheckFederationInterceptorRESTClient() { + SubClusterId subClusterId = SubClusterId.newInstance("SC-1"); + String webAppSocket = "SC-1:WebAddress"; + String webAppAddress = "http://" + webAppSocket; + + Configuration configuration = new Configuration(); + FederationInterceptorREST rest = new FederationInterceptorREST(); + rest.setConf(configuration); + rest.init("router"); + + DefaultRequestInterceptorREST interceptorREST = + rest.getOrCreateInterceptorForSubCluster(subClusterId, webAppSocket); + + Assert.assertNotNull(interceptorREST); + Assert.assertNotNull(interceptorREST.getClient()); + Assert.assertEquals(webAppAddress, interceptorREST.getWebAppAddress()); + } + + @Test + public void testInvokeConcurrent() throws IOException, YarnException { + + // We design such a test case, we call the interceptor's getNodes interface, + // this interface will generate the following test data + // subCluster0 Node 0 + // subCluster1 Node 1 + // subCluster2 Node 2 + // subCluster3 Node 3 + // We use the returned data to verify whether the subClusterId + // of the multi-thread call can match the node data + Map subClusterInfoNodesInfoMap = + interceptor.invokeConcurrentGetNodeLabel(); + Assert.assertNotNull(subClusterInfoNodesInfoMap); + Assert.assertEquals(4, subClusterInfoNodesInfoMap.size()); + + subClusterInfoNodesInfoMap.forEach((subClusterInfo, nodesInfo) -> { + String subClusterId = 
subClusterInfo.getSubClusterId().getId(); + List nodeInfos = nodesInfo.getNodes(); + Assert.assertNotNull(nodeInfos); + Assert.assertEquals(1, nodeInfos.size()); + + String expectNodeId = "Node " + subClusterId; + String nodeId = nodeInfos.get(0).getNodeId(); + Assert.assertEquals(expectNodeId, nodeId); + }); + } + + @Test + public void testGetSchedulerInfo() { + // In this test case, we will get the return results of 4 sub-clusters. + SchedulerTypeInfo typeInfo = interceptor.getSchedulerInfo(); + Assert.assertNotNull(typeInfo); + Assert.assertTrue(typeInfo instanceof FederationSchedulerTypeInfo); + + FederationSchedulerTypeInfo federationSchedulerTypeInfo = + FederationSchedulerTypeInfo.class.cast(typeInfo); + Assert.assertNotNull(federationSchedulerTypeInfo); + List schedulerTypeInfos = federationSchedulerTypeInfo.getList(); + Assert.assertNotNull(schedulerTypeInfos); + Assert.assertEquals(4, schedulerTypeInfos.size()); + List subClusterIds = + subClusters.stream().map(subClusterId -> subClusterId.getId()). + collect(Collectors.toList()); + + for (SchedulerTypeInfo schedulerTypeInfo : schedulerTypeInfos) { + Assert.assertNotNull(schedulerTypeInfo); + + // 1. Whether the returned subClusterId is in the subCluster list + String subClusterId = schedulerTypeInfo.getSubClusterId(); + Assert.assertTrue(subClusterIds.contains(subClusterId)); + + // 2. We test CapacityScheduler, the returned type should be CapacityScheduler. + SchedulerInfo schedulerInfo = schedulerTypeInfo.getSchedulerInfo(); + Assert.assertNotNull(schedulerInfo); + Assert.assertTrue(schedulerInfo instanceof CapacitySchedulerInfo); + CapacitySchedulerInfo capacitySchedulerInfo = + CapacitySchedulerInfo.class.cast(schedulerInfo); + Assert.assertNotNull(capacitySchedulerInfo); + + // 3. The parent queue name should be root + String queueName = capacitySchedulerInfo.getQueueName(); + Assert.assertEquals("root", queueName); + + // 4. schedulerType should be CapacityScheduler + String schedulerType = capacitySchedulerInfo.getSchedulerType(); + Assert.assertEquals("Capacity Scheduler", schedulerType); + + // 5. queue path should be root + String queuePath = capacitySchedulerInfo.getQueuePath(); + Assert.assertEquals("root", queuePath); + + // 6. mockRM has 2 test queues, [root.a, root.b] + List queues = Lists.newArrayList("root.a", "root.b"); + CapacitySchedulerQueueInfoList csSchedulerQueueInfoList = capacitySchedulerInfo.getQueues(); + Assert.assertNotNull(csSchedulerQueueInfoList); + List csQueueInfoList = + csSchedulerQueueInfoList.getQueueInfoList(); + Assert.assertEquals(2, csQueueInfoList.size()); + for (CapacitySchedulerQueueInfo csQueueInfo : csQueueInfoList) { + Assert.assertNotNull(csQueueInfo); + Assert.assertTrue(queues.contains(csQueueInfo.getQueuePath())); + } + } + } + + @Test + public void testPostDelegationTokenErrorHsr() throws Exception { + // Prepare delegationToken data + DelegationToken token = new DelegationToken(); + token.setRenewer(TEST_RENEWER); + + HttpServletRequest request = mock(HttpServletRequest.class); + + // If we don't set token + LambdaTestUtils.intercept(IllegalArgumentException.class, + "Parameter error, the tokenData or hsr is null.", + () -> interceptor.postDelegationToken(null, request)); + + // If we don't set hsr + LambdaTestUtils.intercept(IllegalArgumentException.class, + "Parameter error, the tokenData or hsr is null.", + () -> interceptor.postDelegationToken(token, null)); + + // If we don't set renewUser, we will get error message. 
+ LambdaTestUtils.intercept(AuthorizationException.class, + "Unable to obtain user name, user not authenticated", + () -> interceptor.postDelegationToken(token, request)); + + Principal principal = mock(Principal.class); + when(principal.getName()).thenReturn(TEST_RENEWER); + when(request.getRemoteUser()).thenReturn(TEST_RENEWER); + when(request.getUserPrincipal()).thenReturn(principal); + + // If we don't set the authentication type, we will get error message. + Response response = interceptor.postDelegationToken(token, request); + Assert.assertNotNull(response); + Assert.assertEquals(response.getStatus(), Status.FORBIDDEN.getStatusCode()); + String errMsg = "Delegation token operations can only be carried out on a " + + "Kerberos authenticated channel. Expected auth type is kerberos, got type null"; + Object entity = response.getEntity(); + Assert.assertNotNull(entity); + Assert.assertTrue(entity instanceof String); + String entityMsg = String.valueOf(entity); + Assert.assertTrue(errMsg.contains(entityMsg)); + } + + @Test + public void testPostDelegationToken() throws Exception { + Long now = Time.now(); + + DelegationToken token = new DelegationToken(); + token.setRenewer(TEST_RENEWER); + + Principal principal = mock(Principal.class); + when(principal.getName()).thenReturn(TEST_RENEWER); + + HttpServletRequest request = mock(HttpServletRequest.class); + when(request.getRemoteUser()).thenReturn(TEST_RENEWER); + when(request.getUserPrincipal()).thenReturn(principal); + when(request.getAuthType()).thenReturn("kerberos"); + + Response response = interceptor.postDelegationToken(token, request); + Assert.assertNotNull(response); + + Object entity = response.getEntity(); + Assert.assertNotNull(entity); + Assert.assertTrue(entity instanceof DelegationToken); + + DelegationToken dtoken = DelegationToken.class.cast(entity); + Assert.assertEquals(TEST_RENEWER, dtoken.getRenewer()); + Assert.assertEquals(TEST_RENEWER, dtoken.getOwner()); + Assert.assertEquals("RM_DELEGATION_TOKEN", dtoken.getKind()); + Assert.assertNotNull(dtoken.getToken()); + Assert.assertTrue(dtoken.getNextExpirationTime() > now); + } + + @Test + public void testPostDelegationTokenExpirationError() throws Exception { + + // If we don't set hsr + LambdaTestUtils.intercept(IllegalArgumentException.class, + "Parameter error, the hsr is null.", + () -> interceptor.postDelegationTokenExpiration(null)); + + Principal principal = mock(Principal.class); + when(principal.getName()).thenReturn(TEST_RENEWER); + + HttpServletRequest request = mock(HttpServletRequest.class); + when(request.getRemoteUser()).thenReturn(TEST_RENEWER); + when(request.getUserPrincipal()).thenReturn(principal); + when(request.getAuthType()).thenReturn("kerberos"); + + // If we don't set the header. 
+ String errorMsg = "Header 'Hadoop-YARN-RM-Delegation-Token' containing encoded token not found"; + LambdaTestUtils.intercept(BadRequestException.class, errorMsg, + () -> interceptor.postDelegationTokenExpiration(request)); + } + + @Test + public void testPostDelegationTokenExpiration() throws Exception { + + DelegationToken token = new DelegationToken(); + token.setRenewer(TEST_RENEWER); + + Principal principal = mock(Principal.class); + when(principal.getName()).thenReturn(TEST_RENEWER); + + HttpServletRequest request = mock(HttpServletRequest.class); + when(request.getRemoteUser()).thenReturn(TEST_RENEWER); + when(request.getUserPrincipal()).thenReturn(principal); + when(request.getAuthType()).thenReturn("kerberos"); + + Response response = interceptor.postDelegationToken(token, request); + Assert.assertNotNull(response); + Object entity = response.getEntity(); + Assert.assertNotNull(entity); + Assert.assertTrue(entity instanceof DelegationToken); + DelegationToken dtoken = DelegationToken.class.cast(entity); + + final String yarnTokenHeader = "Hadoop-YARN-RM-Delegation-Token"; + when(request.getHeader(yarnTokenHeader)).thenReturn(dtoken.getToken()); + + Response renewResponse = interceptor.postDelegationTokenExpiration(request); + Assert.assertNotNull(renewResponse); + + Object renewEntity = renewResponse.getEntity(); + Assert.assertNotNull(renewEntity); + Assert.assertTrue(renewEntity instanceof DelegationToken); + + // renewDelegationToken only returns the renew date; the other fields are null. + DelegationToken renewDToken = DelegationToken.class.cast(renewEntity); + Assert.assertNull(renewDToken.getRenewer()); + Assert.assertNull(renewDToken.getOwner()); + Assert.assertNull(renewDToken.getKind()); + Assert.assertTrue(renewDToken.getNextExpirationTime() > dtoken.getNextExpirationTime()); + } + + @Test + public void testCancelDelegationToken() throws Exception { + DelegationToken token = new DelegationToken(); + token.setRenewer(TEST_RENEWER); + + Principal principal = mock(Principal.class); + when(principal.getName()).thenReturn(TEST_RENEWER); + + HttpServletRequest request = mock(HttpServletRequest.class); + when(request.getRemoteUser()).thenReturn(TEST_RENEWER); + when(request.getUserPrincipal()).thenReturn(principal); + when(request.getAuthType()).thenReturn("kerberos"); + + Response response = interceptor.postDelegationToken(token, request); + Assert.assertNotNull(response); + Object entity = response.getEntity(); + Assert.assertNotNull(entity); + Assert.assertTrue(entity instanceof DelegationToken); + DelegationToken dtoken = DelegationToken.class.cast(entity); + + final String yarnTokenHeader = "Hadoop-YARN-RM-Delegation-Token"; + when(request.getHeader(yarnTokenHeader)).thenReturn(dtoken.getToken()); + + Response cancelResponse = interceptor.cancelDelegationToken(request); + Assert.assertNotNull(cancelResponse); + Assert.assertEquals(Status.OK.getStatusCode(), cancelResponse.getStatus()); + } + + @Test + public void testGetActivitiesNormal() { + ActivitiesInfo activitiesInfo = interceptor.getActivities(null, "1", "DIAGNOSTIC"); + Assert.assertNotNull(activitiesInfo); + + String nodeId = activitiesInfo.getNodeId(); + Assert.assertNotNull(nodeId); + Assert.assertEquals("1", nodeId); + + String diagnostic = activitiesInfo.getDiagnostic(); + Assert.assertNotNull(diagnostic); + Assert.assertTrue(StringUtils.contains(diagnostic, "Diagnostic")); + + long timestamp = activitiesInfo.getTimestamp(); + Assert.assertEquals(1673081972L, timestamp); + + List allocationInfos =
activitiesInfo.getAllocations(); + Assert.assertNotNull(allocationInfos); + Assert.assertEquals(1, allocationInfos.size()); + } + + @Test + public void testGetActivitiesError() throws Exception { + // nodeId is empty + LambdaTestUtils.intercept(IllegalArgumentException.class, + "'nodeId' must not be empty.", + () -> interceptor.getActivities(null, "", "DIAGNOSTIC")); + + // groupBy is empty + LambdaTestUtils.intercept(IllegalArgumentException.class, + "'groupBy' must not be empty.", + () -> interceptor.getActivities(null, "1", "")); + + // groupBy value is wrong + LambdaTestUtils.intercept(IllegalArgumentException.class, + "Got invalid groupBy: TEST1, valid groupBy types: [DIAGNOSTIC]", + () -> interceptor.getActivities(null, "1", "TEST1")); + } + + @Test + public void testGetBulkActivitiesNormal() throws InterruptedException { + BulkActivitiesInfo bulkActivitiesInfo = + interceptor.getBulkActivities(null, "DIAGNOSTIC", 5); + Assert.assertNotNull(bulkActivitiesInfo); + + Assert.assertTrue(bulkActivitiesInfo instanceof FederationBulkActivitiesInfo); + + FederationBulkActivitiesInfo federationBulkActivitiesInfo = + FederationBulkActivitiesInfo.class.cast(bulkActivitiesInfo); + Assert.assertNotNull(federationBulkActivitiesInfo); + + List activitiesInfos = federationBulkActivitiesInfo.getList(); + Assert.assertNotNull(activitiesInfos); + Assert.assertEquals(4, activitiesInfos.size()); + + for (BulkActivitiesInfo activitiesInfo : activitiesInfos) { + Assert.assertNotNull(activitiesInfo); + List activitiesInfoList = activitiesInfo.getActivities(); + Assert.assertNotNull(activitiesInfoList); + Assert.assertEquals(5, activitiesInfoList.size()); + } + } + + @Test + public void testGetBulkActivitiesError() throws Exception { + // activitiesCount < 0 + LambdaTestUtils.intercept(IllegalArgumentException.class, + "'activitiesCount' must not be negative.", + () -> interceptor.getBulkActivities(null, "DIAGNOSTIC", -1)); + + // groupBy value is wrong + LambdaTestUtils.intercept(YarnRuntimeException.class, + "Got invalid groupBy: TEST1, valid groupBy types: [DIAGNOSTIC]", + () -> interceptor.getBulkActivities(null, "TEST1", 1)); + + // groupBy is empty + LambdaTestUtils.intercept(IllegalArgumentException.class, + "'groupBy' must not be empty.", + () -> interceptor.getBulkActivities(null, "", 1)); + } +} \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorRESTRetry.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorRESTRetry.java index b2f421e25ae..790cf410bed 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorRESTRetry.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorRESTRetry.java @@ -25,9 +25,12 @@ import java.util.List; import javax.ws.rs.core.Response; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.test.LambdaTestUtils; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; +import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import 
org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils; import org.apache.hadoop.yarn.server.federation.policies.manager.UniformBroadcastPolicyManager; import org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore; @@ -80,10 +83,16 @@ public class TestFederationInterceptorRESTRetry @Override public void setUp() { super.setUpConfig(); + + Configuration conf = this.getConf(); + + // Compatible with historical test cases, we set router.allow-partial-result.enable=false. + conf.setBoolean(YarnConfiguration.ROUTER_INTERCEPTOR_ALLOW_PARTIAL_RESULT_ENABLED, false); + interceptor = new TestableFederationInterceptorREST(); stateStore = new MemoryFederationStateStore(); - stateStore.init(this.getConf()); + stateStore.init(conf); FederationStateStoreFacade.getInstance().reinitialize(stateStore, getConf()); stateStoreUtil = new FederationStateStoreTestUtil(stateStore); @@ -179,6 +188,9 @@ public class TestFederationInterceptorRESTRetry @Test public void testGetNewApplicationTwoBadSCs() throws YarnException, IOException, InterruptedException { + + LOG.info("Test getNewApplication with two bad SCs."); + setupCluster(Arrays.asList(bad1, bad2)); Response response = interceptor.createNewApplication(null); @@ -194,17 +206,21 @@ public class TestFederationInterceptorRESTRetry @Test public void testGetNewApplicationOneBadOneGood() throws YarnException, IOException, InterruptedException { - System.out.println("Test getNewApplication with one bad, one good SC"); + + LOG.info("Test getNewApplication with one bad, one good SC."); + setupCluster(Arrays.asList(good, bad2)); Response response = interceptor.createNewApplication(null); - + Assert.assertNotNull(response); Assert.assertEquals(OK, response.getStatus()); NewApplication newApp = (NewApplication) response.getEntity(); - ApplicationId appId = ApplicationId.fromString(newApp.getApplicationId()); + Assert.assertNotNull(newApp); - Assert.assertEquals(Integer.parseInt(good.getId()), - appId.getClusterTimestamp()); + ApplicationId appId = ApplicationId.fromString(newApp.getApplicationId()); + Assert.assertNotNull(appId); + + Assert.assertEquals(Integer.parseInt(good.getId()), appId.getClusterTimestamp()); } /** @@ -215,6 +231,8 @@ public class TestFederationInterceptorRESTRetry public void testSubmitApplicationOneBadSC() throws YarnException, IOException, InterruptedException { + LOG.info("Test submitApplication with one bad SC."); + setupCluster(Arrays.asList(bad2)); ApplicationId appId = @@ -377,15 +395,12 @@ public class TestFederationInterceptorRESTRetry * composed of only 1 bad SubCluster. */ @Test - public void testGetNodesOneBadSC() - throws YarnException, IOException, InterruptedException { + public void testGetNodesOneBadSC() throws Exception { setupCluster(Arrays.asList(bad2)); - NodesInfo response = interceptor.getNodes(null); - Assert.assertNotNull(response); - Assert.assertEquals(0, response.getNodes().size()); - // The remove duplicate operations is tested in TestRouterWebServiceUtil + LambdaTestUtils.intercept(YarnRuntimeException.class, "RM is stopped", + () -> interceptor.getNodes(null)); } /** @@ -393,14 +408,12 @@ public class TestFederationInterceptorRESTRetry * composed of only 2 bad SubClusters. 
*/ @Test - public void testGetNodesTwoBadSCs() - throws YarnException, IOException, InterruptedException { + public void testGetNodesTwoBadSCs() throws Exception { + setupCluster(Arrays.asList(bad1, bad2)); - NodesInfo response = interceptor.getNodes(null); - Assert.assertNotNull(response); - Assert.assertEquals(0, response.getNodes().size()); - // The remove duplicate operations is tested in TestRouterWebServiceUtil + LambdaTestUtils.intercept(YarnRuntimeException.class, "RM is stopped", + () -> interceptor.getNodes(null)); } /** @@ -408,17 +421,11 @@ public class TestFederationInterceptorRESTRetry * composed of only 1 bad SubCluster and a good one. */ @Test - public void testGetNodesOneBadOneGood() - throws YarnException, IOException, InterruptedException { + public void testGetNodesOneBadOneGood() throws Exception { setupCluster(Arrays.asList(good, bad2)); - NodesInfo response = interceptor.getNodes(null); - Assert.assertNotNull(response); - Assert.assertEquals(1, response.getNodes().size()); - // Check if the only node came from Good SubCluster - Assert.assertEquals(good.getId(), - Long.toString(response.getNodes().get(0).getLastHealthUpdate())); - // The remove duplicate operations is tested in TestRouterWebServiceUtil + LambdaTestUtils.intercept(YarnRuntimeException.class, "RM is stopped", + () -> interceptor.getNodes(null)); } /** @@ -517,4 +524,58 @@ public class TestFederationInterceptorRESTRetry Assert.assertEquals(0, response.getActiveNodes()); Assert.assertEquals(0, response.getShutdownNodes()); } + + @Test + public void testGetNodesOneBadSCAllowPartial() throws Exception { + // We set allowPartialResult to true. + // In this test case, we set up a subCluster, + // and the subCluster status is bad, we can't get the response, + // an exception should be thrown at this time. + interceptor.setAllowPartialResult(true); + setupCluster(Arrays.asList(bad2)); + + NodesInfo nodesInfo = interceptor.getNodes(null); + Assert.assertNotNull(nodesInfo); + + // We need to set allowPartialResult=false + interceptor.setAllowPartialResult(false); + } + + @Test + public void testGetNodesTwoBadSCsAllowPartial() throws Exception { + // We set allowPartialResult to true. + // In this test case, we set up 2 subClusters, + // and the status of these 2 subClusters is bad. When we call the interface, + // an exception should be returned. 
+ interceptor.setAllowPartialResult(true); + setupCluster(Arrays.asList(bad1, bad2)); + + NodesInfo nodesInfo = interceptor.getNodes(null); + Assert.assertNotNull(nodesInfo); + + // We need to set allowPartialResult=false + interceptor.setAllowPartialResult(false); + } + + @Test + public void testGetNodesOneBadOneGoodAllowPartial() throws Exception { + + // allowPartialResult = true, + // We tolerate exceptions and return normal results + interceptor.setAllowPartialResult(true); + setupCluster(Arrays.asList(good, bad2)); + + NodesInfo response = interceptor.getNodes(null); + Assert.assertNotNull(response); + Assert.assertEquals(1, response.getNodes().size()); + // Check if the only node came from Good SubCluster + Assert.assertEquals(good.getId(), + Long.toString(response.getNodes().get(0).getLastHealthUpdate())); + + // allowPartialResult = false, + // We do not tolerate exceptions and will throw exceptions directly + interceptor.setAllowPartialResult(false); + + setupCluster(Arrays.asList(good, bad2)); + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationWebApp.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationWebApp.java index e0ea4ccdb39..f1501fe1e7a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationWebApp.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationWebApp.java @@ -23,14 +23,20 @@ import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.server.router.Router; import org.apache.hadoop.yarn.webapp.test.WebAppTests; import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; import java.io.IOException; -public class TestFederationWebApp { +public class TestFederationWebApp extends TestRouterWebServicesREST { + + private static final Logger LOG = + LoggerFactory.getLogger(TestFederationWebApp.class); @Test public void testFederationWebViewNotEnable() throws InterruptedException, YarnException, IOException { + LOG.info("testFederationWebView - NotEnable Federation."); // Test Federation is not Enabled Configuration config = new YarnConfiguration(); config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, false); @@ -40,9 +46,88 @@ public class TestFederationWebApp { @Test public void testFederationWebViewEnable() throws InterruptedException, YarnException, IOException { + LOG.info("testFederationWebView - Enable Federation."); // Test Federation Enabled Configuration config = new YarnConfiguration(); config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true); WebAppTests.testPage(FederationPage.class, Router.class, new MockRouter(config)); } + + @Test + public void testFederationAboutViewEnable() + throws InterruptedException, YarnException, IOException { + LOG.info("testFederationAboutViewEnable - Enable Federation."); + // Test Federation Enabled + Configuration config = new YarnConfiguration(); + config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true); + WebAppTests.testPage(AboutPage.class, Router.class, new MockRouter(config)); + } + + @Test + public void testFederationAboutViewNotEnable() + throws InterruptedException, YarnException, IOException { + 
LOG.info("testFederationAboutViewNotEnable - NotEnable Federation."); + // Test Federation Not Enabled + Configuration config = new YarnConfiguration(); + config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, false); + WebAppTests.testPage(AboutPage.class, Router.class, new MockRouter(config)); + } + + @Test + public void testFederationNodeViewEnable() + throws InterruptedException, YarnException, IOException { + LOG.info("testFederationNodeViewEnable - Enable Federation."); + // Test Federation Enabled + Configuration config = new YarnConfiguration(); + config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true); + WebAppTests.testPage(NodesPage.class, Router.class, new MockRouter(config)); + } + + @Test + public void testFederationNodeViewNotEnable() + throws InterruptedException, YarnException, IOException { + LOG.info("testFederationNodeViewNotEnable - NotEnable Federation."); + // Test Federation Not Enabled + Configuration config = new YarnConfiguration(); + config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, false); + WebAppTests.testPage(NodesPage.class, Router.class, new MockRouter(config)); + } + + @Test + public void testFederationAppViewEnable() + throws InterruptedException, YarnException, IOException { + LOG.info("testFederationAppViewEnable - Enable Federation."); + // Test Federation Enabled + Configuration config = new YarnConfiguration(); + config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true); + WebAppTests.testPage(AppsPage.class, Router.class, new MockRouter(config)); + } + + @Test + public void testFederationAppViewNotEnable() + throws InterruptedException, YarnException, IOException { + LOG.info("testFederationAppViewNotEnable - NotEnable Federation."); + // Test Federation Not Enabled + Configuration config = new YarnConfiguration(); + config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, false); + WebAppTests.testPage(AppsPage.class, Router.class, new MockRouter(config)); + } + + @Test + public void testNodeLabelAppViewNotEnable() + throws InterruptedException, YarnException, IOException { + // Test Federation Not Enabled + Configuration config = new YarnConfiguration(); + config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, false); + WebAppTests.testPage(NodeLabelsPage.class, Router.class, new MockRouter(config)); + } + + @Test + public void testNodeLabelAppViewEnable() + throws InterruptedException, YarnException, IOException { + // Test Federation Not Enabled + Configuration config = new YarnConfiguration(); + config.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true); + WebAppTests.testPage(NodeLabelsPage.class, Router.class, new MockRouter(config)); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestableFederationInterceptorREST.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestableFederationInterceptorREST.java index 7126ca515c1..31fd756b664 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestableFederationInterceptorREST.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestableFederationInterceptorREST.java @@ -18,10 +18,25 @@ package org.apache.hadoop.yarn.server.router.webapp; +import java.io.IOException; import java.util.ArrayList; import 
java.util.List; +import java.util.Map; +import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; +import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; +import org.apache.hadoop.yarn.server.resourcemanager.MockRM; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import static org.apache.hadoop.yarn.server.router.webapp.BaseRouterWebServicesTest.QUEUE_DEDICATED_FULL; +import static org.apache.hadoop.yarn.server.router.webapp.BaseRouterWebServicesTest.QUEUE_DEFAULT_FULL; +import static org.apache.hadoop.yarn.server.router.webapp.BaseRouterWebServicesTest.QUEUE_DEFAULT; +import static org.apache.hadoop.yarn.server.router.webapp.BaseRouterWebServicesTest.QUEUE_DEDICATED; /** * Extends the FederationInterceptorREST and overrides methods to provide a @@ -30,7 +45,11 @@ import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; public class TestableFederationInterceptorREST extends FederationInterceptorREST { - private List badSubCluster = new ArrayList(); + private List badSubCluster = new ArrayList<>(); + private MockRM mockRM = null; + + private static final Logger LOG = + LoggerFactory.getLogger(TestableFederationInterceptorREST.class); /** * For testing purpose, some subclusters has to be down to simulate particular @@ -51,4 +70,51 @@ public class TestableFederationInterceptorREST interceptor.setRunning(false); } + protected void setupResourceManager() throws IOException { + + if (mockRM != null) { + return; + } + + try { + + DefaultMetricsSystem.setMiniClusterMode(true); + CapacitySchedulerConfiguration conf = new CapacitySchedulerConfiguration(); + + // Define default queue + conf.setCapacity(QUEUE_DEFAULT_FULL, 20); + // Define dedicated queues + String[] queues = new String[]{QUEUE_DEFAULT, QUEUE_DEDICATED}; + conf.setQueues(CapacitySchedulerConfiguration.ROOT, queues); + conf.setCapacity(QUEUE_DEDICATED_FULL, 80); + conf.setReservable(QUEUE_DEDICATED_FULL, true); + + conf.setClass(YarnConfiguration.RM_SCHEDULER, + CapacityScheduler.class, ResourceScheduler.class); + conf.setBoolean(YarnConfiguration.RM_RESERVATION_SYSTEM_ENABLE, true); + conf.setBoolean(YarnConfiguration.RM_WORK_PRESERVING_RECOVERY_ENABLED, false); + + mockRM = new MockRM(conf); + mockRM.start(); + mockRM.registerNode("127.0.0.1:5678", 100*1024, 100); + + Map interceptors = super.getInterceptors(); + for (DefaultRequestInterceptorREST item : interceptors.values()) { + MockDefaultRequestInterceptorREST interceptor = (MockDefaultRequestInterceptorREST) item; + interceptor.setMockRM(mockRM); + } + } catch (Exception e) { + LOG.error("setupResourceManager failed.", e); + throw new IOException(e); + } + } + + @Override + public void shutdown() { + if (mockRM != null) { + mockRM.stop(); + mockRM = null; + } + super.shutdown(); + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml index 2de2c13f16b..07838688d70 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml @@ -126,6 +126,26 @@ bcprov-jdk15on test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.jupiter + junit-jupiter-params + test + + + org.junit.platform + junit-platform-launcher + test + org.apache.hadoop hadoop-auth diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java index de944cbc89c..702c5ea7dfb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java @@ -18,8 +18,6 @@ package org.apache.hadoop.yarn.server; -import static org.junit.Assert.fail; - import java.io.File; import java.io.IOException; import java.net.InetSocketAddress; @@ -31,6 +29,11 @@ import java.util.List; import java.util.concurrent.TimeoutException; import org.apache.hadoop.conf.Configuration; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; import org.apache.hadoop.io.DataInputBuffer; import org.apache.hadoop.minikdc.KerberosSecurityTestcase; @@ -75,18 +78,15 @@ import org.apache.hadoop.yarn.server.security.BaseNMTokenSecretManager; import org.apache.hadoop.yarn.server.utils.BuilderUtils; import org.apache.hadoop.yarn.util.ConverterUtils; import org.apache.hadoop.yarn.util.Records; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; -import org.junit.runners.Parameterized.Parameters; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.MethodSource; + import org.slf4j.Logger; import org.slf4j.LoggerFactory; -@RunWith(Parameterized.class) public class TestContainerManagerSecurity extends KerberosSecurityTestcase { static Logger LOG = LoggerFactory.getLogger(TestContainerManagerSecurity.class); @@ -94,29 +94,24 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { .getRecordFactory(null); private static MiniYARNCluster yarnCluster; private static final File testRootDir = new File("target", - TestContainerManagerSecurity.class.getName() + "-root"); + TestContainerManagerSecurity.class.getName() + "-root"); private static File httpSpnegoKeytabFile = new File(testRootDir, - "httpSpnegoKeytabFile.keytab"); + "httpSpnegoKeytabFile.keytab"); private static String httpSpnegoPrincipal = "HTTP/localhost@EXAMPLE.COM"; private Configuration conf; - @Before - public void setUp() throws Exception { + @BeforeEach + public void setup() throws Exception { testRootDir.mkdirs(); httpSpnegoKeytabFile.deleteOnExit(); + startMiniKdc(); getKdc().createPrincipal(httpSpnegoKeytabFile, httpSpnegoPrincipal); - 
UserGroupInformation.setConfiguration(conf); - - yarnCluster = - new MiniYARNCluster(TestContainerManagerSecurity.class.getName(), 1, 1, - 1); - yarnCluster.init(conf); - yarnCluster.start(); } - - @After + + @AfterEach public void tearDown() { + stopMiniKdc(); if (yarnCluster != null) { yarnCluster.stop(); yarnCluster = null; @@ -130,48 +125,56 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { * and to give details in say an IDE. The second is the configuraiton * object to use. */ - @Parameters(name = "{0}") public static Collection configs() { Configuration configurationWithoutSecurity = new Configuration(); configurationWithoutSecurity.set( CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "simple"); - + Configuration configurationWithSecurity = new Configuration(); configurationWithSecurity.set( - CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos"); + CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos"); configurationWithSecurity.set( - YarnConfiguration.RM_WEBAPP_SPNEGO_USER_NAME_KEY, httpSpnegoPrincipal); + YarnConfiguration.RM_WEBAPP_SPNEGO_USER_NAME_KEY, httpSpnegoPrincipal); configurationWithSecurity.set( - YarnConfiguration.RM_WEBAPP_SPNEGO_KEYTAB_FILE_KEY, - httpSpnegoKeytabFile.getAbsolutePath()); + YarnConfiguration.RM_WEBAPP_SPNEGO_KEYTAB_FILE_KEY, + httpSpnegoKeytabFile.getAbsolutePath()); configurationWithSecurity.set( - YarnConfiguration.NM_WEBAPP_SPNEGO_USER_NAME_KEY, httpSpnegoPrincipal); + YarnConfiguration.NM_WEBAPP_SPNEGO_USER_NAME_KEY, httpSpnegoPrincipal); configurationWithSecurity.set( - YarnConfiguration.NM_WEBAPP_SPNEGO_KEYTAB_FILE_KEY, - httpSpnegoKeytabFile.getAbsolutePath()); + YarnConfiguration.NM_WEBAPP_SPNEGO_KEYTAB_FILE_KEY, + httpSpnegoKeytabFile.getAbsolutePath()); - return Arrays.asList(new Object[][] { + return Arrays.asList(new Object[][]{ {"Simple", configurationWithoutSecurity}, {"Secure", configurationWithSecurity}}); } - - public TestContainerManagerSecurity(String name, Configuration conf) { + + public void initTestContainerManagerSecurity(String name, Configuration conf) { LOG.info("RUNNING TEST " + name); + UserGroupInformation.setConfiguration(conf); + yarnCluster = + new MiniYARNCluster(TestContainerManagerSecurity.class.getName(), 1, 1, + 1); + yarnCluster.init(conf); + yarnCluster.start(); conf.setLong(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS, 100000L); this.conf = conf; } - - @Test - public void testContainerManager() throws Exception { - - // TestNMTokens. - testNMTokens(conf); - - // Testing for container token tampering - testContainerToken(conf); - - // Testing for container token tampering with epoch - testContainerTokenWithEpoch(conf); + + @MethodSource("configs") + @ParameterizedTest(name = "{0}") + void testContainerManager(String name, Configuration conf) throws Exception { + + initTestContainerManagerSecurity(name, conf); + + // TestNMTokens. 
+ testNMTokens(conf); + + // Testing for container token tampering + testContainerToken(conf); + + // Testing for container token tampering with epoch + testContainerTokenWithEpoch(conf); } @@ -182,21 +185,21 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { private void testNMTokens(Configuration testConf) throws Exception { NMTokenSecretManagerInRM nmTokenSecretManagerRM = yarnCluster.getResourceManager().getRMContext() - .getNMTokenSecretManager(); + .getNMTokenSecretManager(); NMTokenSecretManagerInNM nmTokenSecretManagerNM = yarnCluster.getNodeManager(0).getNMContext().getNMTokenSecretManager(); RMContainerTokenSecretManager containerTokenSecretManager = yarnCluster.getResourceManager().getRMContext(). getContainerTokenSecretManager(); - + NodeManager nm = yarnCluster.getNodeManager(0); - + waitForNMToReceiveNMTokenKey(nmTokenSecretManagerNM); - + // Both id should be equal. - Assert.assertEquals(nmTokenSecretManagerNM.getCurrentKey().getKeyId(), + assertEquals(nmTokenSecretManagerNM.getCurrentKey().getKeyId(), nmTokenSecretManagerRM.getCurrentKey().getKeyId()); - + /* * Below cases should be tested. * 1) If Invalid NMToken is used then it should be rejected. @@ -225,25 +228,25 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { yarnCluster.getResourceManager().getRMContext().getRMApps().put(appId, m); ApplicationAttemptId validAppAttemptId = ApplicationAttemptId.newInstance(appId, 1); - + ContainerId validContainerId = ContainerId.newContainerId(validAppAttemptId, 0); - + NodeId validNode = yarnCluster.getNodeManager(0).getNMContext().getNodeId(); NodeId invalidNode = NodeId.newInstance("InvalidHost", 1234); - + org.apache.hadoop.yarn.api.records.Token validNMToken = nmTokenSecretManagerRM.createNMToken(validAppAttemptId, validNode, user); - + org.apache.hadoop.yarn.api.records.Token validContainerToken = containerTokenSecretManager.createContainerToken(validContainerId, 0, validNode, user, r, Priority.newInstance(10), 1234); ContainerTokenIdentifier identifier = BuilderUtils.newContainerTokenIdentifier(validContainerToken); - Assert.assertEquals(Priority.newInstance(10), identifier.getPriority()); - Assert.assertEquals(1234, identifier.getCreationTime()); - + assertEquals(Priority.newInstance(10), identifier.getPriority()); + assertEquals(1234, identifier.getCreationTime()); + StringBuilder sb; // testInvalidNMToken ... creating NMToken using different secret manager. @@ -255,7 +258,7 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { // Making sure key id is different. } while (tempManager.getCurrentKey().getKeyId() == nmTokenSecretManagerRM .getCurrentKey().getKeyId()); - + // Testing that NM rejects the requests when we don't send any token. 
if (UserGroupInformation.isSecurityEnabled()) { sb = new StringBuilder("Client cannot authenticate via:[TOKEN]"); @@ -266,55 +269,55 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { } String errorMsg = testStartContainer(rpc, validAppAttemptId, validNode, validContainerToken, null, true); - Assert.assertTrue("In calling " + validNode + " exception was '" + assertTrue(errorMsg.contains(sb.toString()), "In calling " + validNode + " exception was '" + errorMsg + "' but doesn't contain '" - + sb.toString() + "'", errorMsg.contains(sb.toString())); - + + sb.toString() + "'"); + org.apache.hadoop.yarn.api.records.Token invalidNMToken = tempManager.createNMToken(validAppAttemptId, validNode, user); sb = new StringBuilder("Given NMToken for application : "); sb.append(validAppAttemptId.toString()) - .append(" seems to have been generated illegally."); - Assert.assertTrue(sb.toString().contains( + .append(" seems to have been generated illegally."); + assertTrue(sb.toString().contains( testStartContainer(rpc, validAppAttemptId, validNode, validContainerToken, invalidNMToken, true))); - + // valid NMToken but belonging to other node invalidNMToken = nmTokenSecretManagerRM.createNMToken(validAppAttemptId, invalidNode, user); sb = new StringBuilder("Given NMToken for application : "); sb.append(validAppAttemptId) - .append(" is not valid for current node manager.expected : ") - .append(validNode.toString()) - .append(" found : ").append(invalidNode.toString()); - Assert.assertTrue(sb.toString().contains( + .append(" is not valid for current node manager.expected : ") + .append(validNode.toString()) + .append(" found : ").append(invalidNode.toString()); + assertTrue(sb.toString().contains( testStartContainer(rpc, validAppAttemptId, validNode, validContainerToken, invalidNMToken, true))); - + // using correct tokens. nmtoken for app attempt should get saved. testConf.setInt(YarnConfiguration.RM_CONTAINER_ALLOC_EXPIRY_INTERVAL_MS, 4 * 60 * 1000); validContainerToken = containerTokenSecretManager.createContainerToken(validContainerId, 0, validNode, user, r, Priority.newInstance(0), 0); - Assert.assertTrue(testStartContainer(rpc, validAppAttemptId, validNode, - validContainerToken, validNMToken, false).isEmpty()); - Assert.assertTrue(nmTokenSecretManagerNM + assertTrue(testStartContainer(rpc, validAppAttemptId, validNode, + validContainerToken, validNMToken, false).isEmpty()); + assertTrue(nmTokenSecretManagerNM .isAppAttemptNMTokenKeyPresent(validAppAttemptId)); - + // using a new compatible version nmtoken, expect container can be started // successfully. ApplicationAttemptId validAppAttemptId2 = ApplicationAttemptId.newInstance(appId, 2); - + ContainerId validContainerId2 = ContainerId.newContainerId(validAppAttemptId2, 0); org.apache.hadoop.yarn.api.records.Token validContainerToken2 = containerTokenSecretManager.createContainerToken(validContainerId2, 0, validNode, user, r, Priority.newInstance(0), 0); - + org.apache.hadoop.yarn.api.records.Token validNMToken2 = nmTokenSecretManagerRM.createNMToken(validAppAttemptId2, validNode, user); // First, get a new NMTokenIdentifier. @@ -323,43 +326,42 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { DataInputBuffer dib = new DataInputBuffer(); dib.reset(tokenIdentifierContent, tokenIdentifierContent.length); newIdentifier.readFields(dib); - + // Then, generate a new version NMTokenIdentifier (NMTokenIdentifierNewForTest) // with additional field of message. 
- NMTokenIdentifierNewForTest newVersionIdentifier = + NMTokenIdentifierNewForTest newVersionIdentifier = new NMTokenIdentifierNewForTest(newIdentifier, "message"); - + // check new version NMTokenIdentifier has correct info. - Assert.assertEquals("The ApplicationAttemptId is changed after set to " + - "newVersionIdentifier", validAppAttemptId2.getAttemptId(), + assertEquals(validAppAttemptId2.getAttemptId(), newVersionIdentifier.getApplicationAttemptId().getAttemptId() - ); - - Assert.assertEquals("The message is changed after set to newVersionIdentifier", - "message", newVersionIdentifier.getMessage()); - - Assert.assertEquals("The NodeId is changed after set to newVersionIdentifier", - validNode, newVersionIdentifier.getNodeId()); - + , + "The ApplicationAttemptId is changed after set to " + + "newVersionIdentifier"); + + assertEquals("message", newVersionIdentifier.getMessage(), "The message is changed after set to newVersionIdentifier"); + + assertEquals(validNode, newVersionIdentifier.getNodeId(), "The NodeId is changed after set to newVersionIdentifier"); + // create new Token based on new version NMTokenIdentifier. org.apache.hadoop.yarn.api.records.Token newVersionedNMToken = BaseNMTokenSecretManager.newInstance( - nmTokenSecretManagerRM.retrievePassword(newVersionIdentifier), + nmTokenSecretManagerRM.retrievePassword(newVersionIdentifier), newVersionIdentifier); - + // Verify startContainer is successful and no exception is thrown. - Assert.assertTrue(testStartContainer(rpc, validAppAttemptId2, validNode, + assertTrue(testStartContainer(rpc, validAppAttemptId2, validNode, validContainerToken2, newVersionedNMToken, false).isEmpty()); - Assert.assertTrue(nmTokenSecretManagerNM + assertTrue(nmTokenSecretManagerNM .isAppAttemptNMTokenKeyPresent(validAppAttemptId2)); - + //Now lets wait till container finishes and is removed from node manager. waitForContainerToFinishOnNM(validContainerId); sb = new StringBuilder("Attempt to relaunch the same container with id "); sb.append(validContainerId); - Assert.assertTrue(testStartContainer(rpc, validAppAttemptId, validNode, + assertTrue(testStartContainer(rpc, validAppAttemptId, validNode, validContainerToken, validNMToken, true).contains(sb.toString())); - + // Container is removed from node manager's memory by this time. // trying to stop the container. It should not throw any exception. testStopContainer(rpc, validAppAttemptId, validNode, validContainerId, @@ -370,25 +372,25 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { rollNMTokenMasterKey(nmTokenSecretManagerRM, nmTokenSecretManagerNM); // Key rolled over once.. rolling over again rollNMTokenMasterKey(nmTokenSecretManagerRM, nmTokenSecretManagerNM); - + // trying get container status. Now saved nmToken should be used for // authentication... It should complain saying container was recently // stopped. sb = new StringBuilder("Container "); sb.append(validContainerId) .append(" was recently stopped on node manager"); - Assert.assertTrue(testGetContainer(rpc, validAppAttemptId, validNode, + assertTrue(testGetContainer(rpc, validAppAttemptId, validNode, validContainerId, validNMToken, true).contains(sb.toString())); // Now lets remove the container from nm-memory nm.getNodeStatusUpdater().clearFinishedContainersFromCache(); - + // This should fail as container is removed from recently tracked finished // containers. 
sb = new StringBuilder("Container ") .append(validContainerId.toString()) .append(" is not handled by this NodeManager"); - Assert.assertTrue(testGetContainer(rpc, validAppAttemptId, validNode, + assertTrue(testGetContainer(rpc, validAppAttemptId, validNode, validContainerId, validNMToken, false).contains(sb.toString())); // using appAttempt-1 NMtoken for launching container for appAttempt-2 @@ -396,13 +398,13 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { ApplicationAttemptId attempt2 = ApplicationAttemptId.newInstance(appId, 2); Token attempt1NMToken = nmTokenSecretManagerRM - .createNMToken(validAppAttemptId, validNode, user); + .createNMToken(validAppAttemptId, validNode, user); org.apache.hadoop.yarn.api.records.Token newContainerToken = containerTokenSecretManager.createContainerToken( - ContainerId.newContainerId(attempt2, 1), 0, validNode, user, r, + ContainerId.newContainerId(attempt2, 1), 0, validNode, user, r, Priority.newInstance(0), 0); - Assert.assertTrue(testStartContainer(rpc, attempt2, validNode, - newContainerToken, attempt1NMToken, false).isEmpty()); + assertTrue(testStartContainer(rpc, attempt2, validNode, + newContainerToken, attempt1NMToken, false).isEmpty()); } private void waitForContainerToFinishOnNM(ContainerId containerId) @@ -419,7 +421,7 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { LOG.info("Waiting for " + containerId + " to get to state " + ContainerState.COMPLETE); GenericTestUtils.waitFor(() -> ContainerState.COMPLETE.equals( - waitContainer.cloneAndGetContainerStatus().getState()), + waitContainer.cloneAndGetContainerStatus().getState()), 500, timeout); } catch (TimeoutException te) { LOG.error("TimeoutException", te); @@ -433,7 +435,7 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { // Normally, Containers will be removed from NM context after they are // explicitly acked by RM. Now, manually remove it for testing. 
yarnCluster.getNodeManager(0).getNodeStatusUpdater() - .addCompletedContainer(containerId); + .addCompletedContainer(containerId); LOG.info("Removing container from NMContext, containerID = " + containerId); nmContext.getContainers().remove(containerId); } @@ -458,16 +460,16 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { Thread.sleep(1000); } nmTokenSecretManagerRM.activateNextMasterKey(); - Assert.assertTrue((nmTokenSecretManagerNM.getCurrentKey().getKeyId() + assertTrue((nmTokenSecretManagerNM.getCurrentKey().getKeyId() == nmTokenSecretManagerRM.getCurrentKey().getKeyId())); } - + private String testStopContainer(YarnRPC rpc, ApplicationAttemptId appAttemptId, NodeId nodeId, ContainerId containerId, Token nmToken, boolean isExceptionExpected) { try { stopContainer(rpc, nmToken, - Arrays.asList(new ContainerId[] {containerId}), appAttemptId, + Arrays.asList(new ContainerId[]{containerId}), appAttemptId, nodeId); if (isExceptionExpected) { fail("Exception was expected!!"); @@ -505,8 +507,8 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { try { startContainer(rpc, nmToken, containerToken, nodeId, appAttemptId.toString()); - if (isExceptionExpected){ - fail("Exception was expected!!"); + if (isExceptionExpected) { + fail("Exception was expected!!"); } return ""; } catch (Exception e) { @@ -514,7 +516,7 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { return e.getMessage(); } } - + private void stopContainer(YarnRPC rpc, Token nmToken, List containerId, ApplicationAttemptId appAttemptId, NodeId nodeId) throws Exception { @@ -537,13 +539,12 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { } } } - - private void - getContainerStatus(YarnRPC rpc, - org.apache.hadoop.yarn.api.records.Token nmToken, - ContainerId containerId, - ApplicationAttemptId appAttemptId, NodeId nodeId, - boolean isExceptionExpected) throws Exception { + + private void getContainerStatus(YarnRPC rpc, + org.apache.hadoop.yarn.api.records.Token nmToken, + ContainerId containerId, + ApplicationAttemptId appAttemptId, NodeId nodeId, + boolean isExceptionExpected) throws Exception { List containerIds = new ArrayList(); containerIds.add(containerId); GetContainerStatusesRequest request = @@ -558,7 +559,7 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { if (statuses.getFailedRequests() != null && statuses.getFailedRequests().containsKey(containerId)) { parseAndThrowException(statuses.getFailedRequests().get(containerId) - .deSerialize()); + .deSerialize()); } } finally { if (proxy != null) { @@ -566,7 +567,7 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { } } } - + private void startContainer(final YarnRPC rpc, org.apache.hadoop.yarn.api.records.Token nmToken, org.apache.hadoop.yarn.api.records.Token containerToken, @@ -584,7 +585,7 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { try { proxy = getContainerManagementProtocolProxy(rpc, nmToken, nodeId, user); StartContainersResponse response = proxy.startContainers(allRequests); - for(SerializedException ex : response.getFailedRequests().values()){ + for (SerializedException ex : response.getFailedRequests().values()) { parseAndThrowException(ex.deSerialize()); } } finally { @@ -613,11 +614,11 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { final InetSocketAddress addr = new InetSocketAddress(nodeId.getHost(), nodeId.getPort()); if 
(nmToken != null) { - ugi.addToken(ConverterUtils.convertFromYarn(nmToken, addr)); + ugi.addToken(ConverterUtils.convertFromYarn(nmToken, addr)); } proxy = NMProxy.createNMProxy(conf, ContainerManagementProtocol.class, ugi, - rpc, addr); + rpc, addr); return proxy; } @@ -642,7 +643,7 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { */ NMTokenSecretManagerInRM nmTokenSecretManagerInRM = yarnCluster.getResourceManager().getRMContext() - .getNMTokenSecretManager(); + .getNMTokenSecretManager(); ApplicationId appId = ApplicationId.newInstance(1, 1); ApplicationAttemptId appAttemptId = ApplicationAttemptId.newInstance(appId, 0); @@ -651,46 +652,46 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { NMTokenSecretManagerInNM nmTokenSecretManagerInNM = nm.getNMContext().getNMTokenSecretManager(); String user = "test"; - + waitForNMToReceiveNMTokenKey(nmTokenSecretManagerInNM); NodeId nodeId = nm.getNMContext().getNodeId(); - + // Both id should be equal. - Assert.assertEquals(nmTokenSecretManagerInNM.getCurrentKey().getKeyId(), + assertEquals(nmTokenSecretManagerInNM.getCurrentKey().getKeyId(), nmTokenSecretManagerInRM.getCurrentKey().getKeyId()); - - + + RMContainerTokenSecretManager containerTokenSecretManager = yarnCluster.getResourceManager().getRMContext(). getContainerTokenSecretManager(); - + Resource r = Resource.newInstance(1230, 2); - - Token containerToken = + + Token containerToken = containerTokenSecretManager.createContainerToken( cId, 0, nodeId, user, r, Priority.newInstance(0), 0); - - ContainerTokenIdentifier containerTokenIdentifier = + + ContainerTokenIdentifier containerTokenIdentifier = getContainerTokenIdentifierFromToken(containerToken); - + // Verify new compatible version ContainerTokenIdentifier // can work successfully. 
- ContainerTokenIdentifierForTest newVersionTokenIdentifier = + ContainerTokenIdentifierForTest newVersionTokenIdentifier = new ContainerTokenIdentifierForTest(containerTokenIdentifier, "message"); - byte[] password = + byte[] password = containerTokenSecretManager.createPassword(newVersionTokenIdentifier); - + Token newContainerToken = BuilderUtils.newContainerToken( nodeId, password, newVersionTokenIdentifier); - + Token nmToken = - nmTokenSecretManagerInRM.createNMToken(appAttemptId, nodeId, user); + nmTokenSecretManagerInRM.createNMToken(appAttemptId, nodeId, user); YarnRPC rpc = YarnRPC.create(conf); - Assert.assertTrue(testStartContainer(rpc, appAttemptId, nodeId, + assertTrue(testStartContainer(rpc, appAttemptId, nodeId, newContainerToken, nmToken, false).isEmpty()); - + // Creating a tampered Container Token RMContainerTokenSecretManager tamperedContainerTokenSecretManager = new RMContainerTokenSecretManager(conf); @@ -700,17 +701,17 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { tamperedContainerTokenSecretManager.activateNextMasterKey(); } while (containerTokenSecretManager.getCurrentKey().getKeyId() == tamperedContainerTokenSecretManager.getCurrentKey().getKeyId()); - + ContainerId cId2 = ContainerId.newContainerId(appAttemptId, 1); // Creating modified containerToken Token containerToken2 = tamperedContainerTokenSecretManager.createContainerToken(cId2, 0, nodeId, user, r, Priority.newInstance(0), 0); - + StringBuilder sb = new StringBuilder("Given Container "); sb.append(cId2) .append(" seems to have an illegally generated token."); - Assert.assertTrue(testStartContainer(rpc, appAttemptId, nodeId, + assertTrue(testStartContainer(rpc, appAttemptId, nodeId, containerToken2, nmToken, true).contains(sb.toString())); } @@ -754,7 +755,7 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { NodeId nodeId = nm.getNMContext().getNodeId(); // Both id should be equal. 
- Assert.assertEquals(nmTokenSecretManagerInNM.getCurrentKey().getKeyId(), + assertEquals(nmTokenSecretManagerInNM.getCurrentKey().getKeyId(), nmTokenSecretManagerInRM.getCurrentKey().getKeyId()); // Creating a normal Container Token @@ -765,17 +766,17 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { Token containerToken = containerTokenSecretManager.createContainerToken(cId, 0, nodeId, user, r, Priority.newInstance(0), 0); - + ContainerTokenIdentifier containerTokenIdentifier = new ContainerTokenIdentifier(); byte[] tokenIdentifierContent = containerToken.getIdentifier().array(); DataInputBuffer dib = new DataInputBuffer(); dib.reset(tokenIdentifierContent, tokenIdentifierContent.length); containerTokenIdentifier.readFields(dib); - - - Assert.assertEquals(cId, containerTokenIdentifier.getContainerID()); - Assert.assertEquals( + + + assertEquals(cId, containerTokenIdentifier.getContainerID()); + assertEquals( cId.toString(), containerTokenIdentifier.getContainerID().toString()); Token nmToken = @@ -791,10 +792,10 @@ public class TestContainerManagerSecurity extends KerberosSecurityTestcase { = getContainerManagementProtocolProxy(rpc, nmToken, nodeId, user); GetContainerStatusesResponse res = proxy.getContainerStatuses( GetContainerStatusesRequest.newInstance(containerIds)); - Assert.assertNotNull(res.getContainerStatuses().get(0)); - Assert.assertEquals( + assertNotNull(res.getContainerStatuses().get(0)); + assertEquals( cId, res.getContainerStatuses().get(0).getContainerId()); - Assert.assertEquals(cId.toString(), + assertEquals(cId.toString(), res.getContainerStatuses().get(0).getContainerId().toString()); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestDiskFailures.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestDiskFailures.java index 23bb0399930..daeb21db0cb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestDiskFailures.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestDiskFailures.java @@ -18,6 +18,8 @@ package org.apache.hadoop.yarn.server; +import static org.junit.jupiter.api.Assertions.assertEquals; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileContext; import org.apache.hadoop.fs.FileUtil; @@ -37,11 +39,10 @@ import java.io.IOException; import java.util.Iterator; import java.util.List; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; -import org.junit.Assert; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -73,7 +74,7 @@ public class TestDiskFailures { private static MiniYARNCluster yarnCluster; LocalDirsHandlerService dirsHandler; - @BeforeClass + @BeforeAll public static void setup() throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException { localFS = FileContext.getLocalFSFileContext(); @@ -82,7 +83,7 @@ public class TestDiskFailures { // Do not start cluster here } - @AfterClass + @AfterAll public static void teardown() { if (yarnCluster != null) { yarnCluster.stop(); @@ -99,7 +100,7 @@ public class TestDiskFailures { * @throws IOException */ @Test - public void 
testLocalDirsFailures() throws IOException { + void testLocalDirsFailures() throws IOException { testDirsFailures(true); } @@ -111,7 +112,7 @@ public class TestDiskFailures { * @throws IOException */ @Test - public void testLogDirsFailures() throws IOException { + void testLogDirsFailures() throws IOException { testDirsFailures(false); } @@ -122,7 +123,7 @@ public class TestDiskFailures { * @throws IOException */ @Test - public void testDirFailuresOnStartup() throws IOException { + void testDirFailuresOnStartup() throws IOException { Configuration conf = new YarnConfiguration(); String localDir1 = new File(testDir, "localDir1").getPath(); String localDir2 = new File(testDir, "localDir2").getPath(); @@ -137,11 +138,11 @@ public class TestDiskFailures { LocalDirsHandlerService dirSvc = new LocalDirsHandlerService(); dirSvc.init(conf); List localDirs = dirSvc.getLocalDirs(); - Assert.assertEquals(1, localDirs.size()); - Assert.assertEquals(new Path(localDir2).toString(), localDirs.get(0)); + assertEquals(1, localDirs.size()); + assertEquals(new Path(localDir2).toString(), localDirs.get(0)); List logDirs = dirSvc.getLogDirs(); - Assert.assertEquals(1, logDirs.size()); - Assert.assertEquals(new Path(logDir1).toString(), logDirs.get(0)); + assertEquals(1, logDirs.size()); + assertEquals(new Path(logDir1).toString(), logDirs.get(0)); } private void testDirsFailures(boolean localORLogDirs) throws IOException { @@ -177,8 +178,7 @@ public class TestDiskFailures { List list = localORLogDirs ? dirsHandler.getLocalDirs() : dirsHandler.getLogDirs(); String[] dirs = list.toArray(new String[list.size()]); - Assert.assertEquals("Number of nm-" + dirType + "-dirs is wrong.", - numLocalDirs, dirs.length); + assertEquals(numLocalDirs, dirs.length, "Number of nm-" + dirType + "-dirs is wrong."); String expectedDirs = StringUtils.join(",", list); // validate the health of disks initially verifyDisksHealth(localORLogDirs, expectedDirs, true); @@ -225,11 +225,9 @@ public class TestDiskFailures { String seenDirs = StringUtils.join(",", list); LOG.info("ExpectedDirs=" + expectedDirs); LOG.info("SeenDirs=" + seenDirs); - Assert.assertTrue("NodeManager could not identify disk failure.", - expectedDirs.equals(seenDirs)); + assertEquals(expectedDirs, seenDirs); - Assert.assertEquals("Node's health in terms of disks is wrong", - isHealthy, dirsHandler.areDisksHealthy()); + assertEquals(isHealthy, dirsHandler.areDisksHealthy(), "Node's health in terms of disks is wrong"); for (int i = 0; i < 10; i++) { Iterator iter = yarnCluster.getResourceManager().getRMContext() .getRMNodes().values().iterator(); @@ -247,8 +245,7 @@ public class TestDiskFailures { } Iterator iter = yarnCluster.getResourceManager().getRMContext() .getRMNodes().values().iterator(); - Assert.assertEquals("RM is not updated with the health status of a node", - isHealthy, iter.next().getState() != NodeState.UNHEALTHY); + assertEquals(isHealthy, iter.next().getState() != NodeState.UNHEALTHY, "RM is not updated with the health status of a node"); } /** diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYARNClusterForHA.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYARNClusterForHA.java index ed596340d7c..8bd7e59eb3f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYARNClusterForHA.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYARNClusterForHA.java @@ -18,21 +18,21 @@ package org.apache.hadoop.yarn.server; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; import java.io.IOException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnException; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; public class TestMiniYARNClusterForHA { MiniYARNCluster cluster; - @Before + @BeforeEach public void setup() throws IOException, InterruptedException { Configuration conf = new YarnConfiguration(); conf.setBoolean(YarnConfiguration.AUTO_FAILOVER_ENABLED, false); @@ -43,12 +43,12 @@ public class TestMiniYARNClusterForHA { cluster.init(conf); cluster.start(); - assertFalse("RM never turned active", -1 == cluster.getActiveRMIndex()); + assertFalse(-1 == cluster.getActiveRMIndex(), "RM never turned active"); } @Test - public void testClusterWorks() throws YarnException, InterruptedException { - assertTrue("NMs fail to connect to the RM", - cluster.waitForNodeManagersToConnect(5000)); + void testClusterWorks() throws YarnException, InterruptedException { + assertTrue(cluster.waitForNodeManagersToConnect(5000), + "NMs fail to connect to the RM"); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnCluster.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnCluster.java index ff7fafc2001..c5c2e3e0f5e 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnCluster.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnCluster.java @@ -18,18 +18,23 @@ package org.apache.hadoop.yarn.server; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertNull; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.conf.HAUtil; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.HATestUtil; -import org.junit.Assert; -import org.junit.Test; +import org.junit.jupiter.api.Test; + import java.io.IOException; public class TestMiniYarnCluster { @Test - public void testTimelineServiceStartInMiniCluster() throws Exception { + void testTimelineServiceStartInMiniCluster() throws Exception { Configuration conf = new YarnConfiguration(); int numNodeManagers = 1; int numLocalDirs = 1; @@ -45,14 +50,14 @@ public class TestMiniYarnCluster { try (MiniYARNCluster cluster = new MiniYARNCluster(TestMiniYarnCluster.class.getSimpleName(), numNodeManagers, numLocalDirs, numLogDirs, numLogDirs, - enableAHS)) { + enableAHS)) { cluster.init(conf); cluster.start(); //verify that the timeline service is not started. 
- Assert.assertNull("Timeline Service should not have been started", - cluster.getApplicationHistoryServer()); + assertNull(cluster.getApplicationHistoryServer(), + "Timeline Service should not have been started"); } /* @@ -64,25 +69,25 @@ public class TestMiniYarnCluster { try (MiniYARNCluster cluster = new MiniYARNCluster(TestMiniYarnCluster.class.getSimpleName(), numNodeManagers, numLocalDirs, numLogDirs, numLogDirs, - enableAHS)) { + enableAHS)) { cluster.init(conf); // Verify that the timeline-service starts on ephemeral ports by default String hostname = MiniYARNCluster.getHostname(); - Assert.assertEquals(hostname + ":0", - conf.get(YarnConfiguration.TIMELINE_SERVICE_ADDRESS)); + assertEquals(hostname + ":0", + conf.get(YarnConfiguration.TIMELINE_SERVICE_ADDRESS)); cluster.start(); //Timeline service may sometime take a while to get started int wait = 0; - while(cluster.getApplicationHistoryServer() == null && wait < 20) { + while (cluster.getApplicationHistoryServer() == null && wait < 20) { Thread.sleep(500); wait++; } //verify that the timeline service is started. - Assert.assertNotNull("Timeline Service should have been started", - cluster.getApplicationHistoryServer()); + assertNotNull(cluster.getApplicationHistoryServer(), + "Timeline Service should have been started"); } /* * Timeline service should start if TIMELINE_SERVICE_ENABLED == false @@ -93,24 +98,24 @@ public class TestMiniYarnCluster { try (MiniYARNCluster cluster = new MiniYARNCluster(TestMiniYarnCluster.class.getSimpleName(), numNodeManagers, numLocalDirs, numLogDirs, numLogDirs, - enableAHS)) { + enableAHS)) { cluster.init(conf); cluster.start(); //Timeline service may sometime take a while to get started int wait = 0; - while(cluster.getApplicationHistoryServer() == null && wait < 20) { + while (cluster.getApplicationHistoryServer() == null && wait < 20) { Thread.sleep(500); wait++; } //verify that the timeline service is started. 
- Assert.assertNotNull("Timeline Service should have been started", - cluster.getApplicationHistoryServer()); + assertNotNull(cluster.getApplicationHistoryServer(), + "Timeline Service should have been started"); } } @Test - public void testMultiRMConf() throws IOException { + void testMultiRMConf() throws IOException { String RM1_NODE_ID = "rm1", RM2_NODE_ID = "rm2"; int RM1_PORT_BASE = 10000, RM2_PORT_BASE = 20000; Configuration conf = new YarnConfiguration(); @@ -130,22 +135,22 @@ public class TestMiniYarnCluster { cluster.init(conf); Configuration conf1 = cluster.getResourceManager(0).getConfig(), conf2 = cluster.getResourceManager(1).getConfig(); - Assert.assertFalse(conf1 == conf2); - Assert.assertEquals("0.0.0.0:18032", + assertFalse(conf1 == conf2); + assertEquals("0.0.0.0:18032", conf1.get(HAUtil.addSuffix(YarnConfiguration.RM_ADDRESS, RM1_NODE_ID))); - Assert.assertEquals("0.0.0.0:28032", + assertEquals("0.0.0.0:28032", conf1.get(HAUtil.addSuffix(YarnConfiguration.RM_ADDRESS, RM2_NODE_ID))); - Assert.assertEquals("rm1", conf1.get(YarnConfiguration.RM_HA_ID)); + assertEquals("rm1", conf1.get(YarnConfiguration.RM_HA_ID)); - Assert.assertEquals("0.0.0.0:18032", + assertEquals("0.0.0.0:18032", conf2.get(HAUtil.addSuffix(YarnConfiguration.RM_ADDRESS, RM1_NODE_ID))); - Assert.assertEquals("0.0.0.0:28032", + assertEquals("0.0.0.0:28032", conf2.get(HAUtil.addSuffix(YarnConfiguration.RM_ADDRESS, RM2_NODE_ID))); - Assert.assertEquals("rm2", conf2.get(YarnConfiguration.RM_HA_ID)); + assertEquals("rm2", conf2.get(YarnConfiguration.RM_HA_ID)); } } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java index d1ccc9a6fec..9c7f4447c6c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java @@ -18,9 +18,9 @@ package org.apache.hadoop.yarn.server; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; import java.io.IOException; import java.util.ArrayList; @@ -39,8 +39,9 @@ import org.apache.hadoop.yarn.server.resourcemanager.RMContext; import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager; import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode; -import org.junit.Before; -import org.junit.Test; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; public class TestMiniYarnClusterNodeUtilization { // Mini YARN cluster setup @@ -72,7 +73,7 @@ public class TestMiniYarnClusterNodeUtilization { private NodeStatus nodeStatus; - @Before + @BeforeEach public void setup() { conf = new YarnConfiguration(); conf.set(YarnConfiguration.RM_WEBAPP_ADDRESS, "localhost:0"); @@ -81,7 +82,7 @@ public class 
TestMiniYarnClusterNodeUtilization { cluster = new MiniYARNCluster(name, NUM_RM, NUM_NM, 1, 1); cluster.init(conf); cluster.start(); - assertFalse("RM never turned active", -1 == cluster.getActiveRMIndex()); + assertFalse(-1 == cluster.getActiveRMIndex(), "RM never turned active"); nm = (CustomNodeManager)cluster.getNodeManager(0); nodeStatus = createNodeStatus(nm.getNMContext().getNodeId(), 0, @@ -95,11 +96,12 @@ public class TestMiniYarnClusterNodeUtilization { * both the RMNode and SchedulerNode have been updated with the new * utilization. */ - @Test(timeout=60000) - public void testUpdateNodeUtilization() + @Test + @Timeout(60000) + void testUpdateNodeUtilization() throws InterruptedException, IOException, YarnException { - assertTrue("NMs fail to connect to the RM", - cluster.waitForNodeManagersToConnect(10000)); + assertTrue(cluster.waitForNodeManagersToConnect(10000), + "NMs fail to connect to the RM"); // Give the heartbeat time to propagate to the RM verifySimulatedUtilization(); @@ -119,11 +121,12 @@ public class TestMiniYarnClusterNodeUtilization { * Verify both the RMNode and SchedulerNode have been updated with the new * utilization. */ - @Test(timeout=60000) - public void testMockNodeStatusHeartbeat() + @Test + @Timeout(60000) + void testMockNodeStatusHeartbeat() throws InterruptedException, YarnException { - assertTrue("NMs fail to connect to the RM", - cluster.waitForNodeManagersToConnect(10000)); + assertTrue(cluster.waitForNodeManagersToConnect(10000), + "NMs fail to connect to the RM"); NodeStatusUpdater updater = nm.getNodeStatusUpdater(); updater.sendOutofBandHeartBeat(); @@ -196,12 +199,12 @@ public class TestMiniYarnClusterNodeUtilization { // Give the heartbeat time to propagate to the RM (max 10 seconds) // We check if the nodeUtilization is up to date - for (int i=0; i<100; i++) { + for (int i = 0; i < 100; i++) { for (RMNode ni : rmContext.getRMNodes().values()) { if (ni.getNodeUtilization() != null) { - if (ni.getNodeUtilization().equals(nodeUtilization)) { - break; - } + if (ni.getNodeUtilization().equals(nodeUtilization)) { + break; + } } } Thread.sleep(100); @@ -210,22 +213,18 @@ public class TestMiniYarnClusterNodeUtilization { // Verify the data is readable from the RM and scheduler nodes for (RMNode ni : rmContext.getRMNodes().values()) { ResourceUtilization cu = ni.getAggregatedContainersUtilization(); - assertEquals("Containers Utillization not propagated to RMNode", - containersUtilization, cu); + assertEquals(containersUtilization, cu, "Containers Utillization not propagated to RMNode"); ResourceUtilization nu = ni.getNodeUtilization(); - assertEquals("Node Utillization not propagated to RMNode", - nodeUtilization, nu); + assertEquals(nodeUtilization, nu, "Node Utillization not propagated to RMNode"); - SchedulerNode scheduler = - rmContext.getScheduler().getSchedulerNode(ni.getNodeID()); + SchedulerNode scheduler = rmContext.getScheduler().getSchedulerNode(ni.getNodeID()); cu = scheduler.getAggregatedContainersUtilization(); - assertEquals("Containers Utillization not propagated to SchedulerNode", - containersUtilization, cu); + assertEquals(containersUtilization, cu, + "Containers Utillization not propagated to SchedulerNode"); nu = scheduler.getNodeUtilization(); - assertEquals("Node Utillization not propagated to SchedulerNode", - nodeUtilization, nu); + assertEquals(nodeUtilization, nu, "Node Utillization not propagated to SchedulerNode"); } } } diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestRMNMSecretKeys.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestRMNMSecretKeys.java index ba14491bf51..ae12658a299 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestRMNMSecretKeys.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestRMNMSecretKeys.java @@ -22,10 +22,12 @@ import java.io.File; import java.io.IOException; import java.util.UUID; -import org.junit.AfterClass; -import org.junit.Assert; -import org.junit.BeforeClass; import org.apache.hadoop.fs.CommonConfigurationKeysPublic; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertNull; + import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.event.Dispatcher; @@ -36,7 +38,10 @@ import org.apache.hadoop.yarn.server.api.records.MasterKey; import org.apache.hadoop.yarn.server.resourcemanager.MockNM; import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager; import org.apache.kerby.util.IOUtil; -import org.junit.Test; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; public class TestRMNMSecretKeys { private static final String KRB5_CONF = "java.security.krb5.conf"; @@ -44,7 +49,7 @@ public class TestRMNMSecretKeys { System.getProperty("test.build.dir", "target/test-dir"), UUID.randomUUID().toString()); - @BeforeClass + @BeforeAll public static void setup() throws IOException { KRB5_CONF_ROOT_DIR.mkdir(); File krb5ConfFile = new File(KRB5_CONF_ROOT_DIR, "krb5.conf"); @@ -63,17 +68,18 @@ public class TestRMNMSecretKeys { System.setProperty(KRB5_CONF, krb5ConfFile.getAbsolutePath()); } - @AfterClass + @AfterAll public static void tearDown() throws IOException { KRB5_CONF_ROOT_DIR.delete(); } - @Test(timeout = 1000000) - public void testNMUpdation() throws Exception { + @Test + @Timeout(1000000) + void testNMUpdation() throws Exception { YarnConfiguration conf = new YarnConfiguration(); // validating RM NM keys for Unsecured environment validateRMNMKeyExchange(conf); - + // validating RM NM keys for secured environment conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos"); @@ -113,30 +119,30 @@ public class TestRMNMSecretKeys { MasterKey containerTokenMasterKey = registrationResponse.getContainerTokenMasterKey(); - Assert.assertNotNull(containerToken - + "Registration should cause a key-update!", containerTokenMasterKey); + assertNotNull(containerTokenMasterKey, containerToken + + "Registration should cause a key-update!"); MasterKey nmTokenMasterKey = registrationResponse.getNMTokenMasterKey(); - Assert.assertNotNull(nmToken - + "Registration should cause a key-update!", nmTokenMasterKey); + assertNotNull(nmTokenMasterKey, nmToken + + "Registration should cause a key-update!"); dispatcher.await(); NodeHeartbeatResponse response = nm.nodeHeartbeat(true); - Assert.assertNull(containerToken + - "First heartbeat after registration shouldn't get any key updates!", - response.getContainerTokenMasterKey()); - 
Assert.assertNull(nmToken + - "First heartbeat after registration shouldn't get any key updates!", - response.getNMTokenMasterKey()); + assertNull(response.getContainerTokenMasterKey(), + containerToken + + "First heartbeat after registration shouldn't get any key updates!"); + assertNull(response.getNMTokenMasterKey(), + nmToken + + "First heartbeat after registration shouldn't get any key updates!"); dispatcher.await(); response = nm.nodeHeartbeat(true); - Assert.assertNull(containerToken + - "Even second heartbeat after registration shouldn't get any key updates!", - response.getContainerTokenMasterKey()); - Assert.assertNull(nmToken + - "Even second heartbeat after registration shouldn't get any key updates!", - response.getContainerTokenMasterKey()); + assertNull(response.getContainerTokenMasterKey(), + containerToken + + "Even second heartbeat after registration shouldn't get any key updates!"); + assertNull(response.getNMTokenMasterKey(), + nmToken + + "Even second heartbeat after registration shouldn't get any key updates!"); dispatcher.await(); @@ -146,30 +152,30 @@ public class TestRMNMSecretKeys { // Heartbeats after roll-over and before activation should be fine. response = nm.nodeHeartbeat(true); - Assert.assertNotNull(containerToken + - "Heartbeats after roll-over and before activation should not err out.", - response.getContainerTokenMasterKey()); - Assert.assertNotNull(nmToken + - "Heartbeats after roll-over and before activation should not err out.", - response.getNMTokenMasterKey()); + assertNotNull(response.getContainerTokenMasterKey(), + containerToken + + "Heartbeats after roll-over and before activation should not err out."); + assertNotNull(response.getNMTokenMasterKey(), + nmToken + + "Heartbeats after roll-over and before activation should not err out."); - Assert.assertEquals(containerToken + - "Roll-over should have incremented the key-id only by one!", - containerTokenMasterKey.getKeyId() + 1, - response.getContainerTokenMasterKey().getKeyId()); - Assert.assertEquals(nmToken + - "Roll-over should have incremented the key-id only by one!", - nmTokenMasterKey.getKeyId() + 1, - response.getNMTokenMasterKey().getKeyId()); + assertEquals(containerTokenMasterKey.getKeyId() + 1, + response.getContainerTokenMasterKey().getKeyId(), + containerToken + + "Roll-over should have incremented the key-id only by one!"); + assertEquals(nmTokenMasterKey.getKeyId() + 1, + response.getNMTokenMasterKey().getKeyId(), + nmToken + + "Roll-over should have incremented the key-id only by one!"); dispatcher.await(); response = nm.nodeHeartbeat(true); - Assert.assertNull(containerToken + - "Second heartbeat after roll-over shouldn't get any key updates!", - response.getContainerTokenMasterKey()); - Assert.assertNull(nmToken + - "Second heartbeat after roll-over shouldn't get any key updates!", - response.getNMTokenMasterKey()); + assertNull(response.getContainerTokenMasterKey(), + containerToken + + "Second heartbeat after roll-over shouldn't get any key updates!"); + assertNull(response.getNMTokenMasterKey(), + nmToken + + "Second heartbeat after roll-over shouldn't get any key updates!"); dispatcher.await(); // Let's force activation @@ -177,21 +183,21 @@ public class TestRMNMSecretKeys { rm.getRMContext().getNMTokenSecretManager().activateNextMasterKey(); response = nm.nodeHeartbeat(true); - Assert.assertNull(containerToken - + "Activation shouldn't cause any key updates!", - response.getContainerTokenMasterKey()); - Assert.assertNull(nmToken - + "Activation shouldn't cause any
key updates!", - response.getNMTokenMasterKey()); + assertNull(response.getContainerTokenMasterKey(), + containerToken + + "Activation shouldn't cause any key updates!"); + assertNull(response.getNMTokenMasterKey(), + nmToken + + "Activation shouldn't cause any key updates!"); dispatcher.await(); response = nm.nodeHeartbeat(true); - Assert.assertNull(containerToken + - "Even second heartbeat after activation shouldn't get any key updates!", - response.getContainerTokenMasterKey()); - Assert.assertNull(nmToken + - "Even second heartbeat after activation shouldn't get any key updates!", - response.getNMTokenMasterKey()); + assertNull(response.getContainerTokenMasterKey(), + containerToken + + "Even second heartbeat after activation shouldn't get any key updates!"); + assertNull(response.getNMTokenMasterKey(), + nmToken + + "Even second heartbeat after activation shouldn't get any key updates!"); dispatcher.await(); rm.stop(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/TestTimelineServiceClientIntegration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/TestTimelineServiceClientIntegration.java index 21a77d89e2c..871cbb6c036 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/TestTimelineServiceClientIntegration.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/TestTimelineServiceClientIntegration.java @@ -19,7 +19,7 @@ package org.apache.hadoop.yarn.server.timelineservice; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.fail; import static org.mockito.ArgumentMatchers.any; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; @@ -54,9 +54,9 @@ import org.apache.hadoop.yarn.server.timelineservice.collector.NodeTimelineColle import org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService; import org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl; import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; public class TestTimelineServiceClientIntegration { private static final String ROOT_DIR = new File("target", @@ -66,7 +66,7 @@ public class TestTimelineServiceClientIntegration { private static PerNodeTimelineCollectorsAuxService auxService; private static Configuration conf; - @BeforeClass + @BeforeAll public static void setupClass() throws Exception { try { collectorManager = new MockNodeTimelineCollectorManager(); @@ -88,7 +88,7 @@ public class TestTimelineServiceClientIntegration { } } - @AfterClass + @AfterAll public static void tearDownClass() throws Exception { if (auxService != null) { auxService.stop(); @@ -97,7 +97,7 @@ public class TestTimelineServiceClientIntegration { } @Test - public void testPutEntities() throws Exception { + void testPutEntities() throws Exception { TimelineV2Client client = TimelineV2Client.createTimelineClient(ApplicationId.newInstance(0, 1)); try { @@ -123,7 +123,7 @@ public class TestTimelineServiceClientIntegration { } @Test 
- public void testPutExtendedEntities() throws Exception { + void testPutExtendedEntities() throws Exception { ApplicationId appId = ApplicationId.newInstance(0, 1); TimelineV2Client client = TimelineV2Client.createTimelineClient(appId); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/security/TestTimelineAuthFilterForV2.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/security/TestTimelineAuthFilterForV2.java index f773807f05d..4b041d70155 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/security/TestTimelineAuthFilterForV2.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/security/TestTimelineAuthFilterForV2.java @@ -18,11 +18,11 @@ package org.apache.hadoop.yarn.server.timelineservice.security; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotEquals; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.eq; import static org.mockito.Mockito.atLeastOnce; @@ -75,18 +75,16 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineR import org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl; import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter; import static org.apache.hadoop.yarn.conf.YarnConfiguration.TIMELINE_HTTP_AUTH_PREFIX; -import org.junit.After; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.params.ParameterizedTest; +import org.junit.jupiter.params.provider.MethodSource; /** * Tests timeline authentication filter based security for timeline service v2. */ -@RunWith(Parameterized.class) public class TestTimelineAuthFilterForV2 { private static final String FOO_USER = "foo"; @@ -106,9 +104,8 @@ public class TestTimelineAuthFilterForV2 { // First param indicates whether HTTPS access or HTTP access and second param // indicates whether it is kerberos access or token based access. 
- @Parameterized.Parameters public static Collection params() { - return Arrays.asList(new Object[][] {{false, true}, {false, false}, + return Arrays.asList(new Object[][]{{false, true}, {false, false}, {true, false}, {true, true}}); } @@ -117,11 +114,14 @@ public class TestTimelineAuthFilterForV2 { private static String sslConfDir; private static Configuration conf; private static UserGroupInformation nonKerberosUser; + static { try { nonKerberosUser = UserGroupInformation.getCurrentUser(); - } catch (IOException e) {} + } catch (IOException e) { + } } + // Indicates whether HTTPS or HTTP access. private boolean withSsl; // Indicates whether Kerberos based login is used or token based access is @@ -129,13 +129,14 @@ public class TestTimelineAuthFilterForV2 { private boolean withKerberosLogin; private NodeTimelineCollectorManager collectorManager; private PerNodeTimelineCollectorsAuxService auxService; - public TestTimelineAuthFilterForV2(boolean withSsl, + + public void initTestTimelineAuthFilterForV2(boolean withSsl, boolean withKerberosLogin) { this.withSsl = withSsl; this.withKerberosLogin = withKerberosLogin; } - @BeforeClass + @BeforeAll public static void setup() { try { testMiniKDC = new MiniKdc(MiniKdc.createConf(), TEST_ROOT_DIR); @@ -181,7 +182,7 @@ public class TestTimelineAuthFilterForV2 { } } - @Before + @BeforeEach public void initialize() throws Exception { if (withSsl) { conf.set(YarnConfiguration.YARN_HTTP_POLICY_KEY, @@ -221,7 +222,7 @@ public class TestTimelineAuthFilterForV2 { appId, UserGroupInformation.getCurrentUser().getUserName()); if (!withKerberosLogin) { AppLevelTimelineCollector collector = - (AppLevelTimelineCollector)collectorManager.get(appId); + (AppLevelTimelineCollector) collectorManager.get(appId); Token token = collector.getDelegationTokenForApp(); token.setService(new Text("localhost" + token.getService().toString(). @@ -243,7 +244,7 @@ public class TestTimelineAuthFilterForV2 { return client; } - @AfterClass + @AfterAll public static void tearDown() throws Exception { if (testMiniKDC != null) { testMiniKDC.stop(); @@ -251,7 +252,7 @@ public class TestTimelineAuthFilterForV2 { FileUtil.fullyDelete(TEST_ROOT_DIR); } - @After + @AfterEach public void destroy() throws Exception { if (auxService != null) { auxService.stop(); @@ -318,7 +319,7 @@ public class TestTimelineAuthFilterForV2 { String entityType, int numEntities) throws Exception { TimelineV2Client client = createTimelineClientForUGI(appId); try { - // Sync call. Results available immediately. + // Sync call. Results available immediately. 
client.putEntities(createEntity("entity1", entityType)); assertEquals(numEntities, entityTypeDir.listFiles().length); verifyEntity(entityTypeDir, "entity1", entityType); @@ -343,8 +344,10 @@ public class TestTimelineAuthFilterForV2 { return false; } - @Test - public void testPutTimelineEntities() throws Exception { + @MethodSource("params") + @ParameterizedTest + void testPutTimelineEntities(boolean withSsl, boolean withKerberosLogin) throws Exception { + initTestTimelineAuthFilterForV2(withSsl, withKerberosLogin); final String entityType = ENTITY_TYPE + ENTITY_TYPE_SUFFIX.getAndIncrement(); ApplicationId appId = ApplicationId.newInstance(0, 1); @@ -364,8 +367,8 @@ public class TestTimelineAuthFilterForV2 { } }); } else { - assertTrue("Entities should have been published successfully.", - publishWithRetries(appId, entityTypeDir, entityType, 1)); + assertTrue(publishWithRetries(appId, entityTypeDir, entityType, 1), + "Entities should have been published successfully."); AppLevelTimelineCollector collector = (AppLevelTimelineCollector) collectorManager.get(appId); @@ -377,12 +380,12 @@ public class TestTimelineAuthFilterForV2 { // published. Thread.sleep(1000); // Entities should publish successfully after renewal. - assertTrue("Entities should have been published successfully.", - publishWithRetries(appId, entityTypeDir, entityType, 2)); + assertTrue(publishWithRetries(appId, entityTypeDir, entityType, 2), + "Entities should have been published successfully."); assertNotNull(collector); verify(collectorManager.getTokenManagerService(), atLeastOnce()). renewToken(eq(collector.getDelegationTokenForApp()), - any(String.class)); + any(String.class)); // Wait to ensure lifetime of token expires and ensure its regenerated // automatically. @@ -393,8 +396,9 @@ public class TestTimelineAuthFilterForV2 { } Thread.sleep(50); } - assertNotEquals("Token should have been regenerated.", token, - collector.getDelegationTokenForApp()); + assertNotEquals(token, + collector.getDelegationTokenForApp(), + "Token should have been regenerated."); Thread.sleep(1000); // Try publishing with the old token in UGI. Publishing should fail due // to invalid token. @@ -402,8 +406,8 @@ public class TestTimelineAuthFilterForV2 { publishAndVerifyEntity(appId, entityTypeDir, entityType, 2); fail("Exception should have been thrown due to Invalid Token."); } catch (YarnException e) { - assertTrue("Exception thrown should have been due to Invalid Token.", - e.getCause().getMessage().contains("InvalidToken")); + assertTrue(e.getCause().getMessage().contains("InvalidToken"), + "Exception thrown should have been due to Invalid Token."); } // Update the regenerated token in UGI and retry publishing entities. @@ -411,10 +415,10 @@ public class TestTimelineAuthFilterForV2 { collector.getDelegationTokenForApp(); regeneratedToken.setService(new Text("localhost" + regeneratedToken.getService().toString().substring( - regeneratedToken.getService().toString().indexOf(":")))); + regeneratedToken.getService().toString().indexOf(":")))); UserGroupInformation.getCurrentUser().addToken(regeneratedToken); - assertTrue("Entities should have been published successfully.", - publishWithRetries(appId, entityTypeDir, entityType, 2)); + assertTrue(publishWithRetries(appId, entityTypeDir, entityType, 2), + "Entities should have been published successfully."); // Token was generated twice, once when app collector was created and // later after token lifetime expiry. verify(collectorManager.getTokenManagerService(), times(2)). 
@@ -432,11 +436,11 @@ public class TestTimelineAuthFilterForV2 { } Thread.sleep(50); } - assertNotNull("Error reading entityTypeDir", entities); + assertNotNull(entities, "Error reading entityTypeDir"); assertEquals(2, entities.length); verifyEntity(entityTypeDir, "entity2", entityType); AppLevelTimelineCollector collector = - (AppLevelTimelineCollector)collectorManager.get(appId); + (AppLevelTimelineCollector) collectorManager.get(appId); assertNotNull(collector); auxService.removeApplication(appId); verify(collectorManager.getTokenManagerService()).cancelToken( @@ -446,24 +450,20 @@ public class TestTimelineAuthFilterForV2 { private static class DummyNodeTimelineCollectorManager extends NodeTimelineCollectorManager { private volatile int tokenExpiredCnt = 0; - DummyNodeTimelineCollectorManager() { - super(); - } private int getTokenExpiredCnt() { return tokenExpiredCnt; } @Override - protected TimelineV2DelegationTokenSecretManagerService - createTokenManagerService() { + protected TimelineV2DelegationTokenSecretManagerService createTokenManagerService() { return spy(new TimelineV2DelegationTokenSecretManagerService() { @Override protected AbstractDelegationTokenSecretManager - - createTimelineDelegationTokenSecretManager(long secretKeyInterval, - long tokenMaxLifetime, long tokenRenewInterval, - long tokenRemovalScanInterval) { + createTimelineDelegationTokenSecretManager(long + secretKeyInterval, + long tokenMaxLifetime, long tokenRenewInterval, + long tokenRemovalScanInterval) { return spy(new TimelineV2DelegationTokenSecretManager( secretKeyInterval, tokenMaxLifetime, tokenRenewInterval, 2000L) { @Override diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/pom.xml index 17596405caf..56089a42ea8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/pom.xml +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/pom.xml @@ -122,6 +122,10 @@ io.netty netty-handler-proxy + + xml-apis + xml-apis + diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml index 65af3afadda..5a2823ad5ef 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml @@ -125,17 +125,26 @@ test - - junit - junit - test - - org.assertj assertj-core test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.platform + junit-platform-launcher + test + org.mockito diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java index 197fa5a4072..913b8360e2f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java @@ -93,7 +93,7 @@ public class FileSystemTimelineReaderImpl extends AbstractService private static final String STORAGE_DIR_ROOT = "timeline_service_data"; private final CSVFormat csvFormat = - CSVFormat.DEFAULT.withHeader("APP", "USER", "FLOW", "FLOWRUN"); + CSVFormat.Builder.create().setHeader("APP", "USER", "FLOW", "FLOWRUN").build(); public FileSystemTimelineReaderImpl() { super(FileSystemTimelineReaderImpl.class.getName()); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestNMTimelineCollectorManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestNMTimelineCollectorManager.java index b109be5fa48..d17ed277296 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestNMTimelineCollectorManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestNMTimelineCollectorManager.java @@ -19,17 +19,6 @@ package org.apache.hadoop.yarn.server.timelineservice.collector; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; -import static org.mockito.ArgumentMatchers.any; -import static org.mockito.Mockito.doReturn; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.spy; -import static org.mockito.Mockito.when; - import java.io.IOException; import java.util.ArrayList; import java.util.List; @@ -38,6 +27,11 @@ import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.conf.YarnConfiguration; @@ -47,14 +41,22 @@ import org.apache.hadoop.yarn.server.api.protocolrecords.GetTimelineCollectorCon import org.apache.hadoop.yarn.server.api.protocolrecords.GetTimelineCollectorContextResponse; import org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl; import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.when; public class TestNMTimelineCollectorManager { private NodeTimelineCollectorManager 
collectorManager; - @Before + @BeforeEach public void setup() throws Exception { collectorManager = createCollectorManager(); Configuration conf = new YarnConfiguration(); @@ -66,7 +68,7 @@ public class TestNMTimelineCollectorManager { collectorManager.start(); } - @After + @AfterEach public void tearDown() throws Exception { if (collectorManager != null) { collectorManager.stop(); @@ -74,12 +76,12 @@ public class TestNMTimelineCollectorManager { } @Test - public void testStartingWriterFlusher() throws Exception { + void testStartingWriterFlusher() throws Exception { assertTrue(collectorManager.writerFlusherRunning()); } @Test - public void testStartWebApp() throws Exception { + void testStartWebApp() throws Exception { assertNotNull(collectorManager.getRestServerBindAddress()); String address = collectorManager.getRestServerBindAddress(); String[] parts = address.split(":"); @@ -89,8 +91,9 @@ public class TestNMTimelineCollectorManager { Integer.valueOf(parts[1]) <= 30100); } - @Test(timeout=60000) - public void testMultithreadedAdd() throws Exception { + @Test + @Timeout(60000) + void testMultithreadedAdd() throws Exception { final int numApps = 5; List> tasks = new ArrayList>(); for (int i = 0; i < numApps; i++) { @@ -107,7 +110,7 @@ public class TestNMTimelineCollectorManager { ExecutorService executor = Executors.newFixedThreadPool(numApps); try { List> futures = executor.invokeAll(tasks); - for (Future future: futures) { + for (Future future : futures) { assertTrue(future.get()); } } finally { @@ -121,7 +124,7 @@ public class TestNMTimelineCollectorManager { } @Test - public void testMultithreadedAddAndRemove() throws Exception { + void testMultithreadedAddAndRemove() throws Exception { final int numApps = 5; List> tasks = new ArrayList>(); for (int i = 0; i < numApps; i++) { @@ -140,7 +143,7 @@ public class TestNMTimelineCollectorManager { ExecutorService executor = Executors.newFixedThreadPool(numApps); try { List> futures = executor.invokeAll(tasks); - for (Future future: futures) { + for (Future future : futures) { assertTrue(future.get()); } } finally { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestPerNodeAggTimelineCollectorMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestPerNodeAggTimelineCollectorMetrics.java index c9ff3037633..c43bc3d2031 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestPerNodeAggTimelineCollectorMetrics.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestPerNodeAggTimelineCollectorMetrics.java @@ -18,11 +18,14 @@ package org.apache.hadoop.yarn.server.timelineservice.collector; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.server.timelineservice.metrics.PerNodeAggTimelineCollectorMetrics; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; /** * Test PerNodeAggTimelineCollectorMetrics. 
@@ -32,24 +35,24 @@ public class TestPerNodeAggTimelineCollectorMetrics { private PerNodeAggTimelineCollectorMetrics metrics; @Test - public void testTimelineCollectorMetrics() { - Assert.assertNotNull(metrics); - Assert.assertEquals(10, + void testTimelineCollectorMetrics() { + assertNotNull(metrics); + assertEquals(10, metrics.getPutEntitiesSuccessLatency().getInterval()); - Assert.assertEquals(10, + assertEquals(10, metrics.getPutEntitiesFailureLatency().getInterval()); - Assert.assertEquals(10, + assertEquals(10, metrics.getAsyncPutEntitiesSuccessLatency().getInterval()); - Assert.assertEquals(10, + assertEquals(10, metrics.getAsyncPutEntitiesFailureLatency().getInterval()); } - @Before + @BeforeEach public void setup() { metrics = PerNodeAggTimelineCollectorMetrics.getInstance(); } - @After + @AfterEach public void tearDown() { PerNodeAggTimelineCollectorMetrics.destroy(); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestPerNodeTimelineCollectorsAuxService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestPerNodeTimelineCollectorsAuxService.java index 811465f7f97..66474b14851 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestPerNodeTimelineCollectorsAuxService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestPerNodeTimelineCollectorsAuxService.java @@ -18,19 +18,13 @@ package org.apache.hadoop.yarn.server.timelineservice.collector; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; -import static org.mockito.ArgumentMatchers.any; -import static org.mockito.Mockito.doReturn; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.spy; -import static org.mockito.Mockito.when; - import java.io.IOException; import java.util.concurrent.Future; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.util.ExitUtil; import org.apache.hadoop.util.Shell; @@ -47,9 +41,16 @@ import org.apache.hadoop.yarn.server.api.protocolrecords.GetTimelineCollectorCon import org.apache.hadoop.yarn.server.api.protocolrecords.GetTimelineCollectorContextResponse; import org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl; import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter; -import org.junit.After; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.when; public class TestPerNodeTimelineCollectorsAuxService { private ApplicationAttemptId appAttemptId; @@ -70,7 +71,7 @@ 
public class TestPerNodeTimelineCollectorsAuxService { 1000L); } - @After + @AfterEach public void tearDown() throws Shell.ExitCodeException { if (auxService != null) { auxService.stop(); @@ -78,7 +79,7 @@ public class TestPerNodeTimelineCollectorsAuxService { } @Test - public void testAddApplication() throws Exception { + void testAddApplication() throws Exception { auxService = createCollectorAndAddApplication(); // auxService should have a single app assertTrue(auxService.hasApplication(appAttemptId.getApplicationId())); @@ -86,7 +87,7 @@ public class TestPerNodeTimelineCollectorsAuxService { } @Test - public void testAddApplicationNonAMContainer() throws Exception { + void testAddApplicationNonAMContainer() throws Exception { auxService = createCollector(); ContainerId containerId = getContainerId(2L); // not an AM @@ -99,7 +100,7 @@ public class TestPerNodeTimelineCollectorsAuxService { } @Test - public void testRemoveApplication() throws Exception { + void testRemoveApplication() throws Exception { auxService = createCollectorAndAddApplication(); // auxService should have a single app assertTrue(auxService.hasApplication(appAttemptId.getApplicationId())); @@ -118,7 +119,7 @@ public class TestPerNodeTimelineCollectorsAuxService { } @Test - public void testRemoveApplicationNonAMContainer() throws Exception { + void testRemoveApplicationNonAMContainer() throws Exception { auxService = createCollectorAndAddApplication(); // auxService should have a single app assertTrue(auxService.hasApplication(appAttemptId.getApplicationId())); @@ -133,8 +134,9 @@ public class TestPerNodeTimelineCollectorsAuxService { auxService.close(); } - @Test(timeout = 60000) - public void testLaunch() throws Exception { + @Test + @Timeout(60000) + void testLaunch() throws Exception { ExitUtil.disableSystemExit(); try { auxService = @@ -192,7 +194,7 @@ public class TestPerNodeTimelineCollectorsAuxService { try { future.get(); } catch (Exception e) { - Assert.fail("Expeption thrown while removing collector"); + fail("Exception thrown while removing collector"); } return future; } @@ -228,8 +230,9 @@ public class TestPerNodeTimelineCollectorsAuxService { return ContainerId.newContainerId(appAttemptId, id); } - @Test(timeout = 60000) - public void testRemoveAppWhenSecondAttemptAMCotainerIsLaunchedSameNode() + @Test + @Timeout(60000) + void testRemoveAppWhenSecondAttemptAMContainerIsLaunchedSameNode() throws Exception { // add first attempt collector auxService = createCollectorAndAddApplication(); @@ -241,25 +244,25 @@ public class TestPerNodeTimelineCollectorsAuxService { createContainerInitalizationContext(2); auxService.initializeContainer(containerInitalizationContext); - assertTrue("Applicatin not found in collectors.", - auxService.hasApplication(appAttemptId.getApplicationId())); + assertTrue(auxService.hasApplication(appAttemptId.getApplicationId()), + "Application not found in collectors."); // first attempt stop container ContainerTerminationContext context = createContainerTerminationContext(1); auxService.stopContainer(context); // 2nd attempt container removed, still collector should hold application id - assertTrue("collector has removed application though 2nd attempt" - + " is running this node", - auxService.hasApplication(appAttemptId.getApplicationId())); + assertTrue(auxService.hasApplication(appAttemptId.getApplicationId()), + "collector has removed application though 2nd attempt" + + " is running on this node"); // second attempt stop container context = createContainerTerminationContext(2);
auxService.stopContainer(context); // auxService should not have that app - assertFalse("Application is not removed from collector", - auxService.hasApplication(appAttemptId.getApplicationId())); + assertFalse(auxService.hasApplication(appAttemptId.getApplicationId()), + "Application is not removed from collector"); auxService.close(); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestTimelineCollector.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestTimelineCollector.java index 09a1e6cb3d3..c2995d5daed 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestTimelineCollector.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestTimelineCollector.java @@ -18,35 +18,35 @@ package org.apache.hadoop.yarn.server.timelineservice.collector; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.security.UserGroupInformation; -import org.apache.hadoop.util.Sets; -import org.apache.hadoop.yarn.api.records.timeline.TimelineHealth; -import org.apache.hadoop.yarn.api.records.timelineservice.TimelineDomain; -import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetricOperation; -import org.apache.hadoop.yarn.api.records.ApplicationId; -import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities; -import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity; -import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType; -import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric; -import org.apache.hadoop.yarn.api.records.timelineservice.TimelineWriteResponse; -import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector.AggregationStatusTable; -import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter; -import org.junit.Test; - -import org.mockito.internal.stubbing.answers.AnswersWithDelay; -import org.mockito.internal.stubbing.answers.Returns; - import java.io.IOException; import java.util.HashSet; import java.util.Map; import java.util.Set; +import org.junit.jupiter.api.Test; +import org.mockito.internal.stubbing.answers.AnswersWithDelay; +import org.mockito.internal.stubbing.answers.Returns; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.util.Sets; +import org.apache.hadoop.yarn.api.records.ApplicationId; +import org.apache.hadoop.yarn.api.records.timeline.TimelineHealth; +import org.apache.hadoop.yarn.api.records.timelineservice.TimelineDomain; +import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities; +import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity; +import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType; +import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric; +import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetricOperation; +import org.apache.hadoop.yarn.api.records.timelineservice.TimelineWriteResponse; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import 
org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector.AggregationStatusTable; +import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter; + import static org.assertj.core.api.Assertions.assertThat; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; import static org.mockito.ArgumentMatchers.any; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.never; @@ -104,7 +104,7 @@ public class TestTimelineCollector { } @Test - public void testAggregation() throws Exception { + void testAggregation() throws Exception { // Test aggregation with multiple groups. int groups = 3; int n = 50; @@ -154,7 +154,7 @@ public class TestTimelineCollector { * putEntity() calls. */ @Test - public void testPutEntity() throws IOException { + void testPutEntity() throws IOException { TimelineWriter writer = mock(TimelineWriter.class); TimelineHealth timelineHealth = new TimelineHealth(TimelineHealth. TimelineHealthStatus.RUNNING, ""); @@ -163,7 +163,7 @@ public class TestTimelineCollector { Configuration conf = new Configuration(); conf.setInt(YarnConfiguration.TIMELINE_SERVICE_CLIENT_MAX_RETRIES, 5); conf.setLong(YarnConfiguration.TIMELINE_SERVICE_CLIENT_RETRY_INTERVAL_MS, - 500L); + 500L); TimelineCollector collector = new TimelineCollectorForTest(writer); collector.init(conf); @@ -179,7 +179,7 @@ public class TestTimelineCollector { @Test - public void testPutEntityWithStorageDown() throws IOException { + void testPutEntityWithStorageDown() throws IOException { TimelineWriter writer = mock(TimelineWriter.class); TimelineHealth timelineHealth = new TimelineHealth(TimelineHealth. TimelineHealthStatus.CONNECTION_FAILURE, ""); @@ -188,7 +188,7 @@ public class TestTimelineCollector { Configuration conf = new Configuration(); conf.setInt(YarnConfiguration.TIMELINE_SERVICE_CLIENT_MAX_RETRIES, 5); conf.setLong(YarnConfiguration.TIMELINE_SERVICE_CLIENT_RETRY_INTERVAL_MS, - 500L); + 500L); TimelineCollector collector = new TimelineCollectorForTest(writer); collector.init(conf); @@ -203,8 +203,8 @@ public class TestTimelineCollector { exceptionCaught = true; } } - assertTrue("TimelineCollector putEntity failed to " + - "handle storage down", exceptionCaught); + assertTrue(exceptionCaught, "TimelineCollector putEntity failed to " + + "handle storage down"); } /** @@ -212,7 +212,7 @@ public class TestTimelineCollector { * putEntityAsync() calls. */ @Test - public void testPutEntityAsync() throws Exception { + void testPutEntityAsync() throws Exception { TimelineWriter writer = mock(TimelineWriter.class); TimelineCollector collector = new TimelineCollectorForTest(writer); collector.init(new Configuration()); @@ -232,7 +232,7 @@ public class TestTimelineCollector { * write is taking too much time. */ @Test - public void testAsyncEntityDiscard() throws Exception { + void testAsyncEntityDiscard() throws Exception { TimelineWriter writer = mock(TimelineWriter.class); when(writer.write(any(), any(), any())).thenAnswer( @@ -261,7 +261,8 @@ public class TestTimelineCollector { * Test TimelineCollector's interaction with TimelineWriter upon * putDomain() calls. 
*/ - @Test public void testPutDomain() throws IOException { + @Test + void testPutDomain() throws IOException { TimelineWriter writer = mock(TimelineWriter.class); TimelineHealth timelineHealth = new TimelineHealth(TimelineHealth. TimelineHealthStatus.RUNNING, ""); @@ -329,32 +330,32 @@ public class TestTimelineCollector { } @Test - public void testClearPreviousEntitiesOnAggregation() throws Exception { + void testClearPreviousEntitiesOnAggregation() throws Exception { final long ts = System.currentTimeMillis(); TimelineCollector collector = new TimelineCollector("") { - @Override - public TimelineCollectorContext getTimelineEntityContext() { - return new TimelineCollectorContext("cluster", "user", "flow", "1", - 1L, ApplicationId.newInstance(ts, 1).toString()); - } + @Override + public TimelineCollectorContext getTimelineEntityContext() { + return new TimelineCollectorContext("cluster", "user", "flow", "1", + 1L, ApplicationId.newInstance(ts, 1).toString()); + } }; TimelineWriter writer = mock(TimelineWriter.class); TimelineHealth timelineHealth = new TimelineHealth(TimelineHealth. - TimelineHealthStatus.RUNNING, ""); + TimelineHealthStatus.RUNNING, ""); when(writer.getHealthStatus()).thenReturn(timelineHealth); Configuration conf = new Configuration(); conf.setInt(YarnConfiguration.TIMELINE_SERVICE_CLIENT_MAX_RETRIES, 5); conf.setLong(YarnConfiguration.TIMELINE_SERVICE_CLIENT_RETRY_INTERVAL_MS, - 500L); + 500L); collector.init(conf); collector.setWriter(writer); // Put 5 entities with different metric values. TimelineEntities entities = new TimelineEntities(); - for (int i = 1; i <=5; i++) { + for (int i = 1; i <= 5; i++) { TimelineEntity entity = createEntity("e" + i, "type"); entity.addMetric(createDummyMetric(ts + i, Long.valueOf(i * 50))); entities.addEntity(entity); @@ -368,7 +369,7 @@ public class TestTimelineCollector { assertEquals(Sets.newHashSet("type"), aggregationGroups.keySet()); TimelineEntity aggregatedEntity = TimelineCollector. aggregateWithoutGroupId(aggregationGroups, currContext.getAppId(), - TimelineEntityType.YARN_APPLICATION.toString()); + TimelineEntityType.YARN_APPLICATION.toString()); TimelineMetric aggregatedMetric = aggregatedEntity.getMetrics().iterator().next(); assertEquals(750L, aggregatedMetric.getValues().values().iterator().next()); @@ -378,7 +379,7 @@ public class TestTimelineCollector { // Aggregate entities. aggregatedEntity = TimelineCollector. aggregateWithoutGroupId(aggregationGroups, currContext.getAppId(), - TimelineEntityType.YARN_APPLICATION.toString()); + TimelineEntityType.YARN_APPLICATION.toString()); aggregatedMetric = aggregatedEntity.getMetrics().iterator().next(); // No values aggregated as no metrics put for an entity between this // aggregation and the previous one. @@ -388,7 +389,7 @@ public class TestTimelineCollector { // Put 3 entities. entities = new TimelineEntities(); - for (int i = 1; i <=3; i++) { + for (int i = 1; i <= 3; i++) { TimelineEntity entity = createEntity("e" + i, "type"); entity.addMetric(createDummyMetric(System.currentTimeMillis() + i, 50L)); entities.addEntity(entity); @@ -399,7 +400,7 @@ public class TestTimelineCollector { // Aggregate entities. aggregatedEntity = TimelineCollector. aggregateWithoutGroupId(aggregationGroups, currContext.getAppId(), - TimelineEntityType.YARN_APPLICATION.toString()); + TimelineEntityType.YARN_APPLICATION.toString()); // Last 3 entities picked up for aggregation. 
aggregatedMetric = aggregatedEntity.getMetrics().iterator().next(); assertEquals(150L, aggregatedMetric.getValues().values().iterator().next()); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestTimelineCollectorManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestTimelineCollectorManager.java index f8e83998311..c7537b0278d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestTimelineCollectorManager.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestTimelineCollectorManager.java @@ -18,38 +18,49 @@ package org.apache.hadoop.yarn.server.timelineservice.collector; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl; import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertThrows; /** * Unit tests for TimelineCollectorManager. */ public class TestTimelineCollectorManager{ - @Test(timeout = 60000, expected = YarnRuntimeException.class) - public void testTimelineCollectorManagerWithInvalidTimelineWriter() { - Configuration conf = new YarnConfiguration(); - conf.set(YarnConfiguration.TIMELINE_SERVICE_WRITER_CLASS, - Object.class.getName()); - runTimelineCollectorManagerWithConfig(conf); + @Test + @Timeout(60000) + void testTimelineCollectorManagerWithInvalidTimelineWriter() { + assertThrows(YarnRuntimeException.class, () -> { + Configuration conf = new YarnConfiguration(); + conf.set(YarnConfiguration.TIMELINE_SERVICE_WRITER_CLASS, + Object.class.getName()); + runTimelineCollectorManagerWithConfig(conf); + }); } - @Test(timeout = 60000, expected = YarnRuntimeException.class) - public void testTimelineCollectorManagerWithNonexistentTimelineWriter() { - String nonexistentTimelineWriterClass = "org.apache.org.yarn.server." + - "timelineservice.storage.XXXXXXXX"; - Configuration conf = new YarnConfiguration(); - conf.set(YarnConfiguration.TIMELINE_SERVICE_WRITER_CLASS, - nonexistentTimelineWriterClass); - runTimelineCollectorManagerWithConfig(conf); + @Test + @Timeout(60000) + void testTimelineCollectorManagerWithNonexistentTimelineWriter() { + assertThrows(YarnRuntimeException.class, () -> { + String nonexistentTimelineWriterClass = "org.apache.org.yarn.server." 
+ + "timelineservice.storage.XXXXXXXX"; + Configuration conf = new YarnConfiguration(); + conf.set(YarnConfiguration.TIMELINE_SERVICE_WRITER_CLASS, + nonexistentTimelineWriterClass); + runTimelineCollectorManagerWithConfig(conf); + }); } - @Test(timeout = 60000) - public void testTimelineCollectorManagerWithFileSystemWriter() { + @Test + @Timeout(60000) + void testTimelineCollectorManagerWithFileSystemWriter() { Configuration conf = new YarnConfiguration(); conf.setClass(YarnConfiguration.TIMELINE_SERVICE_WRITER_CLASS, FileSystemTimelineWriterImpl.class, TimelineWriter.class); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderMetrics.java index fa74689d9bc..35f027b5141 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderMetrics.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderMetrics.java @@ -18,11 +18,14 @@ package org.apache.hadoop.yarn.server.timelineservice.reader; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.yarn.server.timelineservice.metrics.TimelineReaderMetrics; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; /** * Test TimelineReaderMetrics. 
@@ -32,24 +35,24 @@ public class TestTimelineReaderMetrics { private TimelineReaderMetrics metrics; @Test - public void testTimelineReaderMetrics() { - Assert.assertNotNull(metrics); - Assert.assertEquals(10, + void testTimelineReaderMetrics() { + assertNotNull(metrics); + assertEquals(10, metrics.getGetEntitiesSuccessLatency().getInterval()); - Assert.assertEquals(10, + assertEquals(10, metrics.getGetEntitiesFailureLatency().getInterval()); - Assert.assertEquals(10, + assertEquals(10, metrics.getGetEntityTypesSuccessLatency().getInterval()); - Assert.assertEquals(10, + assertEquals(10, metrics.getGetEntityTypesFailureLatency().getInterval()); } - @Before + @BeforeEach public void setup() { metrics = TimelineReaderMetrics.getInstance(); } - @After + @AfterEach public void tearDown() { TimelineReaderMetrics.destroy(); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderServer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderServer.java index 6fc46cc7b03..46331bf1e43 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderServer.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderServer.java @@ -18,7 +18,8 @@ package org.apache.hadoop.yarn.server.timelineservice.reader; -import static org.junit.Assert.assertEquals; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.service.Service.STATE; @@ -26,12 +27,15 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.exceptions.YarnRuntimeException; import org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineReaderImpl; import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertThrows; public class TestTimelineReaderServer { - @Test(timeout = 60000) - public void testStartStopServer() throws Exception { + @Test + @Timeout(60000) + void testStartStopServer() throws Exception { @SuppressWarnings("resource") TimelineReaderServer server = new TimelineReaderServer(); Configuration config = new YarnConfiguration(); @@ -56,30 +60,36 @@ public class TestTimelineReaderServer { } } - @Test(timeout = 60000, expected = YarnRuntimeException.class) - public void testTimelineReaderServerWithInvalidTimelineReader() { - Configuration conf = new YarnConfiguration(); - conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); - conf.setFloat(YarnConfiguration.TIMELINE_SERVICE_VERSION, 2.0f); - conf.set(YarnConfiguration.TIMELINE_SERVICE_READER_WEBAPP_ADDRESS, - "localhost:0"); - conf.set(YarnConfiguration.TIMELINE_SERVICE_READER_CLASS, - Object.class.getName()); - runTimelineReaderServerWithConfig(conf); + @Test + @Timeout(60000) + void testTimelineReaderServerWithInvalidTimelineReader() { + assertThrows(YarnRuntimeException.class, () -> { + Configuration conf = new YarnConfiguration(); + conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, 
true); + conf.setFloat(YarnConfiguration.TIMELINE_SERVICE_VERSION, 2.0f); + conf.set(YarnConfiguration.TIMELINE_SERVICE_READER_WEBAPP_ADDRESS, + "localhost:0"); + conf.set(YarnConfiguration.TIMELINE_SERVICE_READER_CLASS, + Object.class.getName()); + runTimelineReaderServerWithConfig(conf); + }); } - @Test(timeout = 60000, expected = YarnRuntimeException.class) - public void testTimelineReaderServerWithNonexistentTimelineReader() { - String nonexistentTimelineReaderClass = "org.apache.org.yarn.server." + - "timelineservice.storage.XXXXXXXX"; - Configuration conf = new YarnConfiguration(); - conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); - conf.setFloat(YarnConfiguration.TIMELINE_SERVICE_VERSION, 2.0f); - conf.set(YarnConfiguration.TIMELINE_SERVICE_READER_WEBAPP_ADDRESS, - "localhost:0"); - conf.set(YarnConfiguration.TIMELINE_SERVICE_READER_CLASS, - nonexistentTimelineReaderClass); - runTimelineReaderServerWithConfig(conf); + @Test + @Timeout(60000) + void testTimelineReaderServerWithNonexistentTimelineReader() { + assertThrows(YarnRuntimeException.class, () -> { + String nonexistentTimelineReaderClass = "org.apache.org.yarn.server." + + "timelineservice.storage.XXXXXXXX"; + Configuration conf = new YarnConfiguration(); + conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true); + conf.setFloat(YarnConfiguration.TIMELINE_SERVICE_VERSION, 2.0f); + conf.set(YarnConfiguration.TIMELINE_SERVICE_READER_WEBAPP_ADDRESS, + "localhost:0"); + conf.set(YarnConfiguration.TIMELINE_SERVICE_READER_CLASS, + nonexistentTimelineReaderClass); + runTimelineReaderServerWithConfig(conf); + }); } /** diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderUtils.java index bc5eb9c5af5..feafeda02ce 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderUtils.java @@ -18,38 +18,38 @@ package org.apache.hadoop.yarn.server.timelineservice.reader; -import static org.junit.Assert.assertArrayEquals; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNull; - import java.util.List; -import org.junit.Test; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertArrayEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNull; public class TestTimelineReaderUtils { @Test - public void testSplitUsingEscapeAndDelimChar() throws Exception { + void testSplitUsingEscapeAndDelimChar() throws Exception { List list = TimelineReaderUtils.split("*!cluster!*!b**o***!xer!oozie**", '!', '*'); String[] arr = new String[list.size()]; arr = list.toArray(arr); - assertArrayEquals(new String[] {"!cluster", "!b*o*!xer", "oozie*"}, arr); + assertArrayEquals(new String[]{"!cluster", "!b*o*!xer", "oozie*"}, arr); list = TimelineReaderUtils.split("*!cluster!*!b**o***!xer!!", '!', '*'); arr = new String[list.size()]; arr = list.toArray(arr); - assertArrayEquals(new 
String[] {"!cluster", "!b*o*!xer", "", ""}, arr); + assertArrayEquals(new String[]{"!cluster", "!b*o*!xer", "", ""}, arr); } @Test - public void testJoinAndEscapeStrings() throws Exception { + void testJoinAndEscapeStrings() throws Exception { assertEquals("*!cluster!*!b**o***!xer!oozie**", TimelineReaderUtils.joinAndEscapeStrings( - new String[] {"!cluster", "!b*o*!xer", "oozie*"}, '!', '*')); + new String[]{"!cluster", "!b*o*!xer", "oozie*"}, '!', '*')); assertEquals("*!cluster!*!b**o***!xer!!", TimelineReaderUtils.joinAndEscapeStrings( - new String[] {"!cluster", "!b*o*!xer", "", ""}, '!', '*')); + new String[]{"!cluster", "!b*o*!xer", "", ""}, '!', '*')); assertNull(TimelineReaderUtils.joinAndEscapeStrings( - new String[] {"!cluster", "!b*o*!xer", null, ""}, '!', '*')); + new String[]{"!cluster", "!b*o*!xer", null, ""}, '!', '*')); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java index ef74716b564..e71f3be4383 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java @@ -18,11 +18,6 @@ package org.apache.hadoop.yarn.server.timelineservice.reader; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; - import java.io.File; import java.io.IOException; import java.lang.reflect.UndeclaredThrowableException; @@ -30,9 +25,22 @@ import java.net.HttpURLConnection; import java.net.URI; import java.net.URL; import java.util.Set; - import javax.ws.rs.core.MediaType; +import com.sun.jersey.api.client.Client; +import com.sun.jersey.api.client.ClientResponse; +import com.sun.jersey.api.client.ClientResponse.Status; +import com.sun.jersey.api.client.GenericType; +import com.sun.jersey.api.client.config.ClientConfig; +import com.sun.jersey.api.client.config.DefaultClientConfig; +import com.sun.jersey.client.urlconnection.HttpURLConnectionFactory; +import com.sun.jersey.client.urlconnection.URLConnectionClientHandler; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + import org.apache.commons.io.FileUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.http.JettyUtils; @@ -45,21 +53,12 @@ import org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineR import org.apache.hadoop.yarn.server.timelineservice.storage.TestFileSystemTimelineReaderImpl; import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader; import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider; -import org.junit.After; -import org.junit.AfterClass; -import org.junit.Assert; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; -import com.sun.jersey.api.client.Client; -import 
com.sun.jersey.api.client.ClientResponse; -import com.sun.jersey.api.client.ClientResponse.Status; -import com.sun.jersey.api.client.GenericType; -import com.sun.jersey.api.client.config.ClientConfig; -import com.sun.jersey.api.client.config.DefaultClientConfig; -import com.sun.jersey.client.urlconnection.HttpURLConnectionFactory; -import com.sun.jersey.client.urlconnection.URLConnectionClientHandler; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; public class TestTimelineReaderWebServices { @@ -69,17 +68,17 @@ public class TestTimelineReaderWebServices { private int serverPort; private TimelineReaderServer server; - @BeforeClass + @BeforeAll public static void setup() throws Exception { TestFileSystemTimelineReaderImpl.initializeDataDirectory(ROOT_DIR); } - @AfterClass + @AfterAll public static void tearDown() throws Exception { FileUtils.deleteDirectory(new File(ROOT_DIR)); } - @Before + @BeforeEach public void init() throws Exception { try { Configuration config = new YarnConfiguration(); @@ -97,11 +96,11 @@ public class TestTimelineReaderWebServices { server.start(); serverPort = server.getWebServerPort(); } catch (Exception e) { - Assert.fail("Web server failed to start"); + fail("Web server failed to start"); } } - @After + @AfterEach public void stop() throws Exception { if (server != null) { server.stop(); @@ -165,21 +164,21 @@ public class TestTimelineReaderWebServices { } @Test - public void testAbout() throws Exception { + void testAbout() throws Exception { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/timeline/"); Client client = createClient(); try { ClientResponse resp = getResponse(client, uri); TimelineAbout about = resp.getEntity(TimelineAbout.class); - Assert.assertNotNull(about); - Assert.assertEquals("Timeline Reader API", about.getAbout()); + assertNotNull(about); + assertEquals("Timeline Reader API", about.getAbout()); } finally { client.destroy(); } } @Test - public void testGetEntityDefaultView() throws Exception { + void testGetEntityDefaultView() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -191,7 +190,7 @@ public class TestTimelineReaderWebServices { assertNotNull(entity); assertEquals("id_1", entity.getId()); assertEquals("app", entity.getType()); - assertEquals((Long)1425016502000L, entity.getCreatedTime()); + assertEquals((Long) 1425016502000L, entity.getCreatedTime()); // Default view i.e. when no fields are specified, entity contains only // entity id, entity type and created time. 
assertEquals(0, entity.getConfigs().size()); @@ -202,7 +201,7 @@ public class TestTimelineReaderWebServices { } @Test - public void testGetEntityWithUserAndFlowInfo() throws Exception { + void testGetEntityWithUserAndFlowInfo() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -215,14 +214,14 @@ public class TestTimelineReaderWebServices { assertNotNull(entity); assertEquals("id_1", entity.getId()); assertEquals("app", entity.getType()); - assertEquals((Long)1425016502000L, entity.getCreatedTime()); + assertEquals((Long) 1425016502000L, entity.getCreatedTime()); } finally { client.destroy(); } } @Test - public void testGetEntityCustomFields() throws Exception { + void testGetEntityCustomFields() throws Exception { Client client = createClient(); try { // Fields are case insensitive. @@ -238,8 +237,8 @@ public class TestTimelineReaderWebServices { assertEquals("app", entity.getType()); assertEquals(3, entity.getConfigs().size()); assertEquals(3, entity.getMetrics().size()); - assertTrue("UID should be present", - entity.getInfo().containsKey(TimelineReaderUtils.UID_KEY)); + assertTrue(entity.getInfo().containsKey(TimelineReaderUtils.UID_KEY), + "UID should be present"); // Includes UID. assertEquals(3, entity.getInfo().size()); // No events will be returned as events are not part of fields. @@ -250,7 +249,7 @@ public class TestTimelineReaderWebServices { } @Test - public void testGetEntityAllFields() throws Exception { + void testGetEntityAllFields() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -265,8 +264,8 @@ public class TestTimelineReaderWebServices { assertEquals("app", entity.getType()); assertEquals(3, entity.getConfigs().size()); assertEquals(3, entity.getMetrics().size()); - assertTrue("UID should be present", - entity.getInfo().containsKey(TimelineReaderUtils.UID_KEY)); + assertTrue(entity.getInfo().containsKey(TimelineReaderUtils.UID_KEY), + "UID should be present"); // Includes UID. 
assertEquals(3, entity.getInfo().size()); assertEquals(2, entity.getEvents().size()); @@ -276,7 +275,7 @@ public class TestTimelineReaderWebServices { } @Test - public void testGetEntityNotPresent() throws Exception { + void testGetEntityNotPresent() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -288,7 +287,7 @@ public class TestTimelineReaderWebServices { } @Test - public void testQueryWithoutCluster() throws Exception { + void testQueryWithoutCluster() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -305,7 +304,8 @@ public class TestTimelineReaderWebServices { "timeline/apps/app1/entities/app"); resp = getResponse(client, uri); Set entities = - resp.getEntity(new GenericType>(){}); + resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); @@ -316,52 +316,55 @@ public class TestTimelineReaderWebServices { } @Test - public void testGetEntities() throws Exception { + void testGetEntities() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + "timeline/clusters/cluster1/apps/app1/entities/app"); ClientResponse resp = getResponse(client, uri); Set entities = - resp.getEntity(new GenericType>(){}); + resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(4, entities.size()); - assertTrue("Entities id_1, id_2, id_3 and id_4 should have been" + - " present in response", - entities.contains(newEntity("app", "id_1")) && + assertTrue(entities.contains(newEntity("app", "id_1")) && entities.contains(newEntity("app", "id_2")) && entities.contains(newEntity("app", "id_3")) && - entities.contains(newEntity("app", "id_4"))); + entities.contains(newEntity("app", "id_4")), + "Entities id_1, id_2, id_3 and id_4 should have been" + + " present in response"); } finally { client.destroy(); } } @Test - public void testGetEntitiesWithLimit() throws Exception { + void testGetEntitiesWithLimit() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + "timeline/clusters/cluster1/apps/app1/entities/app?limit=2"); ClientResponse resp = getResponse(client, uri); Set entities = - resp.getEntity(new GenericType>(){}); + resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(2, entities.size()); // Entities returned are based on most recent created time. 
- assertTrue("Entities with id_1 and id_4 should have been present " + - "in response based on entity created time.", - entities.contains(newEntity("app", "id_1")) && - entities.contains(newEntity("app", "id_4"))); + assertTrue(entities.contains(newEntity("app", "id_1")) && + entities.contains(newEntity("app", "id_4")), + "Entities with id_1 and id_4 should have been present " + + "in response based on entity created time."); uri = URI.create("http://localhost:" + serverPort + "/ws/v2/timeline/" + "clusters/cluster1/apps/app1/entities/app?limit=3"); resp = getResponse(client, uri); - entities = resp.getEntity(new GenericType>(){}); + entities = resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); @@ -374,7 +377,7 @@ public class TestTimelineReaderWebServices { } @Test - public void testGetEntitiesBasedOnCreatedTime() throws Exception { + void testGetEntitiesBasedOnCreatedTime() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -382,44 +385,47 @@ public class TestTimelineReaderWebServices { "createdtimestart=1425016502030&createdtimeend=1425016502060"); ClientResponse resp = getResponse(client, uri); Set entities = - resp.getEntity(new GenericType>(){}); + resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(1, entities.size()); - assertTrue("Entity with id_4 should have been present in response.", - entities.contains(newEntity("app", "id_4"))); + assertTrue(entities.contains(newEntity("app", "id_4")), + "Entity with id_4 should have been present in response."); uri = URI.create("http://localhost:" + serverPort + "/ws/v2/timeline/" + "clusters/cluster1/apps/app1/entities/app?createdtimeend" + "=1425016502010"); resp = getResponse(client, uri); - entities = resp.getEntity(new GenericType>(){}); + entities = resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(3, entities.size()); - assertFalse("Entity with id_4 should not have been present in response.", - entities.contains(newEntity("app", "id_4"))); + assertFalse(entities.contains(newEntity("app", "id_4")), + "Entity with id_4 should not have been present in response."); uri = URI.create("http://localhost:" + serverPort + "/ws/v2/timeline/" + "clusters/cluster1/apps/app1/entities/app?createdtimestart=" + "1425016502010"); resp = getResponse(client, uri); - entities = resp.getEntity(new GenericType>(){}); + entities = resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(1, entities.size()); - assertTrue("Entity with id_4 should have been present in response.", - entities.contains(newEntity("app", "id_4"))); + assertTrue(entities.contains(newEntity("app", "id_4")), + "Entity with id_4 should have been present in response."); } finally { client.destroy(); } } @Test - public void testGetEntitiesByRelations() throws Exception { + void testGetEntitiesByRelations() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -427,44 +433,47 @@ public class TestTimelineReaderWebServices { "flow:flow1"); ClientResponse resp = 
getResponse(client, uri); Set entities = - resp.getEntity(new GenericType>(){}); + resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(1, entities.size()); - assertTrue("Entity with id_1 should have been present in response.", - entities.contains(newEntity("app", "id_1"))); + assertTrue(entities.contains(newEntity("app", "id_1")), + "Entity with id_1 should have been present in response."); uri = URI.create("http://localhost:" + serverPort + "/ws/v2/timeline/" + "clusters/cluster1/apps/app1/entities/app?isrelatedto=" + "type1:tid1_2,type2:tid2_1%60"); resp = getResponse(client, uri); - entities = resp.getEntity(new GenericType>(){}); + entities = resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(1, entities.size()); - assertTrue("Entity with id_1 should have been present in response.", - entities.contains(newEntity("app", "id_1"))); + assertTrue(entities.contains(newEntity("app", "id_1")), + "Entity with id_1 should have been present in response."); uri = URI.create("http://localhost:" + serverPort + "/ws/v2/timeline/" + "clusters/cluster1/apps/app1/entities/app?isrelatedto=" + "type1:tid1_1:tid1_2,type2:tid2_1%60"); resp = getResponse(client, uri); - entities = resp.getEntity(new GenericType>(){}); + entities = resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(1, entities.size()); - assertTrue("Entity with id_1 should have been present in response.", - entities.contains(newEntity("app", "id_1"))); + assertTrue(entities.contains(newEntity("app", "id_1")), + "Entity with id_1 should have been present in response."); } finally { client.destroy(); } } @Test - public void testGetEntitiesByConfigFilters() throws Exception { + void testGetEntitiesByConfigFilters() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -472,20 +481,21 @@ public class TestTimelineReaderWebServices { "conffilters=config_1%20eq%20123%20AND%20config_3%20eq%20abc"); ClientResponse resp = getResponse(client, uri); Set entities = - resp.getEntity(new GenericType>(){}); + resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(1, entities.size()); - assertTrue("Entity with id_3 should have been present in response.", - entities.contains(newEntity("app", "id_3"))); + assertTrue(entities.contains(newEntity("app", "id_3")), + "Entity with id_3 should have been present in response."); } finally { client.destroy(); } } @Test - public void testGetEntitiesByInfoFilters() throws Exception { + void testGetEntitiesByInfoFilters() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -493,20 +503,21 @@ public class TestTimelineReaderWebServices { "infofilters=info2%20eq%203.5"); ClientResponse resp = getResponse(client, uri); Set entities = - resp.getEntity(new GenericType>(){}); + resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(1, entities.size()); - assertTrue("Entity with id_3 
should have been present in response.", - entities.contains(newEntity("app", "id_3"))); + assertTrue(entities.contains(newEntity("app", "id_3")), + "Entity with id_3 should have been present in response."); } finally { client.destroy(); } } @Test - public void testGetEntitiesByMetricFilters() throws Exception { + void testGetEntitiesByMetricFilters() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -514,22 +525,23 @@ public class TestTimelineReaderWebServices { "metricfilters=metric3%20ge%200"); ClientResponse resp = getResponse(client, uri); Set entities = - resp.getEntity(new GenericType>(){}); + resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(2, entities.size()); - assertTrue("Entities with id_1 and id_2 should have been present" + - " in response.", - entities.contains(newEntity("app", "id_1")) && - entities.contains(newEntity("app", "id_2"))); + assertTrue(entities.contains(newEntity("app", "id_1")) && + entities.contains(newEntity("app", "id_2")), + "Entities with id_1 and id_2 should have been present" + + " in response."); } finally { client.destroy(); } } @Test - public void testGetEntitiesByEventFilters() throws Exception { + void testGetEntitiesByEventFilters() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -537,31 +549,33 @@ public class TestTimelineReaderWebServices { "eventfilters=event_2,event_4"); ClientResponse resp = getResponse(client, uri); Set entities = - resp.getEntity(new GenericType>(){}); + resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); assertEquals(1, entities.size()); - assertTrue("Entity with id_3 should have been present in response.", - entities.contains(newEntity("app", "id_3"))); + assertTrue(entities.contains(newEntity("app", "id_3")), + "Entity with id_3 should have been present in response."); } finally { client.destroy(); } } @Test - public void testGetEntitiesNoMatch() throws Exception { + void testGetEntitiesNoMatch() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + "timeline/clusters/cluster1/apps/app1/entities/app?" 
+ - "metricfilters=metric7%20ge%200&isrelatedto=type1:tid1_1:tid1_2,"+ + "metricfilters=metric7%20ge%200&isrelatedto=type1:tid1_1:tid1_2," + "type2:tid2_1%60&relatesto=flow:flow1&eventfilters=event_2,event_4" + "&infofilters=info2%20eq%203.5&createdtimestart=1425016502030&" + "createdtimeend=1425016502060"); ClientResponse resp = getResponse(client, uri); Set entities = - resp.getEntity(new GenericType>(){}); + resp.getEntity(new GenericType>(){ + }); assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, resp.getType().toString()); assertNotNull(entities); @@ -572,7 +586,7 @@ public class TestTimelineReaderWebServices { } @Test - public void testInvalidValuesHandling() throws Exception { + void testInvalidValuesHandling() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + @@ -592,7 +606,7 @@ public class TestTimelineReaderWebServices { } @Test - public void testGetAppAttempts() throws Exception { + void testGetAppAttempts() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" @@ -608,15 +622,15 @@ public class TestTimelineReaderWebServices { int totalEntities = entities.size(); assertEquals(2, totalEntities); assertTrue( - "Entity with app-attempt-2 should have been present in response.", entities.contains( newEntity(TimelineEntityType.YARN_APPLICATION_ATTEMPT.toString(), - "app-attempt-1"))); + "app-attempt-1")), + "Entity with app-attempt-2 should have been present in response."); assertTrue( - "Entity with app-attempt-2 should have been present in response.", entities.contains( newEntity(TimelineEntityType.YARN_APPLICATION_ATTEMPT.toString(), - "app-attempt-2"))); + "app-attempt-2")), + "Entity with app-attempt-2 should have been present in response."); uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + "timeline/clusters/cluster1/apps/app1/appattempts"); @@ -628,15 +642,15 @@ public class TestTimelineReaderWebServices { int retrievedEntity = entities.size(); assertEquals(2, retrievedEntity); assertTrue( - "Entity with app-attempt-2 should have been present in response.", entities.contains( newEntity(TimelineEntityType.YARN_APPLICATION_ATTEMPT.toString(), - "app-attempt-1"))); + "app-attempt-1")), + "Entity with app-attempt-2 should have been present in response."); assertTrue( - "Entity with app-attempt-2 should have been present in response.", entities.contains( newEntity(TimelineEntityType.YARN_APPLICATION_ATTEMPT.toString(), - "app-attempt-2"))); + "app-attempt-2")), + "Entity with app-attempt-2 should have been present in response."); assertEquals(totalEntities, retrievedEntity); @@ -646,7 +660,7 @@ public class TestTimelineReaderWebServices { } @Test - public void testGetAppAttempt() throws Exception { + void testGetAppAttempt() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" @@ -677,7 +691,7 @@ public class TestTimelineReaderWebServices { } @Test - public void testGetContainers() throws Exception { + void testGetContainers() throws Exception { Client client = createClient(); try { // total 3 containers in a application. 
@@ -693,17 +707,17 @@ public class TestTimelineReaderWebServices { int totalEntities = entities.size(); assertEquals(3, totalEntities); assertTrue( - "Entity with container_1_1 should have been present in response.", entities.contains(newEntity( - TimelineEntityType.YARN_CONTAINER.toString(), "container_1_1"))); + TimelineEntityType.YARN_CONTAINER.toString(), "container_1_1")), + "Entity with container_1_1 should have been present in response."); assertTrue( - "Entity with container_2_1 should have been present in response.", entities.contains(newEntity( - TimelineEntityType.YARN_CONTAINER.toString(), "container_2_1"))); + TimelineEntityType.YARN_CONTAINER.toString(), "container_2_1")), + "Entity with container_2_1 should have been present in response."); assertTrue( - "Entity with container_2_2 should have been present in response.", entities.contains(newEntity( - TimelineEntityType.YARN_CONTAINER.toString(), "container_2_2"))); + TimelineEntityType.YARN_CONTAINER.toString(), "container_2_2")), + "Entity with container_2_2 should have been present in response."); // for app-attempt1 1 container has run uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" @@ -717,9 +731,9 @@ public class TestTimelineReaderWebServices { int retrievedEntity = entities.size(); assertEquals(1, retrievedEntity); assertTrue( - "Entity with container_1_1 should have been present in response.", entities.contains(newEntity( - TimelineEntityType.YARN_CONTAINER.toString(), "container_1_1"))); + TimelineEntityType.YARN_CONTAINER.toString(), "container_1_1")), + "Entity with container_1_1 should have been present in response."); // for app-attempt2 2 containers has run uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" @@ -733,13 +747,13 @@ public class TestTimelineReaderWebServices { retrievedEntity += entities.size(); assertEquals(2, entities.size()); assertTrue( - "Entity with container_2_1 should have been present in response.", entities.contains(newEntity( - TimelineEntityType.YARN_CONTAINER.toString(), "container_2_1"))); + TimelineEntityType.YARN_CONTAINER.toString(), "container_2_1")), + "Entity with container_2_1 should have been present in response."); assertTrue( - "Entity with container_2_2 should have been present in response.", entities.contains(newEntity( - TimelineEntityType.YARN_CONTAINER.toString(), "container_2_2"))); + TimelineEntityType.YARN_CONTAINER.toString(), "container_2_2")), + "Entity with container_2_2 should have been present in response."); assertEquals(totalEntities, retrievedEntity); @@ -749,7 +763,7 @@ public class TestTimelineReaderWebServices { } @Test - public void testGetContainer() throws Exception { + void testGetContainer() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" @@ -780,11 +794,11 @@ public class TestTimelineReaderWebServices { } @Test - public void testHealthCheck() throws Exception { + void testHealthCheck() throws Exception { Client client = createClient(); try { URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" - + "timeline/health"); + + "timeline/health"); ClientResponse resp = getResponse(client, uri); TimelineHealth timelineHealth = resp.getEntity(new GenericType() { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesACL.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesACL.java index fbd042bd015..316c1949ad1 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesACL.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesACL.java @@ -18,33 +18,14 @@ package org.apache.hadoop.yarn.server.timelineservice.reader; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; - import java.io.File; import java.io.IOException; import java.lang.reflect.UndeclaredThrowableException; import java.net.HttpURLConnection; import java.net.URI; import java.net.URL; - import javax.ws.rs.core.MediaType; -import org.apache.commons.io.FileUtils; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineReaderImpl; -import org.apache.hadoop.yarn.server.timelineservice.storage.TestFileSystemTimelineReaderImpl; -import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader; -import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider; -import org.junit.After; -import org.junit.AfterClass; -import org.junit.Assert; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - import com.sun.jersey.api.client.Client; import com.sun.jersey.api.client.ClientResponse; import com.sun.jersey.api.client.ClientResponse.Status; @@ -52,6 +33,24 @@ import com.sun.jersey.api.client.config.ClientConfig; import com.sun.jersey.api.client.config.DefaultClientConfig; import com.sun.jersey.client.urlconnection.HttpURLConnectionFactory; import com.sun.jersey.client.urlconnection.URLConnectionClientHandler; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import org.apache.commons.io.FileUtils; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.yarn.conf.YarnConfiguration; +import org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineReaderImpl; +import org.apache.hadoop.yarn.server.timelineservice.storage.TestFileSystemTimelineReaderImpl; +import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader; +import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; /** * Tests ACL check while retrieving entity-types per application. 
@@ -66,17 +65,17 @@ public class TestTimelineReaderWebServicesACL { private TimelineReaderServer server; private static final String ADMIN = "yarn"; - @BeforeClass + @BeforeAll public static void setup() throws Exception { TestFileSystemTimelineReaderImpl.initializeDataDirectory(ROOT_DIR); } - @AfterClass + @AfterAll public static void tearDown() throws Exception { FileUtils.deleteDirectory(new File(ROOT_DIR)); } - @Before + @BeforeEach public void init() throws Exception { try { Configuration config = new YarnConfiguration(); @@ -97,11 +96,11 @@ public class TestTimelineReaderWebServicesACL { server.start(); serverPort = server.getWebServerPort(); } catch (Exception e) { - Assert.fail("Web server failed to start"); + fail("Web server failed to start"); } } - @After + @AfterEach public void stop() throws Exception { if (server != null) { server.stop(); @@ -141,35 +140,35 @@ public class TestTimelineReaderWebServicesACL { } @Test - public void testGetEntityTypes() throws Exception { + void testGetEntityTypes() throws Exception { Client client = createClient(); try { - String unAuthorizedUser ="user2"; + String unAuthorizedUser = "user2"; URI uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + - "timeline/apps/app1/entity-types?user.name="+unAuthorizedUser); + "timeline/apps/app1/entity-types?user.name=" + unAuthorizedUser); String msg = "User " + unAuthorizedUser + " is not allowed to read TimelineService V2 data."; ClientResponse resp = verifyHttpResponse(client, uri, Status.FORBIDDEN); assertTrue(resp.getEntity(String.class).contains(msg)); - String authorizedUser ="user1"; + String authorizedUser = "user1"; uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + - "timeline/apps/app1/entity-types?user.name="+authorizedUser); + "timeline/apps/app1/entity-types?user.name=" + authorizedUser); verifyHttpResponse(client, uri, Status.OK); uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + - "timeline/apps/app1/entity-types?user.name="+ADMIN); + "timeline/apps/app1/entity-types?user.name=" + ADMIN); verifyHttpResponse(client, uri, Status.OK); // Verify with Query Parameter userid uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + - "timeline/apps/app1/entity-types?user.name="+authorizedUser - + "&userid="+authorizedUser); + "timeline/apps/app1/entity-types?user.name=" + authorizedUser + + "&userid=" + authorizedUser); verifyHttpResponse(client, uri, Status.OK); uri = URI.create("http://localhost:" + serverPort + "/ws/v2/" + - "timeline/apps/app1/entity-types?user.name="+authorizedUser - + "&userid="+unAuthorizedUser); + "timeline/apps/app1/entity-types?user.name=" + authorizedUser + + "&userid=" + unAuthorizedUser); verifyHttpResponse(client, uri, Status.FORBIDDEN); } finally { client.destroy(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesBasicAcl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesBasicAcl.java index 6ad44272a89..fffa1b50f8a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesBasicAcl.java +++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesBasicAcl.java @@ -18,18 +18,23 @@ package org.apache.hadoop.yarn.server.timelineservice.reader; +import java.util.LinkedHashSet; +import java.util.Set; + +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.webapp.ForbiddenException; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; -import java.util.LinkedHashSet; -import java.util.Set; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; public class TestTimelineReaderWebServicesBasicAcl { @@ -39,11 +44,11 @@ public class TestTimelineReaderWebServicesBasicAcl { UserGroupInformation.createRemoteUser(adminUser); private Configuration config; - @Before public void setUp() throws Exception { + @BeforeEach public void setUp() throws Exception { config = new YarnConfiguration(); } - @After public void tearDown() throws Exception { + @AfterEach public void tearDown() throws Exception { if (manager != null) { manager.stop(); manager = null; @@ -51,7 +56,8 @@ public class TestTimelineReaderWebServicesBasicAcl { config = null; } - @Test public void testTimelineReaderManagerAclsWhenDisabled() + @Test + void testTimelineReaderManagerAclsWhenDisabled() throws Exception { config.setBoolean(YarnConfiguration.YARN_ACL_ENABLE, false); config.set(YarnConfiguration.YARN_ADMIN_ACL, adminUser); @@ -60,14 +66,15 @@ public class TestTimelineReaderWebServicesBasicAcl { manager.start(); // when acls are disabled, always return true - Assert.assertTrue(manager.checkAccess(null)); + assertTrue(manager.checkAccess(null)); // filter is disabled, so should return false - Assert.assertFalse( + assertFalse( TimelineReaderWebServices.isDisplayEntityPerUserFilterEnabled(config)); } - @Test public void testTimelineReaderManagerAclsWhenEnabled() + @Test + void testTimelineReaderManagerAclsWhenEnabled() throws Exception { Configuration config = new YarnConfiguration(); config.setBoolean(YarnConfiguration.YARN_ACL_ENABLE, true); @@ -85,30 +92,30 @@ public class TestTimelineReaderWebServicesBasicAcl { UserGroupInformation.createRemoteUser(user2); // false because ugi is null - Assert.assertFalse(TimelineReaderWebServices + assertFalse(TimelineReaderWebServices .validateAuthUserWithEntityUser(manager, null, user1)); // false because ugi is null in non-secure cluster. User must pass // ?user.name as query params in REST end points. 
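// A minimal sketch of the JUnit 4 -> JUnit 5 pattern applied in the hunks above
// (lifecycle annotations renamed, test methods dropping the public modifier, and the
// optional failure message moving to the last assertion argument). The class and method
// names below are hypothetical and are not taken from this patch.

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import java.util.Arrays;
import java.util.List;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class ExampleMigratedTest {

  private List<String> items;

  @BeforeEach                 // JUnit 4: @Before (class-level fixtures use @BeforeAll/@AfterAll)
  void init() {
    items = Arrays.asList("app-attempt-1", "app-attempt-2");
  }

  @Test                       // JUnit 4: public void testContainsExpectedItem()
  void containsExpectedItem() {
    assertEquals(2, items.size());
    // JUnit 5 places the failure message last: assertTrue(message, condition)
    // becomes assertTrue(condition, message).
    assertTrue(items.contains("app-attempt-1"),
        "Entity with app-attempt-1 should have been present in response.");
  }
}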
try { TimelineReaderWebServices.checkAccess(manager, null, user1); - Assert.fail("user1Ugi is not allowed to view user1"); + fail("user1Ugi is not allowed to view user1"); } catch (ForbiddenException e) { // expected } // incoming ugi is admin asking for entity owner user1 - Assert.assertTrue( + assertTrue( TimelineReaderWebServices.checkAccess(manager, adminUgi, user1)); // incoming ugi is admin asking for entity owner user1 - Assert.assertTrue( + assertTrue( TimelineReaderWebServices.checkAccess(manager, adminUgi, user2)); // incoming ugi is non-admin i.e user1Ugi asking for entity owner user2 try { TimelineReaderWebServices.checkAccess(manager, user1Ugi, user2); - Assert.fail("user1Ugi is not allowed to view user2"); + fail("user1Ugi is not allowed to view user2"); } catch (ForbiddenException e) { // expected } @@ -116,7 +123,7 @@ public class TestTimelineReaderWebServicesBasicAcl { // incoming ugi is non-admin i.e user2Ugi asking for entity owner user1 try { TimelineReaderWebServices.checkAccess(manager, user1Ugi, user2); - Assert.fail("user2Ugi is not allowed to view user1"); + fail("user2Ugi is not allowed to view user1"); } catch (ForbiddenException e) { // expected } @@ -127,25 +134,23 @@ public class TestTimelineReaderWebServicesBasicAcl { TimelineReaderWebServices .checkAccess(manager, adminUgi, entities, userKey, true); // admin is allowed to view other entities - Assert.assertTrue(entities.size() == 10); + assertEquals(10, entities.size()); // incoming ugi is user1Ugi asking for entities // only user1 entities are allowed to view entities = createEntities(5, userKey); TimelineReaderWebServices .checkAccess(manager, user1Ugi, entities, userKey, true); - Assert.assertTrue(entities.size() == 1); - Assert - .assertEquals(user1, entities.iterator().next().getInfo().get(userKey)); + assertEquals(1, entities.size()); + assertEquals(user1, entities.iterator().next().getInfo().get(userKey)); // incoming ugi is user2Ugi asking for entities // only user2 entities are allowed to view entities = createEntities(8, userKey); TimelineReaderWebServices .checkAccess(manager, user2Ugi, entities, userKey, true); - Assert.assertTrue(entities.size() == 1); - Assert - .assertEquals(user2, entities.iterator().next().getInfo().get(userKey)); + assertEquals(1, entities.size()); + assertEquals(user2, entities.iterator().next().getInfo().get(userKey)); } Set createEntities(int noOfUsers, String userKey) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesUtils.java index 882189c6820..80f462c2fe9 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesUtils.java @@ -18,10 +18,7 @@ package org.apache.hadoop.yarn.server.timelineservice.reader; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import 
org.junit.jupiter.api.Test; import org.apache.hadoop.util.Sets; import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareFilter; @@ -32,20 +29,19 @@ import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilte import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValueFilter; import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValuesFilter; import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelinePrefixFilter; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.fail; public class TestTimelineReaderWebServicesUtils { private static void verifyFilterList(String expr, TimelineFilterList list, - TimelineFilterList expectedList) throws Exception { - assertNotNull(list); - assertTrue("Unexpected List received after parsing expression " + expr + - ". Expected=" + expectedList + " but Actual=" + list, - list.equals(expectedList)); + TimelineFilterList expectedList) { + assertEquals(expectedList, list); } @Test - public void testMetricFiltersParsing() throws Exception { + void testMetricFiltersParsing() throws Exception { String expr = "(((key11 ne 234 AND key12 gt 23) AND " + "(key13 lt 34 OR key14 ge 567)) OR (key21 lt 24 OR key22 le 45))"; TimelineFilterList expectedList = new TimelineFilterList( @@ -168,7 +164,7 @@ public class TestTimelineReaderWebServicesUtils { TimelineReaderWebServicesUtils.parseMetricFilters(expr), expectedList); // Test with unnecessary spaces. - expr = " abc ne 234 AND def gt 23 OR rst lt "+ + expr = " abc ne 234 AND def gt 23 OR rst lt " + " 24 OR xyz le 456 AND pqr ge 2 "; expectedList = new TimelineFilterList( new TimelineFilterList( @@ -283,7 +279,8 @@ public class TestTimelineReaderWebServicesUtils { try { TimelineReaderWebServicesUtils.parseMetricFilters(expr); fail("Improper brackers. Exception should have been thrown."); - } catch (TimelineParseException e) {} + } catch (TimelineParseException e) { + } expr = "(((key11 ne 234 AND key12 gt v3 OR key13 lt 24 OR key14 le 456 " + "AND key15 ge 2) AND (key16 lt 34 OR key17 ge 567)) OR (key21 lt 24 " + @@ -291,7 +288,8 @@ public class TestTimelineReaderWebServicesUtils { try { TimelineReaderWebServicesUtils.parseMetricFilters(expr); fail("Non Numeric value. Exception should have been thrown."); - } catch (TimelineParseException e) {} + } catch (TimelineParseException e) { + } expr = "(((key11 ne (234 AND key12 gt 3 OR key13 lt 24 OR key14 le 456 " + "AND key15 ge 2) AND (key16 lt 34 OR key17 ge 567)) OR (key21 lt 24 " + @@ -299,7 +297,8 @@ public class TestTimelineReaderWebServicesUtils { try { TimelineReaderWebServicesUtils.parseMetricFilters(expr); fail("Unexpected opening bracket. Exception should have been thrown."); - } catch (TimelineParseException e) {} + } catch (TimelineParseException e) { + } expr = "(((k)ey11 ne 234 AND key12 gt 3 OR key13 lt 24 OR key14 le 456 " + "AND key15 ge 2) AND (key16 lt 34 OR key17 ge 567)) OR (key21 lt 24 " + @@ -307,7 +306,8 @@ public class TestTimelineReaderWebServicesUtils { try { TimelineReaderWebServicesUtils.parseMetricFilters(expr); fail("Unexpected closing bracket. 
Exception should have been thrown."); - } catch (TimelineParseException e) {} + } catch (TimelineParseException e) { + } expr = "(((key11 rs 234 AND key12 gt 3 OR key13 lt 24 OR key14 le 456 " + "AND key15 ge 2) AND (key16 lt 34 OR key17 ge 567)) OR (key21 lt 24 " + @@ -315,7 +315,8 @@ public class TestTimelineReaderWebServicesUtils { try { TimelineReaderWebServicesUtils.parseMetricFilters(expr); fail("Improper compare op. Exception should have been thrown."); - } catch (TimelineParseException e) {} + } catch (TimelineParseException e) { + } expr = "(((key11 ne 234 PI key12 gt 3 OR key13 lt 24 OR key14 le 456 " + "AND key15 ge 2) AND (key16 lt 34 OR key17 ge 567)) OR (key21 lt 24 " + @@ -323,7 +324,8 @@ public class TestTimelineReaderWebServicesUtils { try { TimelineReaderWebServicesUtils.parseMetricFilters(expr); fail("Improper op. Exception should have been thrown."); - } catch (TimelineParseException e) {} + } catch (TimelineParseException e) { + } expr = "(((key11 ne 234 PI key12 gt 3 OR key13 lt 24 OR key14 le 456 " + "AND key15 ge 2) AND (key16 lt 34 OR key17 ge 567)) OR (key21 lt 24 " + @@ -331,32 +333,36 @@ public class TestTimelineReaderWebServicesUtils { try { TimelineReaderWebServicesUtils.parseMetricFilters(expr); fail("Improper op. Exception should have been thrown."); - } catch (TimelineParseException e) {} + } catch (TimelineParseException e) { + } expr = "(key11 ne 234 AND key12 gt 3)) OR (key13 lt 24 OR key14 le 456)"; try { TimelineReaderWebServicesUtils.parseMetricFilters(expr); fail("Unbalanced brackets. Exception should have been thrown."); - } catch (TimelineParseException e) {} + } catch (TimelineParseException e) { + } expr = "(key11 rne 234 AND key12 gt 3) OR (key13 lt 24 OR key14 le 456)"; try { TimelineReaderWebServicesUtils.parseMetricFilters(expr); fail("Invalid compareop. Exception should have been thrown."); - } catch (TimelineParseException e) {} + } catch (TimelineParseException e) { + } expr = "(key11 ne 234 AND key12 gt 3) OR (key13 lt 24 OR key14 le"; try { TimelineReaderWebServicesUtils.parseMetricFilters(expr); fail("Compareop cant be parsed. Exception should have been thrown."); - } catch (TimelineParseException e) {} + } catch (TimelineParseException e) { + } assertNull(TimelineReaderWebServicesUtils.parseMetricFilters(null)); assertNull(TimelineReaderWebServicesUtils.parseMetricFilters(" ")); } @Test - public void testConfigFiltersParsing() throws Exception { + void testConfigFiltersParsing() throws Exception { String expr = "(((key11 ne 234 AND key12 eq val12) AND " + "(key13 ene val13 OR key14 eq 567)) OR (key21 eq val_21 OR key22 eq " + "val.22))"; @@ -412,7 +418,7 @@ public class TestTimelineReaderWebServicesUtils { parseKVFilters(expr, true), expectedList); // Test with unnecessary spaces. - expr = " abc ne 234 AND def eq 23 OR rst ene "+ + expr = " abc ne 234 AND def eq 23 OR rst ene " + " 24 OR xyz eq 456 AND pqr eq 2 "; expectedList = new TimelineFilterList( new TimelineFilterList( @@ -439,10 +445,12 @@ public class TestTimelineReaderWebServicesUtils { TimelineReaderWebServicesUtils.parseKVFilters(expr, true); fail("Invalid compareop specified for config filters. 
Should be either" + " eq,ne or ene and exception should have been thrown."); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } } + @Test - public void testInfoFiltersParsing() throws Exception { + void testInfoFiltersParsing() throws Exception { String expr = "(((key11 ne 234 AND key12 eq val12) AND " + "(key13 ene val13 OR key14 eq 567)) OR (key21 eq val_21 OR key22 eq " + "5.0))"; @@ -499,7 +507,7 @@ public class TestTimelineReaderWebServicesUtils { parseKVFilters(expr, false), expectedList); // Test with unnecessary spaces. - expr = " abc ne 234 AND def eq 23 OR rst ene "+ + expr = " abc ne 234 AND def eq 23 OR rst ene " + " 24 OR xyz eq 456 AND pqr eq 2 "; expectedList = new TimelineFilterList( new TimelineFilterList( @@ -524,7 +532,7 @@ public class TestTimelineReaderWebServicesUtils { expr = "abdeq"; try { TimelineReaderWebServicesUtils.parseKVFilters(expr, false); - Assert.fail("Expression valuation should throw exception."); + fail("Expression valuation should throw exception."); } catch (TimelineParseException e) { // expected: do nothing } @@ -532,7 +540,7 @@ public class TestTimelineReaderWebServicesUtils { expr = "abc gt 234 AND defeq"; try { TimelineReaderWebServicesUtils.parseKVFilters(expr, false); - Assert.fail("Expression valuation should throw exception."); + fail("Expression valuation should throw exception."); } catch (TimelineParseException e) { // expected: do nothing } @@ -540,14 +548,14 @@ public class TestTimelineReaderWebServicesUtils { expr = "((key11 ne 234 AND key12 eq val12) AND (key13eq OR key14 eq va14))"; try { TimelineReaderWebServicesUtils.parseKVFilters(expr, false); - Assert.fail("Expression valuation should throw exception."); + fail("Expression valuation should throw exception."); } catch (TimelineParseException e) { // expected: do nothing } } @Test - public void testEventFiltersParsing() throws Exception { + void testEventFiltersParsing() throws Exception { String expr = "abc,def"; TimelineFilterList expectedList = new TimelineFilterList( new TimelineExistsFilter(TimelineCompareOp.EQUAL, "abc"), @@ -641,85 +649,96 @@ public class TestTimelineReaderWebServicesUtils { try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("Improper brackets. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "(((!(abc,def,uvc) (OR (rst, uvx)) AND (!(abcdefg) OR !(ghj,tyu)))" + " OR ((bcd,tyu) AND uvb))"; try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("Unexpected opening bracket. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "(((!(abc,def,uvc) OR) (rst, uvx)) AND (!(abcdefg) OR !(ghj,tyu)))" + " OR ((bcd,tyu) AND uvb))"; try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("Unexpected closing bracket. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "(((!(abc,def,uvc) PI (rst, uvx)) AND (!(abcdefg) OR !(ghj,tyu)))" + " OR ((bcd,tyu) AND uvb))"; try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("Invalid op. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "(((!(abc,def,uvc) !OR (rst, uvx)) AND (!(abcdefg) OR !(ghj,tyu)))" + " OR ((bcd,tyu) AND uvb))"; try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("Unexpected ! char. 
Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "abc,def,uvc) OR (rst, uvx)"; try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("Unexpected closing bracket. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "abc,def,uvc OR )rst, uvx)"; try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("Unexpected closing bracket. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "abc,def,uvc OR ,rst, uvx)"; try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("Unexpected delimiter. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "abc,def,uvc OR ! "; try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("Unexpected not char. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "(abc,def,uvc)) OR (rst, uvx)"; try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("Unbalanced brackets. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "(((! ,(abc,def,uvc) OR (rst, uvx)) AND (!(abcdefg) OR !(ghj,tyu" + "))) OR ((bcd,tyu) AND uvb))"; try { TimelineReaderWebServicesUtils.parseEventFilters(expr); fail("( should follow ! char. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } assertNull(TimelineReaderWebServicesUtils.parseEventFilters(null)); assertNull(TimelineReaderWebServicesUtils.parseEventFilters(" ")); } @Test - public void testRelationFiltersParsing() throws Exception { + void testRelationFiltersParsing() throws Exception { String expr = "type1:entity11,type2:entity21:entity22"; TimelineFilterList expectedList = new TimelineFilterList( new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, - "type1", Sets.newHashSet((Object)"entity11")), + "type1", Sets.newHashSet((Object) "entity11")), new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, - "type2", Sets.newHashSet((Object)"entity21", "entity22")) + "type2", Sets.newHashSet((Object) "entity21", "entity22")) ); verifyFilterList(expr, TimelineReaderWebServicesUtils. parseRelationFilters(expr), expectedList); @@ -733,16 +752,16 @@ public class TestTimelineReaderWebServicesUtils { expectedList = new TimelineFilterList(Operator.OR, new TimelineFilterList( new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, - "type1", Sets.newHashSet((Object)"entity11")), + "type1", Sets.newHashSet((Object) "entity11")), new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, - "type2", Sets.newHashSet((Object)"entity21", "entity22")) + "type2", Sets.newHashSet((Object) "entity21", "entity22")) ), new TimelineFilterList( new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, "type3", Sets.newHashSet( - (Object)"entity31", "entity32", "entity33")), + (Object) "entity31", "entity32", "entity33")), new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, - "type1", Sets.newHashSet((Object)"entity11", "entity12")) + "type1", Sets.newHashSet((Object) "entity11", "entity12")) ) ); verifyFilterList(expr, TimelineReaderWebServicesUtils. 
@@ -754,25 +773,25 @@ public class TestTimelineReaderWebServicesUtils { expectedList = new TimelineFilterList(Operator.OR, new TimelineFilterList( new TimelineKeyValuesFilter(TimelineCompareOp.NOT_EQUAL, - "type1", Sets.newHashSet((Object)"entity11")), + "type1", Sets.newHashSet((Object) "entity11")), new TimelineKeyValuesFilter(TimelineCompareOp.NOT_EQUAL, - "type2", Sets.newHashSet((Object)"entity21", "entity22")), + "type2", Sets.newHashSet((Object) "entity21", "entity22")), new TimelineKeyValuesFilter(TimelineCompareOp.NOT_EQUAL, - "type5", Sets.newHashSet((Object)"entity51")) + "type5", Sets.newHashSet((Object) "entity51")) ), new TimelineFilterList( new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, "type3", Sets.newHashSet( - (Object)"entity31", "entity32", "entity33")), + (Object) "entity31", "entity32", "entity33")), new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, - "type1", Sets.newHashSet((Object)"entity11", "entity12")) + "type1", Sets.newHashSet((Object) "entity11", "entity12")) ) ); verifyFilterList(expr, TimelineReaderWebServicesUtils. parseRelationFilters(expr), expectedList); expr = "(((!(type1:entity11,type2:entity21:entity22,type5:entity51) OR " + - "(type3:entity31:entity32:entity33,type1:entity11:entity12)) AND "+ + "(type3:entity31:entity32:entity33,type1:entity11:entity12)) AND " + "(!(type11:entity111) OR !(type4:entity43:entity44:entity47:entity49," + "type7:entity71))) OR ((type2:entity2,type8:entity88) AND t9:e:e1))"; expectedList = new TimelineFilterList(Operator.OR, @@ -780,45 +799,45 @@ public class TestTimelineReaderWebServicesUtils { new TimelineFilterList(Operator.OR, new TimelineFilterList( new TimelineKeyValuesFilter(TimelineCompareOp.NOT_EQUAL, - "type1", Sets.newHashSet((Object)"entity11")), + "type1", Sets.newHashSet((Object) "entity11")), new TimelineKeyValuesFilter(TimelineCompareOp.NOT_EQUAL, "type2", Sets.newHashSet( - (Object)"entity21", "entity22")), + (Object) "entity21", "entity22")), new TimelineKeyValuesFilter(TimelineCompareOp.NOT_EQUAL, - "type5", Sets.newHashSet((Object)"entity51")) + "type5", Sets.newHashSet((Object) "entity51")) ), new TimelineFilterList( new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, "type3", Sets.newHashSet( - (Object)"entity31", "entity32", "entity33")), + (Object) "entity31", "entity32", "entity33")), new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, "type1", Sets.newHashSet( - (Object)"entity11", "entity12")) + (Object) "entity11", "entity12")) ) ), new TimelineFilterList(Operator.OR, new TimelineFilterList( new TimelineKeyValuesFilter(TimelineCompareOp.NOT_EQUAL, - "type11", Sets.newHashSet((Object)"entity111")) + "type11", Sets.newHashSet((Object) "entity111")) ), new TimelineFilterList( new TimelineKeyValuesFilter(TimelineCompareOp.NOT_EQUAL, - "type4", Sets.newHashSet((Object)"entity43", "entity44", + "type4", Sets.newHashSet((Object) "entity43", "entity44", "entity47", "entity49")), new TimelineKeyValuesFilter(TimelineCompareOp.NOT_EQUAL, - "type7", Sets.newHashSet((Object)"entity71")) + "type7", Sets.newHashSet((Object) "entity71")) ) ) ), new TimelineFilterList( new TimelineFilterList( new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, - "type2", Sets.newHashSet((Object)"entity2")), + "type2", Sets.newHashSet((Object) "entity2")), new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, - "type8", Sets.newHashSet((Object)"entity88")) + "type8", Sets.newHashSet((Object) "entity88")) ), new TimelineKeyValuesFilter(TimelineCompareOp.EQUAL, "t9", - Sets.newHashSet((Object)"e", "e1")) + 
Sets.newHashSet((Object) "e", "e1")) ) ); verifyFilterList(expr, TimelineReaderWebServicesUtils. @@ -834,18 +853,19 @@ public class TestTimelineReaderWebServicesUtils { parseRelationFilters(expr), expectedList); expr = "(((!(type1 : entity11,type2:entity21:entity22,type5:entity51) OR " + - "(type3:entity31:entity32:entity33,type1:entity11:entity12)) AND "+ + "(type3:entity31:entity32:entity33,type1:entity11:entity12)) AND " + "(!(type11:entity111) OR !(type4:entity43:entity44:entity47:entity49," + "type7:entity71))) OR ((type2:entity2,type8:entity88) AND t9:e:e1))"; try { TimelineReaderWebServicesUtils.parseRelationFilters(expr); fail("Space not allowed in relation expression. Exception should have " + "been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } } @Test - public void testDataToRetrieve() throws Exception { + void testDataToRetrieve() throws Exception { String expr = "abc,def"; TimelineFilterList expectedList = new TimelineFilterList(Operator.OR, new TimelinePrefixFilter(TimelineCompareOp.EQUAL, "abc"), @@ -913,28 +933,32 @@ public class TestTimelineReaderWebServicesUtils { try { TimelineReaderWebServicesUtils.parseDataToRetrieve(expr); fail("No closing bracket. Exception should have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "!abc,def,xyz"; try { TimelineReaderWebServicesUtils.parseDataToRetrieve(expr); fail("NOT(!) should be followed by opening bracket. Exception should " + "have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "!abc,def,xyz"; try { TimelineReaderWebServicesUtils.parseDataToRetrieve(expr); fail("NOT(!) should be followed by opening bracket. Exception should " + "have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } expr = "! r( abc,def,xyz)"; try { TimelineReaderWebServicesUtils.parseDataToRetrieve(expr); fail("NOT(!) should be followed by opening bracket. 
Exception should " + "have been thrown"); - } catch (TimelineParseException e){} + } catch (TimelineParseException e) { + } assertNull(TimelineReaderWebServicesUtils.parseDataToRetrieve(null)); assertNull(TimelineReaderWebServicesUtils.parseDataToRetrieve(" ")); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWhitelistAuthorizationFilter.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWhitelistAuthorizationFilter.java index 576699d12b9..86b20f8592c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWhitelistAuthorizationFilter.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWhitelistAuthorizationFilter.java @@ -18,12 +18,6 @@ package org.apache.hadoop.yarn.server.timelineservice.reader; -import static org.mockito.ArgumentMatchers.eq; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.when; -import static org.mockito.Mockito.times; -import static org.mockito.Mockito.verify; - import java.io.IOException; import java.security.Principal; import java.security.PrivilegedExceptionAction; @@ -31,18 +25,24 @@ import java.util.Collections; import java.util.Enumeration; import java.util.HashMap; import java.util.Map; - import javax.servlet.FilterConfig; import javax.servlet.ServletContext; import javax.servlet.ServletException; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; +import org.junit.jupiter.api.Test; +import org.mockito.Mockito; + import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.timelineservice.reader.security.TimelineReaderWhitelistAuthorizationFilter; -import org.junit.Test; -import org.mockito.Mockito; + +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; /** * Unit tests for {@link TimelineReaderWhitelistAuthorizationFilter}. 
@@ -85,7 +85,7 @@ public class TestTimelineReaderWhitelistAuthorizationFilter { } @Test - public void checkFilterAllowedUser() throws ServletException, IOException { + void checkFilterAllowedUser() throws ServletException, IOException { Map map = new HashMap(); map.put(YarnConfiguration.TIMELINE_SERVICE_READ_AUTH_ENABLED, "true"); map.put(YarnConfiguration.TIMELINE_SERVICE_READ_ALLOWED_USERS, @@ -111,7 +111,7 @@ public class TestTimelineReaderWhitelistAuthorizationFilter { } @Test - public void checkFilterNotAllowedUser() throws ServletException, IOException { + void checkFilterNotAllowedUser() throws ServletException, IOException { Map map = new HashMap(); map.put(YarnConfiguration.TIMELINE_SERVICE_READ_AUTH_ENABLED, "true"); map.put(YarnConfiguration.TIMELINE_SERVICE_READ_ALLOWED_USERS, @@ -138,7 +138,7 @@ public class TestTimelineReaderWhitelistAuthorizationFilter { } @Test - public void checkFilterAllowedUserGroup() + void checkFilterAllowedUserGroup() throws ServletException, IOException, InterruptedException { Map map = new HashMap(); map.put(YarnConfiguration.TIMELINE_SERVICE_READ_AUTH_ENABLED, "true"); @@ -172,7 +172,7 @@ public class TestTimelineReaderWhitelistAuthorizationFilter { } @Test - public void checkFilterNotAlloweGroup() + void checkFilterNotAlloweGroup() throws ServletException, IOException, InterruptedException { Map map = new HashMap(); map.put(YarnConfiguration.TIMELINE_SERVICE_READ_AUTH_ENABLED, "true"); @@ -207,7 +207,7 @@ public class TestTimelineReaderWhitelistAuthorizationFilter { } @Test - public void checkFilterAllowAdmins() + void checkFilterAllowAdmins() throws ServletException, IOException, InterruptedException { // check that users in admin acl list are allowed to read Map map = new HashMap(); @@ -243,7 +243,7 @@ public class TestTimelineReaderWhitelistAuthorizationFilter { } @Test - public void checkFilterAllowAdminsWhenNoUsersSet() + void checkFilterAllowAdminsWhenNoUsersSet() throws ServletException, IOException, InterruptedException { // check that users in admin acl list are allowed to read Map map = new HashMap(); @@ -277,7 +277,7 @@ public class TestTimelineReaderWhitelistAuthorizationFilter { } @Test - public void checkFilterAllowNoOneWhenAdminAclsEmptyAndUserAclsEmpty() + void checkFilterAllowNoOneWhenAdminAclsEmptyAndUserAclsEmpty() throws ServletException, IOException, InterruptedException { // check that users in admin acl list are allowed to read Map map = new HashMap(); @@ -311,7 +311,7 @@ public class TestTimelineReaderWhitelistAuthorizationFilter { } @Test - public void checkFilterReadAuthDisabledNoAclSettings() + void checkFilterReadAuthDisabledNoAclSettings() throws ServletException, IOException, InterruptedException { // Default settings for Read Auth Enabled (false) // No values in admin acls or allowed read user list @@ -344,7 +344,7 @@ public class TestTimelineReaderWhitelistAuthorizationFilter { } @Test - public void checkFilterReadAuthDisabledButAclSettingsPopulated() + void checkFilterReadAuthDisabledButAclSettingsPopulated() throws ServletException, IOException, InterruptedException { Map map = new HashMap(); // Default settings for Read Auth Enabled (false) diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineUIDConverter.java 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineUIDConverter.java index 12b3fc0140f..7b08616b346 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineUIDConverter.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineUIDConverter.java @@ -18,16 +18,16 @@ package org.apache.hadoop.yarn.server.timelineservice.reader; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.fail; +import org.junit.jupiter.api.Test; -import org.junit.Test; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNull; +import static org.junit.jupiter.api.Assertions.fail; public class TestTimelineUIDConverter { @Test - public void testUIDEncodingDecoding() throws Exception { + void testUIDEncodingDecoding() throws Exception { TimelineReaderContext context = new TimelineReaderContext( "!cluster", "!b*o*!xer", "oozie*", null, null, null, null); String uid = TimelineUIDConverter.FLOW_UID.encodeUID(context); @@ -80,7 +80,7 @@ public class TestTimelineUIDConverter { } @Test - public void testUIDNotProperlyEscaped() throws Exception { + void testUIDNotProperlyEscaped() throws Exception { try { TimelineUIDConverter.FLOW_UID.decodeUID("*!cluster!*!b*o***!xer!oozie**"); fail("UID not properly escaped. Exception should have been thrown."); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestFileSystemTimelineReaderImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestFileSystemTimelineReaderImpl.java index 46873ab9904..cf94749e883 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestFileSystemTimelineReaderImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestFileSystemTimelineReaderImpl.java @@ -30,6 +30,11 @@ import java.util.HashSet; import java.util.Map; import java.util.Set; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + import org.apache.commons.csv.CSVFormat; import org.apache.commons.csv.CSVPrinter; import org.apache.commons.io.FileUtils; @@ -53,11 +58,9 @@ import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyVa import org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValuesFilter; import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader.Field; import org.apache.hadoop.yarn.util.timeline.TimelineUtils; -import org.junit.AfterClass; -import org.junit.Assert; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.fail; public class 
TestFileSystemTimelineReaderImpl { @@ -65,7 +68,7 @@ public class TestFileSystemTimelineReaderImpl { TestFileSystemTimelineReaderImpl.class.getSimpleName()).getAbsolutePath(); private FileSystemTimelineReaderImpl reader; - @BeforeClass + @BeforeAll public static void setup() throws Exception { initializeDataDirectory(ROOT_DIR); } @@ -74,7 +77,7 @@ public class TestFileSystemTimelineReaderImpl { loadEntityData(rootDir); // Create app flow mapping file. CSVFormat format = - CSVFormat.DEFAULT.withHeader("APP", "USER", "FLOW", "FLOWRUN"); + CSVFormat.Builder.create().setHeader("APP", "USER", "FLOW", "FLOWRUN").build(); String appFlowMappingFile = rootDir + File.separator + "entities" + File.separator + "cluster1" + File.separator + FileSystemTimelineReaderImpl.APP_FLOW_MAPPING_FILE; @@ -89,12 +92,12 @@ public class TestFileSystemTimelineReaderImpl { (new File(rootDir)).deleteOnExit(); } - @AfterClass + @AfterAll public static void tearDown() throws Exception { FileUtils.deleteDirectory(new File(ROOT_DIR)); } - @Before + @BeforeEach public void init() throws Exception { reader = new FileSystemTimelineReaderImpl(); Configuration conf = new YarnConfiguration(); @@ -313,141 +316,141 @@ public class TestFileSystemTimelineReaderImpl { } @Test - public void testGetEntityDefaultView() throws Exception { + void testGetEntityDefaultView() throws Exception { // If no fields are specified, entity is returned with default view i.e. // only the id, type and created time. TimelineEntity result = reader.getEntity( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", "id_1"), + "app", "id_1"), new TimelineDataToRetrieve(null, null, null, null, null, null)); - Assert.assertEquals( + assertEquals( (new TimelineEntity.Identifier("app", "id_1")).toString(), result.getIdentifier().toString()); - Assert.assertEquals((Long)1425016502000L, result.getCreatedTime()); - Assert.assertEquals(0, result.getConfigs().size()); - Assert.assertEquals(0, result.getMetrics().size()); + assertEquals((Long) 1425016502000L, result.getCreatedTime()); + assertEquals(0, result.getConfigs().size()); + assertEquals(0, result.getMetrics().size()); } @Test - public void testGetEntityByClusterAndApp() throws Exception { + void testGetEntityByClusterAndApp() throws Exception { // Cluster and AppId should be enough to get an entity. TimelineEntity result = reader.getEntity( new TimelineReaderContext("cluster1", null, null, null, "app1", "app", - "id_1"), + "id_1"), new TimelineDataToRetrieve(null, null, null, null, null, null)); - Assert.assertEquals( + assertEquals( (new TimelineEntity.Identifier("app", "id_1")).toString(), result.getIdentifier().toString()); - Assert.assertEquals((Long)1425016502000L, result.getCreatedTime()); - Assert.assertEquals(0, result.getConfigs().size()); - Assert.assertEquals(0, result.getMetrics().size()); + assertEquals((Long) 1425016502000L, result.getCreatedTime()); + assertEquals(0, result.getConfigs().size()); + assertEquals(0, result.getMetrics().size()); } /** This test checks whether we can handle commas in app flow mapping csv. */ @Test - public void testAppFlowMappingCsv() throws Exception { + void testAppFlowMappingCsv() throws Exception { // Test getting an entity by cluster and app where flow entry // in app flow mapping csv has commas. 
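// A minimal, self-contained sketch of the commons-csv Builder API that the CSVFormat
// change above switches to (Builder.create() starts from CSVFormat.DEFAULT and replaces
// the deprecated withHeader(...) call; available from commons-csv 1.9). The file path
// and record values below are hypothetical.

import java.io.FileWriter;
import java.io.IOException;

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;

class AppFlowMappingWriterSketch {
  static void writeMapping(String path) throws IOException {
    CSVFormat format = CSVFormat.Builder.create()
        .setHeader("APP", "USER", "FLOW", "FLOWRUN")
        .build();
    try (CSVPrinter printer = new CSVPrinter(new FileWriter(path), format)) {
      printer.printRecord("app1", "user1", "flow1", 1L);  // one app -> flow mapping row
    }
  }
}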
TimelineEntity result = reader.getEntity( new TimelineReaderContext("cluster1", null, null, null, "app2", - "app", "id_5"), + "app", "id_5"), new TimelineDataToRetrieve(null, null, null, null, null, null)); - Assert.assertEquals( + assertEquals( (new TimelineEntity.Identifier("app", "id_5")).toString(), result.getIdentifier().toString()); - Assert.assertEquals((Long)1425016502050L, result.getCreatedTime()); + assertEquals((Long) 1425016502050L, result.getCreatedTime()); } @Test - public void testGetEntityCustomFields() throws Exception { + void testGetEntityCustomFields() throws Exception { // Specified fields in addition to default view will be returned. TimelineEntity result = reader.getEntity( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", "id_1"), + "app", "id_1"), new TimelineDataToRetrieve(null, null, - EnumSet.of(Field.INFO, Field.CONFIGS, Field.METRICS), null, null, - null)); - Assert.assertEquals( + EnumSet.of(Field.INFO, Field.CONFIGS, Field.METRICS), null, null, + null)); + assertEquals( (new TimelineEntity.Identifier("app", "id_1")).toString(), result.getIdentifier().toString()); - Assert.assertEquals((Long)1425016502000L, result.getCreatedTime()); - Assert.assertEquals(3, result.getConfigs().size()); - Assert.assertEquals(3, result.getMetrics().size()); - Assert.assertEquals(2, result.getInfo().size()); + assertEquals((Long) 1425016502000L, result.getCreatedTime()); + assertEquals(3, result.getConfigs().size()); + assertEquals(3, result.getMetrics().size()); + assertEquals(2, result.getInfo().size()); // No events will be returned - Assert.assertEquals(0, result.getEvents().size()); + assertEquals(0, result.getEvents().size()); } @Test - public void testGetEntityAllFields() throws Exception { + void testGetEntityAllFields() throws Exception { // All fields of TimelineEntity will be returned. TimelineEntity result = reader.getEntity( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", "id_1"), + "app", "id_1"), new TimelineDataToRetrieve(null, null, EnumSet.of(Field.ALL), null, - null, null)); - Assert.assertEquals( + null, null)); + assertEquals( (new TimelineEntity.Identifier("app", "id_1")).toString(), result.getIdentifier().toString()); - Assert.assertEquals((Long)1425016502000L, result.getCreatedTime()); - Assert.assertEquals(3, result.getConfigs().size()); - Assert.assertEquals(3, result.getMetrics().size()); + assertEquals((Long) 1425016502000L, result.getCreatedTime()); + assertEquals(3, result.getConfigs().size()); + assertEquals(3, result.getMetrics().size()); // All fields including events will be returned. 
- Assert.assertEquals(2, result.getEvents().size()); + assertEquals(2, result.getEvents().size()); } @Test - public void testGetAllEntities() throws Exception { + void testGetAllEntities() throws Exception { Set result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), new TimelineEntityFilters.Builder().build(), + "app", null), new TimelineEntityFilters.Builder().build(), new TimelineDataToRetrieve(null, null, EnumSet.of(Field.ALL), null, - null, null)); + null, null)); // All 4 entities will be returned - Assert.assertEquals(4, result.size()); + assertEquals(4, result.size()); } @Test - public void testGetEntitiesWithLimit() throws Exception { + void testGetEntitiesWithLimit() throws Exception { Set result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().entityLimit(2L).build(), new TimelineDataToRetrieve()); - Assert.assertEquals(2, result.size()); + assertEquals(2, result.size()); // Needs to be rewritten once hashcode and equals for // TimelineEntity is implemented // Entities with id_1 and id_4 should be returned, // based on created time, descending. for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1") && !entity.getId().equals("id_4")) { - Assert.fail("Entity not sorted by created time"); + fail("Entity not sorted by created time"); } } result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().entityLimit(3L).build(), new TimelineDataToRetrieve()); // Even though 2 entities out of 4 have same created time, one entity // is left out due to limit - Assert.assertEquals(3, result.size()); + assertEquals(3, result.size()); } @Test - public void testGetEntitiesByTimeWindows() throws Exception { + void testGetEntitiesByTimeWindows() throws Exception { // Get entities based on created time start and end time range. Set result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().createdTimeBegin(1425016502030L) .createTimeEnd(1425016502060L).build(), new TimelineDataToRetrieve()); - Assert.assertEquals(1, result.size()); + assertEquals(1, result.size()); // Only one entity with ID id_4 should be returned. for (TimelineEntity entity : result) { if (!entity.getId().equals("id_4")) { - Assert.fail("Incorrect filtering based on created time range"); + fail("Incorrect filtering based on created time range"); } } @@ -458,44 +461,44 @@ public class TestFileSystemTimelineReaderImpl { new TimelineEntityFilters.Builder().createTimeEnd(1425016502010L) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(3, result.size()); + assertEquals(3, result.size()); for (TimelineEntity entity : result) { if (entity.getId().equals("id_4")) { - Assert.fail("Incorrect filtering based on created time range"); + fail("Incorrect filtering based on created time range"); } } // Get entities if only created time start is specified. 
result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().createdTimeBegin(1425016502010L) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(1, result.size()); + assertEquals(1, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_4")) { - Assert.fail("Incorrect filtering based on created time range"); + fail("Incorrect filtering based on created time range"); } } } @Test - public void testGetFilteredEntities() throws Exception { + void testGetFilteredEntities() throws Exception { // Get entities based on info filters. TimelineFilterList infoFilterList = new TimelineFilterList(); infoFilterList.addFilter( new TimelineKeyValueFilter(TimelineCompareOp.EQUAL, "info2", 3.5)); Set result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().infoFilters(infoFilterList).build(), new TimelineDataToRetrieve()); - Assert.assertEquals(1, result.size()); + assertEquals(1, result.size()); // Only one entity with ID id_3 should be returned. for (TimelineEntity entity : result) { if (!entity.getId().equals("id_3")) { - Assert.fail("Incorrect filtering based on info filters"); + fail("Incorrect filtering based on info filters"); } } @@ -507,14 +510,14 @@ public class TestFileSystemTimelineReaderImpl { new TimelineKeyValueFilter(TimelineCompareOp.EQUAL, "config_3", "abc")); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().configFilters(confFilterList) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(1, result.size()); + assertEquals(1, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_3")) { - Assert.fail("Incorrect filtering based on config filters"); + fail("Incorrect filtering based on config filters"); } } @@ -526,13 +529,13 @@ public class TestFileSystemTimelineReaderImpl { new TimelineExistsFilter(TimelineCompareOp.EQUAL, "event_4")); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().eventFilters(eventFilters).build(), new TimelineDataToRetrieve()); - Assert.assertEquals(1, result.size()); + assertEquals(1, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_3")) { - Assert.fail("Incorrect filtering based on event filters"); + fail("Incorrect filtering based on event filters"); } } @@ -542,15 +545,15 @@ public class TestFileSystemTimelineReaderImpl { TimelineCompareOp.GREATER_OR_EQUAL, "metric3", 0L)); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().metricFilters(metricFilterList) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(2, result.size()); + assertEquals(2, result.size()); // Two entities with IDs' id_1 and id_2 should be returned. 
for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1") && !entity.getId().equals("id_2")) { - Assert.fail("Incorrect filtering based on metric filters"); + fail("Incorrect filtering based on metric filters"); } } @@ -569,14 +572,14 @@ public class TestFileSystemTimelineReaderImpl { new TimelineFilterList(Operator.OR, list1, list2); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().configFilters(confFilterList1) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(2, result.size()); + assertEquals(2, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1") && !entity.getId().equals("id_2")) { - Assert.fail("Incorrect filtering based on config filters"); + fail("Incorrect filtering based on config filters"); } } @@ -592,14 +595,14 @@ public class TestFileSystemTimelineReaderImpl { new TimelineFilterList(Operator.OR, list3, list4); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().configFilters(confFilterList2) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(2, result.size()); + assertEquals(2, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1") && !entity.getId().equals("id_2")) { - Assert.fail("Incorrect filtering based on config filters"); + fail("Incorrect filtering based on config filters"); } } @@ -610,14 +613,14 @@ public class TestFileSystemTimelineReaderImpl { TimelineCompareOp.NOT_EQUAL, "config_3", "abc")); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().configFilters(confFilterList3) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(1, result.size()); - for(TimelineEntity entity : result) { + assertEquals(1, result.size()); + for (TimelineEntity entity : result) { if (!entity.getId().equals("id_2")) { - Assert.fail("Incorrect filtering based on config filters"); + fail("Incorrect filtering based on config filters"); } } @@ -628,11 +631,11 @@ public class TestFileSystemTimelineReaderImpl { TimelineCompareOp.EQUAL, "config_3", "def")); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().configFilters(confFilterList4) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(0, result.size()); + assertEquals(0, result.size()); TimelineFilterList confFilterList5 = new TimelineFilterList(Operator.OR); confFilterList5.addFilter(new TimelineKeyValueFilter( @@ -641,14 +644,14 @@ public class TestFileSystemTimelineReaderImpl { TimelineCompareOp.EQUAL, "config_3", "def")); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().configFilters(confFilterList5) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(1, result.size()); + assertEquals(1, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_2")) { - Assert.fail("Incorrect filtering based on config filters"); + fail("Incorrect filtering based on config filters"); } } @@ -665,15 +668,15 @@ public class TestFileSystemTimelineReaderImpl { new TimelineFilterList(Operator.OR, list6, list7); result = 
reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().metricFilters(metricFilterList1) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(2, result.size()); + assertEquals(2, result.size()); // Two entities with IDs' id_2 and id_3 should be returned. for (TimelineEntity entity : result) { if (!entity.getId().equals("id_2") && !entity.getId().equals("id_3")) { - Assert.fail("Incorrect filtering based on metric filters"); + fail("Incorrect filtering based on metric filters"); } } @@ -684,14 +687,14 @@ public class TestFileSystemTimelineReaderImpl { TimelineCompareOp.LESS_OR_EQUAL, "metric3", 23)); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().metricFilters(metricFilterList2) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(1, result.size()); + assertEquals(1, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1")) { - Assert.fail("Incorrect filtering based on metric filters"); + fail("Incorrect filtering based on metric filters"); } } @@ -702,11 +705,11 @@ public class TestFileSystemTimelineReaderImpl { TimelineCompareOp.LESS_OR_EQUAL, "metric3", 23)); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().metricFilters(metricFilterList3) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(0, result.size()); + assertEquals(0, result.size()); TimelineFilterList metricFilterList4 = new TimelineFilterList(Operator.OR); metricFilterList4.addFilter(new TimelineCompareFilter( @@ -715,14 +718,14 @@ public class TestFileSystemTimelineReaderImpl { TimelineCompareOp.LESS_OR_EQUAL, "metric3", 23)); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().metricFilters(metricFilterList4) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(2, result.size()); + assertEquals(2, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1") && !entity.getId().equals("id_2")) { - Assert.fail("Incorrect filtering based on metric filters"); + fail("Incorrect filtering based on metric filters"); } } @@ -731,14 +734,14 @@ public class TestFileSystemTimelineReaderImpl { TimelineCompareOp.NOT_EQUAL, "metric2", 74)); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().metricFilters(metricFilterList5) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(2, result.size()); + assertEquals(2, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1") && !entity.getId().equals("id_2")) { - Assert.fail("Incorrect filtering based on metric filters"); + fail("Incorrect filtering based on metric filters"); } } @@ -749,11 +752,11 @@ public class TestFileSystemTimelineReaderImpl { new TimelineKeyValueFilter(TimelineCompareOp.NOT_EQUAL, "info4", 20)); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().infoFilters(infoFilterList1) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(0, result.size()); + 
assertEquals(0, result.size()); TimelineFilterList infoFilterList2 = new TimelineFilterList(Operator.OR); infoFilterList2.addFilter( @@ -762,14 +765,14 @@ public class TestFileSystemTimelineReaderImpl { new TimelineKeyValueFilter(TimelineCompareOp.EQUAL, "info1", "val1")); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().infoFilters(infoFilterList2) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(2, result.size()); + assertEquals(2, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1") && !entity.getId().equals("id_3")) { - Assert.fail("Incorrect filtering based on info filters"); + fail("Incorrect filtering based on info filters"); } } @@ -780,11 +783,11 @@ public class TestFileSystemTimelineReaderImpl { new TimelineKeyValueFilter(TimelineCompareOp.EQUAL, "info2", "val5")); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().infoFilters(infoFilterList3) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(0, result.size()); + assertEquals(0, result.size()); TimelineFilterList infoFilterList4 = new TimelineFilterList(Operator.OR); infoFilterList4.addFilter( @@ -793,55 +796,55 @@ public class TestFileSystemTimelineReaderImpl { new TimelineKeyValueFilter(TimelineCompareOp.EQUAL, "info2", "val5")); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().infoFilters(infoFilterList4) .build(), new TimelineDataToRetrieve()); - Assert.assertEquals(1, result.size()); + assertEquals(1, result.size()); for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1")) { - Assert.fail("Incorrect filtering based on info filters"); + fail("Incorrect filtering based on info filters"); } } } @Test - public void testGetEntitiesByRelations() throws Exception { + void testGetEntitiesByRelations() throws Exception { // Get entities based on relatesTo. TimelineFilterList relatesTo = new TimelineFilterList(Operator.OR); Set relatesToIds = - new HashSet(Arrays.asList((Object)"flow1")); + new HashSet(Arrays.asList((Object) "flow1")); relatesTo.addFilter(new TimelineKeyValuesFilter( TimelineCompareOp.EQUAL, "flow", relatesToIds)); Set result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().relatesTo(relatesTo).build(), new TimelineDataToRetrieve()); - Assert.assertEquals(1, result.size()); + assertEquals(1, result.size()); // Only one entity with ID id_1 should be returned. for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1")) { - Assert.fail("Incorrect filtering based on relatesTo"); + fail("Incorrect filtering based on relatesTo"); } } // Get entities based on isRelatedTo. 
TimelineFilterList isRelatedTo = new TimelineFilterList(Operator.OR); Set isRelatedToIds = - new HashSet(Arrays.asList((Object)"tid1_2")); + new HashSet(Arrays.asList((Object) "tid1_2")); isRelatedTo.addFilter(new TimelineKeyValuesFilter( TimelineCompareOp.EQUAL, "type1", isRelatedToIds)); result = reader.getEntities( new TimelineReaderContext("cluster1", "user1", "flow1", 1L, "app1", - "app", null), + "app", null), new TimelineEntityFilters.Builder().isRelatedTo(isRelatedTo).build(), new TimelineDataToRetrieve()); - Assert.assertEquals(2, result.size()); + assertEquals(2, result.size()); // Two entities with IDs' id_1 and id_3 should be returned. for (TimelineEntity entity : result) { if (!entity.getId().equals("id_1") && !entity.getId().equals("id_3")) { - Assert.fail("Incorrect filtering based on isRelatedTo"); + fail("Incorrect filtering based on isRelatedTo"); } } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestFileSystemTimelineWriterImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestFileSystemTimelineWriterImpl.java index b880b9a6482..efed104eeea 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestFileSystemTimelineWriterImpl.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestFileSystemTimelineWriterImpl.java @@ -17,8 +17,6 @@ */ package org.apache.hadoop.yarn.server.timelineservice.storage; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; import java.io.BufferedReader; import java.io.File; @@ -29,6 +27,9 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.io.TempDir; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; @@ -41,13 +42,14 @@ import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetricOperatio import org.apache.hadoop.yarn.conf.YarnConfiguration; import org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorContext; import org.apache.hadoop.yarn.util.timeline.TimelineUtils; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.TemporaryFolder; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; public class TestFileSystemTimelineWriterImpl { - @Rule - public TemporaryFolder tmpFolder = new TemporaryFolder(); + @TempDir + private File tmpFolder; /** * Unit test for PoC YARN 3264. 
@@ -55,7 +57,7 @@ public class TestFileSystemTimelineWriterImpl { * @throws Exception */ @Test - public void testWriteEntityToFile() throws Exception { + void testWriteEntityToFile() throws Exception { TimelineEntities te = new TimelineEntities(); TimelineEntity entity = new TimelineEntity(); String id = "hello"; @@ -89,7 +91,7 @@ public class TestFileSystemTimelineWriterImpl { try { fsi = new FileSystemTimelineWriterImpl(); Configuration conf = new YarnConfiguration(); - String outputRoot = tmpFolder.newFolder().getAbsolutePath(); + String outputRoot = tmpFolder.getAbsolutePath(); conf.set(FileSystemTimelineWriterImpl.TIMELINE_SERVICE_STORAGE_DIR_ROOT, outputRoot); fsi.init(conf); @@ -107,14 +109,13 @@ public class TestFileSystemTimelineWriterImpl { FileSystemTimelineWriterImpl.TIMELINE_SERVICE_STORAGE_EXTENSION; Path path = new Path(fileName); FileSystem fs = FileSystem.get(conf); - assertTrue("Specified path(" + fileName + ") should exist: ", - fs.exists(path)); + assertTrue(fs.exists(path), + "Specified path(" + fileName + ") should exist: "); FileStatus fileStatus = fs.getFileStatus(path); - assertTrue("Specified path should be a file", - !fileStatus.isDirectory()); + assertFalse(fileStatus.isDirectory(), "Specified path should be a file"); List data = readFromFile(fs, path); // ensure there's only one entity + 1 new line - assertTrue("data size is:" + data.size(), data.size() == 2); + assertEquals(2, data.size(), "data size is:" + data.size()); String d = data.get(0); // confirm the contents same as what was written assertEquals(d, TimelineUtils.dumpTimelineRecordtoJSON(entity)); @@ -127,14 +128,13 @@ public class TestFileSystemTimelineWriterImpl { File.separator + type2 + File.separator + id2 + FileSystemTimelineWriterImpl.TIMELINE_SERVICE_STORAGE_EXTENSION; Path path2 = new Path(fileName2); - assertTrue("Specified path(" + fileName + ") should exist: ", - fs.exists(path2)); + assertTrue(fs.exists(path2), + "Specified path(" + fileName + ") should exist: "); FileStatus fileStatus2 = fs.getFileStatus(path2); - assertTrue("Specified path should be a file", - !fileStatus2.isDirectory()); + assertFalse(fileStatus2.isDirectory(), "Specified path should be a file"); List data2 = readFromFile(fs, path2); // ensure there's only one entity + 1 new line - assertTrue("data size is:" + data2.size(), data2.size() == 2); + assertEquals(2, data2.size(), "data size is:" + data2.size()); String metricToString = data2.get(0); // confirm the contents same as what was written assertEquals(metricToString, @@ -147,7 +147,7 @@ public class TestFileSystemTimelineWriterImpl { } @Test - public void testWriteMultipleEntities() throws Exception { + void testWriteMultipleEntities() throws Exception { String id = "appId"; String type = "app"; @@ -169,7 +169,7 @@ public class TestFileSystemTimelineWriterImpl { try { fsi = new FileSystemTimelineWriterImpl(); Configuration conf = new YarnConfiguration(); - String outputRoot = tmpFolder.newFolder().getAbsolutePath(); + String outputRoot = tmpFolder.getAbsolutePath(); conf.set(FileSystemTimelineWriterImpl.TIMELINE_SERVICE_STORAGE_DIR_ROOT, outputRoot); fsi.init(conf); @@ -191,13 +191,12 @@ public class TestFileSystemTimelineWriterImpl { FileSystemTimelineWriterImpl.TIMELINE_SERVICE_STORAGE_EXTENSION; Path path = new Path(fileName); FileSystem fs = FileSystem.get(conf); - assertTrue("Specified path(" + fileName + ") should exist: ", - fs.exists(path)); + assertTrue(fs.exists(path), + "Specified path(" + fileName + ") should exist: "); FileStatus fileStatus = 
fs.getFileStatus(path); - assertTrue("Specified path should be a file", - !fileStatus.isDirectory()); + assertFalse(fileStatus.isDirectory(), "Specified path should be a file"); List data = readFromFile(fs, path); - assertTrue("data size is:" + data.size(), data.size() == 3); + assertEquals(3, data.size(), "data size is:" + data.size()); String d = data.get(0); // confirm the contents same as what was written assertEquals(d, TimelineUtils.dumpTimelineRecordtoJSON(entity)); @@ -215,7 +214,7 @@ public class TestFileSystemTimelineWriterImpl { } @Test - public void testWriteEntitiesWithEmptyFlowName() throws Exception { + void testWriteEntitiesWithEmptyFlowName() throws Exception { String id = "appId"; String type = "app"; @@ -230,7 +229,7 @@ public class TestFileSystemTimelineWriterImpl { try { fsi = new FileSystemTimelineWriterImpl(); Configuration conf = new YarnConfiguration(); - String outputRoot = tmpFolder.newFolder().getAbsolutePath(); + String outputRoot = tmpFolder.getAbsolutePath(); conf.set(FileSystemTimelineWriterImpl.TIMELINE_SERVICE_STORAGE_DIR_ROOT, outputRoot); fsi.init(conf); @@ -248,13 +247,12 @@ public class TestFileSystemTimelineWriterImpl { FileSystemTimelineWriterImpl.TIMELINE_SERVICE_STORAGE_EXTENSION; Path path = new Path(fileName); FileSystem fs = FileSystem.get(conf); - assertTrue("Specified path(" + fileName + ") should exist: ", - fs.exists(path)); + assertTrue(fs.exists(path), + "Specified path(" + fileName + ") should exist: "); FileStatus fileStatus = fs.getFileStatus(path); - assertTrue("Specified path should be a file", - !fileStatus.isDirectory()); + assertFalse(fileStatus.isDirectory(), "specified path should be a file"); List data = readFromFile(fs, path); - assertTrue("data size is:" + data.size(), data.size() == 2); + assertEquals(2, data.size(), "data size is:" + data.size()); String d = data.get(0); // confirm the contents same as what was written assertEquals(d, TimelineUtils.dumpTimelineRecordtoJSON(entity)); @@ -278,4 +276,5 @@ public class TestFileSystemTimelineWriterImpl { } return data; } + } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineSchemaCreator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineSchemaCreator.java index 16b6d995d4f..02014966746 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineSchemaCreator.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineSchemaCreator.java @@ -18,10 +18,12 @@ package org.apache.hadoop.yarn.server.timelineservice.storage; +import org.junit.jupiter.api.Test; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.yarn.conf.YarnConfiguration; -import org.junit.Assert; -import org.junit.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; /** * Test cases for {@link TimelineSchemaCreator}. 
@@ -29,13 +31,13 @@ import org.junit.Test; public class TestTimelineSchemaCreator { @Test - public void testTimelineSchemaCreation() throws Exception { + void testTimelineSchemaCreation() throws Exception { Configuration conf = new Configuration(); conf.set(YarnConfiguration.TIMELINE_SERVICE_SCHEMA_CREATOR_CLASS, "org.apache.hadoop.yarn.server.timelineservice.storage" + ".DummyTimelineSchemaCreator"); TimelineSchemaCreator timelineSchemaCreator = new TimelineSchemaCreator(); - Assert.assertEquals(0, timelineSchemaCreator + assertEquals(0, timelineSchemaCreator .createTimelineSchema(new String[]{}, conf)); } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md index 3f2e05aee63..f547e8d6b77 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md @@ -191,7 +191,9 @@ SQL: one must setup the following parameters: |`yarn.federation.state-store.sql.username` | `` | For SQLFederationStateStore the username for the DB connection. | |`yarn.federation.state-store.sql.password` | `` | For SQLFederationStateStore the password for the DB connection. | -We provide scripts for MySQL and Microsoft SQL Server. +We provide scripts for **MySQL** and **Microsoft SQL Server**. + +> MySQL For MySQL, one must download the latest jar version 5.x from [MVN Repository](https://mvnrepository.com/artifact/mysql/mysql-connector-java) and add it to the CLASSPATH. Then the DB schema is created by executing the following SQL scripts in the database: @@ -205,9 +207,23 @@ In the same directory we provide scripts to drop the Stored Procedures, the Tabl **Note:** the FederationStateStoreUser.sql defines a default user/password for the DB that you are **highly encouraged** to set this to a proper strong password. +**The versions supported by MySQL are MySQL 5.7 and above:** + +1. MySQL 5.7 +2. MySQL 8.0 + +> Microsoft SQL Server + For SQL-Server, the process is similar, but the jdbc driver is already included. SQL-Server scripts are located in **sbin/FederationStateStore/SQLServer/**. +**The versions supported by SQL-Server are SQL Server 2008 R2 and above:** + +1. SQL Server 2008 R2 Enterprise +2. SQL Server 2012 Enterprise +3. SQL Server 2016 Enterprise +4. SQL Server 2017 Enterprise +5. SQL Server 2019 Enterprise ####Optional: @@ -254,6 +270,7 @@ Optional: |`yarn.router.admin.address` | `0.0.0.0:8052` | Admin address at the router. | |`yarn.router.webapp.https.address` | `0.0.0.0:8091` | Secure webapp address at the router. | |`yarn.router.submit.retry` | `3` | The number of retries in the router before we give up. | +|`yarn.router.submit.interval.time` | `10ms` | The interval between two retry, the default value is 10ms. | |`yarn.federation.statestore.max-connections` | `10` | This is the maximum number of parallel connections each Router makes to the state-store. | |`yarn.federation.cache-ttl.secs` | `60` | The Router caches informations, and this is the time to leave before the cache is invalidated. | |`yarn.router.webapp.interceptor-class.pipeline` | `org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST` | A comma-separated list of interceptor classes to be run at the router when interfacing with the client via REST interface. The last step of this pipeline must be the Federation Interceptor REST. 
| @@ -268,6 +285,16 @@ Kerberos supported in federation. | `yarn.router.kerberos.principal` | | The Router service principal. This is typically set to router/_HOST@REALM.TLD. Each Router will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on all Routers in setup. | | `yarn.router.kerberos.principal.hostname` | | Optional. The hostname for the Router containing this configuration file. Will be different for each machine. Defaults to current hostname. | +Enabling CORS support: + +To enable cross-origin support (CORS) for the Yarn Router, please set the following configuration parameters: + +| Property | Example | Description | +| ----------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | +| `hadoop.http.filter.initializers` | `org.apache.hadoop.security.HttpCrossOriginFilterInitializer` | Optional. Set the filter to HttpCrossOriginFilterInitializer, Configure this parameter in core-site.xml. | +| `yarn.router.webapp.cross-origin.enabled` | `true` | Optional. Enable/disable CORS filter.Configure this parameter in yarn-site.xml. | + + ###ON NMs: These are extra configurations that should appear in the **conf/yarn-site.xml** at each NodeManager. diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md index 9c39e995024..0f02e67f75d 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md @@ -150,7 +150,7 @@ yarn.scheduler.capacity.root.marketing.accessible-node-labels.GPU.capacity=50 yarn.scheduler.capacity.root.engineering.default-node-label-expression=GPU ``` -You can see root.engineering/marketing/sales.capacity=33, so each of them can has guaranteed resource equals to 1/3 of resource **without partition**. So each of them can use 1/3 resource of h1..h4, which is 24 * 4 * (1/3) = (32G mem, 32 v-cores). +You can see root.engineering/marketing/sales.capacity=33, so each of them has guaranteed resource equals to 1/3 of resource **without partition**. So each of them can use 1/3 resource of h1..h4, which is 24 * 4 * (1/3) = (32G mem, 32 v-cores). And only engineering/marketing queue has permission to access GPU partition (see root.``.accessible-node-labels). diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md index a326f45d9f9..5b3cca9ac71 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md @@ -89,13 +89,13 @@ The Timeline Domain offers a namespace for Timeline server allowing users to host multiple entities, isolating them from other users and applications. Timeline server Security is defined at this level. -A "Domain" primarily stores owner info, read and& write ACL information, +A "Domain" primarily stores owner info, read and write ACL information, created and modified time stamp information. Each Domain is identified by an ID which must be unique across all users in the YARN cluster. 
#### Timeline Entity -A Timeline Entity contains the the meta information of a conceptual entity +A Timeline Entity contains the meta information of a conceptual entity and its related events. The entity can be an application, an application attempt, a container or @@ -199,7 +199,7 @@ to `kerberos`, after which the following configuration options are available: | `yarn.timeline-service.http-authentication.type` | Defines authentication used for the timeline server HTTP endpoint. Supported values are: `simple` / `kerberos` / #AUTHENTICATION_HANDLER_CLASSNAME#. Defaults to `simple`. | | `yarn.timeline-service.http-authentication.simple.anonymous.allowed` | Indicates if anonymous requests are allowed by the timeline server when using 'simple' authentication. Defaults to `true`. | | `yarn.timeline-service.principal` | The Kerberos principal for the timeline server. | -| `yarn.timeline-service.keytab` | The Kerberos keytab for the timeline server. Defaults on Unix to to `/etc/krb5.keytab`. | +| `yarn.timeline-service.keytab` | The Kerberos keytab for the timeline server. Defaults on Unix to `/etc/krb5.keytab`. | | `yarn.timeline-service.delegation.key.update-interval` | Defaults to `86400000` (1 day). | | `yarn.timeline-service.delegation.token.renew-interval` | Defaults to `86400000` (1 day). | | `yarn.timeline-service.delegation.token.max-lifetime` | Defaults to `604800000` (7 days). | diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md index 35cbc580aa1..b879996e035 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md @@ -865,7 +865,7 @@ none of the apps match the predicates, an empty list will be returned. 1. `conffilters` - If specified, matched applications must have exact matches to the given config name and must be either equal or not equal to the given config value. Both the config name and value must be strings. conffilters are represented in the same form as infofilters. 1. `metricfilters` - If specified, matched applications must have exact matches to the given metric and satisfy the specified relation with the - metric value. Metric id must be a string and and metric value must be an integral value. metricfilters are represented as an expression of the form :
    + metric value. Metric id must be a string and metric value must be an integral value. metricfilters are represented as an expression of the form :<br/>
    "(<metricid> <compareop> <metricvalue>) <op> (<metricid> <compareop> <metricvalue>)".<br/>
    Here op can be either of AND or OR. And compareop can be either of "eq", "ne", "ene", "gt", "ge", "lt" and "le".<br/>
    "eq" means equals, "ne" means not equals and existence of metric is not required for a match, "ene" means not equals but existence of metric is @@ -998,7 +998,7 @@ match the predicates, an empty list will be returned. 1. `conffilters` - If specified, matched applications must have exact matches to the given config name and must be either equal or not equal to the given config value. Both the config name and value must be strings. conffilters are represented in the same form as infofilters. 1. `metricfilters` - If specified, matched applications must have exact matches to the given metric and satisfy the specified relation with the - metric value. Metric id must be a string and and metric value must be an integral value. metricfilters are represented as an expression of the form :
    + metric value. Metric id must be a string and metric value must be an integral value. metricfilters are represented as an expression of the form :<br/>
    "(<metricid> <compareop> <metricvalue>) <op> (<metricid> <compareop> <metricvalue>)".<br/>
    Here op can be either of AND or OR. And compareop can be either of "eq", "ne", "ene", "gt", "ge", "lt" and "le".<br/>
    "eq" means equals, "ne" means not equals and existence of metric is not required for a match, "ene" means not equals but existence of metric is @@ -1205,7 +1205,7 @@ If none of the entities match the predicates, an empty list will be returned. 1. `conffilters` - If specified, matched entities must have exact matches to the given config name and must be either equal or not equal to the given config value. Both the config name and value must be strings. conffilters are represented in the same form as infofilters. 1. `metricfilters` - If specified, matched entities must have exact matches to the given metric and satisfy the specified relation with the - metric value. Metric id must be a string and and metric value must be an integral value. metricfilters are represented as an expression of the form :
    + metric value. Metric id must be a string and metric value must be an integral value. metricfilters are represented as an expression of the form :<br/>
    "(<metricid> <compareop> <metricvalue>) <op> (<metricid> <compareop> <metricvalue>)"<br/>
    Here op can be either of AND or OR. And compareop can be either of "eq", "ne", "ene", "gt", "ge", "lt" and "le".<br/>
    "eq" means equals, "ne" means not equals and existence of metric is not required for a match, "ene" means not equals but existence of metric is @@ -1341,7 +1341,7 @@ If none of the entities match the predicates, an empty list will be returned. 1. `conffilters` - If specified, matched entities must have exact matches to the given config name and must be either equal or not equal to the given config value. Both the config name and value must be strings. conffilters are represented in the same form as infofilters. 1. `metricfilters` - If specified, matched entities must have exact matches to the given metric and satisfy the specified relation with the - metric value. Metric id must be a string and and metric value must be an integral value. metricfilters are represented as an expression of the form :
    + metric value. Metric id must be a string and metric value must be an integral value. metricfilters are represented as an expression of the form :<br/>
    "(<metricid> <compareop> <metricvalue>) <op> (<metricid> <compareop> <metricvalue>)"<br/>
    Here op can be either of AND or OR. And compareop can be either of "eq", "ne", "ene", "gt", "ge", "lt" and "le".<br/>
    "eq" means equals, "ne" means not equals and existence of metric is not required for a match, "ene" means not equals but existence of metric is diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md index 5328cd89113..3e549398fec 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md @@ -30,7 +30,7 @@ YARN has an option parsing framework that employs parsing generic options as wel |:---- |:---- | | SHELL\_OPTIONS | The common set of shell options. These are documented on the [Commands Manual](../../hadoop-project-dist/hadoop-common/CommandsManual.html#Shell_Options) page. | | GENERIC\_OPTIONS | The common set of options supported by multiple commands. See the Hadoop [Commands Manual](../../hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options) for more information. | -| COMMAND COMMAND\_OPTIONS | Various commands with their options are described in the following sections. The commands have been grouped into [User Commands](#User_Commands) and [Administration Commands](#Administration_Commands). | +| COMMAND\_OPTIONS | Various commands with their options are described in the following sections. The commands have been grouped into [User Commands](#User_Commands) and [Administration Commands](#Administration_Commands). | User Commands ------------- diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnUI2.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnUI2.md index e05f0256bf9..b4d5386a798 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnUI2.md +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnUI2.md @@ -41,6 +41,7 @@ origin (CORS) support. | `yarn.timeline-service.http-cross-origin.enabled` | true | Enable CORS support for Timeline Server | | `yarn.resourcemanager.webapp.cross-origin.enabled` | true | Enable CORS support for Resource Manager | | `yarn.nodemanager.webapp.cross-origin.enabled` | true | Enable CORS support for Node Manager | +| `yarn.router.webapp.cross-origin.enabled` | true | Enable CORS support for Yarn Router | Also please ensure that CORS related configurations are enabled in `core-site.xml`. Kindly refer [here](../../hadoop-project-dist/hadoop-common/HttpAuthentication.html) diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md index 8211d5c3618..58758980113 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md @@ -535,7 +535,7 @@ PUT URL - http://localhost:8088/app/v1/services/hello-world #### PUT Request JSON -Note, irrespective of what the current lifetime value is, this update request will set the lifetime of the service to be 3600 seconds (1 hour) from the time the request is submitted. Hence, if a a service has remaining lifetime of 5 mins (say) and would like to extend it to an hour OR if an application has remaining lifetime of 5 hours (say) and would like to reduce it down to an hour, then for both scenarios you need to submit the same request below. 
+Note, irrespective of what the current lifetime value is, this update request will set the lifetime of the service to be 3600 seconds (1 hour) from the time the request is submitted. Hence, if a service has remaining lifetime of 5 mins (say) and would like to extend it to an hour OR if an application has remaining lifetime of 5 hours (say) and would like to reduce it down to an hour, then for both scenarios you need to submit the same request below. ```json { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/bower.json b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/bower.json index bdf7a9aedb0..625aedb9171 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/bower.json +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/bower.json @@ -17,12 +17,13 @@ "more-js": "0.8.2", "bootstrap": "3.4.1", "d3": "~3.5.6", - "datatables": "~1.10.8", "spin.js": "~2.3.2", "momentjs": "~2.10.6", "select2": "4.0.0", "snippet-ss": "~1.11.0", "alasql": "^0.4.3", - "x2js": "1.2.0" + "x2js": "1.2.0", + "datatables.net": "1.11.5", + "datatables.net-dt": "3.2.2" } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js index 09da39b4ec8..e73f9b7a997 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js @@ -39,8 +39,8 @@ module.exports = function(defaults) { } }); - app.import("bower_components/datatables/media/css/jquery.dataTables.min.css"); - app.import("bower_components/datatables/media/js/jquery.dataTables.min.js"); + app.import("bower_components/datatables.net-dt/css/jquery.dataTables.min.css"); + app.import("bower_components/datatables.net/js/jquery.dataTables.min.js"); app.import("bower_components/momentjs/min/moment.min.js"); app.import("bower_components/moment-timezone/builds/moment-timezone-with-data-10-year-range.min.js"); app.import("bower_components/select2/dist/css/select2.min.css"); diff --git a/hadoop-yarn-project/pom.xml b/hadoop-yarn-project/pom.xml index abaf2e869c4..298fa597e99 100644 --- a/hadoop-yarn-project/pom.xml +++ b/hadoop-yarn-project/pom.xml @@ -81,6 +81,11 @@ org.apache.hadoop hadoop-yarn-services-core + + org.apache.hadoop + hadoop-yarn-applications-catalog-webapp + war + diff --git a/pom.xml b/pom.xml index fca42f71630..68d890a8b48 100644 --- a/pom.xml +++ b/pom.xml @@ -95,7 +95,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/x 2.8.1 - 3.11.0 + 3.9.1 1.5 1.7 2.4 @@ -118,6 +118,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/x 4.2.0 1.1.1 3.10.1 + 2.7.3 bash @@ -484,6 +485,19 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/x maven-compiler-plugin ${maven-compiler-plugin.version} + + org.cyclonedx + cyclonedx-maven-plugin + ${cyclonedx.version} + + + package + + makeBom + + + + @@ -592,6 +606,10 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/x com.github.spotbugs spotbugs-maven-plugin + + org.cyclonedx + cyclonedx-maven-plugin +
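
Note on the test hunks above: they apply the standard JUnit 4 to JUnit 5 migration, namely org.junit.Assert/org.junit.Test are replaced by their org.junit.jupiter.api equivalents, assertion messages move to the last argument, test classes and methods drop the public modifier, and the @Rule TemporaryFolder pattern is replaced by @TempDir. The sketch below is purely illustrative and is not part of the patch; the class and method names are invented, and only the API usage mirrors the hunks above.

```java
import java.io.File;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical example class; not part of the Hadoop source tree.
class TestJUnit5MigrationExample {

  // JUnit 4: @Rule public TemporaryFolder tmpFolder = new TemporaryFolder();
  // JUnit 5: the framework injects and cleans up the temporary directory.
  @TempDir
  File tmpFolder;

  // JUnit 5 allows package-private test classes and methods.
  @Test
  void testTempDirIsUsable() {
    // JUnit 4: assertEquals("message", expected, actual)
    // JUnit 5: assertEquals(expected, actual, "message") -- message moves last.
    assertEquals(2, 1 + 1, "unexpected sum");
    assertTrue(tmpFolder.isDirectory(), "injected temp dir should exist");
  }
}
```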