From e1637a57dfd41385dbce5de90620c48a45abb263 Mon Sep 17 00:00:00 2001 From: Masatake Iwasaki Date: Fri, 22 Jul 2022 02:26:23 +0000 Subject: [PATCH] Make upstream aware of 3.2.4 release. --- .../markdown/release/3.2.4/CHANGELOG.3.2.4.md | 213 + .../release/3.2.4/RELEASENOTES.3.2.4.md | 55 + .../jdiff/Apache_Hadoop_HDFS_3.2.4.xml | 674 + .../Apache_Hadoop_MapReduce_Common_3.2.4.xml | 113 + .../Apache_Hadoop_MapReduce_Core_3.2.4.xml | 28073 ++++++++++++++++ ...pache_Hadoop_MapReduce_JobClient_3.2.4.xml | 16 + .../jdiff/Apache_Hadoop_YARN_API_3.2.4.xml | 25332 ++++++++++++++ .../jdiff/Apache_Hadoop_YARN_Client_3.2.4.xml | 3006 ++ .../jdiff/Apache_Hadoop_YARN_Common_3.2.4.xml | 3964 +++ ...Apache_Hadoop_YARN_Server_Common_3.2.4.xml | 1412 + 10 files changed, 62858 insertions(+) create mode 100644 hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/CHANGELOG.3.2.4.md create mode 100644 hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/RELEASENOTES.3.2.4.md create mode 100644 hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.2.4.xml create mode 100644 hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Common_3.2.4.xml create mode 100644 hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Core_3.2.4.xml create mode 100644 hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_JobClient_3.2.4.xml create mode 100644 hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_API_3.2.4.xml create mode 100644 hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Client_3.2.4.xml create mode 100644 hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Common_3.2.4.xml create mode 100644 hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Server_Common_3.2.4.xml diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/CHANGELOG.3.2.4.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/CHANGELOG.3.2.4.md new file mode 100644 index 00000000000..fc0079d1c9b --- /dev/null +++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/CHANGELOG.3.2.4.md @@ -0,0 +1,213 @@ + + +# Apache Hadoop Changelog + +## Release 3.2.4 - 2022-07-12 + + + +### NEW FEATURES: + +| JIRA | Summary | Priority | Component | Reporter | Contributor | +|:---- |:---- | :--- |:---- |:---- |:---- | +| [HDFS-16337](https://issues.apache.org/jira/browse/HDFS-16337) | Show start time of Datanode on Web | Minor | . | Tao Li | Tao Li | + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | Reporter | Contributor | +|:---- |:---- | :--- |:---- |:---- |:---- | +| [HDFS-15075](https://issues.apache.org/jira/browse/HDFS-15075) | Remove process command timing from BPServiceActor | Major | . 
| Íñigo Goiri | Xiaoqiao He | +| [HDFS-15150](https://issues.apache.org/jira/browse/HDFS-15150) | Introduce read write lock to Datanode | Major | datanode | Stephen O'Donnell | Stephen O'Donnell | +| [HDFS-16175](https://issues.apache.org/jira/browse/HDFS-16175) | Improve the configurable value of Server #PURGE\_INTERVAL\_NANOS | Major | ipc | JiangHua Zhu | JiangHua Zhu | +| [HDFS-16173](https://issues.apache.org/jira/browse/HDFS-16173) | Improve CopyCommands#Put#executor queue configurability | Major | fs | JiangHua Zhu | JiangHua Zhu | +| [HADOOP-17897](https://issues.apache.org/jira/browse/HADOOP-17897) | Allow nested blocks in switch case in checkstyle settings | Minor | build | Masatake Iwasaki | Masatake Iwasaki | +| [HADOOP-17857](https://issues.apache.org/jira/browse/HADOOP-17857) | Check real user ACLs in addition to proxied user ACLs | Major | . | Eric Payne | Eric Payne | +| [HDFS-14997](https://issues.apache.org/jira/browse/HDFS-14997) | BPServiceActor processes commands from NameNode asynchronously | Major | datanode | Xiaoqiao He | Xiaoqiao He | +| [HADOOP-17926](https://issues.apache.org/jira/browse/HADOOP-17926) | Maven-eclipse-plugin is no longer needed since Eclipse can import Maven projects by itself. | Minor | documentation | Rintaro Ikeda | Rintaro Ikeda | +| [YARN-10935](https://issues.apache.org/jira/browse/YARN-10935) | AM Total Queue Limit goes below per-user AM Limit if parent is full. | Major | capacity scheduler, capacityscheduler | Eric Payne | Eric Payne | +| [HDFS-16241](https://issues.apache.org/jira/browse/HDFS-16241) | Standby close reconstruction thread | Major | . | zhanghuazong | zhanghuazong | +| [YARN-1115](https://issues.apache.org/jira/browse/YARN-1115) | Provide optional means for a scheduler to check real user ACLs | Major | capacity scheduler, scheduler | Eric Payne | | +| [HDFS-16279](https://issues.apache.org/jira/browse/HDFS-16279) | Print detail datanode info when process first storage report | Minor | . | Tao Li | Tao Li | +| [HDFS-16294](https://issues.apache.org/jira/browse/HDFS-16294) | Remove invalid DataNode#CONFIG\_PROPERTY\_SIMULATED | Major | datanode | JiangHua Zhu | JiangHua Zhu | +| [HDFS-16299](https://issues.apache.org/jira/browse/HDFS-16299) | Fix bug for TestDataNodeVolumeMetrics#verifyDataNodeVolumeMetrics | Minor | . | Tao Li | Tao Li | +| [HDFS-16301](https://issues.apache.org/jira/browse/HDFS-16301) | Improve BenchmarkThroughput#SIZE naming standardization | Minor | benchmarks, test | JiangHua Zhu | JiangHua Zhu | +| [YARN-10997](https://issues.apache.org/jira/browse/YARN-10997) | Revisit allocation and reservation logging | Major | . | Andras Gyori | Andras Gyori | +| [HDFS-16315](https://issues.apache.org/jira/browse/HDFS-16315) | Add metrics related to Transfer and NativeCopy for DataNode | Major | . | Tao Li | Tao Li | +| [HADOOP-17998](https://issues.apache.org/jira/browse/HADOOP-17998) | Allow get command to run with multi threads. 
| Major | fs | Chengwei Wang | Chengwei Wang | +| [HDFS-16345](https://issues.apache.org/jira/browse/HDFS-16345) | Fix test cases fail in TestBlockStoragePolicy | Major | build | guophilipse | guophilipse | +| [HADOOP-18035](https://issues.apache.org/jira/browse/HADOOP-18035) | Skip unit test failures to run all the unit tests | Major | build | Akira Ajisaka | Akira Ajisaka | +| [HADOOP-18040](https://issues.apache.org/jira/browse/HADOOP-18040) | Use maven.test.failure.ignore instead of ignoreTestFailure | Major | build | Akira Ajisaka | Akira Ajisaka | +| [HDFS-16352](https://issues.apache.org/jira/browse/HDFS-16352) | return the real datanode numBlocks in #getDatanodeStorageReport | Major | . | qinyuren | qinyuren | +| [HDFS-16386](https://issues.apache.org/jira/browse/HDFS-16386) | Reduce DataNode load when FsDatasetAsyncDiskService is working | Major | datanode | JiangHua Zhu | JiangHua Zhu | +| [HDFS-16391](https://issues.apache.org/jira/browse/HDFS-16391) | Avoid evaluation of LOG.debug statement in NameNodeHeartbeatService | Trivial | . | wangzhaohui | wangzhaohui | +| [YARN-8234](https://issues.apache.org/jira/browse/YARN-8234) | Improve RM system metrics publisher's performance by pushing events to timeline server in batch | Critical | resourcemanager, timelineserver | Hu Ziqian | Ashutosh Gupta | +| [HDFS-16430](https://issues.apache.org/jira/browse/HDFS-16430) | Validate maximum blocks in EC group when adding an EC policy | Minor | ec, erasure-coding | daimin | daimin | +| [HDFS-16403](https://issues.apache.org/jira/browse/HDFS-16403) | Improve FUSE IO performance by supporting FUSE parameter max\_background | Minor | fuse-dfs | daimin | daimin | +| [HADOOP-18136](https://issues.apache.org/jira/browse/HADOOP-18136) | Verify FileUtils.unTar() handling of missing .tar files | Minor | test, util | Steve Loughran | Steve Loughran | +| [HDFS-16529](https://issues.apache.org/jira/browse/HDFS-16529) | Remove unnecessary setObserverRead in TestConsistentReadsObserver | Trivial | test | wangzhaohui | wangzhaohui | +| [HDFS-16530](https://issues.apache.org/jira/browse/HDFS-16530) | setReplication debug log creates a new string even if debug is disabled | Major | namenode | Stephen O'Donnell | Stephen O'Donnell | +| [HDFS-16427](https://issues.apache.org/jira/browse/HDFS-16427) | Add debug log for BlockManager#chooseExcessRedundancyStriped | Minor | erasure-coding | Tao Li | Tao Li | +| [HDFS-16389](https://issues.apache.org/jira/browse/HDFS-16389) | Improve NNThroughputBenchmark test mkdirs | Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu | +| [MAPREDUCE-7373](https://issues.apache.org/jira/browse/MAPREDUCE-7373) | Building MapReduce NativeTask fails on Fedora 34+ | Major | build, nativetask | Kengo Seki | Kengo Seki | +| [HDFS-16355](https://issues.apache.org/jira/browse/HDFS-16355) | Improve the description of dfs.block.scanner.volume.bytes.per.second | Minor | documentation, hdfs | guophilipse | guophilipse | +| [HADOOP-18088](https://issues.apache.org/jira/browse/HADOOP-18088) | Replace log4j 1.x with reload4j | Major | . | Wei-Chiu Chuang | Wei-Chiu Chuang | +| [HDFS-16501](https://issues.apache.org/jira/browse/HDFS-16501) | Print the exception when reporting a bad block | Major | datanode | qinyuren | qinyuren | +| [YARN-11116](https://issues.apache.org/jira/browse/YARN-11116) | Migrate Times util from SimpleDateFormat to thread-safe DateTimeFormatter class | Minor | . 
| Jonathan Turner Eagles | Jonathan Turner Eagles | +| [YARN-10080](https://issues.apache.org/jira/browse/YARN-10080) | Support show app id on localizer thread pool | Major | nodemanager | zhoukang | Ashutosh Gupta | +| [HADOOP-18240](https://issues.apache.org/jira/browse/HADOOP-18240) | Upgrade Yetus to 0.14.0 | Major | build | Akira Ajisaka | Ashutosh Gupta | +| [HDFS-16585](https://issues.apache.org/jira/browse/HDFS-16585) | Add @VisibleForTesting in Dispatcher.java after HDFS-16268 | Trivial | . | Wei-Chiu Chuang | Ashutosh Gupta | +| [HDFS-16610](https://issues.apache.org/jira/browse/HDFS-16610) | Make fsck read timeout configurable | Major | hdfs-client | Stephen O'Donnell | Stephen O'Donnell | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | Reporter | Contributor | +|:---- |:---- | :--- |:---- |:---- |:---- | +| [HDFS-13983](https://issues.apache.org/jira/browse/HDFS-13983) | TestOfflineImageViewer crashes in windows | Major | . | Vinayakumar B | Vinayakumar B | +| [YARN-9744](https://issues.apache.org/jira/browse/YARN-9744) | RollingLevelDBTimelineStore.getEntityByTime fails with NPE | Major | timelineserver | Prabhu Joseph | Prabhu Joseph | +| [HDFS-15113](https://issues.apache.org/jira/browse/HDFS-15113) | Missing IBR when NameNode restart if open processCommand async feature | Blocker | datanode | Xiaoqiao He | Xiaoqiao He | +| [HADOOP-16985](https://issues.apache.org/jira/browse/HADOOP-16985) | Handle release package related issues | Major | . | Vinayakumar B | Vinayakumar B | +| [HADOOP-17116](https://issues.apache.org/jira/browse/HADOOP-17116) | Skip Retry INFO logging on first failover from a proxy | Major | ha | Hanisha Koneru | Hanisha Koneru | +| [HDFS-15651](https://issues.apache.org/jira/browse/HDFS-15651) | Client could not obtain block when DN CommandProcessingThread exit | Major | . | Yiqun Lin | Mingxiang Li | +| [HDFS-15963](https://issues.apache.org/jira/browse/HDFS-15963) | Unreleased volume references cause an infinite loop | Critical | datanode | Shuyan Zhang | Shuyan Zhang | +| [HDFS-14575](https://issues.apache.org/jira/browse/HDFS-14575) | LeaseRenewer#daemon threads leak in DFSClient | Major | . | Tao Yang | Renukaprasad C | +| [HADOOP-17796](https://issues.apache.org/jira/browse/HADOOP-17796) | Upgrade jetty version to 9.4.43 | Major | . | Wei-Chiu Chuang | Renukaprasad C | +| [HDFS-15175](https://issues.apache.org/jira/browse/HDFS-15175) | Multiple CloseOp shared block instance causes the standby namenode to crash when rolling editlog | Critical | . | Yicong Cai | Wan Chang | +| [HDFS-16177](https://issues.apache.org/jira/browse/HDFS-16177) | Bug fix for Util#receiveFile | Minor | . | Tao Li | Tao Li | +| [YARN-10814](https://issues.apache.org/jira/browse/YARN-10814) | YARN shouldn't start with empty hadoop.http.authentication.signature.secret.file | Major | . | Benjamin Teke | Tamas Domok | +| [HADOOP-17874](https://issues.apache.org/jira/browse/HADOOP-17874) | ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner | Major | . 
| Viraj Jasani | Viraj Jasani | +| [HADOOP-15129](https://issues.apache.org/jira/browse/HADOOP-15129) | Datanode caches namenode DNS lookup failure and cannot startup | Minor | ipc | Karthik Palaniappan | Chris Nauroth | +| [YARN-10901](https://issues.apache.org/jira/browse/YARN-10901) | Permission checking error on an existing directory in LogAggregationFileController#verifyAndCreateRemoteLogDir | Major | nodemanager | Tamas Domok | Tamas Domok | +| [HDFS-16207](https://issues.apache.org/jira/browse/HDFS-16207) | Remove NN logs stack trace for non-existent xattr query | Major | namenode | Ahmed Hussein | Ahmed Hussein | +| [HDFS-16187](https://issues.apache.org/jira/browse/HDFS-16187) | SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing | Major | snapshots | Srinivasu Majeti | Shashikant Banerjee | +| [HDFS-16198](https://issues.apache.org/jira/browse/HDFS-16198) | Short circuit read leaks Slot objects when InvalidToken exception is thrown | Major | . | Eungsop Yoo | Eungsop Yoo | +| [YARN-10870](https://issues.apache.org/jira/browse/YARN-10870) | Missing user filtering check -\> yarn.webapp.filter-entity-list-by-user for RM Scheduler page | Major | yarn | Siddharth Ahuja | Gergely Pollák | +| [HADOOP-17919](https://issues.apache.org/jira/browse/HADOOP-17919) | Fix command line example in Hadoop Cluster Setup documentation | Minor | documentation | Rintaro Ikeda | Rintaro Ikeda | +| [HDFS-16235](https://issues.apache.org/jira/browse/HDFS-16235) | Deadlock in LeaseRenewer for static remove method | Major | hdfs | angerszhu | angerszhu | +| [HDFS-16181](https://issues.apache.org/jira/browse/HDFS-16181) | [SBN Read] Fix metric of RpcRequestCacheMissAmount can't display when tailEditLog form JN | Critical | . | wangzhaohui | wangzhaohui | +| [HADOOP-17925](https://issues.apache.org/jira/browse/HADOOP-17925) | BUILDING.txt should not encourage to activate docs profile on building binary artifacts | Minor | documentation | Rintaro Ikeda | Masatake Iwasaki | +| [HADOOP-16532](https://issues.apache.org/jira/browse/HADOOP-16532) | Fix TestViewFsTrash to use the correct homeDir. | Minor | test, viewfs | Steve Loughran | Xing Lin | +| [HDFS-16268](https://issues.apache.org/jira/browse/HDFS-16268) | Balancer stuck when moving striped blocks due to NPE | Major | balancer & mover, erasure-coding | Leon Gao | Leon Gao | +| [HDFS-7612](https://issues.apache.org/jira/browse/HDFS-7612) | TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir | Major | test | Konstantin Shvachko | Michael Kuchenbecker | +| [HDFS-16311](https://issues.apache.org/jira/browse/HDFS-16311) | Metric metadataOperationRate calculation error in DataNodeVolumeMetrics | Major | . | Tao Li | Tao Li | +| [HDFS-16182](https://issues.apache.org/jira/browse/HDFS-16182) | numOfReplicas is given the wrong value in BlockPlacementPolicyDefault$chooseTarget can cause DataStreamer to fail with Heterogeneous Storage | Major | namanode | Max Xie | Max Xie | +| [HADOOP-17999](https://issues.apache.org/jira/browse/HADOOP-17999) | No-op implementation of setWriteChecksum and setVerifyChecksum in ViewFileSystem | Major | . | Abhishek Das | Abhishek Das | +| [HDFS-16329](https://issues.apache.org/jira/browse/HDFS-16329) | Fix log format for BlockManager | Minor | . | Tao Li | Tao Li | +| [HDFS-16330](https://issues.apache.org/jira/browse/HDFS-16330) | Fix incorrect placeholder for Exception logs in DiskBalancer | Major | . 
| Viraj Jasani | Viraj Jasani | +| [HDFS-16328](https://issues.apache.org/jira/browse/HDFS-16328) | Correct disk balancer param desc | Minor | documentation, hdfs | guophilipse | guophilipse | +| [HDFS-16343](https://issues.apache.org/jira/browse/HDFS-16343) | Add some debug logs when the dfsUsed are not used during Datanode startup | Major | datanode | Mukul Kumar Singh | Mukul Kumar Singh | +| [YARN-10991](https://issues.apache.org/jira/browse/YARN-10991) | Fix to ignore the grouping "[]" for resourcesStr in parseResourcesString method | Minor | distributed-shell | Ashutosh Gupta | Ashutosh Gupta | +| [HADOOP-17975](https://issues.apache.org/jira/browse/HADOOP-17975) | Fallback to simple auth does not work for a secondary DistributedFileSystem instance | Major | ipc | István Fajth | István Fajth | +| [HDFS-16350](https://issues.apache.org/jira/browse/HDFS-16350) | Datanode start time should be set after RPC server starts successfully | Minor | . | Viraj Jasani | Viraj Jasani | +| [YARN-11007](https://issues.apache.org/jira/browse/YARN-11007) | Correct words in YARN documents | Minor | documentation | guophilipse | guophilipse | +| [HDFS-16332](https://issues.apache.org/jira/browse/HDFS-16332) | Expired block token causes slow read due to missing handling in sasl handshake | Major | datanode, dfs, dfsclient | Shinya Yoshida | Shinya Yoshida | +| [YARN-9063](https://issues.apache.org/jira/browse/YARN-9063) | ATS 1.5 fails to start if RollingLevelDb files are corrupt or missing | Major | timelineserver, timelineservice | Tarun Parimi | Ashutosh Gupta | +| [HDFS-16333](https://issues.apache.org/jira/browse/HDFS-16333) | fix balancer bug when transfer an EC block | Major | balancer & mover, erasure-coding | qinyuren | qinyuren | +| [HDFS-16373](https://issues.apache.org/jira/browse/HDFS-16373) | Fix MiniDFSCluster restart in case of multiple namenodes | Major | . | Ayush Saxena | Ayush Saxena | +| [HDFS-16377](https://issues.apache.org/jira/browse/HDFS-16377) | Should CheckNotNull before access FsDatasetSpi | Major | . | Tao Li | Tao Li | +| [YARN-6862](https://issues.apache.org/jira/browse/YARN-6862) | Nodemanager resource usage metrics sometimes are negative | Major | nodemanager | YunFan Zhou | Benjamin Teke | +| [YARN-10178](https://issues.apache.org/jira/browse/YARN-10178) | Global Scheduler async thread crash caused by 'Comparison method violates its general contract | Major | capacity scheduler | tuyu | Andras Gyori | +| [HDFS-16395](https://issues.apache.org/jira/browse/HDFS-16395) | Remove useless NNThroughputBenchmark#dummyActionNoSynch() | Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu | +| [HADOOP-18063](https://issues.apache.org/jira/browse/HADOOP-18063) | Remove unused import AbstractJavaKeyStoreProvider in Shell class | Minor | . | JiangHua Zhu | JiangHua Zhu | +| [HDFS-16409](https://issues.apache.org/jira/browse/HDFS-16409) | Fix typo: testHasExeceptionsReturnsCorrectValue -\> testHasExceptionsReturnsCorrectValue | Trivial | . 
| Ashutosh Gupta | Ashutosh Gupta | +| [HDFS-16408](https://issues.apache.org/jira/browse/HDFS-16408) | Ensure LeaseRecheckIntervalMs is greater than zero | Major | namenode | Jingxuan Fu | Jingxuan Fu | +| [YARN-11055](https://issues.apache.org/jira/browse/YARN-11055) | In cgroups-operations.c some fprintf format strings don't end with "\\n" | Minor | nodemanager | Gera Shegalov | Gera Shegalov | +| [HDFS-16303](https://issues.apache.org/jira/browse/HDFS-16303) | Losing over 100 datanodes in state decommissioning results in full blockage of all datanode decommissioning | Major | . | Kevin Wikant | Kevin Wikant | +| [HDFS-16443](https://issues.apache.org/jira/browse/HDFS-16443) | Fix edge case where DatanodeAdminDefaultMonitor doubly enqueues a DatanodeDescriptor on exception | Major | hdfs | Kevin Wikant | Kevin Wikant | +| [HDFS-16449](https://issues.apache.org/jira/browse/HDFS-16449) | Fix hadoop web site release notes and changelog not available | Minor | documentation | guophilipse | guophilipse | +| [HADOOP-18192](https://issues.apache.org/jira/browse/HADOOP-18192) | Fix multiple\_bindings warning about slf4j-reload4j | Major | . | Masatake Iwasaki | Masatake Iwasaki | +| [HDFS-16479](https://issues.apache.org/jira/browse/HDFS-16479) | EC: NameNode should not send a reconstruction work when the source datanodes are insufficient | Critical | ec, erasure-coding | Yuanbo Liu | Takanobu Asanuma | +| [HDFS-16509](https://issues.apache.org/jira/browse/HDFS-16509) | Fix decommission UnsupportedOperationException: Remove unsupported | Major | namenode | daimin | daimin | +| [HDFS-16456](https://issues.apache.org/jira/browse/HDFS-16456) | EC: Decommission a rack with only on dn will fail when the rack number is equal with replication | Critical | ec, namenode | caozhiqiang | caozhiqiang | +| [HDFS-16437](https://issues.apache.org/jira/browse/HDFS-16437) | ReverseXML processor doesn't accept XML files without the SnapshotDiffSection. | Critical | hdfs | yanbin.zhang | yanbin.zhang | +| [HDFS-16507](https://issues.apache.org/jira/browse/HDFS-16507) | [SBN read] Avoid purging edit log which is in progress | Critical | . | Tao Li | Tao Li | +| [YARN-10720](https://issues.apache.org/jira/browse/YARN-10720) | YARN WebAppProxyServlet should support connection timeout to prevent proxy server from hanging | Critical | . | Qi Zhu | Qi Zhu | +| [HDFS-16428](https://issues.apache.org/jira/browse/HDFS-16428) | Source path with storagePolicy cause wrong typeConsumed while rename | Major | hdfs, namenode | lei w | lei w | +| [YARN-11014](https://issues.apache.org/jira/browse/YARN-11014) | YARN incorrectly validates maximum capacity resources on the validation API | Major | . | Benjamin Teke | Benjamin Teke | +| [YARN-11075](https://issues.apache.org/jira/browse/YARN-11075) | Explicitly declare serialVersionUID in LogMutation class | Major | . 
| Benjamin Teke | Benjamin Teke | +| [HDFS-11041](https://issues.apache.org/jira/browse/HDFS-11041) | Unable to unregister FsDatasetState MBean if DataNode is shutdown twice | Trivial | datanode | Wei-Chiu Chuang | Wei-Chiu Chuang | +| [HDFS-16538](https://issues.apache.org/jira/browse/HDFS-16538) | EC decoding failed due to not enough valid inputs | Major | erasure-coding | qinyuren | qinyuren | +| [HDFS-16544](https://issues.apache.org/jira/browse/HDFS-16544) | EC decoding failed due to invalid buffer | Major | erasure-coding | qinyuren | qinyuren | +| [HDFS-16546](https://issues.apache.org/jira/browse/HDFS-16546) | Fix UT TestOfflineImageViewer#testReverseXmlWithoutSnapshotDiffSection to branch branch-3.2 | Major | test | daimin | daimin | +| [HDFS-16552](https://issues.apache.org/jira/browse/HDFS-16552) | Fix NPE for TestBlockManager | Major | . | Tao Li | Tao Li | +| [MAPREDUCE-7246](https://issues.apache.org/jira/browse/MAPREDUCE-7246) | In MapredAppMasterRest#Mapreduce\_Application\_Master\_Info\_API, the datatype of appId should be "string". | Major | documentation | jenny | Ashutosh Gupta | +| [YARN-10187](https://issues.apache.org/jira/browse/YARN-10187) | Removing hadoop-yarn-project/hadoop-yarn/README as it is no longer maintained. | Minor | documentation | N Sanketh Reddy | Ashutosh Gupta | +| [HDFS-16185](https://issues.apache.org/jira/browse/HDFS-16185) | Fix comment in LowRedundancyBlocks.java | Minor | documentation | Akira Ajisaka | Ashutosh Gupta | +| [HADOOP-17479](https://issues.apache.org/jira/browse/HADOOP-17479) | Fix the examples of hadoop config prefix | Minor | documentation | Akira Ajisaka | Ashutosh Gupta | +| [HDFS-16579](https://issues.apache.org/jira/browse/HDFS-16579) | Fix build failure for TestBlockManager on branch-3.2 | Major | . | Tao Li | Tao Li | +| [YARN-11092](https://issues.apache.org/jira/browse/YARN-11092) | Upgrade jquery ui to 1.13.1 | Major | . | D M Murali Krishna Reddy | Ashutosh Gupta | +| [YARN-11133](https://issues.apache.org/jira/browse/YARN-11133) | YarnClient gets the wrong EffectiveMinCapacity value | Major | api | Zilong Zhu | Zilong Zhu | +| [YARN-10850](https://issues.apache.org/jira/browse/YARN-10850) | TimelineService v2 lists containers for all attempts when filtering for one | Major | timelinereader | Benjamin Teke | Benjamin Teke | +| [YARN-11126](https://issues.apache.org/jira/browse/YARN-11126) | ZKConfigurationStore Java deserialisation vulnerability | Major | yarn | Tamas Domok | Tamas Domok | +| [YARN-11162](https://issues.apache.org/jira/browse/YARN-11162) | Set the zk acl for nodes created by ZKConfigurationStore. | Major | resourcemanager | Owen O'Malley | Owen O'Malley | +| [HDFS-16586](https://issues.apache.org/jira/browse/HDFS-16586) | Purge FsDatasetAsyncDiskService threadgroup; it causes BPServiceActor$CommandProcessingThread IllegalThreadStateException 'fatal exception and exit' | Major | datanode | Michael Stack | Michael Stack | +| [HADOOP-18251](https://issues.apache.org/jira/browse/HADOOP-18251) | Fix failure of extracting JIRA id from commit message in git\_jira\_fix\_version\_check.py | Minor | build | Masatake Iwasaki | Masatake Iwasaki | +| [HDFS-16583](https://issues.apache.org/jira/browse/HDFS-16583) | DatanodeAdminDefaultMonitor can get stuck in an infinite loop | Major | . | Stephen O'Donnell | Stephen O'Donnell | +| [HDFS-16623](https://issues.apache.org/jira/browse/HDFS-16623) | IllegalArgumentException in LifelineSender | Major | . 
| ZanderXu | ZanderXu | +| [HDFS-16064](https://issues.apache.org/jira/browse/HDFS-16064) | Determine when to invalidate corrupt replicas based on number of usable replicas | Major | datanode, namenode | Kevin Wikant | Kevin Wikant | +| [HADOOP-18100](https://issues.apache.org/jira/browse/HADOOP-18100) | Change scope of inner classes in InodeTree to make them accessible outside package | Major | . | Abhishek Das | Abhishek Das | +| [HADOOP-18334](https://issues.apache.org/jira/browse/HADOOP-18334) | Fix create-release to address removal of GPG\_AGENT\_INFO in branch-3.2 | Major | build | Masatake Iwasaki | Masatake Iwasaki | + + +### TESTS: + +| JIRA | Summary | Priority | Component | Reporter | Contributor | +|:---- |:---- | :--- |:---- |:---- |:---- | +| [MAPREDUCE-7342](https://issues.apache.org/jira/browse/MAPREDUCE-7342) | Stop RMService in TestClientRedirect.testRedirect() | Minor | . | Zhengxi Li | Zhengxi Li | +| [MAPREDUCE-7311](https://issues.apache.org/jira/browse/MAPREDUCE-7311) | Fix non-idempotent test in TestTaskProgressReporter | Minor | . | Zhengxi Li | Zhengxi Li | +| [HDFS-15862](https://issues.apache.org/jira/browse/HDFS-15862) | Make TestViewfsWithNfs3.testNfsRenameSingleNN() idempotent | Minor | nfs | Zhengxi Li | Zhengxi Li | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | Reporter | Contributor | +|:---- |:---- | :--- |:---- |:---- |:---- | +| [HDFS-15457](https://issues.apache.org/jira/browse/HDFS-15457) | TestFsDatasetImpl fails intermittently | Major | hdfs | Ahmed Hussein | Ahmed Hussein | +| [HDFS-15818](https://issues.apache.org/jira/browse/HDFS-15818) | Fix TestFsDatasetImpl.testReadLockCanBeDisabledByConfig | Minor | test | Leon Gao | Leon Gao | +| [YARN-10503](https://issues.apache.org/jira/browse/YARN-10503) | Support queue capacity in terms of absolute resources with custom resourceType. | Critical | . | Qi Zhu | Qi Zhu | +| [HADOOP-17126](https://issues.apache.org/jira/browse/HADOOP-17126) | implement non-guava Precondition checkNotNull | Major | . | Ahmed Hussein | Ahmed Hussein | +| [HADOOP-17929](https://issues.apache.org/jira/browse/HADOOP-17929) | implement non-guava Precondition checkArgument | Major | . | Ahmed Hussein | Ahmed Hussein | +| [HADOOP-17947](https://issues.apache.org/jira/browse/HADOOP-17947) | Provide alternative to Guava VisibleForTesting | Major | . | Viraj Jasani | Viraj Jasani | +| [HADOOP-17930](https://issues.apache.org/jira/browse/HADOOP-17930) | implement non-guava Precondition checkState | Major | . | Ahmed Hussein | Ahmed Hussein | +| [HADOOP-17374](https://issues.apache.org/jira/browse/HADOOP-17374) | AliyunOSS: support ListObjectsV2 | Major | fs/oss | wujinhu | wujinhu | +| [HDFS-16336](https://issues.apache.org/jira/browse/HDFS-16336) | De-flake TestRollingUpgrade#testRollback | Minor | hdfs, test | Kevin Wikant | Viraj Jasani | +| [HDFS-16171](https://issues.apache.org/jira/browse/HDFS-16171) | De-flake testDecommissionStatus | Major | . | Viraj Jasani | Viraj Jasani | +| [HDFS-16169](https://issues.apache.org/jira/browse/HDFS-16169) | Fix TestBlockTokenWithDFSStriped#testEnd2End failure | Major | test | Hui Fei | secfree | +| [HDFS-16484](https://issues.apache.org/jira/browse/HDFS-16484) | [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread | Major | . | qinyuren | qinyuren | +| [HADOOP-16663](https://issues.apache.org/jira/browse/HADOOP-16663) | Backport "HADOOP-16560 [YARN] use protobuf-maven-plugin to generate protobuf classes" to all active branches | Major | . 
| Duo Zhang | Duo Zhang | +| [HADOOP-16664](https://issues.apache.org/jira/browse/HADOOP-16664) | Backport "HADOOP-16561 [MAPREDUCE] use protobuf-maven-plugin to generate protobuf classes" to all active branches | Major | . | Duo Zhang | Duo Zhang | + + +### OTHER: + +| JIRA | Summary | Priority | Component | Reporter | Contributor | +|:---- |:---- | :--- |:---- |:---- |:---- | +| [HDFS-16298](https://issues.apache.org/jira/browse/HDFS-16298) | Improve error msg for BlockMissingException | Minor | . | Tao Li | Tao Li | +| [HDFS-16312](https://issues.apache.org/jira/browse/HDFS-16312) | Fix typo for DataNodeVolumeMetrics and ProfilingFileIoEvents | Minor | . | Tao Li | Tao Li | +| [HDFS-16326](https://issues.apache.org/jira/browse/HDFS-16326) | Simplify the code for DiskBalancer | Minor | . | Tao Li | Tao Li | +| [HDFS-16339](https://issues.apache.org/jira/browse/HDFS-16339) | Show the threshold when mover threads quota is exceeded | Minor | . | Tao Li | Tao Li | +| [YARN-10820](https://issues.apache.org/jira/browse/YARN-10820) | Make GetClusterNodesRequestPBImpl thread safe | Major | client | Prabhu Joseph | SwathiChandrashekar | +| [HADOOP-13464](https://issues.apache.org/jira/browse/HADOOP-13464) | update GSON to 2.7+ | Minor | build | Sean Busbey | Igor Dvorzhak | +| [HADOOP-18191](https://issues.apache.org/jira/browse/HADOOP-18191) | Log retry count while handling exceptions in RetryInvocationHandler | Minor | . | Viraj Jasani | Viraj Jasani | +| [HDFS-16551](https://issues.apache.org/jira/browse/HDFS-16551) | Backport HADOOP-17588 to 3.3 and other active old branches. | Major | . | Renukaprasad C | Renukaprasad C | +| [HDFS-16618](https://issues.apache.org/jira/browse/HDFS-16618) | sync\_file\_range error should include more volume and file info | Minor | . | Viraj Jasani | Viraj Jasani | +| [HADOOP-18300](https://issues.apache.org/jira/browse/HADOOP-18300) | Update Gson to 2.9.0 | Minor | build | Igor Dvorzhak | Igor Dvorzhak | + + diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/RELEASENOTES.3.2.4.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/RELEASENOTES.3.2.4.md new file mode 100644 index 00000000000..fac976d655d --- /dev/null +++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/RELEASENOTES.3.2.4.md @@ -0,0 +1,55 @@ + + +# Apache Hadoop 3.2.4 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [YARN-10820](https://issues.apache.org/jira/browse/YARN-10820) | *Major* | **Make GetClusterNodesRequestPBImpl thread safe** + +Added syncronization so that the "yarn node list" command does not fail intermittently + + +--- + +* [YARN-8234](https://issues.apache.org/jira/browse/YARN-8234) | *Critical* | **Improve RM system metrics publisher's performance by pushing events to timeline server in batch** + +When Timeline Service V1 or V1.5 is used, if "yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.enable-batch" is set to true, ResourceManager sends timeline events in batch. The default value is false. If this functionality is enabled, the maximum number that events published in batch is configured by "yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.batch-size". The default value is 1000. The interval of publishing events can be configured by "yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.interval-seconds". By default, it is set to 60 seconds. 
+ + +--- + +* [HADOOP-18088](https://issues.apache.org/jira/browse/HADOOP-18088) | *Major* | **Replace log4j 1.x with reload4j** + +log4j 1 was replaced with reload4j which is fork of log4j 1.2.17 with the goal of fixing pressing security issues. + +If you are depending on the hadoop artifacts in your build were explicitly excluding log4 artifacts, and now want to exclude the reload4j files, you will need to update your exclusion lists +\ + \org.slf4j\ + \slf4j-reload4j\ +\ +\ + \ch.qos.reload4j\ + \reload4j\ +\ + + + diff --git a/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.2.4.xml b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.2.4.xml new file mode 100644 index 00000000000..2aa6ef1cdb5 --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.2.4.xml @@ -0,0 +1,674 @@ + + + + + + + + + + + A distributed implementation of {@link +org.apache.hadoop.fs.FileSystem}. This is loosely modelled after +Google's GFS.

+ +

The most important difference is that unlike GFS, Hadoop DFS files +have strictly one writer at any one time. Bytes are always appended +to the end of the writer's stream. There is no notion of "record appends" +or "mutations" that are then checked or reordered. Writers simply emit +a byte stream. That byte stream is guaranteed to be stored in the +order written.

]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + This method must return as quickly as possible, since it's called + in a critical section of the NameNode's operation. + + @param succeeded Whether authorization succeeded. + @param userName Name of the user executing the request. + @param addr Remote address of the request. + @param cmd The requested command. + @param src Path of affected source file. + @param dst Path of affected destination file (if any). + @param stat File information for operations that change the file's + metadata (permissions, owner, times, etc).]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
diff --git a/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Common_3.2.4.xml b/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Common_3.2.4.xml new file mode 100644 index 00000000000..ff74bb7afb2 --- /dev/null +++ b/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Common_3.2.4.xml @@ -0,0 +1,113 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Core_3.2.4.xml b/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Core_3.2.4.xml new file mode 100644 index 00000000000..239b01b4612 --- /dev/null +++ b/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Core_3.2.4.xml @@ -0,0 +1,28073 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + FileStatus of a given cache file on hdfs + @throws IOException]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + DistributedCache is a facility provided by the Map-Reduce + framework to cache files (text, archives, jars etc.) needed by applications. +

+ +

Applications specify the files, via urls (hdfs:// or http://) to be cached
via the {@link org.apache.hadoop.mapred.JobConf}. The
DistributedCache assumes that the files specified via urls are
already present on the {@link FileSystem} at the path specified by the url
and are accessible by every machine in the cluster.

+ +

The framework will copy the necessary files on to the worker node before
any tasks for the job are executed on that node. Its efficiency stems from
the fact that the files are only copied once per job and the ability to
cache archives which are un-archived on the workers.

+ +

DistributedCache can be used to distribute simple, read-only
data/text files and/or more complex types such as archives and jars.
Archives (zip, tar and tgz/tar.gz files) are un-archived at the worker nodes.
Jars may be optionally added to the classpath of the tasks, a rudimentary
software distribution mechanism. Files have execution permissions.
In older versions of Hadoop Map/Reduce users could optionally ask for symlinks
to be created in the working directory of the child task. In the current
version symlinks are always created. If the URL does not have a fragment,
the name of the file or directory will be used. If multiple files or
directories map to the same link name, the last one added will be used; all
others will not even be downloaded.

+ +

DistributedCache tracks modification timestamps of the cache
files. Clearly the cache files should not be modified by the application
or externally while the job is executing.

+ +

Here is an illustrative example of how to use the DistributedCache:

     // Setting up the cache for the application

     1. Copy the requisite files to the FileSystem:

     $ bin/hadoop fs -copyFromLocal lookup.dat /myapp/lookup.dat
     $ bin/hadoop fs -copyFromLocal map.zip /myapp/map.zip
     $ bin/hadoop fs -copyFromLocal mylib.jar /myapp/mylib.jar
     $ bin/hadoop fs -copyFromLocal mytar.tar /myapp/mytar.tar
     $ bin/hadoop fs -copyFromLocal mytgz.tgz /myapp/mytgz.tgz
     $ bin/hadoop fs -copyFromLocal mytargz.tar.gz /myapp/mytargz.tar.gz

     2. Setup the application's JobConf:

     JobConf job = new JobConf();
     DistributedCache.addCacheFile(new URI("/myapp/lookup.dat#lookup.dat"),
                                   job);
     DistributedCache.addCacheArchive(new URI("/myapp/map.zip"), job);
     DistributedCache.addFileToClassPath(new Path("/myapp/mylib.jar"), job);
     DistributedCache.addCacheArchive(new URI("/myapp/mytar.tar"), job);
     DistributedCache.addCacheArchive(new URI("/myapp/mytgz.tgz"), job);
     DistributedCache.addCacheArchive(new URI("/myapp/mytargz.tar.gz"), job);

     3. Use the cached files in the {@link org.apache.hadoop.mapred.Mapper}
     or {@link org.apache.hadoop.mapred.Reducer}:

     public static class MapClass extends MapReduceBase
     implements Mapper<K, V, K, V> {

       private Path[] localArchives;
       private Path[] localFiles;

       public void configure(JobConf job) {
         // Get the cached archives/files
         File f = new File("./map.zip/some/file/in/zip.txt");
       }

       public void map(K key, V value,
                       OutputCollector<K, V> output, Reporter reporter)
       throws IOException {
         // Use data from the cached archives/files here
         // ...
         // ...
         output.collect(k, v);
       }
     }

 It is also very common to use the DistributedCache by using
 {@link org.apache.hadoop.util.GenericOptionsParser}.

 This class includes methods that should be used by users
 (specifically those mentioned in the example above, as well
 as {@link DistributedCache#addArchiveToClassPath(Path, Configuration)}),
 as well as methods intended for use by the MapReduce framework
 (e.g., {@link org.apache.hadoop.mapred.JobClient}).

 @see org.apache.hadoop.mapred.JobConf
 @see org.apache.hadoop.mapred.JobClient
 @see org.apache.hadoop.mapreduce.Job]]>
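 As a hedged illustration of the GenericOptionsParser route mentioned above
 (the jar name, job class and paths are made-up values, not part of the
 framework), the same cache entries can be requested from the command line
 instead of through the JobConf:

     $ bin/hadoop jar myapp.jar MyJob \
         -files /myapp/lookup.dat#lookup.dat \
         -archives /myapp/map.zip \
         -libjars /myapp/mylib.jar \
         in out

 The -files, -archives and -libjars options are parsed by
 GenericOptionsParser (typically via ToolRunner) and populate the
 distributed cache before the job is submitted.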
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + JobTracker, + as {@link JobTracker.State} + + {@link JobTracker.State} should no longer be used on M/R 2.x. The function + is kept to be compatible with M/R 1.x applications. + + @return the invalid state of the JobTracker.]]> + + + + + + + + + + + + + + ClusterStatus provides clients with information such as: +
    +
  1. Size of the cluster.
  2. Name of the trackers.
  3. Task capacity of the cluster.
  4. The number of currently running map and reduce tasks.
  5. State of the JobTracker.
  6. Details regarding black listed trackers.

 Clients can query for the latest ClusterStatus, via
 {@link JobClient#getClusterStatus()}.

 @see JobClient]]>
+
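 A minimal sketch of the query described above (it assumes a reachable
 cluster is configured in the client's default configuration; the class
 name is illustrative):

     import java.io.IOException;
     import org.apache.hadoop.mapred.ClusterStatus;
     import org.apache.hadoop.mapred.JobClient;
     import org.apache.hadoop.mapred.JobConf;

     public class ClusterStatusProbe {
       public static void main(String[] args) throws IOException {
         JobClient client = new JobClient(new JobConf());
         ClusterStatus status = client.getClusterStatus();
         // Print a few of the fields enumerated above.
         System.out.println("Trackers: " + status.getTaskTrackers());
         System.out.println("Running maps: " + status.getMapTasks());
         System.out.println("Running reduces: " + status.getReduceTasks());
         System.out.println("Max map tasks: " + status.getMaxMapTasks());
       }
     }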
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Counters represent global counters, defined either by the + Map-Reduce framework or applications. Each Counter can be of + any {@link Enum} type.

+ +

 Counters are bunched into {@link Group}s, each comprising
 counters from a particular Enum class.]]>

 Group of counters, comprising counters from a particular
 counter {@link Enum} class.

 Group handles localization of the class name and the
 counter names.
]]>
+
+ + + + + + + + + + + + + + + + + + + + + + + + + FileInputFormat always returns + true. Implementations that may deal with non-splittable files must + override this method. + + FileInputFormat implementations can override this and return + false to ensure that individual input files are never split-up + so that {@link Mapper}s process entire files. + + @param fs the file system that the file is on + @param filename the file name to check + @return is this file splitable?]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + FileInputFormat is the base class for all file-based + InputFormats. This provides a generic implementation of + {@link #getSplits(JobConf, int)}. + + Implementations of FileInputFormat can also override the + {@link #isSplitable(FileSystem, Path)} method to prevent input files + from being split-up in certain situations. Implementations that may + deal with non-splittable files must override this method, since + the default implementation assumes splitting is always possible.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if the job output should be compressed, + false otherwise]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Tasks' Side-Effect Files + +

 Note: The following is valid only if the {@link OutputCommitter}
 is {@link FileOutputCommitter}. If OutputCommitter is not
 a FileOutputCommitter, the task's temporary output
 directory is the same as {@link #getOutputPath(JobConf)} i.e.
 ${mapreduce.output.fileoutputformat.outputdir}$

 Some applications need to create/write-to side-files, which differ from
 the actual job-outputs.

 In such cases there could be issues with two instances of the same TIP
 (running simultaneously, e.g. speculative tasks) trying to open/write-to the
 same file (path) on HDFS. Hence the application-writer will have to pick
 unique names per task-attempt (e.g. using the attemptid, say
 attempt_200709221812_0001_m_000000_0), not just per TIP.

 To get around this the Map-Reduce framework helps the application-writer
 out by maintaining a special
 ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}
 sub-directory for each task-attempt on HDFS where the output of the
 task-attempt goes. On successful completion of the task-attempt the files
 in the ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid} (only)
 are promoted to ${mapreduce.output.fileoutputformat.outputdir}. Of course, the
 framework discards the sub-directory of unsuccessful task-attempts. This
 is completely transparent to the application.

 The application-writer can take advantage of this by creating any
 side-files required in ${mapreduce.task.output.dir} during execution
 of a reduce task, i.e. via {@link #getWorkOutputPath(JobConf)}, and the
 framework will move them out similarly - thus the writer doesn't have to pick
 unique paths per task-attempt.

 Note: the value of ${mapreduce.task.output.dir} during
 execution of a particular task-attempt is actually
 ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_{$taskid}, and this value is
 set by the map-reduce framework. So, just create any side-files in the
 path returned by {@link #getWorkOutputPath(JobConf)} from a map/reduce
 task to take advantage of this feature.

 The entire discussion holds true for maps of jobs with
 reducer=NONE (i.e. 0 reduces) since output of the map, in that case,
 goes directly to HDFS.

 @return the {@link Path} to the task's temporary output directory
 for the map-reduce job.]]>
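 A minimal sketch of the side-file pattern described above, using the classic
 org.apache.hadoop.mapred API (the holder class and the file name
 "side-data.txt" are illustrative, not part of the framework):

     import java.io.IOException;
     import org.apache.hadoop.fs.FSDataOutputStream;
     import org.apache.hadoop.fs.FileSystem;
     import org.apache.hadoop.fs.Path;
     import org.apache.hadoop.mapred.FileOutputFormat;
     import org.apache.hadoop.mapred.JobConf;

     public class SideFileWriter {
       // Call from within a task (e.g. configure()/close()) with the task's JobConf.
       public static void writeSideFile(JobConf conf) throws IOException {
         // ${mapreduce.task.output.dir} for this task-attempt.
         Path workDir = FileOutputFormat.getWorkOutputPath(conf);
         FileSystem fs = workDir.getFileSystem(conf);
         // Files created here are promoted to the job output directory on commit.
         try (FSDataOutputStream out = fs.create(new Path(workDir, "side-data.txt"))) {
           out.writeUTF("per-attempt side output");
         }
       }
     }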
+
 The generated name can be used to create custom files from within the
 different tasks for the job; the names for different tasks will not collide
 with each other.

 The given name is postfixed with the task type, 'm' for maps, 'r' for
 reduces and the task partition number. For example, given the name 'test'
 running on the first map of the job, the generated name will be
 'test-m-00000'.

 @param conf the configuration for the job.
 @param name the name to make unique.
 @return a unique name across all tasks of the job.]]>

 The path can be used to create custom files from within the map and
 reduce tasks. The path name will be unique for each task. The path parent
 will be the job output directory.

 This method uses the {@link #getUniqueName} method to make the file name
 unique for the task.

 @param conf the configuration for the job.
 @param name the name for the file.
 @return a unique path across all tasks of the job.]]>
+
+ + + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
 The record length can be set via
 conf.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, recordLength);

 @see FixedLengthRecordReader]]>
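 A hedged sketch of wiring this input format into a job (the 100-byte record
 length and the MyFixedJob class are illustrative values only):

     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.mapred.FixedLengthInputFormat;
     import org.apache.hadoop.mapred.JobConf;

     public class MyFixedJob {
       public static JobConf configure() {
         JobConf conf = new JobConf(new Configuration(), MyFixedJob.class);
         conf.setInputFormat(FixedLengthInputFormat.class);
         // Every record in the input files is exactly 100 bytes long.
         conf.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, 100);
         return conf;
       }
     }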
+
 Each {@link InputSplit} is then assigned to an individual {@link Mapper}
 for processing.

 Note: The split is a logical split of the inputs and the
 input files are not physically split into chunks. For example, a split could
 be a <input-file-path, start, offset> tuple.

 @param job job configuration.
 @param numSplits the desired number of splits, a hint.
 @return an array of {@link InputSplit}s for the job.]]>

 It is the responsibility of the RecordReader to respect
 record boundaries while processing the logical split to present a
 record-oriented view to the individual task.

 @param split the {@link InputSplit}
 @param job the job that this split belongs to
 @return a {@link RecordReader}]]>
+
 InputFormat describes the input-specification for a
 Map-Reduce job.

 The Map-Reduce framework relies on the InputFormat of the job to:

  1. Validate the input-specification of the job.
  2. Split up the input file(s) into logical {@link InputSplit}s, each of
     which is then assigned to an individual {@link Mapper}.
  3. Provide the {@link RecordReader} implementation to be used to glean
     input records from the logical InputSplit for processing by
     the {@link Mapper}.

 The default behavior of file-based {@link InputFormat}s, typically
 sub-classes of {@link FileInputFormat}, is to split the
 input into logical {@link InputSplit}s based on the total size, in
 bytes, of the input files. However, the {@link FileSystem} blocksize of
 the input files is treated as an upper bound for input splits. A lower bound
 on the split size can be set via
 mapreduce.input.fileinputformat.split.minsize.

+ +

Clearly, logical splits based on input-size is insufficient for many + applications since record boundaries are to be respected. In such cases, the + application has to also implement a {@link RecordReader} on whom lies the + responsibilty to respect record-boundaries and present a record-oriented + view of the logical InputSplit to the individual task. + + @see InputSplit + @see RecordReader + @see JobClient + @see FileInputFormat]]> + + + + + + + + + + InputSplit. + + @return the number of bytes in the input split. + @throws IOException]]> + + + + + + InputSplit is + located as an array of Strings. + @throws IOException]]> + + + + InputSplit represents the data to be processed by an + individual {@link Mapper}. + +

Typically, it presents a byte-oriented view on the input and is the + responsibility of {@link RecordReader} of the job to process this and present + a record-oriented view. + + @see InputFormat + @see RecordReader]]> + + + + + + + + + + SplitLocationInfos describing how the split + data is stored at each location. A null value indicates that all the + locations have the data stored on disk. + @throws IOException]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + JobClient.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + jobid doesn't correspond to any known job. + @throws IOException]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + JobClient is the primary interface for the user-job to interact + with the cluster. + + JobClient provides facilities to submit jobs, track their + progress, access component-tasks' reports/logs, get the Map-Reduce cluster + status information etc. + +

 The job submission process involves:

  1. Checking the input and output specifications of the job.
  2. Computing the {@link InputSplit}s for the job.
  3. Setting up the requisite accounting information for the
     {@link DistributedCache} of the job, if necessary.
  4. Copying the job's jar and configuration to the map-reduce system
     directory on the distributed file-system.
  5. Submitting the job to the cluster and optionally monitoring
     its status.
+ + Normally the user creates the application, describes various facets of the + job via {@link JobConf} and then uses the JobClient to submit + the job and monitor its progress. + +

 Here is an example of how to use JobClient:

     // Create a new JobConf
     JobConf job = new JobConf(new Configuration(), MyJob.class);

     // Specify various job-specific parameters
     job.setJobName("myjob");

     job.setInputPath(new Path("in"));
     job.setOutputPath(new Path("out"));

     job.setMapperClass(MyJob.MyMapper.class);
     job.setReducerClass(MyJob.MyReducer.class);

     // Submit the job, then poll for progress until the job is complete
     JobClient.runJob(job);
+ + Job Control + +

At times clients would chain map-reduce jobs to accomplish complex tasks + which cannot be done via a single map-reduce job. This is fairly easy since + the output of the job, typically, goes to distributed file-system and that + can be used as the input for the next job.

+ +

However, this also means that the onus on ensuring jobs are complete + (success/failure) lies squarely on the clients. In such situations the + various job-control options are: +

    +
  1. + {@link #runJob(JobConf)} : submits the job and returns only after + the job has completed. +
  2. +
  3. + {@link #submitJob(JobConf)} : only submits the job, then poll the + returned handle to the {@link RunningJob} to query status and make + scheduling decisions. +
  4. +
  5. + {@link JobConf#setJobEndNotificationURI(String)} : setup a notification + on job-completion, thus avoiding polling. +
  6. +
+ + @see JobConf + @see ClusterStatus + @see Tool + @see DistributedCache]]> +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + If the parameter {@code loadDefaults} is false, the new instance + will not load resources from the default files. + + @param loadDefaults specifies whether to load from the default files]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if framework should keep the intermediate files + for failed tasks, false otherwise.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if the outputs of the maps are to be compressed, + false otherwise.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + This comparator should be provided if the equivalence rules for keys + for sorting the intermediates are different from those for grouping keys + before each call to + {@link Reducer#reduce(Object, java.util.Iterator, OutputCollector, Reporter)}.

+ +

 For key-value pairs (K1,V1) and (K2,V2), the values (V1, V2) are passed
 in a single call to the reduce function if K1 and K2 compare as equal.

 Since {@link #setOutputKeyComparatorClass(Class)} can be used to control
 how keys are sorted, this can be used in conjunction to simulate
 secondary sort on values.

 Note: This is not a guarantee of the combiner sort being
 stable in any sense. (In any case, with the order of available
 map-outputs to the combiner being non-deterministic, it wouldn't make
 that much sense.)

 @param theClass the comparator class to be used for grouping keys for the
        combiner. It should implement RawComparator.
 @see #setOutputKeyComparatorClass(Class)]]>
+
 This comparator should be provided if the equivalence rules for keys
 for sorting the intermediates are different from those for grouping keys
 before each call to
 {@link Reducer#reduce(Object, java.util.Iterator, OutputCollector, Reporter)}.

 For key-value pairs (K1,V1) and (K2,V2), the values (V1, V2) are passed
 in a single call to the reduce function if K1 and K2 compare as equal.

 Since {@link #setOutputKeyComparatorClass(Class)} can be used to control
 how keys are sorted, this can be used in conjunction to simulate
 secondary sort on values.

 Note: This is not a guarantee of the reduce sort being
 stable in any sense. (In any case, with the order of available
 map-outputs to the reduce being non-deterministic, it wouldn't make
 that much sense.)

 @param theClass the comparator class to be used for grouping keys.
        It should implement RawComparator.
 @see #setOutputKeyComparatorClass(Class)
 @see #setCombinerKeyGroupingComparator(Class)]]>
+
 Get the user-defined combiner class used to combine map-outputs
 before being sent to the reducers. Typically the combiner is the same as
 the {@link Reducer} for the job i.e. {@link #getReducerClass()}.

 @return the user-defined combiner class used to combine map-outputs.]]>

 Set the user-defined combiner class used to combine map-outputs
 before being sent to the reducers.

The combiner is an application-specified aggregation operation, which + can help cut down the amount of data transferred between the + {@link Mapper} and the {@link Reducer}, leading to better performance.

+ +

The framework may invoke the combiner 0, 1, or multiple times, in both + the mapper and reducer tasks. In general, the combiner is called as the + sort/merge result is written to disk. The combiner must: +

    +
  • be side-effect free
  • +
  • have the same input and output key types and the same input and + output value types
  • +
+ +

Typically the combiner is same as the Reducer for the + job i.e. {@link #setReducerClass(Class)}.

+ + @param theClass the user-defined combiner class used to combine + map-outputs.]]> +
+
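+ A minimal sketch of the setter described above, assuming a word-count style job whose
+ Reducer is associative and side-effect free (WordCount, WordCountMapper and
+ WordCountReducer are hypothetical classes):
+
+   JobConf conf = new JobConf(WordCount.class);
+   conf.setMapperClass(WordCountMapper.class);
+   // The same class serves as combiner and reducer; it must consume and emit the same
+   // key/value types because the framework may run it zero, one or many times.
+   conf.setCombinerClass(WordCountReducer.class);
+   conf.setReducerClass(WordCountReducer.class);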
+ + + true. + + @return true if speculative execution be used for this job, + false otherwise.]]> + + + + + + true if speculative execution + should be turned on, else false.]]> + + + + + true. + + @return true if speculative execution be + used for this job for map tasks, + false otherwise.]]> + + + + + + true if speculative execution + should be turned on for map tasks, + else false.]]> + + + + + true. + + @return true if speculative execution be used + for reduce tasks for this job, + false otherwise.]]> + + + + + + true if speculative execution + should be turned on for reduce tasks, + else false.]]> + + + + + 1. + + @return the number of map tasks for this job.]]> + + + + + + Note: This is only a hint to the framework. The actual + number of spawned map tasks depends on the number of {@link InputSplit}s + generated by the job's {@link InputFormat#getSplits(JobConf, int)}. + + A custom {@link InputFormat} is typically used to accurately control + the number of map tasks for the job.

+ + How many maps? + +

The number of maps is usually driven by the total size of the inputs + i.e. total number of blocks of the input files.

+ +

The right level of parallelism for maps seems to be around 10-100 maps + per-node, although it has been set up to 300 or so for very cpu-light map + tasks. Task setup takes awhile, so it is best if the maps take at least a + minute to execute.

+ +

The default behavior of file-based {@link InputFormat}s is to split the + input into logical {@link InputSplit}s based on the total size, in + bytes, of input files. However, the {@link FileSystem} blocksize of the + input files is treated as an upper bound for input splits. A lower bound + on the split size can be set via + + mapreduce.input.fileinputformat.split.minsize.

+ +

Thus, if you expect 10TB of input data and have a blocksize of 128MB, + you'll end up with 82,000 maps, unless {@link #setNumMapTasks(int)} is + used to set it even higher.

+ + @param n the number of map tasks for this job. + @see InputFormat#getSplits(JobConf, int) + @see FileInputFormat + @see FileSystem#getDefaultBlockSize() + @see FileStatus#getBlockSize()]]> +
+
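+ A hedged sketch of the hint described above; MyJob, the input path and the sizes are
+ illustrative only:
+
+   JobConf conf = new JobConf(MyJob.class);
+   FileInputFormat.setInputPaths(conf, new Path("/data/in"));
+   // Only a hint: the InputFormat's getSplits() decides the actual number of maps.
+   conf.setNumMapTasks(500);
+   // Raising the minimum split size is another way to reduce the number of maps.
+   conf.setLong("mapreduce.input.fileinputformat.split.minsize", 256L * 1024 * 1024);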
+ + + 1. + + @return the number of reduce tasks for this job.]]> + + + + + + How many reduces? + +

The right number of reduces seems to be 0.95 or + 1.75 multiplied by ( + available memory for reduce tasks + (The value of this should be smaller than + numNodes * yarn.nodemanager.resource.memory-mb + since the resource of memory is shared by map tasks and other + applications) / + + mapreduce.reduce.memory.mb). +

+ +

With 0.95 all of the reduces can launch immediately and + start transfering map outputs as the maps finish. With 1.75 + the faster nodes will finish their first round of reduces and launch a + second wave of reduces doing a much better job of load balancing.

+ +

Increasing the number of reduces increases the framework overhead, but + increases load balancing and lowers the cost of failures.

+ +

The scaling factors above are slightly less than whole numbers to + reserve a few reduce slots in the framework for speculative-tasks, failures + etc.

+ + Reducer NONE + +

It is legal to set the number of reduce-tasks to zero.

+ +

In this case the output of the map-tasks directly go to distributed + file-system, to the path set by + {@link FileOutputFormat#setOutputPath(JobConf, Path)}. Also, the + framework doesn't sort the map-outputs before writing it out to HDFS.

+ + @param n the number of reduce tasks for this job.]]> +
+
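+ A hedged sketch of the 0.95 heuristic above; the node count and memory figures are
+ assumptions for illustration and MyJob is a placeholder driver class:
+
+   // e.g. 10 nodes, 40960 MB of memory usable by reduces per node, 4096 MB per reduce
+   int reduceSlots = 10 * (40960 / 4096);
+   JobConf conf = new JobConf(MyJob.class);
+   conf.setNumReduceTasks((int) (reduceSlots * 0.95));  // all reduces start in one wave
+   // conf.setNumReduceTasks(0);  // map-only job: map output is written directly to HDFS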
+ + + mapreduce.map.maxattempts + property. If this property is not already set, the default is 4 attempts. + + @return the max number of attempts per map task.]]> + + + + + + + + + + + mapreduce.reduce.maxattempts + property. If this property is not already set, the default is 4 attempts. + + @return the max number of attempts per reduce task.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + noFailures, the + tasktracker is blacklisted for this job. + + @param noFailures maximum no. of failures of a given job per tasktracker.]]> + + + + + blacklisted for this job. + + @return the maximum no. of failures of a given job per tasktracker.]]> + + + + + failed. + + Defaults to zero, i.e. any failed map-task results in + the job being declared as {@link JobStatus#FAILED}. + + @return the maximum percentage of map tasks that can fail without + the job being aborted.]]> + + + + + + failed. + + @param percent the maximum percentage of map tasks that can fail without + the job being aborted.]]> + + + + + failed. + + Defaults to zero, i.e. any failed reduce-task results + in the job being declared as {@link JobStatus#FAILED}. + + @return the maximum percentage of reduce tasks that can fail without + the job being aborted.]]> + + + + + + failed. + + @param percent the maximum percentage of reduce tasks that can fail without + the job being aborted.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The debug script can aid debugging of failed map tasks. The script is + given task's stdout, stderr, syslog, jobconf files as arguments.

+ +

The debug command, run on the node where the map failed, is:

+

+ $script $stdout $stderr $syslog $jobconf.
+ 
+ +

The script file is distributed through {@link DistributedCache} + APIs. The script needs to be symlinked.

+ +

Here is an example on how to submit a script +

+ job.setMapDebugScript("./myscript");
+ DistributedCache.createSymlink(job);
+ DistributedCache.addCacheFile("/debug/scripts/myscript#myscript");
+ 
+ + @param mDbgScript the script name]]> +
+
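+ The example above is abbreviated; a runnable variant, under the assumption that the
+ script already sits at the given HDFS path, would pass the cache file as a URI together
+ with the configuration:
+
+   JobConf conf = new JobConf(MyJob.class);
+   DistributedCache.createSymlink(conf);
+   DistributedCache.addCacheFile(new URI("/debug/scripts/myscript#myscript"), conf);
+   conf.setMapDebugScript("./myscript");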
+ + + + + + + + + The debug script can aid debugging of failed reduce tasks. The script + is given task's stdout, stderr, syslog, jobconf files as arguments.

+ +

The debug command, run on the node where the map failed, is:

+

+ $script $stdout $stderr $syslog $jobconf.
+ 
+ +

The script file is distributed through {@link DistributedCache} + APIs. The script file needs to be symlinked

+ +

Here is an example on how to submit a script +

+ job.setReduceDebugScript("./myscript");
+ DistributedCache.createSymlink(job);
+ DistributedCache.addCacheFile("/debug/scripts/myscript#myscript");
+ 
+ + @param rDbgScript the script name]]> +
+
+ + + + + + + + null if it hasn't + been set. + @see #setJobEndNotificationURI(String)]]> + + + + + + The uri can contain 2 special parameters: $jobId and + $jobStatus. Those, if present, are replaced by the job's + identifier and completion-status respectively.

+ +

This is typically used by application-writers to implement chaining of + Map-Reduce jobs in an asynchronous manner.

+ + @param uri the job end notification uri + @see JobStatus]]> +
+
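+ A minimal sketch of the substitution described above; the callback URL and MyJob are
+ hypothetical:
+
+   JobConf conf = new JobConf(MyJob.class);
+   // $jobId and $jobStatus are replaced by the framework before the notification is sent.
+   conf.setJobEndNotificationURI(
+       "http://workflow.example.com/notify?jobid=$jobId&status=$jobStatus");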
+ + + + + + + + + + + + + + + When a job starts, a shared directory is created at location + + ${mapreduce.cluster.local.dir}/taskTracker/$user/jobcache/$jobid/work/ . + This directory is exposed to the users through + mapreduce.job.local.dir . + So, the tasks can use this space + as scratch space and share files among them.

+ This value is available as System property also. + + @return The localized job specific shared directory]]> +
+
+ + + + For backward compatibility, if the job configuration sets the + key {@link #MAPRED_TASK_MAXVMEM_PROPERTY} to a value different + from {@link #DISABLED_MEMORY_LIMIT}, that value will be used + after converting it from bytes to MB. + @return memory required to run a map task of the job, in MB,]]> + + + + + + + + + For backward compatibility, if the job configuration sets the + key {@link #MAPRED_TASK_MAXVMEM_PROPERTY} to a value different + from {@link #DISABLED_MEMORY_LIMIT}, that value will be used + after converting it from bytes to MB. + @return memory required to run a reduce task of the job, in MB.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + This method is deprecated. Now, different memory limits can be + set for map and reduce tasks of a job, in MB. +

+ For backward compatibility, if the job configuration sets the + key {@link #MAPRED_TASK_MAXVMEM_PROPERTY}, that value is returned. + Otherwise, this method will return the larger of the values returned by + {@link #getMemoryForMapTask()} and {@link #getMemoryForReduceTask()} + after converting them into bytes. + + @return Memory required to run a task of this job, in bytes. + @see #setMaxVirtualMemoryForTask(long) + @deprecated Use {@link #getMemoryForMapTask()} and + {@link #getMemoryForReduceTask()}]]> + + + + + + + mapred.task.maxvmem is split into + mapreduce.map.memory.mb + and mapreduce.map.memory.mb,mapred + each of the new key are set + as mapred.task.maxvmem / 1024 + as new values are in MB + + @param vmem Maximum amount of virtual memory in bytes any task of this job + can use. + @see #getMaxVirtualMemoryForTask() + @deprecated + Use {@link #setMemoryForMapTask(long mem)} and + Use {@link #setMemoryForReduceTask(long mem)}]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + k1=v1,k2=v2. Further it can + reference existing environment variables via $key on + Linux or %key% on Windows. + + Example: +

    +
  • A=foo - This will set the env variable A to foo.
  • +
+ + @deprecated Use {@link #MAPRED_MAP_TASK_ENV} or + {@link #MAPRED_REDUCE_TASK_ENV}]]> +
+ + + + k1=v1,k2=v2. Further it can + reference existing environment variables via $key on + Linux or %key% on Windows. + + Example: +
    +
  • A=foo - This will set the env variable A to foo.
  • +
+ + You can also add environment variables individually by appending + .VARNAME to this configuration key, where VARNAME is + the name of the environment variable. + + Example: +
    +
  • mapreduce.map.env.VARNAME=value
  • +
]]> +
+
+ + + k1=v1,k2=v2. Further it can + reference existing environment variables via $key on + Linux or %key% on Windows. + + Example: +
    +
  • A=foo - This will set the env variable A to foo.
  • +
+ + You can also add environment variables individually by appending + .VARNAME to this configuration key, where VARNAME is + the name of the environment variable. + + Example: +
    +
  • mapreduce.reduce.env.VARNAME=value
  • +
]]> +
+
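+ A hedged sketch of both styles of setting task environment variables; the variable
+ names and values are placeholders:
+
+   JobConf conf = new JobConf(MyJob.class);
+   // Comma-separated form for map tasks, referencing an existing variable.
+   conf.set(JobConf.MAPRED_MAP_TASK_ENV, "A=foo,PATH=/opt/tools/bin:$PATH");
+   // Per-variable form for reduce tasks.
+   conf.set("mapreduce.reduce.env.GREETING", "hello");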
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + JobConf is the primary interface for a user to describe a + map-reduce job to the Hadoop framework for execution. The framework tries to + faithfully execute the job as-is described by JobConf, however: +
    +
  1. + Some configuration parameters might have been marked as + + final by administrators and hence cannot be altered. +
  2. +
  3. + While some job parameters are straight-forward to set + (e.g. {@link #setNumReduceTasks(int)}), some parameters interact subtly + with the rest of the framework and/or job-configuration and is relatively + more complex for the user to control finely + (e.g. {@link #setNumMapTasks(int)}). +
  4. +
+ +

JobConf typically specifies the {@link Mapper}, combiner + (if any), {@link Partitioner}, {@link Reducer}, {@link InputFormat} and + {@link OutputFormat} implementations to be used etc. + +

Optionally JobConf is used to specify other advanced facets + of the job such as Comparators to be used, files to be put in + the {@link DistributedCache}, whether or not intermediate and/or job outputs + are to be compressed (and how), debugability via user-provided scripts + ( {@link #setMapDebugScript(String)}/{@link #setReduceDebugScript(String)}), + for doing post-processing on task logs, task's stdout, stderr, syslog. + and etc.

+ +

Here is an example on how to configure a job via JobConf:

+

+     // Create a new JobConf
+     JobConf job = new JobConf(new Configuration(), MyJob.class);
+     
+     // Specify various job-specific parameters     
+     job.setJobName("myjob");
+     
+     FileInputFormat.setInputPaths(job, new Path("in"));
+     FileOutputFormat.setOutputPath(job, new Path("out"));
+     
+     job.setMapperClass(MyJob.MyMapper.class);
+     job.setCombinerClass(MyJob.MyReducer.class);
+     job.setReducerClass(MyJob.MyReducer.class);
+     
+     job.setInputFormat(SequenceFileInputFormat.class);
+     job.setOutputFormat(SequenceFileOutputFormat.class);
+ 
+ + @see JobClient + @see ClusterStatus + @see Tool + @see DistributedCache]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + any job + run on the jobtracker started at 200707121733, we would use : +
 
+ JobID.getTaskIDsPattern("200707121733", null);
+ 
+ which will return : +
 "job_200707121733_[0-9]*" 
+ @param jtIdentifier jobTracker identifier, or null + @param jobId job number, or null + @return a regex pattern matching JobIDs]]> +
+
+ + + An example JobID is : + job_200707121733_0003 , which represents the third job + running at the jobtracker started at 200707121733. +

+ Applications should never construct or parse JobID strings, but rather + use appropriate constructors or {@link #forName(String)} method. + + @see TaskID + @see TaskAttemptID]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Output pairs need not be of the same types as input pairs. A given + input pair may map to zero or many output pairs. Output pairs are + collected with calls to + {@link OutputCollector#collect(Object,Object)}.

+ +

Applications can use the {@link Reporter} provided to report progress + or just indicate that they are alive. In scenarios where the application + takes significant amount of time to process individual key/value + pairs, this is crucial since the framework might assume that the task has + timed-out and kill that task. The other way of avoiding this is to set + + mapreduce.task.timeout to a high-enough value (or even zero for no + time-outs).

+ + @param key the input key. + @param value the input value. + @param output collects mapped keys and values. + @param reporter facility to report progress.]]> +
+ + + Maps are the individual tasks which transform input records into a + intermediate records. The transformed intermediate records need not be of + the same type as the input records. A given input pair may map to zero or + many output pairs.

+ +

The Hadoop Map-Reduce framework spawns one map task for each + {@link InputSplit} generated by the {@link InputFormat} for the job. + Mapper implementations can access the {@link JobConf} for the + job via the {@link JobConfigurable#configure(JobConf)} and initialize + themselves. Similarly they can use the {@link Closeable#close()} method for + de-initialization.

+ +

The framework then calls + {@link #map(Object, Object, OutputCollector, Reporter)} + for each key/value pair in the InputSplit for that task.

+ +

All intermediate values associated with a given output key are + subsequently grouped by the framework, and passed to a {@link Reducer} to + determine the final output. Users can control the grouping by specifying + a Comparator via + {@link JobConf#setOutputKeyComparatorClass(Class)}.

+ +

The grouped Mapper outputs are partitioned per + Reducer. Users can control which keys (and hence records) go to + which Reducer by implementing a custom {@link Partitioner}. + +

Users can optionally specify a combiner, via + {@link JobConf#setCombinerClass(Class)}, to perform local aggregation of the + intermediate outputs, which helps to cut down the amount of data transferred + from the Mapper to the Reducer. + +

The intermediate, grouped outputs are always stored in + {@link SequenceFile}s. Applications can specify if and how the intermediate + outputs are to be compressed and which {@link CompressionCodec}s are to be + used via the JobConf.

+ +

If the job has + zero + reduces then the output of the Mapper is directly written + to the {@link FileSystem} without grouping by keys.

+ +

Example:

+

+     public class MyMapper<K extends WritableComparable, V extends Writable> 
+     extends MapReduceBase implements Mapper<K, V, K, V> {
+     
+       static enum MyCounters { NUM_RECORDS }
+       
+       private String mapTaskId;
+       private String inputFile;
+       private int noRecords = 0;
+       
+       public void configure(JobConf job) {
+         mapTaskId = job.get(JobContext.TASK_ATTEMPT_ID);
+         inputFile = job.get(JobContext.MAP_INPUT_FILE);
+       }
+       
+       public void map(K key, V val,
+                       OutputCollector<K, V> output, Reporter reporter)
+       throws IOException {
+         // Process the <key, value> pair (assume this takes a while)
+         // ...
+         // ...
+         
+         // Let the framework know that we are alive, and kicking!
+         // reporter.progress();
+         
+         // Process some more
+         // ...
+         // ...
+         
+         // Increment the no. of <key, value> pairs processed
+         ++noRecords;
+
+         // Increment counters
+         reporter.incrCounter(NUM_RECORDS, 1);
+        
+         // Every 100 records update application-level status
+         if ((noRecords%100) == 0) {
+           reporter.setStatus(mapTaskId + " processed " + noRecords + 
+                              " from input-file: " + inputFile); 
+         }
+         
+         // Output the result
+         output.collect(key, val);
+       }
+     }
+ 
+ +

Applications may write a custom {@link MapRunnable} to exert greater + control on map processing e.g. multi-threaded Mappers etc.

+ + @see JobConf + @see InputFormat + @see Partitioner + @see Reducer + @see MapReduceBase + @see MapRunnable + @see SequenceFile]]> +
+
+ + + + + + + + + + + + + + + + + + + + + Provides default no-op implementations for a few methods, most non-trivial + applications need to override some of them.

]]> +
+
+ + + + + + + + + + + <key, value> pairs. + +

Mapping of input records to output records is complete when this method + returns.

+ + @param input the {@link RecordReader} to read the input records. + @param output the {@link OutputCollector} to collect the outputrecords. + @param reporter {@link Reporter} to report progress, status-updates etc. + @throws IOException]]> +
+
+ + Custom implementations of MapRunnable can exert greater + control on map processing e.g. multi-threaded, asynchronous mappers etc.

+ + @see Mapper]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + nearly + equal content length.
+ Subclasses implement {@link #getRecordReader(InputSplit, JobConf, Reporter)} + to construct RecordReader's for MultiFileSplit's. + @see MultiFileSplit]]> +
+
+ + + + + + + + + + + + + MultiFileSplit can be used to implement {@link RecordReader}'s, with + reading one record per file. + @see FileSplit + @see MultiFileInputFormat]]> + + + + + + + + + + + + + + + <key, value> pairs output by {@link Mapper}s + and {@link Reducer}s. + +

OutputCollector is the generalization of the facility + provided by the Map-Reduce framework to collect data output by either the + Mapper or the Reducer i.e. intermediate outputs + or the output of the job.

]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if task output recovery is supported, + false otherwise + @throws IOException + @see #recoverTask(TaskAttemptContext)]]> + + + + + + + true repeatable job commit is supported, + false otherwise + @throws IOException]]> + + + + + + + + + + + OutputCommitter. This is called from the application master + process, but it is called individually for each task. + + If an exception is thrown the task will be attempted again. + + @param taskContext Context of the task whose output is being recovered + @throws IOException]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + OutputCommitter describes the commit of task output for a + Map-Reduce job. + +

The Map-Reduce framework relies on the OutputCommitter of + the job to:

+

    +
  1. + Setup the job during initialization. For example, create the temporary + output directory for the job during the initialization of the job. +
  2. +
  3. + Cleanup the job after the job completion. For example, remove the + temporary output directory after the job completion. +
  4. +
  5. + Setup the task temporary output. +
  6. +
  7. + Check whether a task needs a commit. This is to avoid the commit + procedure if a task does not need commit. +
  8. +
  9. + Commit of the task output. +
  10. +
  11. + Discard the task commit. +
  12. +
+ The methods in this class can be called from several different processes and + from several different contexts. It is important to know which process and + which context each is called from. Each method should be marked accordingly + in its documentation. It is also important to note that not all methods are + guaranteed to be called once and only once. If a method is not guaranteed to + have this property the output committer needs to handle this appropriately. + Also note it will only be in rare situations where they may be called + multiple times for the same task. + + @see FileOutputCommitter + @see JobContext + @see TaskAttemptContext]]> +
+
+ + + + + + + + + + + + + + + + + + + This is to validate the output specification for the job when it is + a job is submitted. Typically checks that it does not already exist, + throwing an exception when it already exists, so that output is not + overwritten.

+ + @param ignored + @param job job configuration. + @throws IOException when output should not be attempted]]> +
+
+ + OutputFormat describes the output-specification for a + Map-Reduce job. + +

The Map-Reduce framework relies on the OutputFormat of the + job to:

+

    +
  1. + Validate the output-specification of the job. For e.g. check that the + output directory doesn't already exist. +
  2. + Provide the {@link RecordWriter} implementation to be used to write out + the output files of the job. Output files are stored in a + {@link FileSystem}. +
  3. +
+ + @see RecordWriter + @see JobConf]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + Typically a hash function on a all or a subset of the key.

+ + @param key the key to be partitioned. + @param value the entry value. + @param numPartitions the total number of partitions. + @return the partition number for the key.]]> +
+
+ + Partitioner controls the partitioning of the keys of the + intermediate map-outputs. The key (or a subset of the key) is used to derive + the partition, typically by a hash function. The total number of partitions + is the same as the number of reduce tasks for the job. Hence this controls + which of the m reduce tasks the intermediate key (and hence the + record) is sent for reduction.

+ +

Note: A Partitioner is created only when there are multiple + reducers.

+ + @see Reducer]]> +
+
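+ A minimal sketch of a Partitioner that follows the contract above; the Text/IntWritable
+ types and the first-character scheme are assumptions for illustration:
+
+   public class FirstCharPartitioner implements Partitioner<Text, IntWritable> {
+     public void configure(JobConf job) { /* no per-job setup needed */ }
+
+     public int getPartition(Text key, IntWritable value, int numPartitions) {
+       // Route keys by their first character; the mask keeps the result non-negative.
+       String k = key.toString();
+       int hash = k.isEmpty() ? 0 : k.charAt(0);
+       return (hash & Integer.MAX_VALUE) % numPartitions;
+     }
+   }
+
+ It would then be selected with conf.setPartitionerClass(FirstCharPartitioner.class).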
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 0.0 to 1.0. + @throws IOException]]> + + + + RecordReader reads <key, value> pairs from an + {@link InputSplit}. + +

RecordReader, typically, converts the byte-oriented view of + the input, provided by the InputSplit, and presents a + record-oriented view for the {@link Mapper} and {@link Reducer} tasks for + processing. It thus assumes the responsibility of processing record + boundaries and presenting the tasks with keys and values.

+ + @see InputSplit + @see InputFormat]]> +
+
+ + + + + + + + + + + + + + + + RecordWriter to future operations. + + @param reporter facility to report progress. + @throws IOException]]> + + + + RecordWriter writes the output <key, value> pairs + to an output file. + +

RecordWriter implementations write the job outputs to the + {@link FileSystem}. + + @see OutputFormat]]> + + + + + + + + + + + + + + + Reduces values for a given key. + +

The framework calls this method for each + <key, (list of values)> pair in the grouped inputs. + Output values must be of the same type as input values. Input keys must + not be altered. The framework will reuse the key and value objects + that are passed into the reduce, therefore the application should clone + the objects they want to keep a copy of. In many cases, all values are + combined into zero or one value. +

+ +

Output pairs are collected with calls to + {@link OutputCollector#collect(Object,Object)}.

+ +

Applications can use the {@link Reporter} provided to report progress + or just indicate that they are alive. In scenarios where the application + takes a significant amount of time to process individual key/value + pairs, this is crucial since the framework might assume that the task has + timed-out and kill that task. The other way of avoiding this is to set + + mapreduce.task.timeout to a high-enough value (or even zero for no + time-outs).

+ + @param key the key. + @param values the list of values to reduce. + @param output to collect keys and combined values. + @param reporter facility to report progress.]]> +
+ + + The number of Reducers for the job is set by the user via + {@link JobConf#setNumReduceTasks(int)}. Reducer implementations + can access the {@link JobConf} for the job via the + {@link JobConfigurable#configure(JobConf)} method and initialize themselves. + Similarly they can use the {@link Closeable#close()} method for + de-initialization.

+ +

Reducer has 3 primary phases:

+
    +
  1. + + Shuffle + +

    Reducer is input the grouped output of a {@link Mapper}. + In the phase the framework, for each Reducer, fetches the + relevant partition of the output of all the Mappers, via HTTP. +

    +
  2. + +
  3. + Sort + +

    The framework groups Reducer inputs by keys + (since different Mappers may have output the same key) in this + stage.

    + +

    The shuffle and sort phases occur simultaneously i.e. while outputs are + being fetched they are merged.

    + + SecondarySort + +

    If equivalence rules for keys while grouping the intermediates are + different from those for grouping keys before reduction, then one may + specify a Comparator via + {@link JobConf#setOutputValueGroupingComparator(Class)}.Since + {@link JobConf#setOutputKeyComparatorClass(Class)} can be used to + control how intermediate keys are grouped, these can be used in conjunction + to simulate secondary sort on values.

    + + + For example, say that you want to find duplicate web pages and tag them + all with the url of the "best" known example. You would set up the job + like: +
      +
    • Map Input Key: url
    • +
    • Map Input Value: document
    • +
    • Map Output Key: document checksum, url pagerank
    • +
    • Map Output Value: url
    • +
    • Partitioner: by checksum
    • +
    • OutputKeyComparator: by checksum and then decreasing pagerank
    • +
    • OutputValueGroupingComparator: by checksum
    • +
    +
  4. + +
  5. + Reduce + +

    In this phase the + {@link #reduce(Object, Iterator, OutputCollector, Reporter)} + method is called for each <key, (list of values)> pair in + the grouped inputs.

    +

    The output of the reduce task is typically written to the + {@link FileSystem} via + {@link OutputCollector#collect(Object, Object)}.

    +
  6. +
+ +

The output of the Reducer is not re-sorted.

+ +

Example:

+

+     public class MyReducer<K extends WritableComparable, V extends Writable> 
+     extends MapReduceBase implements Reducer<K, V, K, V> {
+     
+       static enum MyCounters { NUM_RECORDS }
+        
+       private String reduceTaskId;
+       private int noKeys = 0;
+       
+       public void configure(JobConf job) {
+         reduceTaskId = job.get(JobContext.TASK_ATTEMPT_ID);
+       }
+       
+       public void reduce(K key, Iterator<V> values,
+                          OutputCollector<K, V> output, 
+                          Reporter reporter)
+       throws IOException {
+       
+         // Process
+         int noValues = 0;
+         while (values.hasNext()) {
+           V value = values.next();
+           
+           // Increment the no. of values for this key
+           ++noValues;
+           
+           // Process the <key, value> pair (assume this takes a while)
+           // ...
+           // ...
+           
+           // Let the framework know that we are alive, and kicking!
+           if ((noValues%10) == 0) {
+             reporter.progress();
+           }
+         
+           // Process some more
+           // ...
+           // ...
+           
+           // Output the <key, value> 
+           output.collect(key, value);
+         }
+         
+         // Increment the no. of <key, list of values> pairs processed
+         ++noKeys;
+         
+         // Increment counters
+         reporter.incrCounter(NUM_RECORDS, 1);
+         
+         // Every 100 keys update application-level status
+         if ((noKeys%100) == 0) {
+           reporter.setStatus(reduceTaskId + " processed " + noKeys);
+         }
+       }
+     }
+ 
+ + @see Mapper + @see Partitioner + @see Reporter + @see MapReduceBase]]> +
+
+ + + + + + + + + + + + + + Counter of the given group/name.]]> + + + + + + + Counter of the given group/name.]]> + + + + + + + Enum. + @param amount A non-negative amount by which the counter is to + be incremented.]]> + + + + + + + + + + + + + + InputSplit that the map is reading from. + @throws UnsupportedOperationException if called outside a mapper]]> + + + + + + + + + + + + + + {@link Mapper} and {@link Reducer} can use the Reporter + provided to report progress or just indicate that they are alive. In + scenarios where the application takes significant amount of time to + process individual key/value pairs, this is crucial since the framework + might assume that the task has timed-out and kill that task. + +

Applications can also update {@link Counters} via the provided + Reporter .

+ + @see Progressable + @see Counters]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + progress of the job's map-tasks, as a float between 0.0 + and 1.0. When all map tasks have completed, the function returns 1.0. + + @return the progress of the job's map-tasks. + @throws IOException]]> + + + + + + progress of the job's reduce-tasks, as a float between 0.0 + and 1.0. When all reduce tasks have completed, the function returns 1.0. + + @return the progress of the job's reduce-tasks. + @throws IOException]]> + + + + + + progress of the job's cleanup-tasks, as a float between 0.0 + and 1.0. When all cleanup tasks have completed, the function returns 1.0. + + @return the progress of the job's cleanup-tasks. + @throws IOException]]> + + + + + + progress of the job's setup-tasks, as a float between 0.0 + and 1.0. When all setup tasks have completed, the function returns 1.0. + + @return the progress of the job's setup-tasks. + @throws IOException]]> + + + + + + true if the job is complete, else false. + @throws IOException]]> + + + + + + true if the job succeeded, else false. + @throws IOException]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if the job retired, else false. + @throws IOException]]> + + + + + + + + + + RunningJob is the user-interface to query for details on a + running Map-Reduce job. + +

Clients can get hold of RunningJob via the {@link JobClient} + and then query the running-job for details such as name, configuration, + progress etc.

+ + @see JobClient]]> +
+
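+ A hedged sketch of submitting asynchronously and polling the handle described above;
+ MyJob is a placeholder driver class and error handling is omitted:
+
+   JobConf conf = new JobConf(MyJob.class);
+   JobClient client = new JobClient(conf);
+   RunningJob running = client.submitJob(conf);   // returns immediately, unlike runJob()
+   while (!running.isComplete()) {
+     System.out.printf("map %.0f%% reduce %.0f%%%n",
+         running.mapProgress() * 100, running.reduceProgress() * 100);
+     Thread.sleep(5000);
+   }
+   System.out.println(running.isSuccessful() ? "job succeeded" : "job failed");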
+ + + + + + + + + + + + + + + + + + + + + + + + + This allows the user to specify the key class to be different + from the actual class ({@link BytesWritable}) used for writing

+ + @param conf the {@link JobConf} to modify + @param theClass the SequenceFile output key class.]]> +
+
+ + + + + This allows the user to specify the value class to be different + from the actual class ({@link BytesWritable}) used for writing

+ + @param conf the {@link JobConf} to modify + @param theClass the SequenceFile output key class.]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if auto increment + {@link SkipBadRecords#COUNTER_MAP_PROCESSED_RECORDS}. + false otherwise.]]> + + + + + + + + + + + + + true if auto increment + {@link SkipBadRecords#COUNTER_REDUCE_PROCESSED_GROUPS}. + false otherwise.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Hadoop provides an optional mode of execution in which the bad records + are detected and skipped in further attempts. + +

This feature can be used when map/reduce tasks crashes deterministically on + certain input. This happens due to bugs in the map/reduce function. The usual + course would be to fix these bugs. But sometimes this is not possible; + perhaps the bug is in third party libraries for which the source code is + not available. Due to this, the task never reaches to completion even with + multiple attempts and complete data for that task is lost.

+ +

With this feature, only a small portion of data is lost surrounding + the bad record, which may be acceptable for some user applications. + see {@link SkipBadRecords#setMapperMaxSkipRecords(Configuration, long)}

+ +

The skipping mode gets kicked off after certain no of failures + see {@link SkipBadRecords#setAttemptsToStartSkipping(Configuration, int)}

+ +

In the skipping mode, the map/reduce task maintains the record range which + is getting processed at all times. Before giving the input to the + map/reduce function, it sends this record range to the Task tracker. + If task crashes, the Task tracker knows which one was the last reported + range. On further attempts that range get skipped.

]]> +
+
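+ A minimal sketch of switching on the skipping mode described above; the thresholds are
+ illustrative only:
+
+   JobConf conf = new JobConf(MyJob.class);
+   // Begin skipping after two failed attempts of the same task.
+   SkipBadRecords.setAttemptsToStartSkipping(conf, 2);
+   // Accept losing at most 100 map input records around each bad record.
+   SkipBadRecords.setMapperMaxSkipRecords(conf, 100);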
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + all task attempt IDs + of any jobtracker, in any job, of the first + map task, we would use : +
 
+ TaskAttemptID.getTaskAttemptIDsPattern(null, null, true, 1, null);
+ 
+ which will return : +
 "attempt_[^_]*_[0-9]*_m_000001_[0-9]*" 
+ @param jtIdentifier jobTracker identifier, or null + @param jobId job number, or null + @param isMap whether the tip is a map, or null + @param taskId taskId number, or null + @param attemptId the task attempt number, or null + @return a regex pattern matching TaskAttemptIDs]]> +
+
+ + + + + + + + all task attempt IDs + of any jobtracker, in any job, of the first + map task, we would use : +
 
+ TaskAttemptID.getTaskAttemptIDsPattern(null, null, TaskType.MAP, 1, null);
+ 
+ which will return : +
 "attempt_[^_]*_[0-9]*_m_000001_[0-9]*" 
+ @param jtIdentifier jobTracker identifier, or null + @param jobId job number, or null + @param type the {@link TaskType} + @param taskId taskId number, or null + @param attemptId the task attempt number, or null + @return a regex pattern matching TaskAttemptIDs]]> +
+
+ + + An example TaskAttemptID is : + attempt_200707121733_0003_m_000005_0 , which represents the + zeroth task attempt for the fifth map task in the third job + running at the jobtracker started at 200707121733. +

+ Applications should never construct or parse TaskAttemptID strings + , but rather use appropriate constructors or {@link #forName(String)} + method. + + @see JobID + @see TaskID]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the first map task + of any jobtracker, of any job, we would use : +

 
+ TaskID.getTaskIDsPattern(null, null, true, 1);
+ 
+ which will return : +
 "task_[^_]*_[0-9]*_m_000001*" 
+ @param jtIdentifier jobTracker identifier, or null + @param jobId job number, or null + @param isMap whether the tip is a map, or null + @param taskId taskId number, or null + @return a regex pattern matching TaskIDs + @deprecated Use {@link TaskID#getTaskIDsPattern(String, Integer, TaskType, + Integer)}]]> +
+ + + + + + + + the first map task + of any jobtracker, of any job, we would use : +
 
+ TaskID.getTaskIDsPattern(null, null, true, 1);
+ 
+ which will return : +
 "task_[^_]*_[0-9]*_m_000001*" 
+ @param jtIdentifier jobTracker identifier, or null + @param jobId job number, or null + @param type the {@link TaskType}, or null + @param taskId taskId number, or null + @return a regex pattern matching TaskIDs]]> +
+
+ + + + + + + An example TaskID is : + task_200707121733_0003_m_000005 , which represents the + fifth map task in the third job running at the jobtracker + started at 200707121733. +

+ Applications should never construct or parse TaskID strings + , but rather use appropriate constructors or {@link #forName(String)} + method. + + @see JobID + @see TaskAttemptID]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if the Job was added.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ([,]*) + func ::= tbl(,"") + class ::= @see java.lang.Class#forName(java.lang.String) + path ::= @see org.apache.hadoop.fs.Path#Path(java.lang.String) + } + Reads expression from the mapred.join.expr property and + user-supplied join types from mapred.join.define.<ident> + types. Paths supplied to tbl are given as input paths to the + InputFormat class listed. + @see #compose(java.lang.String, java.lang.Class, java.lang.String...)]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ,

) }]]> + + + + + + + + (tbl(,),tbl(,),...,tbl(,)) }]]> + + + + + + + + (tbl(,),tbl(,),...,tbl(,)) }]]> + + + + mapred.join.define.<ident> to a classname. In the expression + mapred.join.expr, the identifier will be assumed to be a + ComposableRecordReader. + mapred.join.keycomparator can be a classname used to compare keys + in the join. + @see #setFormat + @see JoinRecordReader + @see MultiFilterRecordReader]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ...... + }]]> + + + + + + + + + + + + + + + + + + + + + capacity children to position + id in the parent reader. + The id of a root CompositeRecordReader is -1 by convention, but relying + on this is not recommended.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + override(S1,S2,S3) will prefer values + from S3 over S2, and values from S2 over S1 for all keys + emitted from all sources.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + It has to be specified how key and values are passed from one element of + the chain to the next, by value or by reference. If a Mapper leverages the + assumed semantics that the key and values are not modified by the collector + 'by value' must be used. If the Mapper does not expect this semantics, as + an optimization to avoid serialization and deserialization 'by reference' + can be used. +

+ For the added Mapper the configuration given for it, + mapperConf, has precedence over the job's JobConf. This + precedence is in effect when the task is running. +

+ IMPORTANT: There is no need to specify the output key/value classes for the + ChainMapper, this is done by the addMapper for the last mapper in the chain +

+ + @param job job's JobConf to add the Mapper class. + @param klass the Mapper class to add. + @param inputKeyClass mapper input key class. + @param inputValueClass mapper input value class. + @param outputKeyClass mapper output key class. + @param outputValueClass mapper output value class. + @param byValue indicates if key/values should be passed by value + to the next Mapper in the chain, if any. + @param mapperConf a JobConf with the configuration for the Mapper + class. It is recommended to use a JobConf without default values using the + JobConf(boolean loadDefaults) constructor with FALSE.]]> + + + + + + + If this method is overriden super.configure(...) should be + invoked at the beginning of the overwriter method.]]> + + + + + + + + + + map(...) methods of the Mappers in the chain.]]> + + + + + + + If this method is overriden super.close() should be + invoked at the end of the overwriter method.]]> + + + + + The Mapper classes are invoked in a chained (or piped) fashion, the output of + the first becomes the input of the second, and so on until the last Mapper, + the output of the last Mapper will be written to the task's output. +

+ The key functionality of this feature is that the Mappers in the chain do not + need to be aware that they are executed in a chain. This enables having + reusable specialized Mappers that can be combined to perform composite + operations within a single task. +

+ Special care has to be taken when creating chains that the key/values output + by a Mapper are valid for the following Mapper in the chain. It is assumed + all Mappers and the Reduce in the chain use matching output and input key and + value classes as no conversion is done by the chaining code. +

+ Using the ChainMapper and the ChainReducer classes it is possible to compose + Map/Reduce jobs that look like [MAP+ / REDUCE MAP*]. An + immediate benefit of this pattern is a dramatic reduction in disk IO. +

+ IMPORTANT: There is no need to specify the output key/value classes for the + ChainMapper, this is done by the addMapper for the last mapper in the chain. +

+ ChainMapper usage pattern: +

+

+ ...
+ conf.setJobName("chain");
+ conf.setInputFormat(TextInputFormat.class);
+ conf.setOutputFormat(TextOutputFormat.class);
+
+ JobConf mapAConf = new JobConf(false);
+ ...
+ ChainMapper.addMapper(conf, AMap.class, LongWritable.class, Text.class,
+   Text.class, Text.class, true, mapAConf);
+
+ JobConf mapBConf = new JobConf(false);
+ ...
+ ChainMapper.addMapper(conf, BMap.class, Text.class, Text.class,
+   LongWritable.class, Text.class, false, mapBConf);
+
+ JobConf reduceConf = new JobConf(false);
+ ...
+ ChainReducer.setReducer(conf, XReduce.class, LongWritable.class, Text.class,
+   Text.class, Text.class, true, reduceConf);
+
+ ChainReducer.addMapper(conf, CMap.class, Text.class, Text.class,
+   LongWritable.class, Text.class, false, null);
+
+ ChainReducer.addMapper(conf, DMap.class, LongWritable.class, Text.class,
+   LongWritable.class, LongWritable.class, true, null);
+
+ FileInputFormat.setInputPaths(conf, inDir);
+ FileOutputFormat.setOutputPath(conf, outDir);
+ ...
+
+ JobClient jc = new JobClient(conf);
+ RunningJob job = jc.submitJob(conf);
+ ...
+ 
]]> +
+
+ + + + + + + + + + + + + + + + + + + + + It has to be specified how key and values are passed from one element of + the chain to the next, by value or by reference. If a Reducer leverages the + assumed semantics that the key and values are not modified by the collector + 'by value' must be used. If the Reducer does not expect this semantics, as + an optimization to avoid serialization and deserialization 'by reference' + can be used. +

+ For the added Reducer the configuration given for it, + reducerConf, has precedence over the job's JobConf. This + precedence is in effect when the task is running. +

+ IMPORTANT: There is no need to specify the output key/value classes for the + ChainReducer, this is done by the setReducer or the addMapper for the last + element in the chain. + + @param job job's JobConf to add the Reducer class. + @param klass the Reducer class to add. + @param inputKeyClass reducer input key class. + @param inputValueClass reducer input value class. + @param outputKeyClass reducer output key class. + @param outputValueClass reducer output value class. + @param byValue indicates if key/values should be passed by value + to the next Mapper in the chain, if any. + @param reducerConf a JobConf with the configuration for the Reducer + class. It is recommended to use a JobConf without default values using the + JobConf(boolean loadDefaults) constructor with FALSE.]]> + + + + + + + + + + + + + + It has to be specified how key and values are passed from one element of + the chain to the next, by value or by reference. If a Mapper leverages the + assumed semantics that the key and values are not modified by the collector + 'by value' must be used. If the Mapper does not expect this semantics, as + an optimization to avoid serialization and deserialization 'by reference' + can be used. +

+ For the added Mapper the configuration given for it, + mapperConf, has precedence over the job's JobConf. This + precedence is in effect when the task is running. +

+ IMPORTANT: There is no need to specify the output key/value classes for the + ChainMapper, this is done by the addMapper for the last mapper in the chain + . + + @param job chain job's JobConf to add the Mapper class. + @param klass the Mapper class to add. + @param inputKeyClass mapper input key class. + @param inputValueClass mapper input value class. + @param outputKeyClass mapper output key class. + @param outputValueClass mapper output value class. + @param byValue indicates if key/values should be passed by value + to the next Mapper in the chain, if any. + @param mapperConf a JobConf with the configuration for the Mapper + class. It is recommended to use a JobConf without default values using the + JobConf(boolean loadDefaults) constructor with FALSE.]]> + + + + + + + If this method is overriden super.configure(...) should be + invoked at the beginning of the overwriter method.]]> + + + + + + + + + + reduce(...) method of the Reducer with the + map(...) methods of the Mappers in the chain.]]> + + + + + + + If this method is overriden super.close() should be + invoked at the end of the overwriter method.]]> + + + + + For each record output by the Reducer, the Mapper classes are invoked in a + chained (or piped) fashion, the output of the first becomes the input of the + second, and so on until the last Mapper, the output of the last Mapper will + be written to the task's output. +

+ The key functionality of this feature is that the Mappers in the chain do not + need to be aware that they are executed after the Reducer or in a chain. + This enables having reusable specialized Mappers that can be combined to + perform composite operations within a single task. +

+ Special care has to be taken when creating chains that the key/values output + by a Mapper are valid for the following Mapper in the chain. It is assumed + all Mappers and the Reduce in the chain use matching output and input key and + value classes as no conversion is done by the chaining code. +

+ Using the ChainMapper and the ChainReducer classes it is possible to compose + Map/Reduce jobs that look like [MAP+ / REDUCE MAP*]. An + immediate benefit of this pattern is a dramatic reduction in disk IO. +

+ IMPORTANT: There is no need to specify the output key/value classes for the + ChainReducer, this is done by the setReducer or the addMapper for the last + element in the chain. +

+ ChainReducer usage pattern: +

+

+ ...
+ conf.setJobName("chain");
+ conf.setInputFormat(TextInputFormat.class);
+ conf.setOutputFormat(TextOutputFormat.class);
+
+ JobConf mapAConf = new JobConf(false);
+ ...
+ ChainMapper.addMapper(conf, AMap.class, LongWritable.class, Text.class,
+   Text.class, Text.class, true, mapAConf);
+
+ JobConf mapBConf = new JobConf(false);
+ ...
+ ChainMapper.addMapper(conf, BMap.class, Text.class, Text.class,
+   LongWritable.class, Text.class, false, mapBConf);
+
+ JobConf reduceConf = new JobConf(false);
+ ...
+ ChainReducer.setReducer(conf, XReduce.class, LongWritable.class, Text.class,
+   Text.class, Text.class, true, reduceConf);
+
+ ChainReducer.addMapper(conf, CMap.class, Text.class, Text.class,
+   LongWritable.class, Text.class, false, null);
+
+ ChainReducer.addMapper(conf, DMap.class, LongWritable.class, Text.class,
+   LongWritable.class, LongWritable.class, true, null);
+
+ FileInputFormat.setInputPaths(conf, inDir);
+ FileOutputFormat.setOutputPath(conf, outDir);
+ ...
+
+ JobClient jc = new JobClient(conf);
+ RunningJob job = jc.submitJob(conf);
+ ...
+ 
]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + RecordReader's for CombineFileSplit's. + @see CombineFileSplit]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + CombineFileRecordReader. + + Subclassing is needed to get a concrete record reader wrapper because of the + constructor requirement. + + @see CombineFileRecordReader + @see CombineFileInputFormat]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + CombineFileInputFormat-equivalent for + SequenceFileInputFormat. + + @see CombineFileInputFormat]]> + + + + + + + + + + + + + + + CombineFileInputFormat-equivalent for + TextInputFormat. + + @see CombineFileInputFormat]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if the name output is multi, false + if it is single. If the name output is not defined it returns + false]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + By default these counters are disabled. +

+ MultipleOutputs supports counters, by default they are disabled. + The counters group is the {@link MultipleOutputs} class name. +

+ The names of the counters are the same as the named outputs. For multi + named outputs the name of the counter is the concatenation of the named + output, and underscore '_' and the multiname. + + @param conf job conf to enableadd the named output. + @param enabled indicates if the counters will be enabled or not.]]> +
+
+ + + + + By default these counters are disabled. +

+ MultipleOutputs supports counters, by default they are disabled. + The counters group is the {@link MultipleOutputs} class name. +

+ The names of the counters are the same as the named outputs. For multi + named outputs the name of the counter is the concatenation of the named + output, and underscore '_' and the multiname. + + + @param conf job conf to enableadd the named output. + @return TRUE if the counters are enabled, FALSE if they are disabled.]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + If overriden subclasses must invoke super.close() at the + end of their close() + + @throws java.io.IOException thrown if any of the MultipleOutput files + could not be closed properly.]]> + + + + OutputCollector passed to + the map() and reduce() methods of the + Mapper and Reducer implementations. +

+ Each additional output, or named output, may be configured with its own + OutputFormat, with its own key class and with its own value + class. +

+ A named output can be a single file or a multi file. The later is referred as + a multi named output. +

+ A multi named output is an unbound set of files all sharing the same + OutputFormat, key class and value class configuration. +

+ When named outputs are used within a Mapper implementation, + key/values written to a name output are not part of the reduce phase, only + key/values written to the job OutputCollector are part of the + reduce phase. +

+ MultipleOutputs supports counters, by default they are disabled. The counters + group is the {@link MultipleOutputs} class name. +

+ The names of the counters are the same as the named outputs. For multi + named outputs the name of the counter is the concatenation of the named + output, and underscore '_' and the multiname. +

+ Job configuration usage pattern is: +

+
+ JobConf conf = new JobConf();
+
+ conf.setInputPath(inDir);
+ FileOutputFormat.setOutputPath(conf, outDir);
+
+ conf.setMapperClass(MOMap.class);
+ conf.setReducerClass(MOReduce.class);
+ ...
+
+ // Defines additional single text based output 'text' for the job
+ MultipleOutputs.addNamedOutput(conf, "text", TextOutputFormat.class,
+ LongWritable.class, Text.class);
+
+ // Defines additional multi sequencefile based output 'sequence' for the
+ // job
+ MultipleOutputs.addMultiNamedOutput(conf, "seq",
+   SequenceFileOutputFormat.class,
+   LongWritable.class, Text.class);
+ ...
+
+ JobClient jc = new JobClient();
+ RunningJob job = jc.submitJob(conf);
+
+ ...
+ 
+

+ Job configuration usage pattern is: +

+
+ public class MOReduce implements
+   Reducer<WritableComparable, Writable> {
+ private MultipleOutputs mos;
+
+ public void configure(JobConf conf) {
+ ...
+ mos = new MultipleOutputs(conf);
+ }
+
+ public void reduce(WritableComparable key, Iterator<Writable> values,
+ OutputCollector output, Reporter reporter)
+ throws IOException {
+ ...
+ mos.getCollector("text", reporter).collect(key, new Text("Hello"));
+ mos.getCollector("seq", "A", reporter).collect(key, new Text("Bye"));
+ mos.getCollector("seq", "B", reporter).collect(key, new Text("Chau"));
+ ...
+ }
+
+ public void close() throws IOException {
+ mos.close();
+ ...
+ }
+
+ }
+ 
]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + It can be used instead of the default implementation, + of {@link org.apache.hadoop.mapred.MapRunner}, when the Map + operation is not CPU bound in order to improve throughput. +

+ Map implementations using this MapRunnable must be thread-safe. +

+ The Map-Reduce job has to be configured to use this MapRunnable class (using + the JobConf.setMapRunnerClass method), and + the number of threads the thread-pool can use is set with the + mapred.map.multithreadedrunner.threads property; its default + value is 10 threads. +

]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + R reduces, there are R-1 + keys in the SequenceFile. + @deprecated Use + {@link #setPartitionFile(Configuration, Path)} + instead]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Cluster. + @throws IOException]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ClusterMetrics provides clients with information such as: +

    +
+   1. Size of the cluster.
+   2. Number of blacklisted and decommissioned trackers.
+   3. Slot capacity of the cluster.
+   4. The number of currently occupied/reserved map and reduce slots.
+   5. The number of currently running map and reduce tasks.
+   6. The number of job submissions.
+ +

Clients can query for the latest ClusterMetrics, via + {@link Cluster#getClusterStatus()}.

+ + @see Cluster]]> +
+
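As a sketch of the query path mentioned above (Cluster#getClusterStatus()), not part of the generated report; it assumes a Configuration that can reach a running cluster, and the printed fields are only a small selection:

  // Sketch only: fetch a ClusterMetrics snapshot and print a few fields.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapreduce.Cluster;
  import org.apache.hadoop.mapreduce.ClusterMetrics;

  public class PrintClusterMetrics {
    public static void main(String[] args) throws Exception {
      Cluster cluster = new Cluster(new Configuration());
      ClusterMetrics metrics = cluster.getClusterStatus();
      System.out.println("trackers: " + metrics.getTaskTrackerCount());
      System.out.println("map slot capacity: " + metrics.getMapSlotCapacity());
      System.out.println("running maps: " + metrics.getRunningMaps());
      System.out.println("job submissions: " + metrics.getTotalJobSubmissions());
      cluster.close();
    }
  }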
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Counters represent global counters, defined either by the + Map-Reduce framework or applications. Each Counter is named by + an {@link Enum} and has a long for the value.

+ +

Counters are bunched into Groups, each comprising of + counters from a particular Enum class.]]> + + + + + + + + + + + + + + + + + + + + + the type of counter + @param the type of counter group + @param counters the old counters object]]> + + + + Counters holds per job/task counters, defined either by the + Map-Reduce framework or applications. Each Counter can be of + any {@link Enum} type.

+ +

Counters are bunched into {@link CounterGroup}s, each + comprising of counters from a particular Enum class.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Each {@link InputSplit} is then assigned to an individual {@link Mapper} + for processing.

+ +

Note: The split is a logical split of the inputs and the + input files are not physically split into chunks. For e.g. a split could + be <input-file-path, start, offset> tuple. The InputFormat + also creates the {@link RecordReader} to read the {@link InputSplit}. + + @param context job configuration. + @return an array of {@link InputSplit}s for the job.]]> + + + + + + + + + + + + + InputFormat describes the input-specification for a + Map-Reduce job. + +

The Map-Reduce framework relies on the InputFormat of the + job to:

+

    +
+   1. Validate the input-specification of the job.
+   2. Split-up the input file(s) into logical {@link InputSplit}s, each of
+      which is then assigned to an individual {@link Mapper}.
+   3. Provide the {@link RecordReader} implementation to be used to glean
+      input records from the logical InputSplit for processing by
+      the {@link Mapper}.
+ +

The default behavior of file-based {@link InputFormat}s, typically + sub-classes of {@link FileInputFormat}, is to split the + input into logical {@link InputSplit}s based on the total size, in + bytes, of the input files. However, the {@link FileSystem} blocksize of + the input files is treated as an upper bound for input splits. A lower bound + on the split size can be set via + + mapreduce.input.fileinputformat.split.minsize.

+ +

Clearly, logical splits based on input-size are insufficient for many + applications since record boundaries are to be respected. In such cases, the + application has to also implement a {@link RecordReader} on whom lies the + responsibility to respect record-boundaries and present a record-oriented + view of the logical InputSplit to the individual task. + + @see InputSplit + @see RecordReader + @see FileInputFormat]]> + + + + + + + + + + + + + + + + + + + + + + + + + SplitLocationInfos describing how the split + data is stored at each location. A null value indicates that all the + locations have the data stored on disk. + @throws IOException]]> + + + + InputSplit represents the data to be processed by an + individual {@link Mapper}. + +

Typically, it presents a byte-oriented view on the input and is the + responsibility of {@link RecordReader} of the job to process this and present + a record-oriented view. + + @see InputFormat + @see RecordReader]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Job makes a copy of the Configuration so + that any necessary internal modifications do not reflect on the incoming + parameter. + + A Cluster will be created from the conf parameter only when it's needed. + + @param conf the configuration + @return the {@link Job} , with no connection to a cluster yet. + @throws IOException]]> + + + + + + + + Job makes a copy of the Configuration so + that any necessary internal modifications do not reflect on the incoming + parameter. + + @param conf the configuration + @return the {@link Job} , with no connection to a cluster yet. + @throws IOException]]> + + + + + + + + Job makes a copy of the Configuration so + that any necessary internal modifications do not reflect on the incoming + parameter. + + @param status job status + @param conf job configuration + @return the {@link Job} , with no connection to a cluster yet. + @throws IOException]]> + + + + + + + Job makes a copy of the Configuration so + that any necessary internal modifications do not reflect on the incoming + parameter. + + @param ignored + @return the {@link Job} , with no connection to a cluster yet. + @throws IOException + @deprecated Use {@link #getInstance()}]]> + + + + + + + + Job makes a copy of the Configuration so + that any necessary internal modifications do not reflect on the incoming + parameter. + + @param ignored + @param conf job configuration + @return the {@link Job} , with no connection to a cluster yet. + @throws IOException + @deprecated Use {@link #getInstance(Configuration)}]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + progress of the job's map-tasks, as a float between 0.0 + and 1.0. When all map tasks have completed, the function returns 1.0. + + @return the progress of the job's map-tasks. + @throws IOException]]> + + + + + + progress of the job's reduce-tasks, as a float between 0.0 + and 1.0. When all reduce tasks have completed, the function returns 1.0. + + @return the progress of the job's reduce-tasks. + @throws IOException]]> + + + + + + + progress of the job's cleanup-tasks, as a float between 0.0 + and 1.0. When all cleanup tasks have completed, the function returns 1.0. + + @return the progress of the job's cleanup-tasks. + @throws IOException]]> + + + + + + progress of the job's setup-tasks, as a float between 0.0 + and 1.0. When all setup tasks have completed, the function returns 1.0. + + @return the progress of the job's setup-tasks. + @throws IOException]]> + + + + + + true if the job is complete, else false. + @throws IOException]]> + + + + + + true if the job succeeded, else false. 
+ @throws IOException]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + InputFormat to use + @throws IllegalStateException if the job is submitted]]> + + + + + + + OutputFormat to use + @throws IllegalStateException if the job is submitted]]> + + + + + + + Mapper to use + @throws IllegalStateException if the job is submitted]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Reducer to use + @throws IllegalStateException if the job is submitted]]> + + + + + + + Partitioner to use + @throws IllegalStateException if the job is submitted]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if speculative execution + should be turned on, else false.]]> + + + + + + true if speculative execution + should be turned on for map tasks, + else false.]]> + + + + + + true if speculative execution + should be turned on for reduce tasks, + else false.]]> + + + + + + true, job-setup and job-cleanup will be + considered from {@link OutputCommitter} + else ignored.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + JobTracker is lost]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Job. + @throws IOException if fail to close.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + It allows the user to configure the + job, submit it, control its execution, and query the state. The set methods + only work until the job is submitted, afterwards they will throw an + IllegalStateException.

+ +

+ Normally the user creates the application, describes various facets of the + job via {@link Job} and then submits the job and monitor its progress.

+ +

Here is an example on how to submit a job:

+

+     // Create a new Job
+     Job job = Job.getInstance();
+     job.setJarByClass(MyJob.class);
+     
+     // Specify various job-specific parameters     
+     job.setJobName("myjob");
+     
+     job.setInputPath(new Path("in"));
+     job.setOutputPath(new Path("out"));
+     
+     job.setMapperClass(MyJob.MyMapper.class);
+     job.setReducerClass(MyJob.MyReducer.class);
+
+     // Submit the job, then poll for progress until the job is complete
+     job.waitForCompletion(true);
+ 
]]> +
+ + + + + + + + + + + + + + + + + + + + + + + 1. + @return the number of reduce tasks for this job.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + mapred.map.max.attempts + property. If this property is not already set, the default is 4 attempts. + + @return the max number of attempts per map task.]]> + + + + + mapred.reduce.max.attempts + property. If this property is not already set, the default is 4 attempts. + + @return the max number of attempts per reduce task.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + An example JobID is : + job_200707121733_0003 , which represents the third job + running at the jobtracker started at 200707121733. +

+ Applications should never construct or parse JobID strings, but rather + use appropriate constructors or {@link #forName(String)} method. + + @see TaskID + @see TaskAttemptID]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the key input type to the Mapper + @param the value input type to the Mapper + @param the key output type from the Mapper + @param the value output type from the Mapper]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Maps are the individual tasks which transform input records into a + intermediate records. The transformed intermediate records need not be of + the same type as the input records. A given input pair may map to zero or + many output pairs.

+ +

The Hadoop Map-Reduce framework spawns one map task for each + {@link InputSplit} generated by the {@link InputFormat} for the job. + Mapper implementations can access the {@link Configuration} for + the job via the {@link JobContext#getConfiguration()}. + +

The framework first calls + {@link #setup(org.apache.hadoop.mapreduce.Mapper.Context)}, followed by + {@link #map(Object, Object, org.apache.hadoop.mapreduce.Mapper.Context)} + for each key/value pair in the InputSplit. Finally + {@link #cleanup(org.apache.hadoop.mapreduce.Mapper.Context)} is called.

+ +

All intermediate values associated with a given output key are + subsequently grouped by the framework, and passed to a {@link Reducer} to + determine the final output. Users can control the sorting and grouping by + specifying two key {@link RawComparator} classes.

+ +

The Mapper outputs are partitioned per + Reducer. Users can control which keys (and hence records) go to + which Reducer by implementing a custom {@link Partitioner}. + +

Users can optionally specify a combiner, via + {@link Job#setCombinerClass(Class)}, to perform local aggregation of the + intermediate outputs, which helps to cut down the amount of data transferred + from the Mapper to the Reducer. + +

Applications can specify if and how the intermediate + outputs are to be compressed and which {@link CompressionCodec}s are to be + used via the Configuration.

+ +

If the job has zero + reduces then the output of the Mapper is directly written + to the {@link OutputFormat} without sorting by keys.

+ +

Example:

+

+ public class TokenCounterMapper 
+     extends Mapper<Object, Text, Text, IntWritable>{
+    
+   private final static IntWritable one = new IntWritable(1);
+   private Text word = new Text();
+   
+   public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
+     StringTokenizer itr = new StringTokenizer(value.toString());
+     while (itr.hasMoreTokens()) {
+       word.set(itr.nextToken());
+       context.write(word, one);
+     }
+   }
+ }
+ 
+ +

Applications may override the + {@link #run(org.apache.hadoop.mapreduce.Mapper.Context)} method to exert + greater control on map processing e.g. multi-threaded Mappers + etc.

+ + @see InputFormat + @see JobContext + @see Partitioner + @see Reducer]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + MarkableIterator is a wrapper iterator class that + implements the {@link MarkableIteratorInterface}.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if task output recovery is supported, + false otherwise + @see #recoverTask(TaskAttemptContext) + @deprecated Use {@link #isRecoverySupported(JobContext)} instead.]]> + + + + + + + true repeatable job commit is supported, + false otherwise + @throws IOException]]> + + + + + + + true if task output recovery is supported, + false otherwise + @throws IOException + @see #recoverTask(TaskAttemptContext)]]> + + + + + + + OutputCommitter. This is called from the application master + process, but it is called individually for each task. + + If an exception is thrown the task will be attempted again. + + This may be called multiple times for the same task. But from different + application attempts. + + @param taskContext Context of the task whose output is being recovered + @throws IOException]]> + + + + OutputCommitter describes the commit of task output for a + Map-Reduce job. + +

The Map-Reduce framework relies on the OutputCommitter of + the job to:

+

    +
+   1. Setup the job during initialization. For example, create the temporary
+      output directory for the job during the initialization of the job.
+   2. Cleanup the job after the job completion. For example, remove the
+      temporary output directory after the job completion.
+   3. Setup the task temporary output.
+   4. Check whether a task needs a commit. This is to avoid the commit
+      procedure if a task does not need commit.
+   5. Commit of the task output.
+   6. Discard the task commit.
+ The methods in this class can be called from several different processes and + from several different contexts. It is important to know which process and + which context each is called from. Each method should be marked accordingly + in its documentation. It is also important to note that not all methods are + guaranteed to be called once and only once. If a method is not guaranteed to + have this property the output committer needs to handle this appropriately. + Also note it will only be in rare situations where they may be called + multiple times for the same task. + + @see org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter + @see JobContext + @see TaskAttemptContext]]> +
+
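The numbered responsibilities above correspond to methods of org.apache.hadoop.mapreduce.OutputCommitter. The skeleton below is a no-op sketch only, not part of the generated report, showing which method carries which step; a real committer would create, promote and clean up output here:

  // Sketch only: a do-nothing committer annotated with the steps listed above.
  import java.io.IOException;
  import org.apache.hadoop.mapreduce.JobContext;
  import org.apache.hadoop.mapreduce.OutputCommitter;
  import org.apache.hadoop.mapreduce.TaskAttemptContext;

  public class NoOpOutputCommitter extends OutputCommitter {
    @Override public void setupJob(JobContext jobContext) throws IOException {
      // 1. job setup, e.g. create a temporary output directory
    }
    @Override public void commitJob(JobContext jobContext) throws IOException {
      // 2. job cleanup, e.g. remove the temporary output directory
    }
    @Override public void setupTask(TaskAttemptContext taskContext) throws IOException {
      // 3. task temporary output setup
    }
    @Override public boolean needsTaskCommit(TaskAttemptContext taskContext) throws IOException {
      return false; // 4. nothing to commit in this sketch
    }
    @Override public void commitTask(TaskAttemptContext taskContext) throws IOException {
      // 5. promote the task output
    }
    @Override public void abortTask(TaskAttemptContext taskContext) throws IOException {
      // 6. discard the task output
    }
  }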
+ + + + + + + + + + + + + + + + + + + This is to validate the output specification for the job when it is + a job is submitted. Typically checks that it does not already exist, + throwing an exception when it already exists, so that output is not + overwritten.

+ + @param context information about the job + @throws IOException when output should not be attempted]]> +
+
+ + + + + + + + + + OutputFormat describes the output-specification for a + Map-Reduce job. + +

The Map-Reduce framework relies on the OutputFormat of the + job to:

+

    +
+   1. Validate the output-specification of the job. For e.g. check that the
+      output directory doesn't already exist.
+   2. Provide the {@link RecordWriter} implementation to be used to write out
+      the output files of the job. Output files are stored in a
+      {@link FileSystem}.
+ + @see RecordWriter]]> +
+
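As a sketch of the two responsibilities listed above (not taken from the generated report), a custom format can extend FileOutputFormat, which already performs the output-directory check, and only supply the RecordWriter; the class name and output layout are invented:

  // Sketch only: a minimal file-based OutputFormat writing "key<TAB>value" lines.
  import java.io.DataOutputStream;
  import java.io.IOException;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.RecordWriter;
  import org.apache.hadoop.mapreduce.TaskAttemptContext;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class TabSeparatedOutputFormat extends FileOutputFormat<Text, Text> {
    @Override
    public RecordWriter<Text, Text> getRecordWriter(TaskAttemptContext context)
        throws IOException, InterruptedException {
      Path file = getDefaultWorkFile(context, ".txt");
      FileSystem fs = file.getFileSystem(context.getConfiguration());
      final DataOutputStream out = fs.create(file, false);
      return new RecordWriter<Text, Text>() {
        @Override public void write(Text key, Text value) throws IOException {
          out.writeBytes(key + "\t" + value + "\n");
        }
        @Override public void close(TaskAttemptContext ctx) throws IOException {
          out.close();
        }
      };
    }
  }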
+ + + + + + + + + + + Typically a hash function on all or a subset of the key.

+ + @param key the key to be partitioned. + @param value the entry value. + @param numPartitions the total number of partitions. + @return the partition number for the key.]]> +
+
+ + Partitioner controls the partitioning of the keys of the + intermediate map-outputs. The key (or a subset of the key) is used to derive + the partition, typically by a hash function. The total number of partitions + is the same as the number of reduce tasks for the job. Hence this controls + which of the m reduce tasks the intermediate key (and hence the + record) is sent for reduction.

+ +

Note: A Partitioner is created only when there are multiple + reducers.

+ +

Note: If you require your Partitioner class to obtain the Job's + configuration object, implement the {@link Configurable} interface.

+ + @see Reducer]]> +
+
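Since the Partitioner description above carries no code, here is a sketch of a custom partitioner and how a job selects it; it is not part of the generated report and the class name and routing rule are invented:

  // Sketch only: route records by the first character of the key.
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Partitioner;

  public class FirstCharPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
      if (key.getLength() == 0) {
        return 0;
      }
      // Mask the sign bit so the result is always a valid partition index.
      return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
    }
  }

  // Selected on the job (only relevant when there is more than one reducer):
  //   job.setPartitionerClass(FirstCharPartitioner.class);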
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + "N/A" + + @return Scheduling information associated to particular Job Queue]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @param ]]> + + + + + + + + + + + + + + + + + + + + + + RecordWriter to future operations. + + @param context the context of the task + @throws IOException]]> + + + + RecordWriter writes the output <key, value> pairs + to an output file. + +

RecordWriter implementations write the job outputs to the + {@link FileSystem}. + + @see OutputFormat]]> + + + + + + + + + + + + + + + + + + + + + + the class of the input keys + @param the class of the input values + @param the class of the output keys + @param the class of the output values]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Reducer implementations + can access the {@link Configuration} for the job via the + {@link JobContext#getConfiguration()} method.

+ +

Reducer has 3 primary phases:

+
    +
  1. + + Shuffle + +

    The Reducer copies the sorted output from each + {@link Mapper} using HTTP across the network.

    +
  2. + +
  3. + Sort + +

    The framework merge sorts Reducer inputs by + keys + (since different Mappers may have output the same key).

    + +

    The shuffle and sort phases occur simultaneously i.e. while outputs are + being fetched they are merged.

    + + SecondarySort + +

    To achieve a secondary sort on the values returned by the value + iterator, the application should extend the key with the secondary + key and define a grouping comparator. The keys will be sorted using the + entire key, but will be grouped using the grouping comparator to decide + which keys and values are sent in the same call to reduce. The grouping + comparator is specified via + {@link Job#setGroupingComparatorClass(Class)}. The sort order is + controlled by + {@link Job#setSortComparatorClass(Class)}.

    + + + For example, say that you want to find duplicate web pages and tag them + all with the url of the "best" known example. You would set up the job + like: +
      +
    • Map Input Key: url
    • Map Input Value: document
    • Map Output Key: document checksum, url pagerank
    • Map Output Value: url
    • Partitioner: by checksum
    • OutputKeyComparator: by checksum and then decreasing pagerank
    • OutputValueGroupingComparator: by checksum
    +
  4. + +
  5. + Reduce + +

    In this phase the + {@link #reduce(Object, Iterable, org.apache.hadoop.mapreduce.Reducer.Context)} + method is called for each <key, (collection of values)> in + the sorted inputs.

    +

    The output of the reduce task is typically written to a + {@link RecordWriter} via + {@link Context#write(Object, Object)}.

    +
  6. +
+ +

The output of the Reducer is not re-sorted.

+ +

Example:

+

+ public class IntSumReducer<Key> extends Reducer<Key,IntWritable,
+                                                 Key,IntWritable> {
+   private IntWritable result = new IntWritable();
+ 
+   public void reduce(Key key, Iterable<IntWritable> values,
+                      Context context) throws IOException, InterruptedException {
+     int sum = 0;
+     for (IntWritable val : values) {
+       sum += val.get();
+     }
+     result.set(sum);
+     context.write(key, result);
+   }
+ }
+ 
+ + @see Mapper + @see Partitioner]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + counterName. + @param counterName counter name + @return the Counter for the given counterName]]> + + + + + + + groupName and + counterName. + @param counterName counter name + @return the Counter for the given groupName and + counterName]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + An example TaskAttemptID is : + attempt_200707121733_0003_m_000005_0 , which represents the + zeroth task attempt for the fifth map task in the third job + running at the jobtracker started at 200707121733. +

+ Applications should never construct or parse TaskAttemptID strings + , but rather use appropriate constructors or {@link #forName(String)} + method. + + @see JobID + @see TaskID]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + An example TaskID is : + task_200707121733_0003_m_000005 , which represents the + fifth map task in the third job running at the jobtracker + started at 200707121733. +

+ Applications should never construct or parse TaskID strings + , but rather use appropriate constructors or {@link #forName(String)} + method. + + @see JobID + @see TaskAttemptID]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + OutputCommitter for the task-attempt]]> + + + + the input key type for the task + @param the input value type for the task + @param the output key type for the task + @param the output value type for the task]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + type of the other counter + @param type of the other counter group + @param counters the counters object to copy + @param groupFactory the factory for new groups]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + type of counter inside the counters + @param type of group inside the counters]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + type of the counter for the group]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The key and values are passed from one element of the chain to the next, by + value. For the added Mapper the configuration given for it, + mapperConf, have precedence over the job's Configuration. This + precedence is in effect when the task is running. +

+

+ IMPORTANT: There is no need to specify the output key/value classes for the + ChainMapper, this is done by the addMapper for the last mapper in the chain +

+ + @param job + The job. + @param klass + the Mapper class to add. + @param inputKeyClass + mapper input key class. + @param inputValueClass + mapper input value class. + @param outputKeyClass + mapper output key class. + @param outputValueClass + mapper output value class. + @param mapperConf + a configuration for the Mapper class. It is recommended to use a + Configuration without default values using the + Configuration(boolean loadDefaults) constructor with + FALSE.]]> +
+ + + + + + + + + + + + The Mapper classes are invoked in a chained (or piped) fashion, the output of + the first becomes the input of the second, and so on until the last Mapper, + the output of the last Mapper will be written to the task's output. +

+

+ The key functionality of this feature is that the Mappers in the chain do not + need to be aware that they are executed in a chain. This enables having + reusable specialized Mappers that can be combined to perform composite + operations within a single task. +

+

+ Special care has to be taken when creating chains that the key/values output + by a Mapper are valid for the following Mapper in the chain. It is assumed + all Mappers and the Reduce in the chain use matching output and input key and + value classes as no conversion is done by the chaining code. +

+

+ Using the ChainMapper and the ChainReducer classes it is possible to compose + Map/Reduce jobs that look like [MAP+ / REDUCE MAP*]. An + immediate benefit of this pattern is a dramatic reduction in disk IO. +

+

+ IMPORTANT: There is no need to specify the output key/value classes for the + ChainMapper, this is done by the addMapper for the last mapper in the chain. +

+ ChainMapper usage pattern: +

+ +

+ ...
+ Job = new Job(conf);
+
+ Configuration mapAConf = new Configuration(false);
+ ...
+ ChainMapper.addMapper(job, AMap.class, LongWritable.class, Text.class,
+   Text.class, Text.class, true, mapAConf);
+
+ Configuration mapBConf = new Configuration(false);
+ ...
+ ChainMapper.addMapper(job, BMap.class, Text.class, Text.class,
+   LongWritable.class, Text.class, false, mapBConf);
+
+ ...
+
+ job.waitForCompletion(true);
+ ...
+ 
]]> +
+
+ + + + + + + + + + + + + + + + The key and values are passed from one element of the chain to the next, by + value. For the added Reducer the configuration given for it, + reducerConf, has precedence over the job's Configuration. + This precedence is in effect when the task is running. +

+

+ IMPORTANT: There is no need to specify the output key/value classes for the + ChainReducer, this is done by the setReducer or the addMapper for the last + element in the chain. +

+ + @param job + the job + @param klass + the Reducer class to add. + @param inputKeyClass + reducer input key class. + @param inputValueClass + reducer input value class. + @param outputKeyClass + reducer output key class. + @param outputValueClass + reducer output value class. + @param reducerConf + a configuration for the Reducer class. It is recommended to use a + Configuration without default values using the + Configuration(boolean loadDefaults) constructor with + FALSE.]]> +
+
+ + + + + + + + + + + + The key and values are passed from one element of the chain to the next, by + value. For the added Mapper the configuration given for it, + mapperConf, has precedence over the job's Configuration. This + precedence is in effect when the task is running. +

+

+ IMPORTANT: There is no need to specify the output key/value classes for the + ChainMapper, this is done by the addMapper for the last mapper in the + chain. +

+ + @param job + The job. + @param klass + the Mapper class to add. + @param inputKeyClass + mapper input key class. + @param inputValueClass + mapper input value class. + @param outputKeyClass + mapper output key class. + @param outputValueClass + mapper output value class. + @param mapperConf + a configuration for the Mapper class. It is recommended to use a + Configuration without default values using the + Configuration(boolean loadDefaults) constructor with + FALSE.]]> +
+
+ + + + + + + + + + + For each record output by the Reducer, the Mapper classes are invoked in a + chained (or piped) fashion. The output of the reducer becomes the input of + the first mapper and output of first becomes the input of the second, and so + on until the last Mapper, the output of the last Mapper will be written to + the task's output. +

+

+ The key functionality of this feature is that the Mappers in the chain do not + need to be aware that they are executed after the Reducer or in a chain. This + enables having reusable specialized Mappers that can be combined to perform + composite operations within a single task. +

+

+ Special care has to be taken when creating chains that the key/values output + by a Mapper are valid for the following Mapper in the chain. It is assumed + all Mappers and the Reduce in the chain use matching output and input key and + value classes as no conversion is done by the chaining code. +

+

Using the ChainMapper and the ChainReducer classes it is possible to + compose Map/Reduce jobs that look like [MAP+ / REDUCE MAP*]. An + immediate benefit of this pattern is a dramatic reduction in disk IO.

+

+ IMPORTANT: There is no need to specify the output key/value classes for the + ChainReducer, this is done by the setReducer or the addMapper for the last + element in the chain. +

+ ChainReducer usage pattern: +

+ +

+ ...
+ Job = new Job(conf);
+ ....
+
+ Configuration reduceConf = new Configuration(false);
+ ...
+ ChainReducer.setReducer(job, XReduce.class, LongWritable.class, Text.class,
+   Text.class, Text.class, true, reduceConf);
+
+ ChainReducer.addMapper(job, CMap.class, Text.class, Text.class,
+   LongWritable.class, Text.class, false, null);
+
+ ChainReducer.addMapper(job, DMap.class, LongWritable.class, Text.class,
+   LongWritable.class, LongWritable.class, true, null);
+
+ ...
+
+ job.waitForCompletion(true);
+ ...
+ 
]]> +
+
+ +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + DBInputFormat emits LongWritables containing the record number as + key and DBWritables as value. + + The SQL query, and input class can be using one of the two + setInput methods.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + {@link DBOutputFormat} accepts <key,value> pairs, where + key has a type extending DBWritable. Returned {@link RecordWriter} + writes only the key to the database with a batch SQL query.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + DBWritable. DBWritable, is similar to {@link Writable} + except that the {@link #write(PreparedStatement)} method takes a + {@link PreparedStatement}, and {@link #readFields(ResultSet)} + takes a {@link ResultSet}. +

+ Implementations are responsible for writing the fields of the object + to PreparedStatement, and reading the fields of the object from the + ResultSet. + +

Example:

+ If we have the following table in the database : +
+ CREATE TABLE MyTable (
+   counter        INTEGER NOT NULL,
+   timestamp      BIGINT  NOT NULL
+ );
+ 
+ then we can read/write the tuples from/to the table with : +

+ public class MyWritable implements Writable, DBWritable {
+   // Some data     
+   private int counter;
+   private long timestamp;
+       
+   //Writable#write() implementation
+   public void write(DataOutput out) throws IOException {
+     out.writeInt(counter);
+     out.writeLong(timestamp);
+   }
+       
+   //Writable#readFields() implementation
+   public void readFields(DataInput in) throws IOException {
+     counter = in.readInt();
+     timestamp = in.readLong();
+   }
+       
+   public void write(PreparedStatement statement) throws SQLException {
+     statement.setInt(1, counter);
+     statement.setLong(2, timestamp);
+   }
+       
+   public void readFields(ResultSet resultSet) throws SQLException {
+     counter = resultSet.getInt(1);
+     timestamp = resultSet.getLong(2);
+   } 
+ }
+ 
]]> +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + RecordReader's for + CombineFileSplit's. + + @see CombineFileSplit]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + CombineFileRecordReader. + + Subclassing is needed to get a concrete record reader wrapper because of the + constructor requirement. + + @see CombineFileRecordReader + @see CombineFileInputFormat]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + th Path]]> + + + + + + th Path]]> + + + + + + + + + + + th Path]]> + + + + + + + + + + + + + + + + + + + + + + + + + + CombineFileSplit can be used to implement {@link RecordReader}'s, + with reading one record per file. + + @see FileSplit + @see CombineFileInputFormat]]> + + + + + + + + + + + + + + CombineFileInputFormat-equivalent for + SequenceFileInputFormat. + + @see CombineFileInputFormat]]> + + + + + + + + + + + + + + CombineFileInputFormat-equivalent for + TextInputFormat. + + @see CombineFileInputFormat]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + FileInputFormat always returns + true. Implementations that may deal with non-splittable files must + override this method. + + FileInputFormat implementations can override this and return + false to ensure that individual input files are never split-up + so that {@link Mapper}s process entire files. + + @param context the job context + @param filename the file name to check + @return is this file splitable?]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + FileInputFormat is the base class for all file-based + InputFormats. This provides a generic implementation of + {@link #getSplits(JobContext)}. + + Implementations of FileInputFormat can also override the + {@link #isSplitable(JobContext, Path)} method to prevent input files + from being split-up in certain situations. Implementations that may + deal with non-splittable files must override this method, since + the default implementation assumes splitting is always possible.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
or + conf.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, recordLength); +

+ @see FixedLengthRecordReader]]> +
+
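A sketch of fixing the record length for FixedLengthInputFormat, using both the helper and the property shown in the fragment above; it is not part of the generated report and the 74-byte length is an arbitrary example:

  // Sketch only: configure fixed-length record reading for a job.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat;

  public class FixedLengthSetup {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Either use the helper ...
      FixedLengthInputFormat.setRecordLength(conf, 74);
      // ... or set the property directly, as shown above.
      conf.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, 74);

      Job job = Job.getInstance(conf, "fixed-length-read");
      job.setInputFormatClass(FixedLengthInputFormat.class);
    }
  }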
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if the Job was added.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ([,]*) + func ::= tbl(,"") + class ::= @see java.lang.Class#forName(java.lang.String) + path ::= @see org.apache.hadoop.fs.Path#Path(java.lang.String) + } + Reads expression from the mapreduce.join.expr property and + user-supplied join types from mapreduce.join.define.<ident> + types. Paths supplied to tbl are given as input paths to the + InputFormat class listed. + @see #compose(java.lang.String, java.lang.Class, java.lang.String...)]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ,

) }]]> + + + + + + + + (tbl(,),tbl(,),...,tbl(,)) }]]> + + + + + + + + (tbl(,),tbl(,),...,tbl(,)) }]]> + + + + + + + + mapreduce.join.define.<ident> to a classname. + In the expression mapreduce.join.expr, the identifier will be + assumed to be a ComposableRecordReader. + mapreduce.join.keycomparator can be a classname used to compare + keys in the join. + @see #setFormat + @see JoinRecordReader + @see MultiFilterRecordReader]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ...... + }]]> + + + + + + + + + + + + + + + + + + + + + capacity children to position + id in the parent reader. + The id of a root CompositeRecordReader is -1 by convention, but relying + on this is not recommended.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + override(S1,S2,S3) will prefer values + from S3 over S2, and values from S2 over S1 for all keys + emitted from all sources.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + [<child1>,<child2>,...,<childn>]]]> + + + + + + + out. + TupleWritable format: + {@code + ...... + }]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the map's input key type + @param the map's input value type + @param the map's output key type + @param the map's output value type + @param job the job + @return the mapper class to run]]> + + + + + + + the map input key type + @param the map input value type + @param the map output key type + @param the map output value type + @param job the job to modify + @param cls the class to use as the mapper]]> + + + + + + + + + + + + + + + + + It can be used instead of the default implementation, + {@link org.apache.hadoop.mapred.MapRunner}, when the Map operation is not CPU + bound in order to improve throughput. +

+ Mapper implementations using this MapRunnable must be thread-safe. +

+ The Map-Reduce job has to be configured with the mapper to use via + {@link #setMapperClass(Job, Class)} and + the number of threads the thread-pool can use with the + {@link #getNumberOfThreads(JobContext)} method. The default + value is 10 threads. +

]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + MapContext to be wrapped + @return a wrapped Mapper.Context for custom implementations]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

  • In applications which take a classname of committer in
    a configuration option, set it to the canonical name of this class
    (see {@link #NAME}). When this class is instantiated, it will
    use the factory mechanism to locate the configured committer for the
    destination.
  • In code, explicitly create an instance of this committer through
    its constructor, then invoke commit lifecycle operations on it.
    The dynamically configured committer will be created in the constructor
    and have the lifecycle operations relayed to it.
 ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if the job output should be compressed, + false otherwise]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Tasks' Side-Effect Files + +

    Some applications need to create/write-to side-files, which differ from + the actual job-outputs. + +

    In such cases there could be issues with 2 instances of the same TIP + (running simultaneously e.g. speculative tasks) trying to open/write-to the + same file (path) on HDFS. Hence the application-writer will have to pick + unique names per task-attempt (e.g. using the attemptid, say + attempt_200709221812_0001_m_000000_0), not just per TIP.

    + +

    To get around this the Map-Reduce framework helps the application-writer + out by maintaining a special + ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid} + sub-directory for each task-attempt on HDFS where the output of the + task-attempt goes. On successful completion of the task-attempt the files + in the ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid} (only) + are promoted to ${mapreduce.output.fileoutputformat.outputdir}. Of course, the + framework discards the sub-directory of unsuccessful task-attempts. This + is completely transparent to the application.

    + +

    The application-writer can take advantage of this by creating any + side-files required in a work directory during execution + of his task i.e. via + {@link #getWorkOutputPath(TaskInputOutputContext)}, and + the framework will move them out similarly - thus she doesn't have to pick + unique paths per task-attempt.

    + +

    The entire discussion holds true for maps of jobs with + reducer=NONE (i.e. 0 reduces) since output of the map, in that case, + goes directly to HDFS.

    + + @return the {@link Path} to the task's temporary output directory + for the map-reduce job.]]> +
    +
    + + + + + + + + The path can be used to create custom files from within the map and + reduce tasks. The path name will be unique for each task. The path parent + will be the job output directory.

    + +

    This method uses the {@link #getUniqueFile} method to make the file name + unique for the task.

    + + @param context the context for the task. + @param name the name for the file. + @param extension the extension for the file + @return a unique path across all tasks of the job.]]> +
    +
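As a sketch of the side-file mechanism described above (not taken from the generated report), a task can create an extra file under its work output path and let the committer promote it on success; the side-file name is invented:

  // Sketch only: write a per-task "side file" under the work output path.
  import java.io.IOException;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class SideFileMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
      // Unique per task attempt; promoted to the job output dir on commit.
      Path workDir = FileOutputFormat.getWorkOutputPath(context);
      Path sideFile = new Path(workDir, "side-" + context.getTaskAttemptID() + ".txt");
      FileSystem fs = sideFile.getFileSystem(context.getConfiguration());
      try (FSDataOutputStream out = fs.create(sideFile, false)) {
        out.writeBytes("created by " + context.getTaskAttemptID() + "\n");
      }
    }
  }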
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Warning: when the baseOutputPath is a path that resolves + outside of the final job output directory, the directory is created + immediately and then persists through subsequent task retries, breaking + the concept of output committing.]]> + + + + + + + + + + Warning: when the baseOutputPath is a path that resolves + outside of the final job output directory, the directory is created + immediately and then persists through subsequent task retries, breaking + the concept of output committing.]]> + + + + + + + super.close() at the + end of their close()]]> + + + + + Case one: writing to additional outputs other than the job default output. + + Each additional output, or named output, may be configured with its own + OutputFormat, with its own key class and with its own value + class. +

    + +

    + Case two: to write data to different files provided by user +

    + +

    + MultipleOutputs supports counters, by default they are disabled. The + counters group is the {@link MultipleOutputs} class name. The names of the + counters are the same as the output name. These count the number records + written to each output name. +

    + + Usage pattern for job submission: +
    +
    + Job job = new Job();
    +
    + FileInputFormat.setInputPath(job, inDir);
    + FileOutputFormat.setOutputPath(job, outDir);
    +
    + job.setMapperClass(MOMap.class);
    + job.setReducerClass(MOReduce.class);
    + ...
    +
    + // Defines additional single text based output 'text' for the job
    + MultipleOutputs.addNamedOutput(job, "text", TextOutputFormat.class,
    + LongWritable.class, Text.class);
    +
    + // Defines additional sequence-file based output 'sequence' for the job
    + MultipleOutputs.addNamedOutput(job, "seq",
    +   SequenceFileOutputFormat.class,
    +   LongWritable.class, Text.class);
    + ...
    +
    + job.waitForCompletion(true);
    + ...
    + 
    +

    + Usage in Reducer: +

    + <K, V> String generateFileName(K k, V v) {
    +   return k.toString() + "_" + v.toString();
    + }
    + 
    + public class MOReduce extends
    +   Reducer<WritableComparable, Writable,WritableComparable, Writable> {
    + private MultipleOutputs mos;
    + public void setup(Context context) {
    + ...
    + mos = new MultipleOutputs(context);
    + }
    +
    + public void reduce(WritableComparable key, Iterable<Writable> values,
    + Context context)
    + throws IOException {
    + ...
    + mos.write("text", , key, new Text("Hello"));
    + mos.write("seq", LongWritable(1), new Text("Bye"), "seq_a");
    + mos.write("seq", LongWritable(2), key, new Text("Chau"), "seq_b");
    + mos.write(key, new Text("value"), generateFileName(key, new Text("value")));
    + ...
    + }
    +
    + public void cleanup(Context context) throws IOException {
    + mos.close();
    + ...
    + }
    +
    + }
    + 
    + +

    + When used in conjunction with org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat, + MultipleOutputs can mimic the behaviour of MultipleTextOutputFormat and MultipleSequenceFileOutputFormat + from the old Hadoop API - i.e., output can be written from the Reducer to more than one location. +

    + +

    + Use MultipleOutputs.write(KEYOUT key, VALUEOUT value, String baseOutputPath) to write key and + value to a path specified by baseOutputPath, with no need to specify a named output. + Warning: when the baseOutputPath passed to MultipleOutputs.write + is a path that resolves outside of the final job output directory, the + directory is created immediately and then persists through subsequent + task retries, breaking the concept of output committing: +

    + +
    + private MultipleOutputs<Text, Text> out;
    + 
    + public void setup(Context context) {
    +   out = new MultipleOutputs<Text, Text>(context);
    +   ...
    + }
    + 
    + public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
    + for (Text t : values) {
    +   out.write(key, t, generateFileName(<parameter list...>));
    +   }
    + }
    + 
    + protected void cleanup(Context context) throws IOException, InterruptedException {
    +   out.close();
    + }
    + 
    + +

    + Use your own code in generateFileName() to create a custom path to your results. + '/' characters in baseOutputPath will be translated into directory levels in your file system. + Also, append your custom-generated path with "part" or similar, otherwise your output will be -00000, -00001 etc. + No call to context.write() is necessary. See example generateFileName() code below. +

    + +
    + private String generateFileName(Text k) {
    +   // expect Text k in format "Surname|Forename"
    +   String[] kStr = k.toString().split("\\|");
    +   
    +   String sName = kStr[0];
    +   String fName = kStr[1];
    +
    +   // example for k = Smith|John
    +   // output written to /user/hadoop/path/to/output/Smith/John-r-00000 (etc)
    +   return sName + "/" + fName;
    + }
    + 
    + +

    + Using MultipleOutputs in this way will still create zero-sized default output, eg part-00000. + To prevent this use LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class); + instead of job.setOutputFormatClass(TextOutputFormat.class); in your Hadoop job configuration. +

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + This allows the user to specify the key class to be different + from the actual class ({@link BytesWritable}) used for writing

    + + @param job the {@link Job} to modify + @param theClass the SequenceFile output key class.]]> +
    +
    + + + + + This allows the user to specify the value class to be different + from the actual class ({@link BytesWritable}) used for writing

    + + @param job the {@link Job} to modify + @param theClass the SequenceFile output value class.]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + +
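The two setter fragments above belong to SequenceFileAsBinaryOutputFormat; a sketch of wiring them up, assuming the org.apache.hadoop.mapreduce.lib.output variant and with arbitrarily chosen key/value classes (not taken from the generated report):

  // Sketch only: declare the SequenceFile key/value classes independently of
  // the BytesWritable actually written, as the setters above allow.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat;

  public class BinarySeqFileSetup {
    public static void main(String[] args) throws Exception {
      Job job = Job.getInstance(new Configuration(), "binary-seqfile");
      job.setOutputFormatClass(SequenceFileAsBinaryOutputFormat.class);
      SequenceFileAsBinaryOutputFormat.setSequenceFileOutputKeyClass(job, IntWritable.class);
      SequenceFileAsBinaryOutputFormat.setSequenceFileOutputValueClass(job, Text.class);
    }
  }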
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + + + + + + + + + + bytes[left:(right+1)] in Python syntax. + + @param conf configuration object + @param left left Python-style offset + @param right right Python-style offset]]> + + + + + + + bytes[offset:] in Python syntax. + + @param conf configuration object + @param offset left Python-style offset]]> + + + + + + + bytes[:(offset+1)] in Python syntax. + + @param conf configuration object + @param offset right Python-style offset]]> + + + + + + + + + + + + + + + + + + + + + Partition {@link BinaryComparable} keys using a configurable part of + the bytes array returned by {@link BinaryComparable#getBytes()}.

    + +

    The subarray to be used for the partitioning can be defined by means + of the following properties: +

      +
    • + mapreduce.partition.binarypartitioner.left.offset: + left offset in array (0 by default) +
    • +
    • + mapreduce.partition.binarypartitioner.right.offset: + right offset in array (-1 by default) +
    • +
    + Like in Python, both negative and positive offsets are allowed, but + the meaning is slightly different. In case of an array of length 5, + for instance, the possible offsets are: +
    
    +  +---+---+---+---+---+
    +  | B | B | B | B | B |
    +  +---+---+---+---+---+
    +    0   1   2   3   4
    +   -5  -4  -3  -2  -1
    + 
+ The first row of numbers gives the position of the offsets 0...4 in + the array; the second row gives the corresponding negative offsets. + Contrary to Python, the specified subarray has byte i + and j as first and last element, respectively, when + i and j are the left and right offset. + +

    For Hadoop programs written in Java, it is advisable to use one of + the following static convenience methods for setting the offsets: +

      +
    • {@link #setOffsets}
    • +
    • {@link #setLeftOffset}
    • +
    • {@link #setRightOffset}
    • +
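+ 
+ A minimal configuration sketch (assuming the map output key is a BinaryComparable type such as BytesWritable, and a prepared Job named job):
+ 
+ Configuration conf = job.getConfiguration();
+ // Partition on bytes[0:3] of the key, i.e. its first four bytes (Python-style inclusive offsets):
+ BinaryPartitioner.setOffsets(conf, 0, 3);
+ job.setPartitionerClass(BinaryPartitioner.class);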
    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + total.order.partitioner.natural.order is not false, a trie + of the first total.order.partitioner.max.trie.depth(2) + 1 bytes + will be built. Otherwise, keys will be located using a binary search of + the partition keyset using the {@link org.apache.hadoop.io.RawComparator} + defined for this job. The input file must be sorted with the same + comparator and contain {@link Job#getNumReduceTasks()} - 1 keys.]]> + + + + + + + + + + + + + + R reduces, there are R-1 + keys in the SequenceFile.]]> + + + + + + + + + + + + + + + + + + + + + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ReduceContext to be wrapped + @return a wrapped Reducer.Context for custom implementations]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    diff --git a/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_JobClient_3.2.4.xml b/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_JobClient_3.2.4.xml new file mode 100644 index 00000000000..ed4979ac1fa --- /dev/null +++ b/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_JobClient_3.2.4.xml @@ -0,0 +1,16 @@ + + + + + + + + + + + + diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_API_3.2.4.xml b/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_API_3.2.4.xml new file mode 100644 index 00000000000..98c76451549 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_API_3.2.4.xml @@ -0,0 +1,25332 @@ + + + + + + + + + + + + + + + + + + + + The interface used by clients to obtain a new {@link ApplicationId} for + submitting new applications.

    + +

    The ResourceManager responds with a new, monotonically + increasing, {@link ApplicationId} which is used by the client to submit + a new application.

    + +

    The ResourceManager also responds with details such + as maximum resource capabilities in the cluster as specified in + {@link GetNewApplicationResponse}.

    + + @param request request to get a new ApplicationId + @return response containing the new ApplicationId to be used + to submit an application + @throws YarnException + @throws IOException + @see #submitApplication(SubmitApplicationRequest)]]> +
    +
    + + + + + + The interface used by clients to submit a new application to the + ResourceManager.

    + +

    The client is required to provide details such as queue, + {@link Resource} required to run the ApplicationMaster, + the equivalent of {@link ContainerLaunchContext} for launching + the ApplicationMaster etc. via the + {@link SubmitApplicationRequest}.

    + +

    Currently the ResourceManager sends an immediate (empty) + {@link SubmitApplicationResponse} on accepting the submission and throws + an exception if it rejects the submission. However, this call needs to be + followed by {@link #getApplicationReport(GetApplicationReportRequest)} + to make sure that the application gets properly submitted - obtaining a + {@link SubmitApplicationResponse} from ResourceManager doesn't guarantee + that RM 'remembers' this application beyond failover or restart. If RM + failover or RM restart happens before ResourceManager saves the + application's state successfully, the subsequent + {@link #getApplicationReport(GetApplicationReportRequest)} will throw + a {@link ApplicationNotFoundException}. The Clients need to re-submit + the application with the same {@link ApplicationSubmissionContext} when + it encounters the {@link ApplicationNotFoundException} on the + {@link #getApplicationReport(GetApplicationReportRequest)} call.

    + +

    During the submission process, it checks whether the application + already exists. If the application exists, it will simply return + SubmitApplicationResponse

    + +

In secure mode, the ResourceManager verifies access to + queues etc. before accepting the application submission.
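+ 
+ A client-side sketch of the getNewApplication/submitApplication flow, using the YarnClient library that wraps this protocol (amContainer and amResource are assumed to be a prepared ContainerLaunchContext and Resource):
+ 
+ YarnClient client = YarnClient.createYarnClient();
+ client.init(new YarnConfiguration());
+ client.start();
+ YarnClientApplication app = client.createApplication();        // obtains a new ApplicationId from the RM
+ ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
+ ctx.setApplicationName("example-app");
+ ctx.setQueue("default");
+ ctx.setAMContainerSpec(amContainer);                            // assumed ContainerLaunchContext
+ ctx.setResource(amResource);                                    // assumed, e.g. Resource.newInstance(1024, 1)
+ ApplicationId appId = client.submitApplication(ctx);
+ ApplicationReport report = client.getApplicationReport(appId); // confirm the RM has recorded the app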

    + + @param request request to submit a new application + @return (empty) response on accepting the submission + @throws YarnException + @throws IOException + @see #getNewApplication(GetNewApplicationRequest)]]> +
    +
    + + + + + + The interface used by clients to request the + ResourceManager to fail an application attempt.

    + +

    The client, via {@link FailApplicationAttemptRequest} provides the + {@link ApplicationAttemptId} of the attempt to be failed.

    + +

In secure mode, the ResourceManager verifies access to the + application, queue etc. before failing the attempt.

    + +

    Currently, the ResourceManager returns an empty response + on success and throws an exception on rejecting the request.

    + + @param request request to fail an attempt + @return ResourceManager returns an empty response + on success and throws an exception on rejecting the request + @throws YarnException + @throws IOException + @see #getQueueUserAcls(GetQueueUserAclsInfoRequest)]]> +
    +
    + + + + + + The interface used by clients to request the + ResourceManager to abort submitted application.

    + +

    The client, via {@link KillApplicationRequest} provides the + {@link ApplicationId} of the application to be aborted.

    + +

In secure mode, the ResourceManager verifies access to the + application, queue etc. before terminating the application.

    + +

    Currently, the ResourceManager returns an empty response + on success and throws an exception on rejecting the request.

    + + @param request request to abort a submitted application + @return ResourceManager returns an empty response + on success and throws an exception on rejecting the request + @throws YarnException + @throws IOException + @see #getQueueUserAcls(GetQueueUserAclsInfoRequest)]]> +
    +
    + + + + + + The interface used by clients to get metrics about the cluster from + the ResourceManager.

    + +

    The ResourceManager responds with a + {@link GetClusterMetricsResponse} which includes the + {@link YarnClusterMetrics} with details such as number of current + nodes in the cluster.

    + + @param request request for cluster metrics + @return cluster metrics + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + The interface used by clients to get a report of all nodes + in the cluster from the ResourceManager.

    + +

    The ResourceManager responds with a + {@link GetClusterNodesResponse} which includes the + {@link NodeReport} for all the nodes in the cluster.

    + + @param request request for report on all nodes + @return report on all nodes + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + The interface used by clients to get information about queues + from the ResourceManager.

    + +

    The client, via {@link GetQueueInfoRequest}, can ask for details such + as used/total resources, child queues, running applications etc.

    + +

In secure mode, the ResourceManager verifies access before + providing the information.

    + + @param request request to get queue information + @return queue information + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + The interface used by clients to get information about queue + acls for current user from the ResourceManager. +

    + +

    The ResourceManager responds with queue acls for all + existing queues.

    + + @param request request to get queue acls for current user + @return queue acls for current user + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + + + + + + + + The interface used by clients to obtain a new {@link ReservationId} for + submitting new reservations.

    + +

    The ResourceManager responds with a new, unique, + {@link ReservationId} which is used by the client to submit + a new reservation.

    + + @param request to get a new ReservationId + @return response containing the new ReservationId to be used + to submit a new reservation + @throws YarnException if the reservation system is not enabled. + @throws IOException on IO failures. + @see #submitReservation(ReservationSubmissionRequest)]]> +
    +
    + + + + + + + The interface used by clients to submit a new reservation to the + {@code ResourceManager}. +

    + +

    + The client packages all details of its request in a + {@link ReservationSubmissionRequest} object. This contains information + about the amount of capacity, temporal constraints, and concurrency needs. + Furthermore, the reservation might be composed of multiple stages, with + ordering dependencies among them. +

    + +

+ In order to respond, a new admission control component in the + {@code ResourceManager} performs an analysis of the resources that have + been committed over the period of time the user is requesting, verifies that + the user's requests can be fulfilled and that they respect a sharing policy + (e.g., {@code CapacityOverTimePolicy}). Once it has positively determined + that the ReservationSubmissionRequest is satisfiable, the + {@code ResourceManager} answers with a + {@link ReservationSubmissionResponse} that includes a non-null + {@link ReservationId}. Upon failure to find a valid allocation, the response + is an exception with the reason. + + On application submission the client can use this {@link ReservationId} to + obtain access to the reserved resources. +

    + +

+ The system guarantees that during the time-range specified by the user, the + ReservationId will correspond to a valid reservation. The amount of + capacity dedicated to such a queue can vary over time, depending on the + allocation that has been determined, but it is guaranteed to satisfy all + the constraints expressed by the user in the + {@link ReservationSubmissionRequest}. +

    + + @param request the request to submit a new Reservation + @return response the {@link ReservationId} on accepting the submission + @throws YarnException if the request is invalid or reservation cannot be + created successfully + @throws IOException]]> +
    +
+ + + + + + + The interface used by clients to update an existing Reservation. This is + referred to as a re-negotiation process, in which a user that has + previously submitted a Reservation can modify its allocation. +

    + +

+ The allocation is attempted by virtually substituting all previous + allocations related to this Reservation with new ones that satisfy the new + {@link ReservationUpdateRequest}. Upon success, the previous allocation is + substituted by the new one, and on failure (i.e., if the system cannot find + a valid allocation for the updated request), the previous allocation + remains valid. + + The {@link ReservationId} is not changed, and applications currently + running within this reservation will automatically receive the resources + based on the new allocation. +

    + + @param request to update an existing Reservation (the ReservationRequest + should refer to an existing valid {@link ReservationId}) + @return response empty on successfully updating the existing reservation + @throws YarnException if the request is invalid or reservation cannot be + updated successfully + @throws IOException]]> +
    +
+ + + + + + + The interface used by clients to remove an existing Reservation. + + Upon deletion of a reservation, applications running with this reservation + are automatically downgraded to normal jobs running without any dedicated + reservation. +

    + + @param request to remove an existing Reservation (the ReservationRequest + should refer to an existing valid {@link ReservationId}) + @return response empty on successfully deleting the existing reservation + @throws YarnException if the request is invalid or reservation cannot be + deleted successfully + @throws IOException]]> +
    +
    + + + + + + + The interface used by clients to get the list of reservations in a plan. + The reservationId will be used to search for reservations to list if it is + provided. Otherwise, it will select active reservations within the + startTime and endTime (inclusive). +

    + + @param request to list reservations in a plan. Contains fields to select + String queue, ReservationId reservationId, long startTime, + long endTime, and a bool includeReservationAllocations. + + queue: Required. Cannot be null or empty. Refers to the + reservable queue in the scheduler that was selected when + creating a reservation submission + {@link ReservationSubmissionRequest}. + + reservationId: Optional. If provided, other fields will + be ignored. + + startTime: Optional. If provided, only reservations that + end after the startTime will be selected. This defaults + to 0 if an invalid number is used. + + endTime: Optional. If provided, only reservations that + start on or before endTime will be selected. This defaults + to Long.MAX_VALUE if an invalid number is used. + + includeReservationAllocations: Optional. Flag that + determines whether the entire reservation allocations are + to be returned. Reservation allocations are subject to + change in the event of re-planning as described by + {@code ReservationDefinition}. + + @return response that contains information about reservations that are + being searched for. + @throws YarnException if the request is invalid + @throws IOException on IO failures]]> +
    +
+ + + + + + + The interface used by clients to get node to labels mappings in the existing cluster +

    + + @param request + @return node to labels mappings + @throws YarnException + @throws IOException]]> +
    +
+ + + + + + + The interface used by clients to get labels to nodes mappings + in the existing cluster +

    + + @param request + @return labels to nodes mappings + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + The interface used by client to get node labels in the cluster +

    + + @param request to get node labels collection of this cluster + @return node labels collection of this cluster + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + The interface used by client to set priority of an application. +

    + @param request to set priority of an application + @return an empty response + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + The interface used by clients to request the + ResourceManager to signal a container. For example, + the client can send command OUTPUT_THREAD_DUMP to dump threads of the + container.

    + +

    The client, via {@link SignalContainerRequest} provides the + id of the container and the signal command.

    + +

In secure mode, the ResourceManager verifies access to the + application before signaling the container. + The user needs to have MODIFY_APP permission.

    + +

    Currently, the ResourceManager returns an empty response + on success and throws an exception on rejecting the request.
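+ 
+ A sketch of issuing the thread-dump signal mentioned above (containerId is assumed to be known; the last line shows the equivalent call through the YarnClient library):
+ 
+ SignalContainerRequest request =
+     SignalContainerRequest.newInstance(containerId, SignalContainerCommand.OUTPUT_THREAD_DUMP);
+ yarnClient.signalToContainer(containerId, SignalContainerCommand.OUTPUT_THREAD_DUMP);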

    + + @param request request to signal a container + @return ResourceManager returns an empty response + on success and throws an exception on rejecting the request + @throws YarnException + @throws IOException]]> +
    +
+ + + + + + + The interface used by clients to set ApplicationTimeouts of an application. + The UpdateApplicationTimeoutsRequest should carry the timeout value as an + absolute time in ISO8601 format yyyy-MM-dd'T'HH:mm:ss.SSSZ. +
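+ 
+ A sketch of building such a request (applicationId is assumed; the timeout below is set one hour from now in the required ISO8601 format):
+ 
+ SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
+ String expiry = fmt.format(new Date(System.currentTimeMillis() + 3600_000L));
+ Map<ApplicationTimeoutType, String> timeouts =
+     Collections.singletonMap(ApplicationTimeoutType.LIFETIME, expiry);
+ UpdateApplicationTimeoutsRequest request =
+     UpdateApplicationTimeoutsRequest.newInstance(applicationId, timeouts);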

    + Note: If application timeout value is less than or equal to current + time then update application throws YarnException. + @param request to set ApplicationTimeouts of an application + @return a response with updated timeouts. + @throws YarnException if update request has empty values or application is + in completing states. + @throws IOException on IO failures]]> +
    +
    + + + + + + + The interface used by clients to get all the resource profiles that are + available on the ResourceManager. +

    + @param request request to get all the resource profiles + @return Response containing a map of the profile name to Resource + capabilities + @throws YARNFeatureNotEnabledException if resource-profile is disabled + @throws YarnException if any error happens inside YARN + @throws IOException in case of other errors]]> +
    +
    + + + + + + + The interface to get the details for a specific resource profile. +

    + @param request request to get the details of a resource profile + @return Response containing the details for a particular resource profile + @throws YARNFeatureNotEnabledException if resource-profile is disabled + @throws YarnException if any error happens inside YARN + @throws IOException in case of other errors]]> +
    +
    + + + + + + + The interface to get the details for a specific resource profile. +

    + @param request request to get the details of a resource profile + @return Response containing the details for a particular resource profile + @throws YarnException if any error happens inside YARN + @throws IOException in case of other errors]]> +
    +
    + + + + + + + The interface used by client to get attributes to nodes mappings + available in ResourceManager. +

+ + @param request request to get details of attributes to nodes mapping. + @return Response containing the details of attributes to nodes mappings. + @throws YarnException if any error happens inside YARN + @throws IOException in case of other errors]]> +
    +
    + + + + + + + The interface used by client to get node attributes available in + ResourceManager. +

+ + @param request request to get node attributes collection of this cluster. + @return Response containing node attributes collection. + @throws YarnException if any error happens inside YARN. + @throws IOException in case of other errors.]]> +
    +
+ + + + + + + The interface used by clients to get node to attributes mappings + in the existing cluster. +

    + + @param request request to get nodes to attributes mapping. + @return nodes to attributes mappings. + @throws YarnException if any error happens inside YARN. + @throws IOException]]> +
    +
    + + The protocol between clients and the ResourceManager + to submit/abort jobs and to get information on applications, cluster metrics, + nodes, queues and ACLs.

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The protocol between clients and the ApplicationHistoryServer to + get the information of completed applications etc. +

    ]]> +
    +
    + + + + + + + + + + The interface used by a new ApplicationMaster to register with + the ResourceManager. +

    + +

    + The ApplicationMaster needs to provide details such as RPC + Port, HTTP tracking url etc. as specified in + {@link RegisterApplicationMasterRequest}. +

    + +

    + The ResourceManager responds with critical details such as + maximum resource capabilities in the cluster as specified in + {@link RegisterApplicationMasterResponse}. +

    + +

    + Re-register is only allowed for Unmanaged Application Master + (UAM) HA, with + {@link org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext#getKeepContainersAcrossApplicationAttempts()} + set to true. +
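+ 
+ A registration sketch using the AMRMClient library that wraps this protocol (the host, port and tracking URL values are placeholders):
+ 
+ AMRMClient<ContainerRequest> amRMClient = AMRMClient.createAMRMClient();
+ amRMClient.init(new YarnConfiguration());
+ amRMClient.start();
+ RegisterApplicationMasterResponse registration =
+     amRMClient.registerApplicationMaster("am-host.example.com", 0, "");
+ // The RM replies with cluster-wide limits such as the maximum resource capability:
+ Resource maxCapability = registration.getMaximumResourceCapability();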

+ + @param request registration request + @return registration response + @throws YarnException + @throws IOException + @throws InvalidApplicationMasterRequestException The exception is thrown + when an ApplicationMaster tries to register more than once. + @see RegisterApplicationMasterRequest + @see RegisterApplicationMasterResponse]]> +
    +
+ + + + + + The interface used by an ApplicationMaster to notify the + ResourceManager about its completion (success or failure).

    + +

    The ApplicationMaster has to provide details such as + final state, diagnostics (in case of failures) etc. as specified in + {@link FinishApplicationMasterRequest}.

    + +

    The ResourceManager responds with + {@link FinishApplicationMasterResponse}.

    + + @param request completion request + @return completion response + @throws YarnException + @throws IOException + @see FinishApplicationMasterRequest + @see FinishApplicationMasterResponse]]> +
    +
    + + + + + + + The main interface between an ApplicationMaster and the + ResourceManager. +

    + +

+ The ApplicationMaster uses this interface to provide a list of + {@link ResourceRequest} and return unused {@link Container} allocated to + it via {@link AllocateRequest}. Optionally, the + ApplicationMaster can also blacklist resources which + it doesn't want to use. +

    + +

    + This also doubles up as a heartbeat to let the + ResourceManager know that the ApplicationMaster + is alive. Thus, applications should periodically make this call to be kept + alive. The frequency depends on + {@link YarnConfiguration#RM_AM_EXPIRY_INTERVAL_MS} which defaults to + {@link YarnConfiguration#DEFAULT_RM_AM_EXPIRY_INTERVAL_MS}. +

    + +

    + The ResourceManager responds with list of allocated + {@link Container}, status of completed containers and headroom information + for the application. +

    + +

+ The ApplicationMaster can use the available headroom + (resources) to decide how to utilize allocated resources and make informed + decisions about future resource requests. +

    + + @param request + allocation request + @return allocation response + @throws YarnException + @throws IOException + @throws InvalidApplicationMasterRequestException + This exception is thrown when an ApplicationMaster calls allocate + without registering first. + @throws InvalidResourceBlacklistRequestException + This exception is thrown when an application provides an invalid + specification for blacklist of resources. + @throws InvalidResourceRequestException + This exception is thrown when a {@link ResourceRequest} is out of + the range of the configured lower and upper limits on the + resources. + @see AllocateRequest + @see AllocateResponse]]> +
    +
    + + The protocol between a live instance of ApplicationMaster + and the ResourceManager.

    + +

    This is used by the ApplicationMaster to register/unregister + and to request and obtain resources in the cluster from the + ResourceManager.

    ]]> +
    +
    + + + + + + + + + + The interface used by clients to claim a resource with the + SharedCacheManager. The client uses a checksum to identify the + resource and an {@link ApplicationId} to identify which application will be + using the resource. +

    + +

    + The SharedCacheManager responds with whether or not the + resource exists in the cache. If the resource exists, a Path + to the resource in the shared cache is returned. If the resource does not + exist, the response is empty. +

    + + @param request request to claim a resource in the shared cache + @return response indicating if the resource is already in the cache + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + The interface used by clients to release a resource with the + SharedCacheManager. This method is called once an application + is no longer using a claimed resource in the shared cache. The client uses + a checksum to identify the resource and an {@link ApplicationId} to + identify which application is releasing the resource. +

    + +

    + Note: This method is an optimization and the client is not required to call + it for correctness. +

    + +

    + Currently the SharedCacheManager sends an empty response. +

    + + @param request request to release a resource in the shared cache + @return (empty) response on releasing the resource + @throws YarnException + @throws IOException]]> +
    +
    + + + The protocol between clients and the SharedCacheManager to claim + and release resources in the shared cache. +

    ]]> +
    +
    + + + + + + + + + + The ApplicationMaster provides a list of + {@link StartContainerRequest}s to a NodeManager to + start {@link Container}s allocated to it using this interface. +

    + +

    + The ApplicationMaster has to provide details such as allocated + resource capability, security tokens (if enabled), command to be executed + to start the container, environment for the process, necessary + binaries/jar/shared-objects etc. via the {@link ContainerLaunchContext} in + the {@link StartContainerRequest}. +

    + +

+ The NodeManager sends a response via + {@link StartContainersResponse} which includes a list of + {@link Container}s of successfully launched {@link Container}s, a + containerId-to-exception map for each failed {@link StartContainerRequest} in + which the exception indicates the error for that container, and an + allServicesMetaData map between the names of auxiliary services and their + corresponding meta-data. Note: Non-container-specific exceptions will + still be thrown by the API method itself. +

    +

    + The ApplicationMaster can use + {@link #getContainerStatuses(GetContainerStatusesRequest)} to get updated + statuses of the to-be-launched or launched containers. +
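+ 
+ A sketch using the NMClient library that wraps this protocol (container is an allocated Container and ctx a prepared ContainerLaunchContext, both assumed):
+ 
+ NMClient nmClient = NMClient.createNMClient();
+ nmClient.init(new YarnConfiguration());
+ nmClient.start();
+ // Launch the allocated container with its launch context; the returned map is the auxiliary services' meta-data:
+ Map<String, ByteBuffer> servicesMetaData = nmClient.startContainer(container, ctx);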

+ + @param request + request to start a list of containers + @return response including containerIds of all successfully launched + containers, a containerId-to-exception map for failed requests and + an allServicesMetaData map. + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + The ApplicationMaster requests a NodeManager to + stop a list of {@link Container}s allocated to it using this + interface. +

    + +

    + The ApplicationMaster sends a {@link StopContainersRequest} + which includes the {@link ContainerId}s of the containers to be stopped. +

    + +

+ The NodeManager sends a response via + {@link StopContainersResponse} which includes a list of {@link ContainerId} + s of successfully stopped containers, a containerId-to-exception map for + each failed request in which the exception indicates the error for that + container. Note: Non-container-specific exceptions will still be thrown by + the API method itself. ApplicationMaster can use + {@link #getContainerStatuses(GetContainerStatusesRequest)} to get updated + statuses of the containers. +

    + + @param request + request to stop a list of containers + @return response which includes a list of containerIds of successfully + stopped containers, a containerId-to-exception map for failed + requests. + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + The API used by the ApplicationMaster to request for current + statuses of Containers from the NodeManager. +

    + +

    + The ApplicationMaster sends a + {@link GetContainerStatusesRequest} which includes the {@link ContainerId}s + of all containers whose statuses are needed. +

    + +

+ The NodeManager responds with + {@link GetContainerStatusesResponse} which includes a list of + {@link ContainerStatus} of the successfully queried containers and a + containerId-to-exception map for each failed request in which the exception + indicates the error for that container. Note: Non-container-specific + exceptions will still be thrown by the API method itself. +

    + + @param request + request to get ContainerStatuses of containers with + the specified ContainerIds + @return response containing the list of ContainerStatus of the + successfully queried containers and a containerId-to-exception map + for failed requests. + + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + The API used by the ApplicationMaster to request for + resource increase of running containers on the NodeManager. +

    + + @param request + request to increase resource of a list of containers + @return response which includes a list of containerIds of containers + whose resource has been successfully increased and a + containerId-to-exception map for failed requests. + + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + The API used by the ApplicationMaster to request for + resource update of running containers on the NodeManager. +

    + + @param request + request to update resource of a list of containers + @return response which includes a list of containerIds of containers + whose resource has been successfully updated and a + containerId-to-exception map for failed requests. + + @throws YarnException Exception specific to YARN + @throws IOException IOException thrown from NodeManager]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The protocol between an ApplicationMaster and a + NodeManager to start/stop and increase resource of containers + and to get status of running containers.

    + +

    If security is enabled the NodeManager verifies that the + ApplicationMaster has truly been allocated the container + by the ResourceManager and also verifies all interactions such + as stopping the container or obtaining status information for the container. +

    ]]> +
    +
    + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + response id used to track duplicate responses. + @return response id]]> + + + + + + response id used to track duplicate responses. + @param id response id]]> + + + + + current progress of application. + @return current progress of application]]> + + + + + + current progress of application + @param progress current progress of application]]> + + + + + ResourceRequest to update the + ResourceManager about the application's resource requirements. + @return the list of ResourceRequest + @see ResourceRequest]]> + + + + + + ResourceRequest to update the + ResourceManager about the application's resource requirements. + @param resourceRequests list of ResourceRequest to update the + ResourceManager about the application's + resource requirements + @see ResourceRequest]]> + + + + + ContainerId of containers being + released by the ApplicationMaster. + @return list of ContainerId of containers being + released by the ApplicationMaster]]> + + + + + + ContainerId of containers being + released by the ApplicationMaster + @param releaseContainers list of ContainerId of + containers being released by the + ApplicationMaster]]> + + + + + ResourceBlacklistRequest being sent by the + ApplicationMaster. + @return the ResourceBlacklistRequest being sent by the + ApplicationMaster + @see ResourceBlacklistRequest]]> + + + + + + ResourceBlacklistRequest to inform the + ResourceManager about the blacklist additions and removals + per the ApplicationMaster. + + @param resourceBlacklistRequest the ResourceBlacklistRequest + to inform the ResourceManager about + the blacklist additions and removals + per the ApplicationMaster + @see ResourceBlacklistRequest]]> + + + + + ApplicationMaster. + @return list of {@link UpdateContainerRequest} + being sent by the + ApplicationMaster.]]> + + + + + + ResourceManager about the containers that need to be + updated. + @param updateRequests list of UpdateContainerRequest for + containers to be updated]]> + + + + + ApplicationMaster. + @return list of {@link SchedulingRequest} being sent by the + ApplicationMaster.]]> + + + + + + ResourceManager about the application's resource requirements + (potentially including allocation tags and placement constraints). + @param schedulingRequests list of {@link SchedulingRequest} to update + the ResourceManager about the application's resource + requirements.]]> + + + + + + + + + + + + + + + + + The core request sent by the ApplicationMaster to the + ResourceManager to obtain resources in the cluster.

    + +

    The request includes: +

      +
    • A response id to track duplicate responses.
    • +
    • Progress information.
    • +
    • + A list of {@link ResourceRequest} to inform the + ResourceManager about the application's + resource requirements. +
    • +
    • + A list of unused {@link Container} which are being returned. +
    • +
    • + A list of {@link UpdateContainerRequest} to inform + the ResourceManager about the change in + requirements of running containers. +
    • +
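+ 
+ A sketch of assembling such a request with the AllocateRequest builder (lastResponseId and ask, a ResourceRequest, are assumed to be tracked by the ApplicationMaster):
+ 
+ AllocateRequest request = AllocateRequest.newBuilder()
+     .responseId(lastResponseId + 1)              // assumed counter kept by the AM
+     .progress(0.5f)
+     .askList(Collections.singletonList(ask))     // assumed ResourceRequest
+     .releaseList(Collections.emptyList())        // no containers returned in this sketch
+     .build();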
    + + @see ApplicationMasterProtocol#allocate(AllocateRequest)]]> +
    +
    + + + + + + + responseId of the request. + @see AllocateRequest#setResponseId(int) + @param responseId responseId of the request + @return {@link AllocateRequestBuilder}]]> + + + + + + progress of the request. + @see AllocateRequest#setProgress(float) + @param progress progress of the request + @return {@link AllocateRequestBuilder}]]> + + + + + + askList of the request. + @see AllocateRequest#setAskList(List) + @param askList askList of the request + @return {@link AllocateRequestBuilder}]]> + + + + + + releaseList of the request. + @see AllocateRequest#setReleaseList(List) + @param releaseList releaseList of the request + @return {@link AllocateRequestBuilder}]]> + + + + + + resourceBlacklistRequest of the request. + @see AllocateRequest#setResourceBlacklistRequest( + ResourceBlacklistRequest) + @param resourceBlacklistRequest + resourceBlacklistRequest of the request + @return {@link AllocateRequestBuilder}]]> + + + + + + updateRequests of the request. + @see AllocateRequest#setUpdateRequests(List) + @param updateRequests updateRequests of the request + @return {@link AllocateRequestBuilder}]]> + + + + + + schedulingRequests of the request. + @see AllocateRequest#setSchedulingRequests(List) + @param schedulingRequests SchedulingRequest of the request + @return {@link AllocateRequestBuilder}]]> + + + + + + trackingUrl of the request. + @see AllocateRequest#setTrackingUrl(String) + @param trackingUrl new tracking url + @return {@link AllocateRequestBuilder}]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ResourceManager needs the + ApplicationMaster to take some action then it will send an + AMCommand to the ApplicationMaster. See AMCommand + for details on commands and actions for them. + @return AMCommand if the ApplicationMaster should + take action, null otherwise + @see AMCommand]]> + + + + + last response id. + @return last response id]]> + + + + + newly allocated Container by the + ResourceManager. + @return list of newly allocated Container]]> + + + + + available headroom for resources in the cluster for the + application. + @return limit of available headroom for resources in the cluster for the + application]]> + + + + + completed containers' statuses. + @return the list of completed containers' statuses]]> + + + + + updated NodeReports. Updates could + be changes in health, availability etc of the nodes. + @return The delta of updated nodes since the last response]]> + + + + + + + + + + + The message is a snapshot of the resources the RM wants back from the AM. + While demand persists, the RM will repeat its request; applications should + not interpret each message as a request for additional + resources on top of previous messages. Resources requested consistently + over some duration may be forcibly killed by the RM. + + @return A specification of the resources to reclaim from this AM.]]> + + + + + + 1) AM is receiving first container on underlying NodeManager.
    + OR
    + 2) NMToken master key rolled over in ResourceManager and AM is getting new + container on the same underlying NodeManager. +

    + AM will receive one NMToken per NM irrespective of the number of containers + issued on same NM. AM is expected to store these tokens until issued a + new token for the same NM. + @return list of NMTokens required for communicating with NM]]> + + + + + ResourceManager. + @return list of newly increased containers]]> + + + + + + + + + + + + + + + + + + + + + + + + + + UpdateContainerError for + containers updates requests that were in error]]> + + + + + ResourceManager from previous application attempts which + have not been reported to the Application Master yet. +
    + These containers were recovered by the RM after the application master + had already registered. This may happen after RM restart when some NMs get + delayed in connecting to the RM and reporting the active containers. + Since they were not reported in the registration + response, they are reported in the response to the AM heartbeat. + + @return the list of running containers as viewed by + ResourceManager from previous application attempts.]]> +
    +
    + + + + + + + ResourceManager the + ApplicationMaster during resource negotiation. +

    + The response, includes: +

      +
    • Response ID to track duplicate responses.
    • +
    • + An AMCommand sent by ResourceManager to let the + {@code ApplicationMaster} take some actions (resync, shutdown etc.). +
    • +
    • A list of newly allocated {@link Container}.
    • +
    • A list of completed {@link Container}s' statuses.
    • +
    • + The available headroom for resources in the cluster for the + application. +
    • +
    • A list of nodes whose status has been updated.
    • +
    • The number of available nodes in a cluster.
    • +
    • A description of resources requested back by the cluster
    • +
    • AMRMToken, if AMRMToken has been rolled over
    • +
    • + A list of {@link Container} representing the containers + whose resource has been increased. +
    • +
    • + A list of {@link Container} representing the containers + whose resource has been decreased. +
    • +
    + + @see ApplicationMasterProtocol#allocate(AllocateRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Note: {@link NMToken} will be used for authenticating communication with + {@code NodeManager}. + @return the list of container tokens to be used for authorization during + container resource update. + @see NMToken]]> + + + + + + AllocateResponse.getUpdatedContainers. + The token contains the container id and resource capability required for + container resource update. + @param containersToUpdate the list of container tokens to be used + for container resource increase.]]> + + + + The request sent by Application Master to the + Node Manager to change the resource quota of a container.

    + + @see ContainerManagementProtocol#updateContainer(ContainerUpdateRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + The response sent by the NodeManager to the + ApplicationMaster when asked to update container resource. +

    + + @see ContainerManagementProtocol#updateContainer(ContainerUpdateRequest)]]> +
    +
    + + + + + + + + + + + ApplicationAttemptId of the attempt to be failed. + @return ApplicationAttemptId of the attempt.]]> + + + + + + + The request sent by the client to the ResourceManager + to fail an application attempt.

    + +

    The request includes the {@link ApplicationAttemptId} of the attempt to + be failed.

    + + @see ApplicationClientProtocol#failApplicationAttempt(FailApplicationAttemptRequest)]]> +
    +
    + + + + + + + The response sent by the ResourceManager to the client + failing an application attempt.

    + +

    Currently it's empty.

    + + @see ApplicationClientProtocol#failApplicationAttempt(FailApplicationAttemptRequest)]]> +
    +
    + + + + + + + + + + + + + final state of the ApplicationMaster. + @return final state of the ApplicationMaster]]> + + + + + + final state of the ApplicationMaster + @param finalState final state of the ApplicationMaster]]> + + + + + diagnostic information on application failure. + @return diagnostic information on application failure]]> + + + + + + diagnostic information on application failure. + @param diagnostics diagnostic information on application failure]]> + + + + + tracking URL for the ApplicationMaster. + This url if contains scheme then that will be used by resource manager + web application proxy otherwise it will default to http. + @return tracking URLfor the ApplicationMaster]]> + + + + + + final tracking URLfor the ApplicationMaster. + This is the web-URL to which ResourceManager or web-application proxy will + redirect client/users once the application is finished and the + ApplicationMaster is gone. +

    + If the passed url has a scheme then that will be used by the + ResourceManager and web-application proxy, otherwise the scheme will + default to http. +

    +

+ Empty, null, "N/A" strings are all valid besides a real URL. In case a URL + isn't explicitly passed, it defaults to "N/A" on the ResourceManager. +

+ + @param url + tracking URL for the ApplicationMaster]]> + + + + + The final request includes details such as: +

      +
    • Final state of the {@code ApplicationMaster}
    • +
    • + Diagnostic information in case of failure of the + {@code ApplicationMaster} +
    • +
    • Tracking URL
    • +
    + + @see ApplicationMasterProtocol#finishApplicationMaster(FinishApplicationMasterRequest)]]> +
    +
    + + + + + + + + + + + + ResourceManager to a + ApplicationMaster on it's completion. +

    + The response, includes: +

      +
    • A flag which indicates that the application has successfully unregistered + with the RM and the application can safely stop.
    • +
    +

    + Note: The flag indicates whether the application has successfully + unregistered and is safe to stop. The application may stop after the flag is + true. If the application stops before the flag is true then the RM may retry + the application. + + @see ApplicationMasterProtocol#finishApplicationMaster(FinishApplicationMasterRequest)]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ApplicationAttemptId of an application attempt. + + @return ApplicationAttemptId of an application attempt]]> + + + + + + ApplicationAttemptId of an application attempt + + @param applicationAttemptId + ApplicationAttemptId of an application attempt]]> + + + + + The request sent by a client to the ResourceManager to get an + {@link ApplicationAttemptReport} for an application attempt. +

    + +

    + The request should include the {@link ApplicationAttemptId} of the + application attempt. +

    + + @see ApplicationAttemptReport + @see ApplicationHistoryProtocol#getApplicationAttemptReport(GetApplicationAttemptReportRequest)]]> +
    +
    + + + + + + + + + + + ApplicationAttemptReport for the application attempt. + + @return ApplicationAttemptReport for the application attempt]]> + + + + + + ApplicationAttemptReport for the application attempt. + + @param applicationAttemptReport + ApplicationAttemptReport for the application attempt]]> + + + + + The response sent by the ResourceManager to a client requesting + an application attempt report. +

    + +

    + The response includes an {@link ApplicationAttemptReport} which has the + details about the particular application attempt +

    + + @see ApplicationAttemptReport + @see ApplicationHistoryProtocol#getApplicationAttemptReport(GetApplicationAttemptReportRequest)]]> +
    +
    + + + + + + + + + + + ApplicationId of an application + + @return ApplicationId of an application]]> + + + + + + ApplicationId of an application + + @param applicationId + ApplicationId of an application]]> + + + + + The request from clients to get a list of application attempt reports of an + application from the ResourceManager. +

    + + @see ApplicationHistoryProtocol#getApplicationAttempts(GetApplicationAttemptsRequest)]]> +
    +
    + + + + + + + + + + + ApplicationReport of an application. + + @return a list of ApplicationReport of an application]]> + + + + + + ApplicationReport of an application. + + @param applicationAttempts + a list of ApplicationReport of an application]]> + + + + + The response sent by the ResourceManager to a client requesting + a list of {@link ApplicationAttemptReport} for application attempts. +

    + +

    + The ApplicationAttemptReport for each application includes the + details of an application attempt. +

    + + @see ApplicationAttemptReport + @see ApplicationHistoryProtocol#getApplicationAttempts(GetApplicationAttemptsRequest)]]> +
    +
    + + + + + + + + + + + ApplicationId of the application. + @return ApplicationId of the application]]> + + + + + + ApplicationId of the application + @param applicationId ApplicationId of the application]]> + + + + The request sent by a client to the ResourceManager to + get an {@link ApplicationReport} for an application.

    + +

    The request should include the {@link ApplicationId} of the + application.

    + + @see ApplicationClientProtocol#getApplicationReport(GetApplicationReportRequest) + @see ApplicationReport]]> +
    +
    + + + + + + + + ApplicationReport for the application. + @return ApplicationReport for the application]]> + + + + The response sent by the ResourceManager to a client + requesting an application report.

    + +

    The response includes an {@link ApplicationReport} which has details such + as user, queue, name, host on which the ApplicationMaster is + running, RPC port, tracking URL, diagnostics, start time etc.

    + + @see ApplicationClientProtocol#getApplicationReport(GetApplicationReportRequest)]]> +
    +
+ + + + + + + + + + + + + + + + + + + + The request from clients to get a report of Applications matching the + given application types in the cluster from the + ResourceManager. +

    + + @see ApplicationClientProtocol#getApplications(GetApplicationsRequest) + +

Setting any of the parameters to null would just disable that + filter

    + + @param scope {@link ApplicationsRequestScope} to filter by + @param users list of users to filter by + @param queues list of scheduler queues to filter by + @param applicationTypes types of applications + @param applicationTags application tags to filter by + @param applicationStates application states to filter by + @param startRange range of application start times to filter by + @param finishRange range of application finish times to filter by + @param limit number of applications to limit to + @return {@link GetApplicationsRequest} to be used with + {@link ApplicationClientProtocol#getApplications(GetApplicationsRequest)}]]> +
    +
+ + + + + The request from clients to get a report of Applications matching the + given application types in the cluster from the + ResourceManager. +

    + + @param scope {@link ApplicationsRequestScope} to filter by + @see ApplicationClientProtocol#getApplications(GetApplicationsRequest) + @return a report of Applications in {@link GetApplicationsRequest}]]> +
    +
+ + + + + The request from clients to get a report of Applications matching the + given application types in the cluster from the + ResourceManager. +

    + + + @see ApplicationClientProtocol#getApplications(GetApplicationsRequest) + @return a report of Applications in {@link GetApplicationsRequest}]]> +
    +
+ + + + + The request from clients to get a report of Applications matching the + given application states in the cluster from the + ResourceManager. +

    + + + @see ApplicationClientProtocol#getApplications(GetApplicationsRequest) + @return a report of Applications in {@link GetApplicationsRequest}]]> +
    +
+ + + + + + The request from clients to get a report of Applications matching the + given application types and application states in the cluster from the + ResourceManager. +
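+ 
+ A sketch of constructing such a filtered request ("MAPREDUCE" is just an assumed application type):
+ 
+ GetApplicationsRequest request = GetApplicationsRequest.newInstance(
+     Collections.singleton("MAPREDUCE"),            // application types to match
+     EnumSet.of(YarnApplicationState.RUNNING));     // application states to match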

    + + + @see ApplicationClientProtocol#getApplications(GetApplicationsRequest) + @return a report of Applications in GetApplicationsRequest]]> +
    +
    + + + + + + + + + + + + The request from clients to get a report of Applications + in the cluster from the ResourceManager.

    + + @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]> +
    +
    + + + + + + + + ApplicationReport for applications. + @return ApplicationReport for applications]]> + + + + The response sent by the ResourceManager to a client + requesting an {@link ApplicationReport} for applications.

    + +

    The ApplicationReport for each application includes details + such as user, queue, name, host on which the ApplicationMaster + is running, RPC port, tracking URL, diagnostics, start time etc.

    + + @see ApplicationReport + @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + The request from clients to get node to attribute value mapping for all or + given set of Node AttributeKey's in the cluster from the + ResourceManager. +

    + + @see ApplicationClientProtocol#getAttributesToNodes + (GetAttributesToNodesRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + The response sent by the ResourceManager to a client requesting + node to attribute value mapping for all or given set of Node AttributeKey's. +

    + + @see ApplicationClientProtocol#getAttributesToNodes + (GetAttributesToNodesRequest)]]> +
    +
    + + + + + + + + + The request sent by clients to get cluster metrics from the + ResourceManager.

    + +

    Currently, this is empty.

    + + @see ApplicationClientProtocol#getClusterMetrics(GetClusterMetricsRequest)]]> +
    +
    + + + + + + + + YarnClusterMetrics for the cluster. + @return YarnClusterMetrics for the cluster]]> + + + + ResourceManager to a client + requesting cluster metrics. + + @see YarnClusterMetrics + @see ApplicationClientProtocol#getClusterMetrics(GetClusterMetricsRequest)]]> + + + + + + + + + + + + + + + The request from clients to get node attributes in the cluster from the + ResourceManager. +

    + + @see ApplicationClientProtocol#getClusterNodeAttributes + (GetClusterNodeAttributesRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + The response sent by the ResourceManager to a client requesting + a node attributes in cluster. +

    + + @see ApplicationClientProtocol#getClusterNodeAttributes + (GetClusterNodeAttributesRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The request from clients to get a report of all nodes + in the cluster from the ResourceManager.

    + + The request will ask for all nodes in the given {@link NodeState}s. + + @see ApplicationClientProtocol#getClusterNodes(GetClusterNodesRequest)]]> +
    +
    + + + + + + + + NodeReport for all nodes in the cluster. + @return NodeReport for all nodes in the cluster]]> + + + + The response sent by the ResourceManager to a client + requesting a {@link NodeReport} for all nodes.

    + +

    The NodeReport contains per-node information such as + available resources, number of containers, tracking url, rack name, health + status etc. + + @see NodeReport + @see ApplicationClientProtocol#getClusterNodes(GetClusterNodesRequest)]]> + + + + + + + + + + + + + ContainerId of the Container. + + @return ContainerId of the Container]]> + + + + + + ContainerId of the container + + @param containerId + ContainerId of the container]]> + + + + + The request sent by a client to the ResourceManager to get an + {@link ContainerReport} for a container. +

    ]]> +
    +
    + + + + + + + + + + + ContainerReport for the container. + + @return ContainerReport for the container]]> + + + + + + + + The response sent by the ResourceManager to a client requesting + a container report. +

    + +

    + The response includes a {@link ContainerReport} which has details of a + container. +

    ]]> +
    +
    + + + + + + + + + + + ApplicationAttemptId of an application attempt. + + @return ApplicationAttemptId of an application attempt]]> + + + + + + ApplicationAttemptId of an application attempt + + @param applicationAttemptId + ApplicationAttemptId of an application attempt]]> + + + + + The request from clients to get a list of container reports, which belong to + an application attempt from the ResourceManager. +

    + + @see ApplicationHistoryProtocol#getContainers(GetContainersRequest)]]> +
    +
    + + + + + + + + + + + ContainerReport for all the containers of an + application attempt. + + @return a list of ContainerReport for all the containers of an + application attempt]]> + + + + + + ContainerReport for all the containers of an + application attempt. + + @param containers + a list of ContainerReport for all the containers of + an application attempt]]> + + + + + The response sent by the ResourceManager to a client requesting + a list of {@link ContainerReport} for containers. +

    + +

    + The ContainerReport for each container includes the container + details. +

    + + @see ContainerReport + @see ApplicationHistoryProtocol#getContainers(GetContainersRequest)]]> +
    +
    + + + + + + + + + + + ContainerIds of containers for which to obtain + the ContainerStatus. + + @return the list of ContainerIds of containers for which to + obtain the ContainerStatus.]]> + + + + + + ContainerIds of containers for which to obtain + the ContainerStatus + + @param containerIds + a list of ContainerIds of containers for which to + obtain the ContainerStatus]]> + + + + ApplicationMaster to the + NodeManager to get {@link ContainerStatus} of requested + containers. + + @see ContainerManagementProtocol#getContainerStatuses(GetContainerStatusesRequest)]]> + + + + + + + + + + ContainerStatuses of the requested containers. + + @return ContainerStatuses of the requested containers.]]> + + + + + + + + + NodeManager to the + ApplicationMaster when asked to obtain the + ContainerStatus of requested containers. + + @see ContainerManagementProtocol#getContainerStatuses(GetContainerStatusesRequest)]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The request sent by clients to get a new {@link ApplicationId} for + submitting an application.

    + +

    Currently, this is empty.

    + + @see ApplicationClientProtocol#getNewApplication(GetNewApplicationRequest)]]> +
    +
    + + + + + + + + new ApplicationId allocated by the + ResourceManager. + @return new ApplicationId allocated by the + ResourceManager]]> + + + + + ResourceManager in the cluster. + @return maximum capability of allocated resources in the cluster]]> + + + + The response sent by the ResourceManager to the client for + a request to get a new {@link ApplicationId} for submitting applications.

    + +

    Clients can submit an application with the returned + {@link ApplicationId}.

    + + @see ApplicationClientProtocol#getNewApplication(GetNewApplicationRequest)]]> +
    +
    + + + + + + + + + The request sent by clients to get a new {@code ReservationId} for + submitting an reservation.

    + + {@code ApplicationClientProtocol#getNewReservation(GetNewReservationRequest)}]]> +
    +
    + + + + + + + + + + + + The response sent by the ResourceManager to the client for + a request to get a new {@link ReservationId} for submitting reservations.

    + +

    Clients can submit an reservation with the returned + {@link ReservationId}.

    + + {@code ApplicationClientProtocol#getNewReservation(GetNewReservationRequest)}]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + The request from clients to get nodes to attributes mapping + in the cluster from the ResourceManager. +

    + + @see ApplicationClientProtocol#getNodesToAttributes + (GetNodesToAttributesRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + The response sent by the ResourceManager to a client requesting + nodes to attributes mapping. +

    + + @see ApplicationClientProtocol#getNodesToAttributes + (GetNodesToAttributesRequest)]]> +
    +
    + + + + + + + + + + + + + + queue name for which to get queue information. + @return queue name for which to get queue information]]> + + + + + + queue name for which to get queue information + @param queueName queue name for which to get queue information]]> + + + + + active applications required? + @return true if applications' information is to be included, + else false]]> + + + + + + active applications? + @param includeApplications fetch information about active + applications?]]> + + + + + child queues required? + @return true if information about child queues is required, + else false]]> + + + + + + child queues? + @param includeChildQueues fetch information about child queues?]]> + + + + + child queue hierarchy required? + @return true if information about entire hierarchy is + required, false otherwise]]> + + + + + + child queue hierarchy? + @param recursive fetch information on the entire child queue + hierarchy?]]> + + + + The request sent by clients to get queue information + from the ResourceManager.

    + + @see ApplicationClientProtocol#getQueueInfo(GetQueueInfoRequest)]]> +
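The queue-information request above takes three inclusion flags. The sketch below shows one plausible way to build and send it; the already-initialised `rm` proxy and the class name are assumptions for illustration.

```java
import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest;
import org.apache.hadoop.yarn.api.records.QueueInfo;

public class QueueInfoSketch {
  // 'rm' is an assumed, already-initialised ApplicationClientProtocol proxy.
  static void printQueue(ApplicationClientProtocol rm) throws Exception {
    GetQueueInfoRequest request = GetQueueInfoRequest.newInstance(
        "default",  // queue name
        true,       // include active applications
        true,       // include direct child queues
        false);     // do not recurse into the whole hierarchy
    QueueInfo info = rm.getQueueInfo(request).getQueueInfo();
    System.out.printf("queue=%s capacity=%.2f current=%.2f%n",
        info.getQueueName(), info.getCapacity(), info.getCurrentCapacity());
  }
}
```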
    +
    + + + + + + + + QueueInfo for the specified queue. + @return QueueInfo for the specified queue]]> + + + + + The response includes a {@link QueueInfo} which has details such as + queue name, used/total capacities, running applications, child queues etc. + + @see QueueInfo + @see ApplicationClientProtocol#getQueueInfo(GetQueueInfoRequest)]]> + + + + + + + + + + + The request sent by clients to the ResourceManager to + get queue acls for the current user.

    + +

    Currently, this is empty.

    + + @see ApplicationClientProtocol#getQueueUserAcls(GetQueueUserAclsInfoRequest)]]> +
    +
    + + + + + + + + QueueUserACLInfo per queue for the user. + @return QueueUserACLInfo per queue for the user]]> + + + + The response sent by the ResourceManager to clients + seeking queue acls for the user.

    + +

    The response contains a list of {@link QueueUserACLInfo} which + provides information about {@link QueueACL} per queue.

    + + @see QueueACL + @see QueueUserACLInfo + @see ApplicationClientProtocol#getQueueUserAcls(GetQueueUserAclsInfoRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Note: {@link NMToken} will be used for authenticating communication with + {@code NodeManager}. + @return the list of container tokens to be used for authorization during + container resource increase. + @see NMToken]]> + + + + + + AllocateResponse.getIncreasedContainers. + The token contains the container id and resource capability required for + container resource increase. + @param containersToIncrease the list of container tokens to be used + for container resource increase.]]> + + + + The request sent by Application Master to the + Node Manager to change the resource quota of a container.

    + + @see ContainerManagementProtocol#increaseContainersResource(IncreaseContainersResourceRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + The response sent by the NodeManager to the + ApplicationMaster when asked to increase container resource. +

    + + @see ContainerManagementProtocol#increaseContainersResource(IncreaseContainersResourceRequest)]]> +
    +
+ + + + + + + + + + + ApplicationId of the application to be aborted. + @return ApplicationId of the application to be aborted]]> + + + + + + + + diagnostics explaining why the application is being killed. + @return diagnostics explaining why the application is being killed]]> + + + + + + diagnostics explaining why the application is being killed. + @param diagnostics diagnostics explaining why the application is being + killed]]> + + + + The request sent by the client to the ResourceManager + to abort a submitted application.

    + +

    The request includes the {@link ApplicationId} of the application to be + aborted.

    + + @see ApplicationClientProtocol#forceKillApplication(KillApplicationRequest)]]> +
    +
    + + + + + + + + + + + + ResourceManager to the client aborting + a submitted application. +

+ The response includes: +

      +
    • + A flag which indicates that the process of killing the application is + completed or not. +
    • +
+ Note: users are recommended to wait until this flag becomes true; otherwise, if + the ResourceManager crashes before the process of killing the + application is completed, the ResourceManager may retry this + application on recovery. + + @see ApplicationClientProtocol#forceKillApplication(KillApplicationRequest)]]> +
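Since the kill-completed flag may initially be false, a caller typically re-sends the request until the flag flips. The following sketch assumes an existing `rm` proxy and an `appId` obtained elsewhere; the polling interval is arbitrary.

```java
import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.KillApplicationRequest;
import org.apache.hadoop.yarn.api.protocolrecords.KillApplicationResponse;
import org.apache.hadoop.yarn.api.records.ApplicationId;

public class KillAppSketch {
  // 'rm' is an assumed ApplicationClientProtocol proxy; appId identifies the app to abort.
  static void kill(ApplicationClientProtocol rm, ApplicationId appId) throws Exception {
    KillApplicationRequest request = KillApplicationRequest.newInstance(appId);
    request.setDiagnostics("Aborted by operator");

    // Re-send until the RM confirms the kill has completed, as the javadoc recommends.
    KillApplicationResponse response = rm.forceKillApplication(request);
    while (!response.getIsKillCompleted()) {
      Thread.sleep(200);
      response = rm.forceKillApplication(request);
    }
  }
}
```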
    +
    + + + + + + + + + + + + ApplicationId of the application to be moved. + @return ApplicationId of the application to be moved]]> + + + + + + ApplicationId of the application to be moved. + @param appId ApplicationId of the application to be moved]]> + + + + + + + + + + + + + + + The request sent by the client to the ResourceManager + to move a submitted application to a different queue.

    + +

    The request includes the {@link ApplicationId} of the application to be + moved and the queue to place it in.

    + + @see ApplicationClientProtocol#moveApplicationAcrossQueues(MoveApplicationAcrossQueuesRequest)]]> +
    +
    + + + + + + + + The response sent by the ResourceManager to the client moving + a submitted application to a different queue. +

    +

    + A response without exception means that the move has completed successfully. +

    + + @see ApplicationClientProtocol#moveApplicationAcrossQueues(MoveApplicationAcrossQueuesRequest)]]> +
    +
+ + + + + + + + + + + RegisterApplicationMasterRequest. + If the port or trackingUrl is not used, the following default values apply: +
      +
    • port: -1
    • +
    • trackingUrl: null
    • +
    + The port is allowed to be any integer larger than or equal to -1. + @return the new instance of RegisterApplicationMasterRequest]]> +
    +
+ + + host on which the ApplicationMaster is + running. + @return host on which the ApplicationMaster is running]]> + + + + + + host on which the ApplicationMaster is + running. + @param host host on which the ApplicationMaster + is running]]> + + + + + RPC port on which the {@code ApplicationMaster} is + responding. + @return the RPC port on which the {@code ApplicationMaster} + is responding]]> + + + + + + RPC port on which the {@code ApplicationMaster} is + responding. + @param port RPC port on which the {@code ApplicationMaster} + is responding]]> + + + + + tracking URL for the ApplicationMaster. + If this URL contains a scheme, it will be used by the resource manager + web application proxy; otherwise it will default to http. + @return tracking URL for the ApplicationMaster]]> + + + + + + tracking URL for the ApplicationMaster while + it is running. This is the web-URL to which ResourceManager or + web-application proxy will redirect clients/users while the application and + the ApplicationMaster are still running. +

    + If the passed url has a scheme then that will be used by the + ResourceManager and web-application proxy, otherwise the scheme will + default to http. +

    +

+ Empty, null, and "N/A" strings are all valid besides a real URL. If a URL + isn't explicitly passed, it defaults to "N/A" on the ResourceManager. +

+ + @param trackingUrl + tracking URL for the ApplicationMaster]]> + + + + + PlacementConstraint associated with the tags, i.e., each + {@link org.apache.hadoop.yarn.api.records.SchedulingRequest} that has those + tags will be placed taking into account the corresponding constraint. + + @return A map of Placement Constraints.]]> + + + + + + PlacementConstraint associated with the tags. + For example: + Map < + <hb_regionserver> -> node_anti_affinity, + <hb_regionserver, hb_master> -> rack_affinity, + ... + > + @param placementConstraints Placement Constraint Mapping.]]> + + + + + The registration includes details such as: +

      +
    • Hostname on which the AM is running.
    • +
    • RPC Port
    • +
    • Tracking URL
    • +
    + + @see ApplicationMasterProtocol#registerApplicationMaster(RegisterApplicationMasterRequest)]]> +
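A bare-bones registration using the defaults just described (port -1, no tracking URL) might look like the sketch below; the `scheduler` proxy, host name, and class name are illustrative assumptions.

```java
import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
import org.apache.hadoop.yarn.api.records.Resource;

public class RegisterAmSketch {
  // 'scheduler' is an assumed ApplicationMasterProtocol proxy to the RM.
  static Resource register(ApplicationMasterProtocol scheduler) throws Exception {
    // port = -1 and an empty tracking URL fall back to the documented defaults.
    RegisterApplicationMasterRequest request =
        RegisterApplicationMasterRequest.newInstance("am-host.example.com", -1, "");
    RegisterApplicationMasterResponse response =
        scheduler.registerApplicationMaster(request);
    // The response reports the largest single-container allocation the cluster permits.
    return response.getMaximumResourceCapability();
  }
}
```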
    +
    + + + + + + + + ResourceManager in the cluster. + @return maximum capability of allocated resources in the cluster]]> + + + + + ApplicationACLs for the application. + @return all the ApplicationACLs]]> + + + + + Get ClientToAMToken master key.

    +

The ClientToAMToken master key is sent to the ApplicationMaster + by the ResourceManager via {@link RegisterApplicationMasterResponse} + and is used to verify the corresponding ClientToAMToken.

    + @return ClientToAMToken master key]]> +
    +
    + + + + + + + + + Get the queue that the application was placed in.

    + @return the queue that the application was placed in.]]> + + + + + + Set the queue that the application was placed in.

    ]]> + + + + + + Get the list of running containers as viewed by + ResourceManager from previous application attempts. +

    + + @return the list of running containers as viewed by + ResourceManager from previous application attempts + @see RegisterApplicationMasterResponse#getNMTokensFromPreviousAttempts()]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + The response contains critical details such as: +
      +
    • Maximum capability for allocated resources in the cluster.
    • +
    • {@code ApplicationACL}s for the application.
    • +
    • ClientToAMToken master key.
    • +
    + + @see ApplicationMasterProtocol#registerApplicationMaster(RegisterApplicationMasterRequest)]]> +
    +
    + + + + + + + + + + + + + + + + ContainerId of the container to re-initialize. + + @return ContainerId of the container to re-initialize.]]> + + + + + ContainerLaunchContext to re-initialize the container + with. + + @return ContainerLaunchContext of to re-initialize the + container with.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + ApplicationId of the resource to be released. + + @return ApplicationId]]> + + + + + + ApplicationId of the resource to be released. + + @param id ApplicationId]]> + + + + + key of the resource to be released. + + @return key]]> + + + + + + key of the resource to be released. + + @param key unique identifier for the resource]]> + + + + The request from clients to release a resource in the shared cache.

    ]]> +
    +
    + + + + + + + + The response to clients from the SharedCacheManager when + releasing a resource in the shared cache. +

    + +

    + Currently, this is empty. +

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The response sent by the ResourceManager to a client on + reservation submission.

    + +

    Currently, this is empty.

    + + {@code ApplicationClientProtocol#submitReservation( + ReservationSubmissionRequest)}]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ContainerId of the container to localize resources. + + @return ContainerId of the container to localize resources.]]> + + + + + LocalResource required by the container. + + @return all LocalResource required by the container]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ContainerId of the container to signal. + @return ContainerId of the container to signal.]]> + + + + + + ContainerId of the container to signal.]]> + + + + + SignalContainerCommand of the signal request. + @return SignalContainerCommand of the signal request.]]> + + + + + + SignalContainerCommand of the signal request.]]> + + + + The request sent by the client to the ResourceManager + or by the ApplicationMaster to the NodeManager + to signal a container. + @see SignalContainerCommand

    ]]> +
    +
    + + + + + + + The response sent by the ResourceManager to the client + signalling a container.

    + +

    Currently it's empty.

    + + @see ApplicationClientProtocol#signalToContainer(SignalContainerRequest)]]> +
    +
    + + + + + + + + + + + + ContainerLaunchContext for the container to be started + by the NodeManager. + + @return ContainerLaunchContext for the container to be started + by the NodeManager]]> + + + + + + ContainerLaunchContext for the container to be started + by the NodeManager + @param context ContainerLaunchContext for the container to be + started by the NodeManager]]> + + + + + + Note: {@link NMToken} will be used for authenticating communication with + {@code NodeManager}. + @return the container token to be used for authorization during starting + container. + @see NMToken + @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]> + + + + + + + The request sent by the ApplicationMaster to the + NodeManager to start a container.

    + +

    The ApplicationMaster has to provide details such as + allocated resource capability, security tokens (if enabled), command + to be executed to start the container, environment for the process, + necessary binaries/jar/shared-objects etc. via the + {@link ContainerLaunchContext}.

    + + @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + The request which contains a list of {@link StartContainerRequest} sent by + the ApplicationMaster to the NodeManager to + start containers. +

    + +

    + In each {@link StartContainerRequest}, the ApplicationMaster has + to provide details such as allocated resource capability, security tokens (if + enabled), command to be executed to start the container, environment for the + process, necessary binaries/jar/shared-objects etc. via the + {@link ContainerLaunchContext}. +

    + + @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]> +
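Building such a request means pairing a ContainerLaunchContext with the container token from the allocation. A minimal sketch, assuming a Container previously returned by an allocate call and treating the environment and command values as placeholders:

```java
import java.util.Collections;
import org.apache.hadoop.yarn.api.protocolrecords.StartContainerRequest;
import org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

public class StartContainerSketch {
  // 'allocated' is a Container previously returned by ApplicationMasterProtocol#allocate.
  static StartContainersRequest buildRequest(Container allocated) {
    ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
        Collections.emptyMap(),                                        // local resources
        Collections.singletonMap("JAVA_HOME", "/usr/lib/jvm/default"), // environment
        Collections.singletonList("sleep 60"),                         // launch command
        null,                                                          // service data
        null,                                                          // tokens
        null);                                                         // application ACLs
    // The container token from the allocation authorizes the start request on the NM.
    StartContainerRequest start =
        StartContainerRequest.newInstance(ctx, allocated.getContainerToken());
    return StartContainersRequest.newInstance(Collections.singletonList(start));
  }
}
```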
    +
    + + + + + + + + ContainerId s of the containers that are + started successfully. + + @return the list of ContainerId s of the containers that are + started successfully. + @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]> + + + + + + + + + + + Get the meta-data from all auxiliary services running on the + NodeManager. +

    +

    + The meta-data is returned as a Map between the auxiliary service names and + their corresponding per service meta-data as an opaque blob + ByteBuffer +

    + +

    + To be able to interpret the per-service meta-data, you should consult the + documentation for the Auxiliary-service configured on the NodeManager +

    + + @return a Map between the names of auxiliary services and their + corresponding meta-data]]> +
    +
    + + + The response sent by the NodeManager to the + ApplicationMaster when asked to start an allocated + container. +

    + + @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]> +
    +
    + + + + + + + + + + + ContainerIds of the containers to be stopped. + @return ContainerIds of containers to be stopped]]> + + + + + + ContainerIds of the containers to be stopped. + @param containerIds ContainerIds of the containers to be stopped]]> + + + + The request sent by the ApplicationMaster to the + NodeManager to stop containers.

    + + @see ContainerManagementProtocol#stopContainers(StopContainersRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + The response sent by the NodeManager to the + ApplicationMaster when asked to stop allocated + containers. +

    + + @see ContainerManagementProtocol#stopContainers(StopContainersRequest)]]> +
    +
    + + + + + + + + + + + ApplicationSubmissionContext for the application. + @return ApplicationSubmissionContext for the application]]> + + + + + + ApplicationSubmissionContext for the application. + @param context ApplicationSubmissionContext for the + application]]> + + + + The request sent by a client to submit an application to the + ResourceManager.

    + +

    The request, via {@link ApplicationSubmissionContext}, contains + details such as queue, {@link Resource} required to run the + ApplicationMaster, the equivalent of + {@link ContainerLaunchContext} for launching the + ApplicationMaster etc. + + @see ApplicationClientProtocol#submitApplication(SubmitApplicationRequest)]]> + + + + + + + + + The response sent by the ResourceManager to a client on + application submission.

    + +

    Currently, this is empty.

    + + @see ApplicationClientProtocol#submitApplication(SubmitApplicationRequest)]]> +
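Putting the submission records together, a client fills an ApplicationSubmissionContext and wraps it in a SubmitApplicationRequest. The sketch below assumes an existing `rm` proxy, an `appId` from getNewApplication, and a prepared AM ContainerLaunchContext; the names and the AM resource figures are illustrative only.

```java
import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationRequest;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.Records;

public class SubmitAppSketch {
  // 'rm' is an assumed ApplicationClientProtocol proxy; 'appId' comes from getNewApplication;
  // 'amContainer' describes how to launch the ApplicationMaster.
  static void submit(ApplicationClientProtocol rm, ApplicationId appId,
      ContainerLaunchContext amContainer) throws Exception {
    ApplicationSubmissionContext context =
        Records.newRecord(ApplicationSubmissionContext.class);
    context.setApplicationId(appId);
    context.setApplicationName("example-app");
    context.setQueue("default");
    context.setResource(Resource.newInstance(1024, 1)); // AM memory in MB, vcores
    context.setAMContainerSpec(amContainer);

    rm.submitApplication(SubmitApplicationRequest.newInstance(context));
  }
}
```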
    +
    + + + + + + + + + + + + ApplicationId of the application. + + @return ApplicationId of the application]]> + + + + + + ApplicationId of the application. + + @param applicationId ApplicationId of the application]]> + + + + + Priority of the application to be set. + + @return Priority of the application to be set.]]> + + + + + + Priority of the application. + + @param priority Priority of the application]]> + + + + + The request sent by the client to the ResourceManager to set or + update the application priority. +

    +

    + The request includes the {@link ApplicationId} of the application and + {@link Priority} to be set for an application +

    + + @see ApplicationClientProtocol#updateApplicationPriority(UpdateApplicationPriorityRequest)]]> +
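One possible use of this record, assuming an existing `rm` proxy, is shown below; the priority value 10 is arbitrary, and the response echoes whatever priority the scheduler actually applied.

```java
import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.UpdateApplicationPriorityRequest;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.Priority;

public class UpdatePrioritySketch {
  // 'rm' is an assumed ApplicationClientProtocol proxy.
  static Priority raisePriority(ApplicationClientProtocol rm, ApplicationId appId)
      throws Exception {
    UpdateApplicationPriorityRequest request =
        UpdateApplicationPriorityRequest.newInstance(appId, Priority.newInstance(10));
    // The response carries the priority the scheduler actually set.
    return rm.updateApplicationPriority(request).getApplicationPriority();
  }
}
```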
    +
    + + + + + + + + + + + Priority of the application to be set. + @return Updated Priority of the application.]]> + + + + + + Priority of the application. + + @param priority Priority of the application]]> + + + + + The response sent by the ResourceManager to the client on update + the application priority. +

    +

+ A response without exception means that the update has completed successfully. +

    + + @see ApplicationClientProtocol#updateApplicationPriority(UpdateApplicationPriorityRequest)]]> +
    +
+ + + + + + + + + + + + ApplicationId of the application. + @return ApplicationId of the application]]> + + + + + + ApplicationId of the application. + @param applicationId ApplicationId of the application]]> + + + + + ApplicationTimeouts of the application. Timeout value is + in ISO8601 standard with format yyyy-MM-dd'T'HH:mm:ss.SSSZ. + @return all ApplicationTimeouts of the application.]]> + + + + + + ApplicationTimeouts for the application. Timeout value + is absolute. Timeout value should meet the ISO8601 format; the supported ISO8601 + format is yyyy-MM-dd'T'HH:mm:ss.SSSZ. All pre-existing Map entries + are cleared before adding the new Map. + @param applicationTimeouts ApplicationTimeouts for the + application]]> + + + + + The request sent by the client to the ResourceManager to set or + update the application timeout. +

    +

    + The request includes the {@link ApplicationId} of the application and timeout + to be set for an application +

    ]]> +
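Because the timeout value must be an absolute ISO8601 timestamp, callers typically format a future instant before building the request. A sketch under that assumption (existing `rm` proxy, one-hour extension chosen arbitrarily):

```java
import java.text.SimpleDateFormat;
import java.util.Collections;
import java.util.Date;
import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.UpdateApplicationTimeoutsRequest;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationTimeoutType;

public class UpdateTimeoutSketch {
  // 'rm' is an assumed ApplicationClientProtocol proxy.
  static void extendLifetime(ApplicationClientProtocol rm, ApplicationId appId)
      throws Exception {
    // Absolute expiry time, one hour from now, in the required ISO8601 format.
    String expiry = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ")
        .format(new Date(System.currentTimeMillis() + 60 * 60 * 1000));
    UpdateApplicationTimeoutsRequest request = UpdateApplicationTimeoutsRequest.newInstance(
        appId, Collections.singletonMap(ApplicationTimeoutType.LIFETIME, expiry));
    rm.updateApplicationTimeouts(request);
  }
}
```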
    +
    + + + + + + + + + + ApplicationTimeouts of the application. Timeout value is + in ISO8601 standard with format yyyy-MM-dd'T'HH:mm:ss.SSSZ. + @return all ApplicationTimeouts of the application.]]> + + + + + + ApplicationTimeouts for the application. Timeout value + is absolute. Timeout value should meet ISO8601 format. Support ISO8601 + format is yyyy-MM-dd'T'HH:mm:ss.SSSZ. All pre-existing Map entries + are cleared before adding the new Map. + @param applicationTimeouts ApplicationTimeoutss for the + application]]> + + + + + The response sent by the ResourceManager to the client on update + application timeout. +

    +

    + A response without exception means that the update has completed + successfully. +

    ]]> +
    +
    + + + + + + + + ApplicationId of the resource to be used. + + @return ApplicationId]]> + + + + + + ApplicationId of the resource to be used. + + @param id ApplicationId]]> + + + + + key of the resource to be used. + + @return key]]> + + + + + + key of the resource to be used. + + @param key unique identifier for the resource]]> + + + + + The request from clients to the SharedCacheManager that claims a + resource in the shared cache. +

    ]]> +
    +
    + + + + + + + + Path corresponding to the requested resource in the + shared cache. + + @return String A Path if the resource exists in the shared + cache, null otherwise]]> + + + + + + Path corresponding to a resource in the shared cache. + + @param p A Path corresponding to a resource in the shared + cache]]> + + + + + The response from the SharedCacheManager to the client that indicates whether + a requested resource exists in the cache. +

    ]]> +
    +
    + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ApplicationId of the ApplicationAttempId. + @return ApplicationId of the ApplicationAttempId]]> + + + + + attempt id of the Application. + @return attempt id of the Application]]> + + + + + + + + + + + + + + + + + + + + + ApplicationAttemptId denotes the particular attempt + of an ApplicationMaster for a given {@link ApplicationId}.

    + +

Multiple attempts might be needed to run an application to completion due + to temporary failures of the ApplicationMaster such as hardware + failures, connectivity issues etc. on the node on which it was scheduled.

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + YarnApplicationAttemptState of the application attempt. + + @return YarnApplicationAttemptState of the application attempt]]> + + + + + RPC port of this attempt ApplicationMaster. + + @return RPC port of this attempt ApplicationMaster]]> + + + + + host on which this attempt of + ApplicationMaster is running. + + @return host on which this attempt of + ApplicationMaster is running]]> + + + + + diagnositic information of the application attempt in case + of errors. + + @return diagnositic information of the application attempt in case + of errors]]> + + + + + tracking url for the application attempt. + + @return tracking url for the application attempt]]> + + + + + original tracking url for the application attempt. + + @return original tracking url for the application attempt]]> + + + + + ApplicationAttemptId of this attempt of the + application + + @return ApplicationAttemptId of the attempt]]> + + + + + ContainerId of AMContainer for this attempt + + @return ContainerId of the attempt]]> + + + + + + + finish time of the application. + + @return finish time of the application]]> + + + + + It includes details such as: +
      +
    • {@link ApplicationAttemptId} of the application.
    • +
    • Host on which the ApplicationMaster of this attempt is + running.
    • +
    • RPC port of the ApplicationMaster of this attempt.
    • +
    • Tracking URL.
    • +
    • Diagnostic information in case of errors.
    • +
    • {@link YarnApplicationAttemptState} of the application attempt.
    • +
    • {@link ContainerId} of the master Container.
    • +
    ]]> +
    +
    + + + + + + + + + + + + + ApplicationId + which is unique for all applications started by a particular instance + of the ResourceManager. + @return short integer identifier of the ApplicationId]]> + + + + + start time of the ResourceManager which is + used to generate globally unique ApplicationId. + @return start time of the ResourceManager]]> + + + + + + + + + + + + + + + + + + + + + ApplicationId represents the globally unique + identifier for an application.

    + +

    The globally unique nature of the identifier is achieved by using the + cluster timestamp i.e. start-time of the + ResourceManager along with a monotonically increasing counter + for the application.

    ]]> +
    +
    + + + + + + + + ApplicationId of the application. + @return ApplicationId of the application]]> + + + + + ApplicationAttemptId of the current + attempt of the application + @return ApplicationAttemptId of the attempt]]> + + + + + user who submitted the application. + @return user who submitted the application]]> + + + + + queue to which the application was submitted. + @return queue to which the application was submitted]]> + + + + + name of the application. + @return name of the application]]> + + + + + host on which the ApplicationMaster + is running. + @return host on which the ApplicationMaster + is running]]> + + + + + RPC port of the ApplicationMaster. + @return RPC port of the ApplicationMaster]]> + + + + + client token for communicating with the + ApplicationMaster. +

    + ClientToAMToken is the security token used by the AMs to verify + authenticity of any client. +

    + +

+ The ResourceManager provides a secure token (via + {@link ApplicationReport#getClientToAMToken()}) which is verified by the + ApplicationMaster when the client directly talks to an AM. +

    + @return client token for communicating with the + ApplicationMaster]]> +
    +
    + + + YarnApplicationState of the application. + @return YarnApplicationState of the application]]> + + + + + diagnositic information of the application in case of + errors. + @return diagnositic information of the application in case + of errors]]> + + + + + tracking url for the application. + @return tracking url for the application]]> + + + + + start time of the application. + @return start time of the application]]> + + + + + + + + + finish time of the application. + @return finish time of the application]]> + + + + + final finish status of the application. + @return final finish status of the application]]> + + + + + + + + + + + + + + + + + + + + + + + + + + The AMRM token is required for AM to RM scheduling operations. For + managed Application Masters YARN takes care of injecting it. For unmanaged + Applications Masters, the token must be obtained via this method and set + in the {@link org.apache.hadoop.security.UserGroupInformation} of the + current user. +

    + The AMRM token will be returned only if all the following conditions are + met: +

      +
    • the requester is the owner of the ApplicationMaster
    • +
    • the application master is an unmanaged ApplicationMaster
    • +
    • the application master is in ACCEPTED state
    • +
    + Else this method returns NULL. + + @return the AM to RM token if available.]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + It includes details such as: +
      +
    • {@link ApplicationId} of the application.
    • +
    • Applications user.
    • +
    • Application queue.
    • +
    • Application name.
    • +
    • Host on which the ApplicationMaster is running.
    • +
    • RPC port of the ApplicationMaster.
    • +
    • Tracking URL.
    • +
    • {@link YarnApplicationState} of the application.
    • +
    • Diagnostic information in case of errors.
    • +
    • Start time of the application.
    • +
    • Client {@link Token} of the application (if security is enabled).
    • +
    + + @see ApplicationClientProtocol#getApplicationReport(org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest)]]> +
    +
    + + + + + + + + + + + + + Resource. -1 for invalid/inaccessible reports. + @return the used Resource]]> + + + + + Resource. -1 for invalid/inaccessible reports. + @return the reserved Resource]]> + + + + + Resource. -1 for invalid/inaccessible reports. + @return the needed Resource]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ApplicationId of the submitted application. + @return ApplicationId of the submitted application]]> + + + + + + ApplicationId of the submitted application. + @param applicationId ApplicationId of the submitted + application]]> + + + + + name. + @return application name]]> + + + + + + name. + @param applicationName application name]]> + + + + + queue to which the application is being submitted. + @return queue to which the application is being submitted]]> + + + + + + queue to which the application is being submitted + @param queue queue to which the application is being submitted]]> + + + + + Priority of the application. + @return Priority of the application]]> + + + + + ContainerLaunchContext to describe the + Container with which the ApplicationMaster is + launched. + @return ContainerLaunchContext for the + ApplicationMaster container]]> + + + + + + ContainerLaunchContext to describe the + Container with which the ApplicationMaster is + launched. + @param amContainer ContainerLaunchContext for the + ApplicationMaster container]]> + + + + + YarnApplicationState. + Such apps will not be retried by the RM on app attempt failure. + The default value is false. + @return true if the AM is not managed by the RM]]> + + + + + + + + + + + + + + + + + + + + + + ApplicationMaster for this + application. Please note this will be DEPRECATED, use getResource + in getAMContainerResourceRequest instead. + + @return the resource required by the ApplicationMaster for + this application.]]> + + + + + + ApplicationMaster for this + application. + + @param resource the resource required by the ApplicationMaster + for this application.]]> + + + + + + + + + + + + + + + + + + + + + + + For managed AM, if the flag is true, running containers will not be killed + when application attempt fails and these containers will be retrieved by + the new application attempt on registration via + {@link ApplicationMasterProtocol#registerApplicationMaster(RegisterApplicationMasterRequest)}. +

    +

    + For unmanaged AM, if the flag is true, RM allows re-register and returns + the running containers in the same attempt back to the UAM for HA. +

    + + @param keepContainers the flag which indicates whether to keep containers + across application attempts.]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + getResource and getPriority of + ApplicationSubmissionContext. + + Number of containers and Priority will be ignored. + + @return ResourceRequest of the AM container + @deprecated See {@link #getAMContainerResourceRequests()}]]> + + + + + + + + + + + getAMContainerResourceRequest and its behavior. + + Number of containers and Priority will be ignored. + + @return List of ResourceRequests of the AM container]]> + + + + + + + + + + + + + + + + + + + + + + LogAggregationContext of the application + + @return LogAggregationContext of the application]]> + + + + + + LogAggregationContext for the application + + @param logAggregationContext + for the application]]> + + + + + + + + + + + + + + + + ApplicationTimeouts of the application. Timeout value is + in seconds. + @return all ApplicationTimeouts of the application.]]> + + + + + + ApplicationTimeouts for the application in seconds. + All pre-existing Map entries are cleared before adding the new Map. +

    + Note: If application timeout value is less than or equal to zero + then application submission will throw an exception. +

+ @param applicationTimeouts ApplicationTimeouts for the + application]]> +
    +
    + + + + + + + + + + + + + + It includes details such as: +
      +
    • {@link ApplicationId} of the application.
    • +
    • Application user.
    • +
    • Application name.
    • +
    • {@link Priority} of the application.
    • +
    • + {@link ContainerLaunchContext} of the container in which the + ApplicationMaster is executed. +
    • +
    • + maxAppAttempts. The maximum number of application attempts. + It should be no larger than the global number of max attempts in the + YARN configuration. +
    • +
• + attemptFailuresValidityInterval. The default value is -1. + When attemptFailuresValidityInterval in milliseconds is set to + {@literal >} 0, the failure count will not include failures which happen + outside of the validityInterval. If the failure count + reaches maxAppAttempts, the application will be failed. +
    • +
    • Optional, application-specific {@link LogAggregationContext}
    • +
    + + @see ContainerLaunchContext + @see ApplicationClientProtocol#submitApplication(org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + expiryTime for given timeout type. + @return expiryTime in ISO8601 standard with format + yyyy-MM-dd'T'HH:mm:ss.SSSZ.]]> + + + + + + expiryTime for given timeout type. + @param expiryTime in ISO8601 standard with format + yyyy-MM-dd'T'HH:mm:ss.SSSZ.]]> + + + + + Remaining Time of an application for given timeout type. + @return Remaining Time in seconds.]]> + + + + + + Remaining Time of an application for given timeout type. + @param remainingTime in seconds.]]> + + + + +
  • {@link ApplicationTimeoutType} of the timeout type.
  • +
  • Expiry time in ISO8601 standard with format + yyyy-MM-dd'T'HH:mm:ss.SSSZ or "UNLIMITED".
  • +
  • Remaining time in seconds.
  • + + The possible values for {ExpiryTime, RemainingTimeInSeconds} are +
      +
    • {UNLIMITED,-1} : Timeout is not configured for given timeout type + (LIFETIME).
    • +
    • {ISO8601 date string, 0} : Timeout is configured and application has + completed.
    • +
    • {ISO8601 date string, greater than zero} : Timeout is configured and + application is RUNNING. Application will be timed out after configured + value.
    • +
    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Resource allocated to the container. + @return Resource allocated to the container]]> + + + + + Priority at which the Container was + allocated. + @return Priority at which the Container was + allocated]]> + + + + + ContainerToken for the container. +

    ContainerToken is the security token used by the framework + to verify authenticity of any Container.

    + +

The ResourceManager, on container allocation, provides a + secure token which is verified by the NodeManager on + container launch.

    + +

    Applications do not need to care about ContainerToken, they + are transparently handled by the framework - the allocated + Container includes the ContainerToken.

    + + @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest) + @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest) + + @return ContainerToken for the container]]> +
    +
    + + + ID corresponding to the original {@code + ResourceRequest{@link #getAllocationRequestId()}}s which is satisfied by + this allocated {@code Container}. +

    + The scheduler may return multiple {@code AllocateResponse}s corresponding + to the same ID as and when scheduler allocates {@code Container}s. + Applications can continue to completely ignore the returned ID in + the response and use the allocation for any of their outstanding requests. +

    + + @return the ID corresponding to the original allocation request + which is satisfied by this allocation.]]> + + + + + The {@code ResourceManager} is the sole authority to allocate any + {@code Container} to applications. The allocated {@code Container} + is always on a single node and has a unique {@link ContainerId}. It has + a specific amount of {@link Resource} allocated. +

    + It includes details such as: +

      +
    • {@link ContainerId} for the container, which is globally unique.
    • +
    • + {@link NodeId} of the node on which it is allocated. +
    • +
    • HTTP uri of the node.
    • +
    • {@link Resource} allocated to the container.
    • +
    • {@link Priority} at which the container was allocated.
    • +
    • + Container {@link Token} of the container, used to securely verify + authenticity of the allocation. +
    • +
    + + Typically, an {@code ApplicationMaster} receives the {@code Container} + from the {@code ResourceManager} during resource-negotiation and then + talks to the {@code NodeManager} to start/stop containers. + + @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest) + @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest) + @see ContainerManagementProtocol#stopContainers(org.apache.hadoop.yarn.api.protocolrecords.StopContainersRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ApplicationAttemptId of the application to which the + Container was assigned. +

+ Note: If containers are kept alive across application attempts via + {@link ApplicationSubmissionContext#setKeepContainersAcrossApplicationAttempts(boolean)}, + the ContainerId does not necessarily contain the current + running application attempt's ApplicationAttemptId. This + container can be allocated by a previously exited application attempt and + managed by the current running attempt, and thus have the previous application + attempt's ApplicationAttemptId. +

    + + @return ApplicationAttemptId of the application to which the + Container was assigned]]> +
    +
    + + + ContainerId, + which doesn't include epoch. Note that this method will be marked as + deprecated, so please use getContainerId instead. + @return lower 32 bits of identifier of the ContainerId]]> + + + + + ContainerId. Upper 24 bits are + reserved as epoch of cluster, and lower 40 bits are reserved as + sequential number of containers. + @return identifier of the ContainerId]]> + + + + + + + + + + + + + + + + + + + + + + + + ContainerId represents a globally unique identifier + for a {@link Container} in the cluster.

    ]]> +
    +
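The id hierarchy described above (cluster timestamp and counter inside ApplicationId, attempt number, container sequence number) can be seen by constructing the records directly. The timestamp and numbers below are made up for illustration:

```java
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ContainerId;

public class ContainerIdSketch {
  public static void main(String[] args) {
    // clusterTimestamp (RM start time) + a counter make the ApplicationId globally unique.
    ApplicationId appId = ApplicationId.newInstance(1657584000000L, 42);
    ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance(appId, 1);
    // The long identifier packs the cluster epoch in the upper bits and the
    // per-attempt container sequence number in the lower bits.
    ContainerId containerId = ContainerId.newContainerId(attemptId, 7L);
    System.out.println(containerId);              // e.g. container_1657584000000_0042_01_000007
    System.out.println(containerId.getContainerId());
  }
}
```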
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + LocalResource required by the container. + @return all LocalResource required by the container]]> + + + + + + LocalResource required by the container. All pre-existing + Map entries are cleared before adding the new Map + @param localResources LocalResource required by the container]]> + + + + + + Get application-specific binary service data. This is a map keyed + by the name of each {@link AuxiliaryService} that is configured on a + NodeManager and value correspond to the application specific data targeted + for the keyed {@link AuxiliaryService}. +

    + +

    + This will be used to initialize this application on the specific + {@link AuxiliaryService} running on the NodeManager by calling + {@link AuxiliaryService#initializeApplication(ApplicationInitializationContext)} +

    + + @return application-specific binary service data]]> +
    +
    + + + + + Set application-specific binary service data. This is a map keyed + by the name of each {@link AuxiliaryService} that is configured on a + NodeManager and value correspond to the application specific data targeted + for the keyed {@link AuxiliaryService}. All pre-existing Map entries are + preserved. +

    + + @param serviceData + application-specific binary service data]]> +
    +
    + + + environment variables for the container. + @return environment variables for the container]]> + + + + + + environment variables for the container. All pre-existing Map + entries are cleared before adding the new Map + @param environment environment variables for the container]]> + + + + + commands for launching the container. + @return the list of commands for launching the container]]> + + + + + + commands for launching the container. All + pre-existing List entries are cleared before adding the new List + @param commands the list of commands for launching the container]]> + + + + + ApplicationACLs for the application. + @return all the ApplicationACLs]]> + + + + + + ApplicationACLs for the application. All pre-existing + Map entries are cleared before adding the new Map + @param acls ApplicationACLs for the application]]> + + + + + ContainerRetryContext to relaunch container. + @return ContainerRetryContext to relaunch container.]]> + + + + + + ContainerRetryContext to relaunch container. + @param containerRetryContext ContainerRetryContext to + relaunch container.]]> + + + + + It includes details such as: +
      +
    • {@link ContainerId} of the container.
    • +
    • {@link Resource} allocated to the container.
    • +
    • User to whom the container is allocated.
    • +
    • Security tokens (if security is enabled).
    • +
    • + {@link LocalResource} necessary for running the container such + as binaries, jar, shared-objects, side-files etc. +
    • +
    • Optional, application-specific binary service data.
    • +
    • Environment variables for the launched process.
    • +
    • Command to launch the container.
    • +
    • Retry strategy when container exits with failure.
    • +
    + + @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest)]]> +
    +
    + + + + + + + + ContainerId of the container. + + @return ContainerId of the container.]]> + + + + + + + + Resource of the container. + + @return allocated Resource of the container.]]> + + + + + + + + NodeId where container is running. + + @return allocated NodeId where container is running.]]> + + + + + + + + Priority of the container. + + @return allocated Priority of the container.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ContainerState of the container. + + @return final ContainerState of the container.]]> + + + + + + + + exit status of the container. + + @return final exit status of the container.]]> + + + + + + + + + + + + + + + + + + It includes details such as: +
      +
    • {@link ContainerId} of the container.
    • +
    • Allocated Resources to the container.
    • +
    • Assigned Node id.
    • +
    • Assigned Priority.
    • +
    • Creation Time.
    • +
    • Finish Time.
    • +
    • Container Exit Status.
    • +
    • {@link ContainerState} of the container.
    • +
    • Diagnostic information in case of errors.
    • +
    • Log URL.
    • +
    • nodeHttpAddress
    • +
    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + It provides details such as: +
      +
• + {@link ContainerRetryPolicy} : + - NEVER_RETRY(DEFAULT value): no matter what the error code is when the container + fails to run, do not retry. + - RETRY_ON_ALL_ERRORS: no matter what the error code is, when the container fails + to run, retry. + - RETRY_ON_SPECIFIC_ERROR_CODES: when the container fails to run, retry if + the error code is one of errorCodes, otherwise do not retry. + + Note: if the error code is 137(SIGKILL) or 143(SIGTERM), the container will not be retried + because it is usually killed on purpose. +
    • +
    • + maxRetries specifies how many times to retry if need to retry. + If the value is -1, it means retry forever. +
    • +
    • retryInterval specifies delaying some time before relaunch + container, the unit is millisecond.
    • +
• + failuresValidityInterval: default value is -1. + When failuresValidityInterval in milliseconds is set to {@literal >} 0, + the failure count will not include failures which happen outside of the + failuresValidityInterval. If the failure count + reaches maxRetries, the container will be failed. +
    • +
    ]]> +
    +
    + + + + + + + + + + Retry policy for relaunching a Container.

    ]]> +
    +
    + + + + + + + + + + State of a Container.

    ]]> +
    +
    + + + + + + + + ContainerId of the container. + @return ContainerId of the container]]> + + + + + ExecutionType of the container. + @return ExecutionType of the container]]> + + + + + ContainerState of the container. + @return ContainerState of the container]]> + + + + + Get the exit status for the container.

    + +

Note: This is valid only for completed containers, i.e. containers + with state {@link ContainerState#COMPLETE}. + Otherwise, it returns a ContainerExitStatus.INVALID. +

    + +

    Containers killed by the framework, either due to being released by + the application or being 'lost' due to node failures etc. have a special + exit code of ContainerExitStatus.ABORTED.

    + +

When a threshold number of the nodemanager-local-directories or + a threshold number of the nodemanager-log-directories become bad, then the + container is not launched and exits with ContainerExitStatus.DISKS_FAILED. +

    + + @return exit status for the container]]> +
    +
    + + + diagnostic messages for failed containers. + @return diagnostic messages for failed containers]]> + + + + + Resource allocated to the container. + @return Resource allocated to the container]]> + + + + + + + + + + + + + + + It provides details such as: +
      +
    • {@code ContainerId} of the container.
    • +
    • {@code ExecutionType} of the container.
    • +
    • {@code ContainerState} of the container.
    • +
    • Exit status of a completed container.
    • +
    • Diagnostic message for a failed container.
    • +
    • {@link Resource} allocated to the container.
    • +
    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The execution types are the following: +
      +
    • {@link #GUARANTEED} - this container is guaranteed to start its + execution, once the corresponding start container request is received by + an NM. +
    • {@link #OPPORTUNISTIC} - the execution of this container may not start + immediately at the NM that receives the corresponding start container + request (depending on the NM's available resources). Moreover, it may be + preempted if it blocks a GUARANTEED container from being executed. +
    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + ExecutionType of the requested container. + + @param execType + ExecutionType of the requested container]]> + + + + + ExecutionType. + + @return ExecutionType.]]> + + + + + + + + + + + ResourceRequest. + Defaults to false. + @return whether ExecutionType request should be strictly honored]]> + + + + + + + + + ExecutionType as well as flag that explicitly asks the + configuredScheduler to return Containers of exactly the Execution Type + requested.]]> + + + + + + + + + + + + Application.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + location of the resource to be localized. + @return location of the resource to be localized]]> + + + + + + location of the resource to be localized. + @param resource location of the resource to be localized]]> + + + + + size of the resource to be localized. + @return size of the resource to be localized]]> + + + + + + size of the resource to be localized. + @param size size of the resource to be localized]]> + + + + + timestamp of the resource to be localized, used + for verification. + @return timestamp of the resource to be localized]]> + + + + + + timestamp of the resource to be localized, used + for verification. + @param timestamp timestamp of the resource to be localized]]> + + + + + LocalResourceType of the resource to be localized. + @return LocalResourceType of the resource to be localized]]> + + + + + + LocalResourceType of the resource to be localized. + @param type LocalResourceType of the resource to be localized]]> + + + + + LocalResourceVisibility of the resource to be + localized. + @return LocalResourceVisibility of the resource to be + localized]]> + + + + + + LocalResourceVisibility of the resource to be + localized. + @param visibility LocalResourceVisibility of the resource to be + localized]]> + + + + + pattern that should be used to extract entries from the + archive (only used when type is PATTERN). + @return pattern that should be used to extract entries from the + archive.]]> + + + + + + pattern that should be used to extract entries from the + archive (only used when type is PATTERN). + @param pattern pattern that should be used to extract entries + from the archive.]]> + + + + + + + + + + + shouldBeUploadedToSharedCache + of this request]]> + + + + LocalResource represents a local resource required to + run a container.

    + +

    The NodeManager is responsible for localizing the resource + prior to launching the container.

    + +

    Applications can specify {@link LocalResourceType} and + {@link LocalResourceVisibility}.

    + + @see LocalResourceType + @see LocalResourceVisibility + @see ContainerLaunchContext + @see ApplicationSubmissionContext + @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest)]]> +
    +
    + + + + + + + + + + type + of a resource localized by the {@code NodeManager}. +

    + The type can be one of: +

      +
    • + {@link #FILE} - Regular file i.e. uninterpreted bytes. +
    • +
    • + {@link #ARCHIVE} - Archive, which is automatically unarchived by the + NodeManager. +
    • +
    • + {@link #PATTERN} - A hybrid between {@link #ARCHIVE} and {@link #FILE}. +
    • +
    + + @see LocalResource + @see ContainerLaunchContext + @see ApplicationSubmissionContext + @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest)]]> +
    +
    + + + + + + + + + + visibility + of a resource localized by the {@code NodeManager}. +

    + The visibility can be one of: +

      +
    • {@link #PUBLIC} - Shared by all users on the node.
    • +
    • + {@link #PRIVATE} - Shared among all applications of the + same user on the node. +
    • +
    • + {@link #APPLICATION} - Shared only among containers of the + same application on the node. +
    • +
    + + @see LocalResource + @see ContainerLaunchContext + @see ApplicationSubmissionContext + @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + It includes details such as: +
      +
    • + includePattern. It uses Java Regex to filter the log files + which match the defined include pattern and those log files + will be uploaded when the application finishes. +
    • +
    • + excludePattern. It uses Java Regex to filter the log files + which match the defined exclude pattern and those log files + will not be uploaded when application finishes. If the log file + name matches both the include and the exclude pattern, this file + will be excluded eventually. +
    • +
    • + rolledLogsIncludePattern. It uses Java Regex to filter the log files + which match the defined include pattern and those log files + will be aggregated in a rolling fashion. +
    • +
    • + rolledLogsExcludePattern. It uses Java Regex to filter the log files + which match the defined exclude pattern and those log files + will not be aggregated in a rolling fashion. If the log file + name matches both the include and the exclude pattern, this file + will be excluded eventually. +
    • +
    • + policyClassName. The policy class name that implements + ContainerLogAggregationPolicy. At runtime, nodemanager will the policy + if a given container's log should be aggregated based on the + ContainerType and other runtime state such as exit code by calling + ContainerLogAggregationPolicy#shouldDoLogAggregation. + This is useful when the app only wants to aggregate logs of a subset of + containers. Here are the available policies. Please make sure to specify + the canonical name by prefixing org.apache.hadoop.yarn.server. + nodemanager.containermanager.logaggregation. + to the class simple name below. + NoneContainerLogAggregationPolicy: skip aggregation for all containers. + AllContainerLogAggregationPolicy: aggregate all containers. + AMOrFailedContainerLogAggregationPolicy: aggregate application master + or failed containers. + FailedOrKilledContainerLogAggregationPolicy: aggregate failed or killed + containers + FailedContainerLogAggregationPolicy: aggregate failed containers + AMOnlyLogAggregationPolicy: aggregate application master containers + SampleContainerLogAggregationPolicy: sample logs of successful worker + containers, in addition to application master and failed/killed + containers. + If it isn't specified, it will use the cluster-wide default policy + defined by configuration yarn.nodemanager.log-aggregation.policy.class. + The default value of yarn.nodemanager.log-aggregation.policy.class is + AllContainerLogAggregationPolicy. +
    • +
    • + policyParameters. The parameters passed to the policy class via + ContainerLogAggregationPolicy#parseParameters during the policy object + initialization. This is optional. Some policy class might use parameters + to adjust its settings. It is up to policy class to define the scheme of + parameters. + For example, SampleContainerLogAggregationPolicy supports the format of + "SR:0.5,MIN:50", which means sample rate of 50% beyond the first 50 + successful worker containers. +
    • +
    + + @see ApplicationSubmissionContext]]> +
    +
    + + + + + + + + NodeManager for which the NMToken + is used to authenticate. + @return the {@link NodeId} of the NodeManager for which the + NMToken is used to authenticate.]]> + + + + + + + + NodeManager + @return the {@link Token} used for authenticating with NodeManager]]> + + + + + + + + + + + + The NMToken is used for authenticating communication with + NodeManager

    +

It is issued by the ResourceManager when the ApplicationMaster + negotiates resources with the ResourceManager and is + validated on the NodeManager side.

    + @see AllocateResponse#getNMTokens()]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Node Attribute is a kind of a label which represents one of the + attribute/feature of a Node. Its different from node partition label as + resource guarantees across the queues will not be maintained for these type + of labels. +

    +

+ A given Node can be mapped with any kind of attribute; a few examples are + HAS_SSD=true, JAVA_VERSION=JDK1.8, OS_TYPE=WINDOWS. +

    +

+ It is not compulsory for all the attributes to have a value; the empty string is the + default value of the NodeAttributeType.STRING. +

    +

    + Node Attribute Prefix is used as namespace to segregate the attributes. +

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + Node Attribute Info describes a NodeAttribute. +

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + Node AttributeKey uniquely identifies a given Node Attribute. Node Attribute + is identified based on attribute prefix and name. +

    +

    + Node Attribute Prefix is used as namespace to segregate the attributes. +

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + Type of a node Attribute. +

    + Based on this attribute expressions and values will be evaluated.]]> +
    +
    + + + + + + + + + + + + + hostname of the node. + @return hostname of the node]]> + + + + + port for communicating with the node. + @return port for communicating with the node]]> + + + + + + + + + + + + + + + + + + + NodeId is the unique identifier for a node.

    + +

    It includes the hostname and port to uniquely + identify the node. Thus, it is unique across restarts of any + NodeManager.

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + NodeId of the node. + @return NodeId of the node]]> + + + + + NodeState of the node. + @return NodeState of the node]]> + + + + + http address of the node. + @return http address of the node]]> + + + + + rack name for the node. + @return rack name for the node]]> + + + + + used Resource on the node. + @return used Resource on the node]]> + + + + + total Resource on the node. + @return total Resource on the node]]> + + + + + diagnostic health report of the node. + @return diagnostic health report of the node]]> + + + + + last timestamp at which the health report was received. + @return last timestamp at which the health report was received]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + It includes details such as: +
      +
    • {@link NodeId} of the node.
    • +
    • HTTP Tracking URL of the node.
    • +
    • Rack name for the node.
    • +
    • Used {@link Resource} on the node.
    • +
    • Total available {@link Resource} of the node.
    • +
    • Number of running containers on the node.
    • +
    + + @see ApplicationClientProtocol#getClusterNodes(org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesRequest)]]> +
    +
    + + + + + + + + + + + + + + + + State of a Node.

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + Mapping of Attribute Value to a Node. +

    ]]> +
    +
    + + + + + + + + + + + + ResourceManager. + @see PreemptionContract + @see StrictPreemptionContract]]> + + + + + + + + + + ApplicationMaster about resources requested back by the + ResourceManager. + @see AllocateRequest#setAskList(List)]]> + + + + + ApplicationMaster that may be reclaimed by the + ResourceManager. If the AM prefers a different set of + containers, then it may checkpoint or kill containers matching the + description in {@link #getResourceRequest}. + @return Set of containers at risk if the contract is not met.]]> + + + + ResourceManager. + The ApplicationMaster (AM) can satisfy this request according + to its own priorities to prevent containers from being forcibly killed by + the platform. + @see PreemptionMessage]]> + + + + + + + + + + ResourceManager]]> + + + + + + + + + + The AM should decode both parts of the message. The {@link + StrictPreemptionContract} specifies particular allocations that the RM + requires back. The AM can checkpoint containers' state, adjust its execution + plan to move the computation, or take no action and hope that conditions that + caused the RM to ask for the container will change. +

    + In contrast, the {@link PreemptionContract} also includes a description of + resources with a set of containers. If the AM releases containers matching + that profile, then the containers enumerated in {@link + PreemptionContract#getContainers()} may not be killed. +

    + Each preemption message reflects the RM's current understanding of the + cluster state, so a request to return N containers may not + reflect containers the AM is releasing, recently exited containers the RM has + yet to learn about, or new containers allocated before the message was + generated. Conversely, an RM may request a different profile of containers in + subsequent requests. +

    + The policy enforced by the RM is part of the scheduler. Generally, only + containers that have been requested consistently should be killed, but the + details are not specified.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The ACL is one of: +

      +
    • + {@link #SUBMIT_APPLICATIONS} - ACL to submit applications to the queue. +
    • +
    • {@link #ADMINISTER_QUEUE} - ACL to administer the queue.
    • +
    + + @see QueueInfo + @see ApplicationClientProtocol#getQueueUserAcls(org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoRequest)]]> +
    +
    + + + + + + + + name of the queue. + @return name of the queue]]> + + + + + configured capacity of the queue. + @return configured capacity of the queue]]> + + + + + maximum capacity of the queue. + @return maximum capacity of the queue]]> + + + + + current capacity of the queue. + @return current capacity of the queue]]> + + + + + child queues of the queue. + @return child queues of the queue]]> + + + + + running applications of the queue. + @return running applications of the queue]]> + + + + + QueueState of the queue. + @return QueueState of the queue]]> + + + + + accessible node labels of the queue. + @return accessible node labels of the queue]]> + + + + + default node label expression of the queue, this takes + affect only when the ApplicationSubmissionContext and + ResourceRequest don't specify their + NodeLabelExpression. + + @return default node label expression of the queue]]> + + + + + + + + queue stats for the queue + + @return queue stats of the queue]]> + + + + + + + + + + + preemption status of the queue. + @return if property is not in proto, return null; + otherwise, return preemption status of the queue]]> + + + + + + + + + + + + + + + It includes information such as: +
      +
    • Queue name.
    • +
    • Capacity of the queue.
    • +
    • Maximum capacity of the queue.
    • +
    • Current capacity of the queue.
    • +
    • Child queues.
    • +
    • Running applications.
    • +
    • {@link QueueState} of the queue.
    • +
    • {@link QueueConfigurations} of the queue.
    • +
    + + @see QueueState + @see QueueConfigurations + @see ApplicationClientProtocol#getQueueInfo(org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest)]]> +
    +
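+ A short client-side sketch of reading the fields listed above; the queue name "default" and the yarnClient variable (an already initialized and started YarnClient) are assumptions:
+   import org.apache.hadoop.yarn.api.records.QueueInfo;
+
+   QueueInfo queue = yarnClient.getQueueInfo("default");
+   System.out.println(queue.getQueueName()
+       + " state=" + queue.getQueueState()
+       + " capacity=" + queue.getCapacity()
+       + " max=" + queue.getMaximumCapacity()
+       + " current=" + queue.getCurrentCapacity()
+       + " childQueues=" + queue.getChildQueues().size()
+       + " runningApps=" + queue.getApplications().size());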
    + + + + + + + + + + + A queue is in one of: +
      +
    • {@link #RUNNING} - normal state.
    • +
    • {@link #STOPPED} - not accepting new application submissions.
    • +
• + {@link #DRAINING} - not accepting new application submissions + and waiting for applications to finish. +
    • +
    + + @see QueueInfo + @see ApplicationClientProtocol#getQueueInfo(org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest)]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + queue name of the queue. + @return queue name of the queue]]> + + + + + QueueACL for the given user. + @return list of QueueACL for the given user]]> + + + + QueueUserACLInfo provides information {@link QueueACL} for + the given user.

    + + @see QueueACL + @see ApplicationClientProtocol#getQueueUserAcls(org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoRequest)]]> +
    +
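+ A sketch of listing the current user's queue ACLs through the client library; yarnClient is assumed to be an initialized and started YarnClient:
+   import java.util.List;
+   import org.apache.hadoop.yarn.api.records.QueueACL;
+   import org.apache.hadoop.yarn.api.records.QueueUserACLInfo;
+
+   List<QueueUserACLInfo> aclInfos = yarnClient.getQueueAclsInfo();
+   for (QueueUserACLInfo info : aclInfos) {
+     for (QueueACL acl : info.getUserAcls()) {
+       // e.g. "root.default -> SUBMIT_APPLICATIONS"
+       System.out.println(info.getQueueName() + " -> " + acl);
+     }
+   }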
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The ACL is one of: +
      +
    • + {@link #ADMINISTER_RESERVATIONS} - ACL to create, list, update and + delete reservations. +
    • +
    • {@link #LIST_RESERVATIONS} - ACL to list reservations.
    • +
    • {@link #SUBMIT_RESERVATIONS} - ACL to create reservations.
    • +
    + Users can always list, update and delete their own reservations.]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + It includes: +
      +
    • Duration of the reservation.
    • +
    • Acceptance time of the duration.
    • +
    • + List of {@link ResourceAllocationRequest}, which includes the time + interval, and capability of the allocation. + {@code ResourceAllocationRequest} represents an allocation + made for a reservation for the current state of the queue. This can be + changed for reasons such as re-planning, but will always be subject to + the constraints of the user contract as described by + {@link ReservationDefinition} +
    • +
    • {@link ReservationId} of the reservation.
    • +
    • {@link ReservationDefinition} used to make the reservation.
    • +
    + + @see ResourceAllocationRequest + @see ReservationId + @see ReservationDefinition]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + start time of the {@code ResourceManager} which is used to + generate globally unique {@link ReservationId}. + + @return start time of the {@code ResourceManager}]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + {@link ReservationId} represents the globally unique identifier for + a reservation. +

    + +

    + The globally unique nature of the identifier is achieved by using the + cluster timestamp i.e. start-time of the {@code ResourceManager} + along with a monotonically increasing counter for the reservation. +

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + It includes: +
      +
    • {@link Resource} required for each request.
    • +
    • + Number of containers, of above specifications, which are required by the + application. +
    • +
    • Concurrency that indicates the gang size of the request.
    • +
    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + memory of the resource. Note - while memory has + never had a unit specified, all YARN configurations have specified memory + in MB. The assumption has been that the daemons and applications are always + using the same units. With the introduction of the ResourceInformation + class we have support for units - so this function will continue to return + memory but in the units of MB + + @return memory(in MB) of the resource]]> + + + + + memory of the resource. Note - while memory has + never had a unit specified, all YARN configurations have specified memory + in MB. The assumption has been that the daemons and applications are always + using the same units. With the introduction of the ResourceInformation + class we have support for units - so this function will continue to return + memory but in the units of MB + + @return memory of the resource]]> + + + + + + memory of the resource. Note - while memory has + never had a unit specified, all YARN configurations have specified memory + in MB. The assumption has been that the daemons and applications are always + using the same units. With the introduction of the ResourceInformation + class we have support for units - so this function will continue to set + memory but the assumption is that the value passed is in units of MB. + + @param memory memory(in MB) of the resource]]> + + + + + + memory of the resource. + @param memory memory of the resource]]> + + + + + number of virtual cpu cores of the resource. + + Virtual cores are a unit for expressing CPU parallelism. A node's capacity + should be configured with virtual cores equal to its number of physical + cores. A container should be requested with the number of cores it can + saturate, i.e. the average number of threads it expects to have runnable + at a time. + + @return num of virtual cpu cores of the resource]]> + + + + + + number of virtual cpu cores of the resource. + + Virtual cores are a unit for expressing CPU parallelism. A node's capacity + should be configured with virtual cores equal to its number of physical + cores. A container should be requested with the number of cores it can + saturate, i.e. the average number of threads it expects to have runnable + at a time. + + @param vCores number of virtual cpu cores of the resource]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Resource models a set of computer resources in the + cluster.

    + +

    Currently it models both memory and CPU.

    + +

    The unit for memory is megabytes. CPU is modeled with virtual cores + (vcores), a unit for expressing parallelism. A node's capacity should + be configured with virtual cores equal to its number of physical cores. A + container should be requested with the number of cores it can saturate, i.e. + the average number of threads it expects to have runnable at a time.

    + +

    Virtual cores take integer values and thus currently CPU-scheduling is + very coarse. A complementary axis for CPU requests that represents + processing power will likely be added in the future to enable finer-grained + resource configuration.

    + +

    Typically, applications request Resource of suitable + capability to run their component tasks.

    + + @see ResourceRequest + @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest)]]> +
    +
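+ For example, a capability of 4 GB of memory and 2 vcores is built as follows (the sizes are illustrative):
+   import org.apache.hadoop.yarn.api.records.Resource;
+
+   // 4096 MB of memory and 2 virtual cores.
+   Resource capability = Resource.newInstance(4096, 2);
+   long memoryMb = capability.getMemorySize();    // 4096
+   int vcores = capability.getVirtualCores();     // 2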
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + It includes: +
      +
    • StartTime of the allocation.
    • +
    • EndTime of the allocation.
    • +
    • {@link Resource} reserved for the allocation.
    • +
    + + @see Resource]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + blacklist of resources + for the application. + + @see ResourceRequest + @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest)]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + host/rack string represents an arbitrary + host name. + + @param hostName host/rack on which the allocation is desired + @return whether the given host/rack string represents an arbitrary + host name]]> + + + + + Priority of the request. + @return Priority of the request]]> + + + + + + Priority of the request + @param priority Priority of the request]]> + + + + + host/rack) on which the allocation + is desired. + + A special value of * signifies that any resource + (host/rack) is acceptable. + + @return resource (e.g. host/rack) on which the allocation + is desired]]> + + + + + + host/rack) on which the allocation + is desired. + + A special value of * signifies that any resource name + (e.g. host/rack) is acceptable. + + @param resourceName (e.g. host/rack) on which the + allocation is desired]]> + + + + + + + + + + + + + + + + ResourceRequest. Defaults to true. + + @return whether locality relaxation is enabled with this + ResourceRequest.]]> + + + + + + ExecutionTypeRequest of the requested container. + + @param execSpec + ExecutionTypeRequest of the requested container]]> + + + + + ResourceRequest. Defaults to true. + + @return whether locality relaxation is enabled with this + ResourceRequest.]]> + + + + + + For a request at a network hierarchy level, set whether locality can be relaxed + to that level and beyond.

    + +

    If the flag is off on a rack-level ResourceRequest, + containers at that request's priority will not be assigned to nodes on that + request's rack unless requests specifically for those nodes have also been + submitted.

    + +

    If the flag is off on an {@link ResourceRequest#ANY}-level + ResourceRequest, containers at that request's priority will + only be assigned on racks for which specific requests have also been + submitted.

    + +

    For example, to request a container strictly on a specific node, the + corresponding rack-level and any-level requests should have locality + relaxation set to false. Similarly, to request a container strictly on a + specific rack, the corresponding any-level request should have locality + relaxation set to false.

    + + @param relaxLocality whether locality relaxation is enabled with this + ResourceRequest.]]> + + + + + + + + + + + + + + + + ID corresponding to this allocation request. This + ID is an identifier for different {@code ResourceRequest}s from the same + application. The allocated {@code Container}(s) received as part of the + {@code AllocateResponse} response will have the ID corresponding to the + original {@code ResourceRequest} for which the RM made the allocation. +

    + The scheduler may return multiple {@code AllocateResponse}s corresponding + to the same ID as and when scheduler allocates {@code Container}(s). + Applications can continue to completely ignore the returned ID in + the response and use the allocation for any of their outstanding requests. +

    + If one wishes to replace an entire {@code ResourceRequest} corresponding to + a specific ID, they can simply cancel the corresponding {@code + ResourceRequest} and submit a new one afresh. + + @return the ID corresponding to this allocation request.]]> + + + + + + ID corresponding to this allocation request. This + ID is an identifier for different {@code ResourceRequest}s from the same + application. The allocated {@code Container}(s) received as part of the + {@code AllocateResponse} response will have the ID corresponding to the + original {@code ResourceRequest} for which the RM made the allocation. +

    + The scheduler may return multiple {@code AllocateResponse}s corresponding + to the same ID as and when scheduler allocates {@code Container}(s). + Applications can continue to completely ignore the returned ID in + the response and use the allocation for any of their outstanding requests. +

    + If one wishes to replace an entire {@code ResourceRequest} corresponding to + a specific ID, they can simply cancel the corresponding {@code + ResourceRequest} and submit a new one afresh. +

    + If the ID is not set, scheduler will continue to work as previously and all + allocated {@code Container}(s) will have the default ID, -1. + + @param allocationRequestID the ID corresponding to this allocation + request.]]> + + + + + + Resource capability of the request. + @param capability Resource capability of the request]]> + + + + + Resource capability of the request. + @return Resource capability of the request]]> + + + + + + + + + + + + + + + + + + It includes: +

      +
    • {@link Priority} of the request.
    • +
    • + The name of the host or rack on which the allocation is + desired. A special value of * signifies that + any host/rack is acceptable to the application. +
    • +
    • {@link Resource} required for each request.
    • +
    • + Number of containers, of above specifications, which are required + by the application. +
    • +
    • + A boolean relaxLocality flag, defaulting to {@code true}, + which tells the {@code ResourceManager} if the application wants + locality to be loose (i.e. allows fall-through to rack or any) + or strict (i.e. specify hard constraint on resource allocation). +
    • +
    + + @see Resource + @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest)]]> +
    +
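+ Putting the fields above together, a minimal sketch using the builder documented below; the priority, sizes and container count are illustrative:
+   import org.apache.hadoop.yarn.api.records.Priority;
+   import org.apache.hadoop.yarn.api.records.Resource;
+   import org.apache.hadoop.yarn.api.records.ResourceRequest;
+
+   // Two 2 GB / 1 vcore containers anywhere in the cluster.
+   ResourceRequest request = ResourceRequest.newBuilder()
+       .priority(Priority.newInstance(1))
+       .resourceName(ResourceRequest.ANY)
+       .capability(Resource.newInstance(2048, 1))
+       .numContainers(2)
+       .relaxLocality(true)
+       .build();
+ To pin containers to a specific node, a node-level request would be paired with rack-level and ANY-level requests whose relaxLocality is set to false, as described earlier.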
    + + + + + + + priority of the request. + @see ResourceRequest#setPriority(Priority) + @param priority priority of the request + @return {@link ResourceRequestBuilder}]]> + + + + + + resourceName of the request. + @see ResourceRequest#setResourceName(String) + @param resourceName resourceName of the request + @return {@link ResourceRequestBuilder}]]> + + + + + + capability of the request. + @see ResourceRequest#setCapability(Resource) + @param capability capability of the request + @return {@link ResourceRequestBuilder}]]> + + + + + + numContainers of the request. + @see ResourceRequest#setNumContainers(int) + @param numContainers numContainers of the request + @return {@link ResourceRequestBuilder}]]> + + + + + + relaxLocality of the request. + @see ResourceRequest#setRelaxLocality(boolean) + @param relaxLocality relaxLocality of the request + @return {@link ResourceRequestBuilder}]]> + + + + + + nodeLabelExpression of the request. + @see ResourceRequest#setNodeLabelExpression(String) + @param nodeLabelExpression + nodeLabelExpression of the request + @return {@link ResourceRequestBuilder}]]> + + + + + + executionTypeRequest of the request. + @see ResourceRequest#setExecutionTypeRequest( + ExecutionTypeRequest) + @param executionTypeRequest + executionTypeRequest of the request + @return {@link ResourceRequestBuilder}]]> + + + + + + executionTypeRequest of the request with 'ensure + execution type' flag set to true. + @see ResourceRequest#setExecutionTypeRequest( + ExecutionTypeRequest) + @param executionType executionType of the request. + @return {@link ResourceRequestBuilder}]]> + + + + + + allocationRequestId of the request. + @see ResourceRequest#setAllocationRequestId(long) + @param allocationRequestId + allocationRequestId of the request + @return {@link ResourceRequestBuilder}]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + virtual memory. + + @return virtual memory in MB]]> + + + + + + virtual memory. + + @param vmem virtual memory in MB]]> + + + + + physical memory. + + @return physical memory in MB]]> + + + + + + physical memory. + + @param pmem physical memory in MB]]> + + + + + CPU utilization. + + @return CPU utilization normalized to 1 CPU]]> + + + + + + CPU utilization. + + @param cpu CPU utilization normalized to 1 CPU]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + ResourceUtilization models the utilization of a set of computer + resources in the cluster. +

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + allocationRequestId of the request. + + @see SchedulingRequest#setAllocationRequestId(long) + @param allocationRequestId allocationRequestId of the + request + @return {@link SchedulingRequest.SchedulingRequestBuilder}]]> + + + + + + priority of the request. + + @param priority priority of the request + @return {@link SchedulingRequest.SchedulingRequestBuilder} + @see SchedulingRequest#setPriority(Priority)]]> + + + + + + executionType of the request. + + @see SchedulingRequest#setExecutionType(ExecutionTypeRequest) + @param executionType executionType of the request + @return {@link SchedulingRequest.SchedulingRequestBuilder}]]> + + + + + + allocationTags of the request. + + @see SchedulingRequest#setAllocationTags(Set) + @param allocationTags allocationsTags of the request + @return {@link SchedulingRequest.SchedulingRequestBuilder}]]> + + + + + + executionType of the request. + + @see SchedulingRequest#setResourceSizing(ResourceSizing) + @param resourceSizing resourceSizing of the request + @return {@link SchedulingRequest.SchedulingRequestBuilder}]]> + + + + + + placementConstraintExpression of the request. + + @see SchedulingRequest#setPlacementConstraint( + PlacementConstraint) + @param placementConstraintExpression placementConstraints of + the request + @return {@link SchedulingRequest.SchedulingRequestBuilder}]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ApplicationMaster that may be reclaimed by the + ResourceManager. + @return the set of {@link ContainerId} to be preempted.]]> + + + + ApplicationMaster (AM) + may attempt to checkpoint work or adjust its execution plan to accommodate + it. In contrast to {@link PreemptionContract}, the AM has no flexibility in + selecting which resources to return to the cluster. + @see PreemptionMessage]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Token is the security entity used by the framework + to verify authenticity of any resource.

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ContainerId of the container. + @return ContainerId of the container]]> + + + + + + + + + + + ContainerUpdateType of the container. + @return ContainerUpdateType of the container.]]> + + + + + + ContainerUpdateType of the container. + @param updateType of the Container]]> + + + + + ContainerId of the container. + @return ContainerId of the container]]> + + + + + + ContainerId of the container. + @param containerId ContainerId of the container]]> + + + + + ExecutionType of the container. + @return ExecutionType of the container]]> + + + + + + ExecutionType of the container. + @param executionType ExecutionType of the container]]> + + + + + + Resource capability of the request. + @param capability Resource capability of the request]]> + + + + + Resource capability of the request. + @return Resource capability of the request]]> + + + + + + + + + + + + It includes: +
      +
    • version for the container.
    • +
    • {@link ContainerId} for the container.
    • +
    • + {@link Resource} capability of the container after the update request + is completed. +
    • +
    • + {@link ExecutionType} of the container after the update request is + completed. +
    • +
    + + Update rules: +
      +
• + Currently only ONE aspect of the container can be updated per request + (the user can either update Capability OR ExecutionType in one request, + not both). +
    • +
    • + There must be only 1 update request per container in an allocate call. +
    • +
    • + If a new update request is sent for a container (in a subsequent allocate + call) before the first one is satisfied by the Scheduler, it will + overwrite the previous request. +
    • +
    + @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest)]]> +
    +
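+ A sketch of requesting a resource increase for a running container under the rules above; the target size is illustrative and container stands for a previously allocated Container:
+   import org.apache.hadoop.yarn.api.records.ContainerUpdateType;
+   import org.apache.hadoop.yarn.api.records.Resource;
+   import org.apache.hadoop.yarn.api.records.UpdateContainerRequest;
+
+   UpdateContainerRequest update = UpdateContainerRequest.newInstance(
+       container.getVersion(),                  // version of the running container
+       container.getId(),                       // which container to update
+       ContainerUpdateType.INCREASE_RESOURCE,   // only one aspect per request
+       Resource.newInstance(8192, 4),           // target capability
+       null);                                   // execution type left unchanged
+   // The AM then passes this to AMRMClient#requestContainerUpdate(container, update).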
    + + + + + + + + + + + + + + + ContainerUpdateType. + @return ContainerUpdateType]]> + + + + + + ContainerUpdateType. + @param updateType ContainerUpdateType]]> + + + + + Container. + @return Container]]> + + + + + + Container. + @param container Container]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + URL represents a serializable {@link java.net.URL}.

    ]]> +
    +
    + + + + + + + + + + RMAppAttempt.]]> + + + + + + + + + + + + ApplicationMaster.]]> + + + + + + + + + + NodeManagers in the cluster. + @return number of NodeManagers in the cluster]]> + + + + + DecommissionedNodeManagers in the cluster. + + @return number of DecommissionedNodeManagers in the cluster]]> + + + + + ActiveNodeManagers in the cluster. + + @return number of ActiveNodeManagers in the cluster]]> + + + + + LostNodeManagers in the cluster. + + @return number of LostNodeManagers in the cluster]]> + + + + + UnhealthyNodeManagers in the cluster. + + @return number of UnhealthyNodeManagers in the cluster]]> + + + + + RebootedNodeManagers in the cluster. + + @return number of RebootedNodeManagers in the cluster]]> + + + + YarnClusterMetrics represents cluster metrics.

    + +

    Currently only number of NodeManagers is provided.

    ]]> +
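+ A small sketch of reading this record; yarnClient is assumed to be an initialized and started YarnClient:
+   import org.apache.hadoop.yarn.api.records.YarnClusterMetrics;
+
+   YarnClusterMetrics metrics = yarnClient.getYarnClusterMetrics();
+   System.out.println("NodeManagers: " + metrics.getNumNodeManagers());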
    +
    + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + This class contains the information about a timeline domain, which is used + to a user to host a number of timeline entities, isolating them from others'. + The user can also define the reader and writer users/groups for the the + domain, which is used to control the access to its entities. +

    + +

    + The reader and writer users/groups pattern that the user can supply is the + same as what AccessControlList takes. +

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The class that contains the the meta information of some conceptual entity + and its related events. The entity can be an application, an application + attempt, a container or whatever the user-defined object. +

    + +

    + Primary filters will be used to index the entities in + TimelineStore, such that users should carefully choose the + information they want to store as the primary filters. The remaining can be + stored as other information. +

    ]]> +
    +
    + + + + + + + + + + + + + ApplicationId of the + TimelineEntityGroupId. + + @return ApplicationId of the + TimelineEntityGroupId]]> + + + + + + + + timelineEntityGroupId. + + @return timelineEntityGroupId]]> + + + + + + + + + + + + + + + + + + + TimelineEntityGroupId is an abstract way for + timeline service users to represent #a group of related timeline data. + For example, all entities that represents one data flow DAG execution + can be grouped into one timeline entity group.

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + This class contains the information about a timeline service domain, which is + used to a user to host a number of timeline entities, isolating them from + others'. The user can also define the reader and writer users/groups for + the domain, which is used to control the access to its entities. +

    +

    + The reader and writer users/groups pattern that the user can supply is the + same as what AccessControlList takes. +

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The constuctor is used to construct a proxy {@link TimelineEntity} or its + subclass object from the real entity object that carries information. +

    + +

    + It is usually used in the case where we want to recover class polymorphism + after deserializing the entity from its JSON form. +

    + @param entity the real entity that carries information]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Note: Entities will be stored in the order of idPrefix specified. + If users decide to set idPrefix for an entity, they MUST provide + the same prefix for every update of this entity. +

    + Example:
    + TimelineEntity entity = new TimelineEntity();
    + entity.setIdPrefix(value);
    + 
    + Users can use {@link TimelineServiceHelper#invertLong(long)} to invert + the prefix if necessary. + + @param entityIdPrefix prefix for an entity.]]> +
    +
    + + + + + + + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + name property as a + InetSocketAddress. On an HA cluster, + this fetches the address corresponding to the RM identified by + {@link #RM_HA_ID}. + @param name property name. + @param defaultAddress the default value + @param defaultPort the default port + @return InetSocketAddress]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + yarn.resourcemanager.scheduler.class + cannot handle placement constraints, the corresponding SchedulingRequests + will be rejected. As of now, only the capacity scheduler supports + SchedulingRequests. In particular, it currently supports anti-affinity + constraints (no affinity or cardinality) and places one container at a + time. 
The advantage of this handler compared to the placement-processor is + that it follows the same ordering rules for queues (sorted by utilization, + priority) and apps (sorted by FIFO/fairness/priority) as the ones followed + by the main scheduler.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + OPPORTUNISTIC containers on the NM.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
  • default
  • +
  • docker
  • +
  • javasandbox
  • + ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Default platform-specific CLASSPATH for YARN applications. A + comma-separated list of CLASSPATH entries constructed based on the client + OS environment expansion syntax. +

    +

    + Note: Use {@link #DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH} for + cross-platform practice i.e. submit an application from a Windows client to + a Linux/Unix server or vice versa. +

    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The information is passed along to applications via + {@link StartContainersResponse#getAllServicesMetaData()} that is returned by + {@link ContainerManagementProtocol#startContainers(StartContainersRequest)} +

    + + @return meta-data for this service that should be made available to + applications.]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The method used by the NodeManager log aggregation service + to initial the policy object with parameters specified by the application + or the cluster-wide setting. +

    + + @param parameters parameters with scheme defined by the policy class.]]> +
    +
    + + + + + The method used by the NodeManager log aggregation service + to ask the policy object if a given container's logs should be aggregated. +

    + + @param logContext ContainerLogContext + @return Whether or not the container's logs should be aggregated.]]> +
    +
    + + + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The method used by administrators to ask SCM to run cleaner task right away +

    + + @param request request SharedCacheManager to run a cleaner task + @return SharedCacheManager returns an empty response + on success and throws an exception on rejecting the request + @throws YarnException + @throws IOException]]> +
    +
    + + + The protocol between administrators and the SharedCacheManager +

    ]]> +
    +
    + +
    + + + + + + + + + + + + + + + + + Tag1(N1),P1:Tag2(N2),P2:...:TagN(Nn),Pn

    + + where TagN(Nn) is a key value pair to determine the source + allocation tag and the number of allocations, such as: + +

    foo(3)

    + + Optional when using NodeAttribute Constraint. + + and where Pn can be any form of a valid constraint expression, + such as: + +
      +
    • in,node,foo,bar
    • +
    • notin,node,foo,bar,1,2
    • +
    • and(notin,node,foo:notin,node,bar)
    • +
    + + and NodeAttribute Constraint such as + +
      +
    • yarn.rm.io/foo=true
    • +
    • java=1.7,1.8
    • +
    + @param expression expression string. + @return a map of source tags to placement constraint mapping. + @throws PlacementConstraintParseException]]> +
    +
    + + + + + +
    + +
    + + + +
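+ As a rough sketch of the expression grammar described above, a spec such as "zk(3),notin,node,zk" asks for three containers tagged "zk" that must not share a node with other "zk" containers; the parser class and package named below are recalled from the YARN sources and should be treated as an assumption:
+   import java.util.Map;
+   import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
+   import org.apache.hadoop.yarn.util.constraint.PlacementConstraintParser;
+   import org.apache.hadoop.yarn.util.constraint.PlacementConstraintParser.SourceTags;
+
+   Map<SourceTags, PlacementConstraint> constraints =
+       PlacementConstraintParser.parsePlacementSpec("zk(3),notin,node,zk");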
    diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Client_3.2.4.xml b/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Client_3.2.4.xml new file mode 100644 index 00000000000..b8f2a2e529c --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Client_3.2.4.xml @@ -0,0 +1,3006 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + In secure mode, YARN verifies access to the application, queue + etc. before accepting the request. +

    + If the user does not have VIEW_APP access then the following + fields in the report will be set to stubbed values: +

      +
    • host - set to "N/A"
    • +
    • RPC port - set to -1
    • +
    • client token - set to "N/A"
    • +
    • diagnostics - set to "N/A"
    • +
    • tracking URL - set to "N/A"
    • +
    • original tracking URL - set to "N/A"
    • +
    • resource usage report - all values are -1
    • +
    + + @param appId + {@link ApplicationId} of the application that needs a report + @return application report + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + Get a report (ApplicationReport) of all Applications in the cluster. +

    + +

    + If the user does not have VIEW_APP access for an application + then the corresponding report will be filtered as described in + {@link #getApplicationReport(ApplicationId)}. +

    + + @return a list of reports for all applications + @throws YarnException + @throws IOException]]> +
    +
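+ A sketch of listing applications and a few report fields; yarnClient is assumed to be an initialized and started YarnClient, and restricted fields appear as the stubbed values described above:
+   import java.util.List;
+   import org.apache.hadoop.yarn.api.records.ApplicationReport;
+
+   List<ApplicationReport> apps = yarnClient.getApplications();
+   for (ApplicationReport app : apps) {
+     System.out.println(app.getApplicationId() + " " + app.getName()
+         + " " + app.getYarnApplicationState()
+         + " tracking=" + app.getTrackingUrl());
+   }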
    + + + + + + + Get a report of the given ApplicationAttempt. +

    + +

    + In secure mode, YARN verifies access to the application, queue + etc. before accepting the request. +

    + + @param applicationAttemptId + {@link ApplicationAttemptId} of the application attempt that needs + a report + @return application attempt report + @throws YarnException + @throws ApplicationAttemptNotFoundException if application attempt + not found + @throws IOException]]> +
    +
    + + + + + + + Get a report of all (ApplicationAttempts) of Application in the cluster. +

    + + @param applicationId + @return a list of reports for all application attempts for specified + application + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + Get a report of the given Container. +

    + +

    + In secure mode, YARN verifies access to the application, queue + etc. before accepting the request. +

    + + @param containerId + {@link ContainerId} of the container that needs a report + @return container report + @throws YarnException + @throws ContainerNotFoundException if container not found + @throws IOException]]> +
    +
    + + + + + + + Get a report of all (Containers) of ApplicationAttempt in the cluster. +

    + + @param applicationAttemptId + @return a list of reports of all containers for specified application + attempt + @throws YarnException + @throws IOException]]> +
    +
    +
    + + + + + + + + + {@code + AMRMClient.createAMRMClientContainerRequest() + } + @return the newly create AMRMClient instance.]]> + + + + + + + + + + + + + + + + RegisterApplicationMasterResponse + @throws YarnException + @throws IOException]]> + + + + + + + + + + + RegisterApplicationMasterResponse + @throws YarnException + @throws IOException]]> + + + + + + + + addContainerRequest are sent to the + ResourceManager. New containers assigned to the master are + retrieved. Status of completed containers and node health updates are also + retrieved. This also doubles up as a heartbeat to the ResourceManager and + must be made periodically. The call may not always return any new + allocations of containers. App should not make concurrent allocate + requests. May cause request loss. + +

+ Note: If the user has not removed container requests that have already + been satisfied, then re-registering may resend all of those container + requests to the RM (including already matched requests), which could + cause the RM to allocate a large number of new containers. +

    + + @param progressIndicator Indicates progress made by the master + @return the response of the allocate request + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + + + + + + + + allocate + @param req Resource request]]> + + + + + + + + + + + + + allocate. + Any previous pending resource change request of the same container will be + removed. + + Application that calls this method is expected to maintain the + Containers that are returned from previous successful + allocations or resource changes. By passing in the existing container and a + target resource capability to this method, the application requests the + ResourceManager to change the existing resource allocation to the target + resource allocation. + + @deprecated use + {@link #requestContainerUpdate(Container, UpdateContainerRequest)} + + @param container The container returned from the last successful resource + allocation or resource change + @param capability The target resource capability of the container]]> + + + + + + + allocate. + Any previous pending update request of the same container will be + removed. + + @param container The container returned from the last successful resource + allocation or update + @param updateContainerRequest The UpdateContainerRequest.]]> + + + + + + + + + + + + + + + + + + + + + + + + ContainerRequests matching the given + parameters. These ContainerRequests should have been added via + addContainerRequest earlier in the lifecycle. For performance, + the AMRMClient may return its internal collection directly without creating + a copy. Users should not perform mutable operations on the return value. + Each collection in the list contains requests with identical + Resource size that fit in the given capability. In a + collection, requests will be returned in the same order as they were added. + + NOTE: This API only matches Container requests that were created by the + client WITHOUT the allocationRequestId being set. + + @return Collection of request matching the parameters]]> + + + + + + + + + ContainerRequests matching the given + parameters. These ContainerRequests should have been added via + addContainerRequest earlier in the lifecycle. For performance, + the AMRMClient may return its internal collection directly without creating + a copy. Users should not perform mutable operations on the return value. + Each collection in the list contains requests with identical + Resource size that fit in the given capability. In a + collection, requests will be returned in the same order as they were added. + specify an ExecutionType. + + NOTE: This API only matches Container requests that were created by the + client WITHOUT the allocationRequestId being set. + + @param priority Priority + @param resourceName Location + @param executionType ExecutionType + @param capability Capability + @return Collection of request matching the parameters]]> + + + + + + + + + + + + + ContainerRequests matching the given + allocationRequestId. These ContainerRequests should have been added via + addContainerRequest earlier in the lifecycle. For performance, + the AMRMClient may return its internal collection directly without creating + a copy. Users should not perform mutable operations on the return value. + + NOTE: This API only matches Container requests that were created by the + client WITH the allocationRequestId being set to a non-default value. + + @param allocationRequestId Allocation Request Id + @return Collection of request matching the parameters]]> + + + + + + + + + + + + + AMRMClient. This cache must + be shared with the {@link NMClient} used to manage containers for the + AMRMClient +
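+ A compressed sketch of the register / request / heartbeat cycle described above; the host name, sizes, priority and progress value are illustrative and error handling is omitted:
+   import org.apache.hadoop.conf.Configuration;
+   import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
+   import org.apache.hadoop.yarn.api.records.Container;
+   import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
+   import org.apache.hadoop.yarn.api.records.Priority;
+   import org.apache.hadoop.yarn.api.records.Resource;
+   import org.apache.hadoop.yarn.client.api.AMRMClient;
+   import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
+
+   AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
+   rmClient.init(new Configuration());
+   rmClient.start();
+   rmClient.registerApplicationMaster("am-host.example.com", 0, "");
+
+   // Ask for one 1 GB / 1 vcore container anywhere in the cluster.
+   rmClient.addContainerRequest(new ContainerRequest(
+       Resource.newInstance(1024, 1), null, null, Priority.newInstance(0)));
+
+   // Periodic heartbeat; allocations may arrive over several calls.
+   AllocateResponse response = rmClient.allocate(0.1f);
+   for (Container allocated : response.getAllocatedContainers()) {
+     // launch via NMClient, then remove the now-satisfied request
+   }
+
+   rmClient.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
+   rmClient.stop();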

    + If a NM token cache is not set, the {@link NMTokenCache#getSingleton()} + singleton instance will be used. + + @param nmTokenCache the NM token cache to use.]]> + + + + + AMRMClient. This cache must be + shared with the {@link NMClient} used to manage containers for the + AMRMClient. +

    + If a NM token cache is not set, the {@link NMTokenCache#getSingleton()} + singleton instance will be used. + + @return the NM token cache.]]> + + + + + + + + + + + + + + + + + + + + + + + + check to return true for each 1000 ms. + See also {@link #waitFor(java.util.function.Supplier, int)} + and {@link #waitFor(java.util.function.Supplier, int, int)} + @param check the condition for which it should wait]]> + + + + + + + + check to return true for each + checkEveryMillis ms. + See also {@link #waitFor(java.util.function.Supplier, int, int)} + @param check user defined checker + @param checkEveryMillis interval to call check]]> + + + + + + + + + check to return true for each + checkEveryMillis ms. In the main loop, this method will log + the message "waiting in main loop" for each logInterval times + iteration to confirm the thread is alive. + @param check user defined checker + @param checkEveryMillis interval to call check + @param logInterval interval to log for each]]> + + + + + + + + + + + + + + + + + + + + + + + + + + Start an allocated container.

    + +

    The ApplicationMaster or other applications that use the + client must provide the details of the allocated container, including the + Id, the assigned node's Id and the token via {@link Container}. In + addition, the AM needs to provide the {@link ContainerLaunchContext} as + well.

    + + @param container the allocated container + @param containerLaunchContext the context information needed by the + NodeManager to launch the + container + @return a map between the auxiliary service names and their outputs + @throws YarnException YarnException. + @throws IOException IOException.]]> +
    +
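+ A minimal sketch of launching an allocated container; the command is illustrative, allocatedContainer stands for a Container returned by AMRMClient#allocate, and local resources, environment and tokens are omitted:
+   import java.util.Collections;
+   import org.apache.hadoop.conf.Configuration;
+   import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
+   import org.apache.hadoop.yarn.client.api.NMClient;
+
+   NMClient nmClient = NMClient.createNMClient();
+   nmClient.init(new Configuration());
+   nmClient.start();
+
+   ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
+       null, null,                             // local resources, environment
+       Collections.singletonList("sleep 30"),  // commands
+       null, null, null);                      // service data, tokens, ACLs
+   nmClient.startContainer(allocatedContainer, ctx);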
    + + + + + + Increase the resource of a container.

    + +

    The ApplicationMaster or other applications that use the + client must provide the details of the container, including the Id and + the target resource encapsulated in the updated container token via + {@link Container}. +

    + + @param container the container with updated token. + + @throws YarnException YarnException. + @throws IOException IOException.]]> +
    +
    + + + + + + Update the resources of a container.

    + +

    The ApplicationMaster or other applications that use the + client must provide the details of the container, including the Id and + the target resource encapsulated in the updated container token via + {@link Container}. +

    + + @param container the container with updated token. + + @throws YarnException YarnException. + @throws IOException IOException.]]> +
    +
    + + + + + + + Stop an started container.

    + + @param containerId the Id of the started container + @param nodeId the Id of the NodeManager + + @throws YarnException YarnException. + @throws IOException IOException.]]> +
    +
    + + + + + + + Query the status of a container.

    + + @param containerId the Id of the started container + @param nodeId the Id of the NodeManager + + @return the status of a container. + + @throws YarnException YarnException. + @throws IOException IOException.]]> +
    +
    + + + + + + + + Re-Initialize the Container.

    + + @param containerId the Id of the container to Re-Initialize. + @param containerLaunchContex the updated ContainerLaunchContext. + @param autoCommit commit re-initialization automatically ? + + @throws YarnException YarnException. + @throws IOException IOException.]]> +
    +
    + + + + + + Restart the specified container.

    + + @param containerId the Id of the container to restart. + + @throws YarnException YarnException. + @throws IOException IOException.]]> +
    +
    + + + + + + Rollback last reInitialization of the specified container.

    + + @param containerId the Id of the container to restart. + + @throws YarnException YarnException. + @throws IOException IOException.]]> +
    +
    + + + + + + Commit last reInitialization of the specified container.

    + + @param containerId the Id of the container to commit reInitialize. + + @throws YarnException YarnException. + @throws IOException IOException.]]> +
    +
    + + + + Set whether the containers that are started by this client, and are + still running should be stopped when the client stops. By default, the + feature should be enabled.

However, containers will be stopped only + when the service is stopped, i.e. after {@link NMClient#stop()}. + + @param enabled whether the feature is enabled or not]]> +
    +
    + + + + NMClient. This cache must be + shared with the {@link AMRMClient} that requested the containers managed + by this NMClient +

    + If a NM token cache is not set, the {@link NMTokenCache#getSingleton()} + singleton instance will be used. + + @param nmTokenCache the NM token cache to use.]]> + + + + + NMClient. This cache must be + shared with the {@link AMRMClient} that requested the containers managed + by this NMClient +

    + If a NM token cache is not set, the {@link NMTokenCache#getSingleton()} + singleton instance will be used. + + @return the NM token cache]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + By default YARN client libraries {@link AMRMClient} and {@link NMClient} use + {@link #getSingleton()} instance of the cache. +

      +
    • + Using the singleton instance of the cache is appropriate when running a + single ApplicationMaster in the same JVM. +
    • +
    • + When using the singleton, users don't need to do anything special, + {@link AMRMClient} and {@link NMClient} are already set up to use the + default singleton {@link NMTokenCache} +
    • +
    + If running multiple Application Masters in the same JVM, a different cache + instance should be used for each Application Master. +
      +
    • + If using the {@link AMRMClient} and the {@link NMClient}, setting up + and using an instance cache is as follows: +
      +   NMTokenCache nmTokenCache = new NMTokenCache();
      +   AMRMClient rmClient = AMRMClient.createAMRMClient();
      +   NMClient nmClient = NMClient.createNMClient();
      +   nmClient.setNMTokenCache(nmTokenCache);
      +   ...
      + 
      +
    • +
    • + If using the {@link AMRMClientAsync} and the {@link NMClientAsync}, + setting up and using an instance cache is as follows: +
      +   NMTokenCache nmTokenCache = new NMTokenCache();
      +   AMRMClient rmClient = AMRMClient.createAMRMClient();
      +   NMClient nmClient = NMClient.createNMClient();
      +   nmClient.setNMTokenCache(nmTokenCache);
      +   AMRMClientAsync rmClientAsync = new AMRMClientAsync(rmClient, 1000, [AMRM_CALLBACK]);
      +   NMClientAsync nmClientAsync = new NMClientAsync("nmClient", nmClient, [NM_CALLBACK]);
      +   ...
      + 
      +
    • +
    • + If using {@link ApplicationMasterProtocol} and + {@link ContainerManagementProtocol} directly, setting up and using an + instance cache is as follows: +
      +   NMTokenCache nmTokenCache = new NMTokenCache();
      +   ...
      +   ApplicationMasterProtocol amPro = ClientRMProxy.createRMProxy(conf, ApplicationMasterProtocol.class);
      +   ...
      +   AllocateRequest allocateRequest = ...
      +   ...
      +   AllocateResponse allocateResponse = rmClient.allocate(allocateRequest);
      +   for (NMToken token : allocateResponse.getNMTokens()) {
      +     nmTokenCache.setToken(token.getNodeId().toString(), token.getToken());
      +   }
      +   ...
      +   ContainerManagementProtocolProxy nmPro = ContainerManagementProtocolProxy(conf, nmTokenCache);
      +   ...
      +   nmPro.startContainer(container, containerContext);
      +   ...
      + 
      +
    • +
    + It is also possible to mix the usage of a client ({@code AMRMClient} or + {@code NMClient}, or the async versions of them) with a protocol proxy + ({@code ContainerManagementProtocolProxy} or + {@code ApplicationMasterProtocol}).]]> +
    +
    + + + + + + + + + + + + + + The method to claim a resource with the SharedCacheManager. + The client uses a checksum to identify the resource and an + {@link ApplicationId} to identify which application will be using the + resource. +

    + +

    + The SharedCacheManager responds with whether or not the + resource exists in the cache. If the resource exists, a URL to + the resource in the shared cache is returned. If the resource does not + exist, null is returned instead. +

    + +

    + Once a URL has been returned for a resource, that URL is safe to use for + the lifetime of the application that corresponds to the provided + ApplicationId. +

    + + @param applicationId ApplicationId of the application using the resource + @param resourceKey the key (i.e. checksum) that identifies the resource + @return URL to the resource, or null if it does not exist]]> +
    +
    + + + + + + + The method to release a resource with the SharedCacheManager. + This method is called once an application is no longer using a claimed + resource in the shared cache. The client uses a checksum to identify the + resource and an {@link ApplicationId} to identify which application is + releasing the resource. +

    + +

    + Note: This method is an optimization and the client is not required to call + it for correctness. +

    + + @param applicationId ApplicationId of the application releasing the + resource + @param resourceKey the key (i.e. checksum) that identifies the resource]]> +
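+ For example, the claim/release cycle described above might be used as in the following sketch (assuming a {@code SharedCacheClient.createSharedCacheClient()} factory and the {@code use}/{@code release} signatures documented here; the checksum value is a placeholder):
+ {@code
+   SharedCacheClient scClient = SharedCacheClient.createSharedCacheClient();
+   scClient.init(conf);
+   scClient.start();
+   // claim the resource; a null URL means it is not in the shared cache
+   URL resourceUrl = scClient.use(appId, checksum);
+   ...
+   // optional optimization once the application no longer needs the resource
+   scClient.release(appId, checksum);
+   scClient.stop();
+ }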
    +
    + + + + + + + + + + +
    + + + + + + + + + + + + + + + + Obtain a {@link YarnClientApplication} for a new application, + which in turn contains the {@link ApplicationSubmissionContext} and + {@link org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse} + objects. +

    + + @return {@link YarnClientApplication} built for a new application + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + Submit a new application to YARN. It is a blocking call - it + will not return {@link ApplicationId} until the submitted application is + submitted successfully and accepted by the ResourceManager. +

    + +

    + Users should provide an {@link ApplicationId} as part of the parameter + {@link ApplicationSubmissionContext} when submitting a new application, + otherwise it will throw the {@link ApplicationIdNotProvidedException}. +

    + +

This internally calls {@link ApplicationClientProtocol#submitApplication + (SubmitApplicationRequest)}, and after that, it internally invokes + {@link ApplicationClientProtocol#getApplicationReport + (GetApplicationReportRequest)} and waits until it can make sure that the + application gets properly submitted. If the RM fails over or restarts + before the ResourceManager saves the application's state, + {@link ApplicationClientProtocol + #getApplicationReport(GetApplicationReportRequest)} will throw + the {@link ApplicationNotFoundException}. This API automatically resubmits + the application with the same {@link ApplicationSubmissionContext} when it + catches the {@link ApplicationNotFoundException}.

    + + @param appContext + {@link ApplicationSubmissionContext} containing all the details + needed to submit a new application + @return {@link ApplicationId} of the accepted application + @throws YarnException + @throws IOException + @see #createApplication()]]> +
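+ For example, a typical create-and-submit sequence might look like the following sketch (queue, resource and AM container details are omitted):
+ {@code
+   YarnClient yarnClient = YarnClient.createYarnClient();
+   yarnClient.init(conf);
+   yarnClient.start();
+   YarnClientApplication app = yarnClient.createApplication();
+   ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
+   // fill in application name, queue, AM container launch context, resource ask, ...
+   ApplicationId appId = yarnClient.submitApplication(appContext);
+ }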
    +
    + + + + + + + Fail an application attempt identified by given ID. +

    + + @param applicationAttemptId + {@link ApplicationAttemptId} of the attempt to fail. + @throws YarnException + in case of errors or if YARN rejects the request due to + access-control restrictions. + @throws IOException + @see #getQueueAclsInfo()]]> +
    +
    + + + + + + + Kill an application identified by given ID. +

    + + @param applicationId + {@link ApplicationId} of the application that needs to be killed + @throws YarnException + in case of errors or if YARN rejects the request due to + access-control restrictions. + @throws IOException + @see #getQueueAclsInfo()]]> +
    +
    + + + + + + + + Kill an application identified by given ID. +

    + @param applicationId {@link ApplicationId} of the application that needs to + be killed + @param diagnostics for killing an application. + @throws YarnException in case of errors or if YARN rejects the request due + to access-control restrictions. + @throws IOException]]> +
    +
    + + + + + + + Get a report of the given Application. +

    + +

    + In secure mode, YARN verifies access to the application, queue + etc. before accepting the request. +

    + +

    + If the user does not have VIEW_APP access then the following + fields in the report will be set to stubbed values: +

      +
    • host - set to "N/A"
    • +
    • RPC port - set to -1
    • +
    • client token - set to "N/A"
    • +
    • diagnostics - set to "N/A"
    • +
    • tracking URL - set to "N/A"
    • +
    • original tracking URL - set to "N/A"
    • +
    • resource usage report - all values are -1
    • +
    + + @param appId + {@link ApplicationId} of the application that needs a report + @return application report + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + The AMRM token is required for AM to RM scheduling operations. For + managed Application Masters YARN takes care of injecting it. For unmanaged + Applications Masters, the token must be obtained via this method and set + in the {@link org.apache.hadoop.security.UserGroupInformation} of the + current user. +

    + The AMRM token will be returned only if all the following conditions are + met: +

      +
    • the requester is the owner of the ApplicationMaster
    • +
    • the application master is an unmanaged ApplicationMaster
    • +
    • the application master is in ACCEPTED state
    • +
    + Else this method returns NULL. + + @param appId {@link ApplicationId} of the application to get the AMRM token + @return the AMRM token if available + @throws YarnException + @throws IOException]]> +
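+ For an unmanaged ApplicationMaster, the token returned here is typically added to the current user, for example (a sketch assuming the conditions above hold):
+ {@code
+   Token<AMRMTokenIdentifier> amrmToken = yarnClient.getAMRMToken(appId);
+   // make the token visible to the scheduling client running in this JVM
+   UserGroupInformation.getCurrentUser().addToken(amrmToken);
+ }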
    +
    + + + + + + Get a report (ApplicationReport) of all Applications in the cluster. +

    + +

    + If the user does not have VIEW_APP access for an application + then the corresponding report will be filtered as described in + {@link #getApplicationReport(ApplicationId)}. +

    + + @return a list of reports of all running applications + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + Get a report (ApplicationReport) of Applications + matching the given application types in the cluster. +

    + +

    + If the user does not have VIEW_APP access for an application + then the corresponding report will be filtered as described in + {@link #getApplicationReport(ApplicationId)}. +

    + + @param applicationTypes set of application types you are interested in + @return a list of reports of applications + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + Get a report (ApplicationReport) of Applications matching the given + application states in the cluster. +

    + +

    + If the user does not have VIEW_APP access for an application + then the corresponding report will be filtered as described in + {@link #getApplicationReport(ApplicationId)}. +

    + + @param applicationStates set of application states you are interested in + @return a list of reports of applications + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + + Get a report (ApplicationReport) of Applications matching the given + application types and application states in the cluster. +

    + +

    + If the user does not have VIEW_APP access for an application + then the corresponding report will be filtered as described in + {@link #getApplicationReport(ApplicationId)}. +

    + + @param applicationTypes set of application types you are interested in + @param applicationStates set of application states you are interested in + @return a list of reports of applications + @throws YarnException + @throws IOException]]> +
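+ For example, to list only running MapReduce applications (a sketch using the types-and-states overload described above):
+ {@code
+   Set<String> types = Collections.singleton("MAPREDUCE");
+   EnumSet<YarnApplicationState> states = EnumSet.of(YarnApplicationState.RUNNING);
+   List<ApplicationReport> reports = yarnClient.getApplications(types, states);
+ }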
    +
    + + + + + + + + + Get a report (ApplicationReport) of Applications matching the given + application types, application states and application tags in the cluster. +

    + +

    + If the user does not have VIEW_APP access for an application + then the corresponding report will be filtered as described in + {@link #getApplicationReport(ApplicationId)}. +

    + + @param applicationTypes set of application types you are interested in + @param applicationStates set of application states you are interested in + @param applicationTags set of application tags you are interested in + @return a list of reports of applications + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + + + + Get a report (ApplicationReport) of Applications matching the given users, + queues, application types and application states in the cluster. If any of + the params is set to null, it is not used when filtering. +

    + +

    + If the user does not have VIEW_APP access for an application + then the corresponding report will be filtered as described in + {@link #getApplicationReport(ApplicationId)}. +

    + + @param queues set of queues you are interested in + @param users set of users you are interested in + @param applicationTypes set of application types you are interested in + @param applicationStates set of application states you are interested in + @return a list of reports of applications + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + Get a list of ApplicationReports that match the given + {@link GetApplicationsRequest}. +

    + +

    + If the user does not have VIEW_APP access for an application + then the corresponding report will be filtered as described in + {@link #getApplicationReport(ApplicationId)}. +

    + + @param request the request object to get the list of applications. + @return The list of ApplicationReports that match the request + @throws YarnException Exception specific to YARN. + @throws IOException Exception mostly related to connection errors.]]> +
    +
    + + + + + + Get metrics ({@link YarnClusterMetrics}) about the cluster. +

    + + @return cluster metrics + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + Get a report of nodes ({@link NodeReport}) in the cluster. +

    + + @param states The {@link NodeState}s to filter on. If no filter states are + given, nodes in all states will be returned. + @return A list of node reports + @throws YarnException + @throws IOException]]> +
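+ For example, a sketch that lists only nodes currently in the RUNNING state:
+ {@code
+   List<NodeReport> liveNodes = yarnClient.getNodeReports(NodeState.RUNNING);
+   for (NodeReport node : liveNodes) {
+     System.out.println(node.getNodeId() + " " + node.getCapability());
+   }
+ }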
    +
    + + + + + + + Get a delegation token so as to be able to talk to YARN using those tokens. + + @param renewer + Address of the renewer who can renew these tokens when needed by + securely talking to YARN. + @return a delegation token ({@link Token}) that can be used to + talk to YARN + @throws YarnException + @throws IOException]]> + + + + + + + + + Get information ({@link QueueInfo}) about a given queue. +

    + + @param queueName + Name of the queue whose information is needed + @return queue information + @throws YarnException + in case of errors or if YARN rejects the request due to + access-control restrictions. + @throws IOException]]> +
    +
    + + + + + + Get information ({@link QueueInfo}) about all queues, recursively if there + is a hierarchy +

    + + @return a list of queue-information for all queues + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + Get information ({@link QueueInfo}) about top level queues. +

    + + @return a list of queue-information for all the top-level queues + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + Get information ({@link QueueInfo}) about all the immediate children queues + of the given queue +

    + + @param parent + Name of the queue whose child-queues' information is needed + @return a list of queue-information for all queues who are direct children + of the given parent queue. + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + Get information about acls for current user on all the + existing queues. +

    + + @return a list of queue acls ({@link QueueUserACLInfo}) for + current user + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + Get a report of the given ApplicationAttempt. +

    + +

    + In secure mode, YARN verifies access to the application, queue + etc. before accepting the request. +

    + + @param applicationAttemptId + {@link ApplicationAttemptId} of the application attempt that needs + a report + @return application attempt report + @throws YarnException + @throws ApplicationAttemptNotFoundException if application attempt + not found + @throws IOException]]> +
    +
    + + + + + + + Get a report of all (ApplicationAttempts) of Application in the cluster. +

    + + @param applicationId application id of the app + @return a list of reports for all application attempts for specified + application. + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + Get a report of the given Container. +

    + +

    + In secure mode, YARN verifies access to the application, queue + etc. before accepting the request. +

    + + @param containerId + {@link ContainerId} of the container that needs a report + @return container report + @throws YarnException + @throws ContainerNotFoundException if container not found. + @throws IOException]]> +
    +
    + + + + + + + Get a report of all (Containers) of ApplicationAttempt in the cluster. +

    + + @param applicationAttemptId application attempt id + @return a list of reports of all containers for specified application + attempts + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + + Attempts to move the given application to the given queue. +

    + + @param appId + Application to move. + @param queue + Queue to place it in to. + @throws YarnException + @throws IOException]]> +
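+ For example (a sketch; the target queue name is a placeholder and must exist in the scheduler configuration):
+ {@code
+   yarnClient.moveApplicationAcrossQueues(appId, "preprod");
+ }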
    +
    + + + + + + Obtain a {@link GetNewReservationResponse} for a new reservation, + which contains the {@link ReservationId} object. +

    + + @return The {@link GetNewReservationResponse} containing a new + {@link ReservationId} object. + @throws YarnException if reservation cannot be created. + @throws IOException if reservation cannot be created.]]> +
    +
    + + + + + + + The interface used by clients to submit a new reservation to the + {@code ResourceManager}. +

    + +

    + The client packages all details of its request in a + {@link ReservationSubmissionRequest} object. This contains information + about the amount of capacity, temporal constraints, and gang needs. + Furthermore, the reservation might be composed of multiple stages, with + ordering dependencies among them. +

    + +

+ In order to respond, a new admission control component in the + {@code ResourceManager} performs an analysis of the resources that have + been committed over the period of time the user is requesting, verifies that + the user's requests can be fulfilled, and that they respect a sharing policy + (e.g., {@code CapacityOverTimePolicy}). Once it has positively determined + that the ReservationRequest is satisfiable, the {@code ResourceManager} + answers with a {@link ReservationSubmissionResponse} that includes a + {@link ReservationId}. Upon failure to find a valid allocation, the response + is an exception with a message detailing the reason for the failure. +&#013;

    + +

+ The semantics guarantee that the {@link ReservationId} returned + corresponds to a valid reservation existing in the time range requested by + the user. The amount of capacity dedicated to such a reservation can vary + over time, depending on the allocation that has been determined. But it is + guaranteed to satisfy all the constraints expressed by the user in the + {@link ReservationDefinition}. +&#013;

    + + @param request request to submit a new Reservation + @return response contains the {@link ReservationId} on accepting the + submission + @throws YarnException if the reservation cannot be created successfully + @throws IOException]]> +
    +
+ + + + + + + The interface used by clients to update an existing Reservation. This is + referred to as a re-negotiation process, in which a user that has + previously submitted a Reservation can modify its definition. +&#013;

    + +

+ The allocation is attempted by virtually substituting all previous + allocations related to this Reservation with new ones that satisfy the new + {@link ReservationDefinition}. Upon success, the previous allocation is + atomically substituted by the new one, and on failure (i.e., if the system + cannot find a valid allocation for the updated request), the previous + allocation remains valid. +&#013;

    + + @param request to update an existing Reservation (the + {@link ReservationUpdateRequest} should refer to an existing valid + {@link ReservationId}) + @return response empty on successfully updating the existing reservation + @throws YarnException if the request is invalid or reservation cannot be + updated successfully + @throws IOException]]> +
    +
    + + + + + + + The interface used by clients to remove an existing Reservation. +

    + + @param request to remove an existing Reservation (the + {@link ReservationDeleteRequest} should refer to an existing valid + {@link ReservationId}) + @return response empty on successfully deleting the existing reservation + @throws YarnException if the request is invalid or reservation cannot be + deleted successfully + @throws IOException]]> +
    +
    + + + + + + + The interface used by clients to get the list of reservations in a plan. + The reservationId will be used to search for reservations to list if it is + provided. Otherwise, it will select active reservations within the + startTime and endTime (inclusive). +

    + + @param request to list reservations in a plan. Contains fields to select + String queue, ReservationId reservationId, long startTime, + long endTime, and a bool includeReservationAllocations. + + queue: Required. Cannot be null or empty. Refers to the + reservable queue in the scheduler that was selected when + creating a reservation submission + {@link ReservationSubmissionRequest}. + + reservationId: Optional. If provided, other fields will + be ignored. + + startTime: Optional. If provided, only reservations that + end after the startTime will be selected. This defaults + to 0 if an invalid number is used. + + endTime: Optional. If provided, only reservations that + start on or before endTime will be selected. This defaults + to Long.MAX_VALUE if an invalid number is used. + + includeReservationAllocations: Optional. Flag that + determines whether the entire reservation allocations are + to be returned. Reservation allocations are subject to + change in the event of re-planning as described by + {@link ReservationDefinition}. + + @return response that contains information about reservations that are + being searched for. + @throws YarnException if the request is invalid + @throws IOException if the request failed otherwise]]> +
    +
    + + + + + + The interface used by client to get node to labels mappings in existing cluster +

    + + @return node to labels mappings + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + The interface used by client to get labels to nodes mapping + in existing cluster +

+ + @return labels to nodes mappings + @throws YarnException + @throws IOException]]> +&#013;
    +
    + + + + + + + The interface used by client to get labels to nodes mapping + for specified labels in existing cluster +

    + + @param labels labels for which labels to nodes mapping has to be retrieved + @return labels to nodes mappings for specific labels + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + The interface used by client to get node labels in the cluster +

    + + @return cluster node labels collection + @throws YarnException when there is a failure in + {@link ApplicationClientProtocol} + @throws IOException when there is a failure in + {@link ApplicationClientProtocol}]]> +
    +
    + + + + + + + + The interface used by client to set priority of an application +

    + @param applicationId + @param priority + @return updated priority of an application. + @throws YarnException + @throws IOException]]> +
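+ For example (a sketch; the priority value is a placeholder):
+ {@code
+   Priority updated = yarnClient.updateApplicationPriority(appId, Priority.newInstance(10));
+ }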
    +
    + + + + + + + + Signal a container identified by given ID. +

    + + @param containerId + {@link ContainerId} of the container that needs to be signaled + @param command the signal container command + @throws YarnException + @throws IOException]]> +
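+ For example, requesting a thread dump from a running container (a sketch):
+ {@code
+   yarnClient.signalToContainer(containerId, SignalContainerCommand.OUTPUT_THREAD_DUMP);
+ }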
    +
    + + + + + + + + + + + Get the resource profiles available in the RM. +

    + @return a Map of the resource profile names to their capabilities + @throws YARNFeatureNotEnabledException if resource-profile is disabled + @throws YarnException if any error happens inside YARN + @throws IOException in case of other errors]]> +
    +
    + + + + + + + Get the details of a specific resource profile from the RM. +

+ @param profile the profile name + @return resource profile name with its capabilities + @throws YARNFeatureNotEnabledException if resource-profile is disabled + @throws YarnException if any error happens inside YARN + @throws IOException in case of other errors]]> +&#013;
    +
    + + + + + + Get available resource types supported by RM. +

+ @return list of supported resource types with detailed information + @throws YarnException if any issue happens inside YARN + @throws IOException in case of other errors]]> +&#013;
    +
    + + + + + + The interface used by client to get node attributes in the cluster. +

    + + @return cluster node attributes collection + @throws YarnException when there is a failure in + {@link ApplicationClientProtocol} + @throws IOException when there is a failure in + {@link ApplicationClientProtocol}]]> +
    +
    + + + + + + + The interface used by client to get mapping of AttributeKey to associated + NodeToAttributeValue list for specified node attributeKeys in the cluster. +

    + + @param attributes AttributeKeys for which associated NodeToAttributeValue + mapping value has to be retrieved. If empty or null is set then + will return mapping for all attributeKeys in the cluster + @return mapping of AttributeKey to List of associated + NodeToAttributeValue's. + @throws YarnException + @throws IOException]]> +
    +
    + + + + + + + The interface used by client to get all node to attribute mapping in + existing cluster. +

+ + @param hostNames HostNames for which host to attributes mapping has to + be retrieved. If empty or null is set then all node to attributes + mappings in the cluster will be returned. + @return Node to attribute mappings + @throws YarnException + @throws IOException]]> +&#013;
    +
    +
    + + + + + + + + + + + +
    + + + + + + + + + + + + + + + + Create a new instance of AMRMClientAsync.

    + + @param intervalMs heartbeat interval in milliseconds between AM and RM + @param callbackHandler callback handler that processes responses from + the ResourceManager]]> +
    +
    + + + + + + Create a new instance of AMRMClientAsync.

    + + @param client the AMRMClient instance + @param intervalMs heartbeat interval in milliseconds between AM and RM + @param callbackHandler callback handler that processes responses from + the ResourceManager]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + RegisterApplicationMasterResponse + @throws YarnException + @throws IOException]]> + + + + + + + + + + + + + + + + allocate + @param req Resource request]]> + + + + + + + + + + + + + allocate. + Any previous pending resource change request of the same container will be + removed. + + Application that calls this method is expected to maintain the + Containers that are returned from previous successful + allocations or resource changes. By passing in the existing container and a + target resource capability to this method, the application requests the + ResourceManager to change the existing resource allocation to the target + resource allocation. + + @deprecated use + {@link #requestContainerUpdate(Container, UpdateContainerRequest)} + + @param container The container returned from the last successful resource + allocation or resource change + @param capability The target resource capability of the container]]> + + + + + + + allocate. + Any previous pending update request of the same container will be + removed. + + @param container The container returned from the last successful resource + allocation or update + @param updateContainerRequest The UpdateContainerRequest.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + check to return true for each 1000 ms. + See also {@link #waitFor(java.util.function.Supplier, int)} + and {@link #waitFor(java.util.function.Supplier, int, int)} + @param check the condition for which it should wait]]> + + + + + + + + check to return true for each + checkEveryMillis ms. + See also {@link #waitFor(java.util.function.Supplier, int, int)} + @param check user defined checker + @param checkEveryMillis interval to call check]]> + + + + + + + + + check to return true for each + checkEveryMillis ms. In the main loop, this method will log + the message "waiting in main loop" for each logInterval times + iteration to confirm the thread is alive. + @param check user defined checker + @param checkEveryMillis interval to call check + @param logInterval interval to log for each]]> + + + + + + + + + + AMRMClientAsync handles communication with the ResourceManager + and provides asynchronous updates on events such as container allocations and + completions. It contains a thread that sends periodic heartbeats to the + ResourceManager. + + It should be used by implementing a CallbackHandler: +
    + {@code
    + class MyCallbackHandler extends AMRMClientAsync.AbstractCallbackHandler {
    +   public void onContainersAllocated(List containers) {
    +     [run tasks on the containers]
    +   }
    +
    +   public void onContainersUpdated(List containers) {
    +     [determine if resource allocation of containers have been increased in
    +      the ResourceManager, and if so, inform the NodeManagers to increase the
    +      resource monitor/enforcement on the containers]
    +   }
    +
    +   public void onContainersCompleted(List statuses) {
    +     [update progress, check whether app is done]
    +   }
    +   
    +   public void onNodesUpdated(List updated) {}
    +   
    +   public void onReboot() {}
    + }
    + }
    + 
    + + The client's lifecycle should be managed similarly to the following: + +
    + {@code
    + AMRMClientAsync asyncClient = 
+     createAMRMClientAsync(appAttId, 1000, new MyCallbackHandler());
    + asyncClient.init(conf);
    + asyncClient.start();
    + RegisterApplicationMasterResponse response = asyncClient
    +    .registerApplicationMaster(appMasterHostname, appMasterRpcPort,
    +       appMasterTrackingUrl);
    + asyncClient.addContainerRequest(containerRequest);
    + [... wait for application to complete]
    + asyncClient.unregisterApplicationMaster(status, appMsg, trackingUrl);
    + asyncClient.stop();
    + }
    + 
    ]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Update the resources of a container.

    + +

    The ApplicationMaster or other applications that use the + client must provide the details of the container, including the Id and + the target resource encapsulated in the updated container token via + {@link Container}. +

    + + @param container the container with updated token.]]> +
    +
    + + + + + + Re-Initialize the Container.

    + + @param containerId the Id of the container to Re-Initialize. + @param containerLaunchContex the updated ContainerLaunchContext. + @param autoCommit commit re-initialization automatically ?]]> +
    +
    + + + + Restart the specified container.

    + + @param containerId the Id of the container to restart.]]> +
    +
    + + + + Rollback last reInitialization of the specified container.

+ + @param containerId the Id of the container to rollback.]]> +&#013;
    +
    + + + + Commit last reInitialization of the specified container.

    + + @param containerId the Id of the container to commit reInitialize.]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + NMClientAsync handles communication with all the NodeManagers + and provides asynchronous updates on getting responses from them. It + maintains a thread pool to communicate with individual NMs where a number of + worker threads process requests to NMs by using {@link NMClientImpl}. The max + size of the thread pool is configurable through + {@link YarnConfiguration#NM_CLIENT_ASYNC_THREAD_POOL_MAX_SIZE}. + + It should be used in conjunction with a CallbackHandler. For example + +
    + {@code
    + class MyCallbackHandler extends NMClientAsync.AbstractCallbackHandler {
    +   public void onContainerStarted(ContainerId containerId,
    +       Map allServiceResponse) {
    +     [post process after the container is started, process the response]
    +   }
    +
    +   public void onContainerResourceIncreased(ContainerId containerId,
    +       Resource resource) {
    +     [post process after the container resource is increased]
    +   }
    +
    +   public void onContainerStatusReceived(ContainerId containerId,
    +       ContainerStatus containerStatus) {
    +     [make use of the status of the container]
    +   }
    +
    +   public void onContainerStopped(ContainerId containerId) {
    +     [post process after the container is stopped]
    +   }
    +
    +   public void onStartContainerError(
    +       ContainerId containerId, Throwable t) {
    +     [handle the raised exception]
    +   }
    +
    +   public void onGetContainerStatusError(
    +       ContainerId containerId, Throwable t) {
    +     [handle the raised exception]
    +   }
    +
    +   public void onStopContainerError(
    +       ContainerId containerId, Throwable t) {
    +     [handle the raised exception]
    +   }
    + }
    + }
    + 
    + + The client's life-cycle should be managed like the following: + +
    + {@code
    + NMClientAsync asyncClient = 
+     NMClientAsync.createNMClientAsync(new MyCallbackHandler());
    + asyncClient.init(conf);
    + asyncClient.start();
    + asyncClient.startContainer(container, containerLaunchContext);
    + [... wait for container being started]
    + asyncClient.getContainerStatus(container.getId(), container.getNodeId(),
    +     container.getContainerToken());
    + [... handle the status in the callback instance]
    + asyncClient.stopContainer(container.getId(), container.getNodeId(),
    +     container.getContainerToken());
    + [... wait for container being stopped]
    + asyncClient.stop();
    + }
    + 
    ]]> +
    +
    + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Common_3.2.4.xml b/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Common_3.2.4.xml new file mode 100644 index 00000000000..5ca1716cca2 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Common_3.2.4.xml @@ -0,0 +1,3964 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Type of proxy. + @return Proxy to the ResourceManager for the specified client protocol. + @throws IOException]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Create a new instance of AppAdminClient. +

    + + @param appType application type + @param conf configuration + @return app admin client]]> +
    +
    + + + + + + + + + + Launch a new YARN application. +

    + + @param fileName specification of application + @param appName name of the application + @param lifetime lifetime of the application + @param queue queue of the application + @return exit code + @throws IOException IOException + @throws YarnException exception in client or server]]> +
    +
    + + + + + + + Stop a YARN application (attempt to stop gracefully before killing the + application). In the case of a long-running service, the service may be + restarted later. +

    + + @param appName the name of the application + @return exit code + @throws IOException IOException + @throws YarnException exception in client or server]]> +
    +
    + + + + + + + Start a YARN application from a previously saved specification. In the + case of a long-running service, the service must have been previously + launched/started and then stopped, or previously saved but not started. +

    + + @param appName the name of the application + @return exit code + @throws IOException IOException + @throws YarnException exception in client or server]]> +
    +
    + + + + + + + + + + Save the specification for a YARN application / long-running service. + The application may be started later. +

    + + @param fileName specification of application to save + @param appName name of the application + @param lifetime lifetime of the application + @param queue queue of the application + @return exit code + @throws IOException IOException + @throws YarnException exception in client or server]]> +
    +
    + + + + + + + Remove the specification and all application data for a YARN application. + The application cannot be running. +

    + + @param appName the name of the application + @return exit code + @throws IOException IOException + @throws YarnException exception in client or server]]> +
    +
    + + + + + + + + Change the number of running containers for a component of a YARN + application / long-running service. +

    + + @param appName the name of the application + @param componentCounts map of component name to new component count or + amount to change existing component count (e.g. + 5, +5, -5) + @return exit code + @throws IOException IOException + @throws YarnException exception in client or server]]> +
    +
    + + + + + + + Upload AM dependencies to HDFS. This makes future application launches + faster since the dependencies do not have to be uploaded on each launch. +

    + + @param destinationFolder + an optional HDFS folder where dependency tarball will be uploaded + @return exit code + @throws IOException + IOException + @throws YarnException + exception in client or server]]> +
    +
    + + + + + + + Get detailed app specific status string for a YARN application. +

    + + @param appIdOrName appId or appName + @return status string + @throws IOException IOException + @throws YarnException exception in client or server]]> +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + + + + + + + + + + + + + + + + Send the information of a number of conceptual entities to the timeline + server. It is a blocking API. The method will not return until it gets the + response from the timeline server. +

    + + @param entities + the collection of {@link TimelineEntity} + @return the error information if the sent entities are not correctly stored + @throws IOException if there are I/O errors + @throws YarnException if entities are incomplete/invalid]]> +
    +
    + + + + + + + + + Send the information of a number of conceptual entities to the timeline + server. It is a blocking API. The method will not return until it gets the + response from the timeline server. + + This API is only for timeline service v1.5 +

    + + @param appAttemptId {@link ApplicationAttemptId} + @param groupId {@link TimelineEntityGroupId} + @param entities + the collection of {@link TimelineEntity} + @return the error information if the sent entities are not correctly stored + @throws IOException if there are I/O errors + @throws YarnException if entities are incomplete/invalid]]> +
    +
    + + + + + + + Send the information of a domain to the timeline server. It is a + blocking API. The method will not return until it gets the response from + the timeline server. +

    + + @param domain + an {@link TimelineDomain} object + @throws IOException + @throws YarnException]]> +
    +
    + + + + + + + + Send the information of a domain to the timeline server. It is a + blocking API. The method will not return until it gets the response from + the timeline server. + + This API is only for timeline service v1.5 +

    + + @param domain + an {@link TimelineDomain} object + @param appAttemptId {@link ApplicationAttemptId} + @throws IOException + @throws YarnException]]> +
    +
    + + + + + + + Get a delegation token so as to be able to talk to the timeline server in a + secure way. +

    + + @param renewer + Address of the renewer who can renew these tokens when needed by + securely talking to the timeline server + @return a delegation token ({@link Token}) that can be used to talk to the + timeline server + @throws IOException + @throws YarnException]]> +
    +
    + + + + + + + Renew a timeline delegation token. +

    + + @param timelineDT + the delegation token to renew + @return the new expiration time + @throws IOException + @throws YarnException]]> +
    +
    + + + + + + + Cancel a timeline delegation token. +

    + + @param timelineDT + the delegation token to cancel + @throws IOException + @throws YarnException]]> +
    +
    + + + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + parameterized event of type T]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + InputStream to be checksumed + @return the message digest of the input stream + @throws IOException]]> + + + + + + + + + + + + SharedCacheChecksum object based on the configurable + algorithm implementation + (see yarn.sharedcache.checksum.algo.impl) + + @return SharedCacheChecksum object]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + The object type on which this state machine operates. + @param The state of the entity. + @param The external eventType to be handled. + @param The event object.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + When {@link #limit} would be reached on append, past messages will be + truncated from head, and a header telling the user about truncation will be + prepended, with ellipses in between header and messages. +

    + Note that header and ellipses are not counted against {@link #limit}. +

    + An example: + +

    + {@code
    +   // At the beginning it's an empty string
    +   final Appendable shortAppender = new BoundedAppender(80);
    +   // The whole message fits into limit
    +   shortAppender.append(
    +       "message1 this is a very long message but fitting into limit\n");
    +   // The first message is truncated, the second not
    +   shortAppender.append("message2 this is shorter than the previous one\n");
    +   // The first message is deleted, the second truncated, the third
    +   // preserved
    +   shortAppender.append("message3 this is even shorter message, maybe.\n");
    +   // The first two are deleted, the third one truncated, the last preserved
    +   shortAppender.append("message4 the shortest one, yet the greatest :)");
    +   // Current contents are like this:
    +   // Diagnostic messages truncated, showing last 80 chars out of 199:
    +   // ...s is even shorter message, maybe.
    +   // message4 the shortest one, yet the greatest :)
    + }
    + 
    +

    + Note that null values are {@link #append(CharSequence) append}ed + just like in {@link StringBuilder#append(CharSequence) original + implementation}. +

    + Note that this class is not thread safe.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Server_Common_3.2.4.xml b/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Server_Common_3.2.4.xml new file mode 100644 index 00000000000..d06cc2fe00f --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Server_Common_3.2.4.xml @@ -0,0 +1,1412 @@ + + + + + + + + + + + + + + + + + + + + + + + + true if the node is healthy, else false]]> + + + + + diagnostic health report of the node. + @return diagnostic health report of the node]]> + + + + + last timestamp at which the health report was received. + @return last timestamp at which the health report was received]]> + + + + + It includes information such as: +

      +
    • + An indicator of whether the node is healthy, as determined by the + health-check script. +
    • +
    • The previous time at which the health status was reported.
    • +
    • A diagnostic report on the health status.
    • +
    + + @see NodeReport + @see ApplicationClientProtocol#getClusterNodes(org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesRequest)]]> +
    +
    + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + type of the proxy + @return the proxy instance + @throws IOException if fails to create the proxy]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + true if the iteration has more elements.]]> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +