<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2023-02-23
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.4.0-SNAPSHOT – Using FPGA On YARN</title>
<style type="text/css" media="all">
  @import url("./css/maven-base.css");
  @import url("./css/maven-theme.css");
  @import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print"/>
<meta name="Date-Revision-yyyymmdd" content="20230223"/>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
</head>
<body class="composite">
<div id="bodyColumn">
<div id="contentBox">
<!-- -
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>Using FPGA On YARN</h1>
<h1>Prerequisites</h1>
<ul>
<li>The FPGA resource is supported by YARN, but only the "IntelFpgaOpenclPlugin" is shipped for now</li>
<li>YARN NodeManagers have to be pre-installed with vendor drivers and configured with the needed environment variables</li>
<li>Docker containers are not yet supported</li>
</ul>
<h1>Configs</h1><section>
<h2><a name="FPGA_scheduling"></a>FPGA scheduling</h2>
<p>In <code>resource-types.xml</code></p>
<p>Add the following property:</p>
<div class="source">
<div class="source">
<pre><configuration>
  <property>
    <name>yarn.resource-types</name>
    <value>yarn.io/fpga</value>
  </property>
</configuration>
</pre></div></div>
<p>For the <code>Capacity Scheduler</code>, <code>DominantResourceCalculator</code> MUST be configured to enable FPGA scheduling/isolation. Use the following property to configure <code>DominantResourceCalculator</code> (in <code>capacity-scheduler.xml</code>):</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th> Property </th>
<th> Default value </th></tr>
</thead><tbody>
<tr class="b">
<td> yarn.scheduler.capacity.resource-calculator </td>
<td> org.apache.hadoop.yarn.util.resource.DominantResourceCalculator </td></tr>
</tbody>
</table>
</section><section>
<h2><a name="FPGA_Isolation"></a>FPGA Isolation</h2><section>
<h3><a name="In_yarn-site.xml"></a>In <code>yarn-site.xml</code></h3>
<div class="source">
<div class="source">
<pre><property>
  <name>yarn.nodemanager.resource-plugins</name>
  <value>yarn.io/fpga</value>
</property>
</pre></div></div>
<p>This enables the FPGA isolation module on the NodeManager side.</p>
<p>By default, YARN will automatically detect and configure FPGAs when the above config is set. The following configs need to be set in <code>yarn-site.xml</code> only if the admin has specialized requirements.</p>
<p><b>1) Allowed FPGA Devices</b></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th> Property </th>
<th> Default value </th></tr>
</thead><tbody>
<tr class="b">
<td> yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices </td>
<td> auto </td></tr>
</tbody>
</table>
<p>Specifies the FPGA devices that can be managed by the YARN NodeManager, separated by commas. The number of FPGA devices will be reported to the RM to make scheduling decisions. Set to auto (the default) to let YARN automatically discover FPGA resources from the system.</p>
<p>Manually specify FPGA devices if the admin only wants a subset of the FPGA devices to be managed by YARN. At present, since only one major number can be configured in c-e.cfg, FPGA devices are identified by their minor device numbers. For Intel devices, a common approach to get the minor device number of an FPGA is to run "aocl diagnose" and check the uevent with the device name.</p>
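<p>For example, to let YARN manage only the FPGA devices with minor numbers 0 and 1 (an illustrative sketch; the actual minor numbers depend on the hardware), set the following in <code>yarn-site.xml</code>:</p>
<div class="source">
<div class="source">
<pre><property>
  <name>yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices</name>
  <!-- Illustrative minor device numbers; adjust to the actual hardware -->
  <value>0,1</value>
</property>
</pre></div></div>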
<p><b>2) Executable to discover FPGAs</b></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th> Property </th>
<th> Default value </th></tr>
</thead><tbody>
<tr class="b">
<td> yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables </td>
<td> </td></tr>
</tbody>
</table>
<p>When yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices=auto is specified, the YARN NodeManager needs to run an FPGA discovery binary (currently only IntelFpgaOpenclPlugin is supported) to get FPGA information. When the value is empty (the default), the YARN NodeManager will try to locate the discovery executable according to the vendor plugin's preference. For instance, "IntelFpgaOpenclPlugin" will try to find "aocl" in the directory taken from the environment variable "ALTERAOCLSDKROOT".</p>
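<p>For example, to point the NodeManager at a specific discovery binary instead of relying on auto-location (the path below is hypothetical and depends on where the Intel SDK is installed), set the following in <code>yarn-site.xml</code>:</p>
<div class="source">
<div class="source">
<pre><property>
  <name>yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables</name>
  <!-- Hypothetical install location of the Intel "aocl" binary -->
  <value>/opt/intelFPGA_pro/17.0/hld/bin/aocl</value>
</property>
</pre></div></div>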
<p><b>3) FPGA plugin to use</b></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th> Property </th>
<th> Default value </th></tr>
</thead><tbody>
<tr class="b">
<td> yarn.nodemanager.resource-plugins.fpga.vendor-plugin.class </td>
<td> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin </td></tr>
</tbody>
</table>
<p>For now, only the Intel OpenCL SDK for FPGA is supported. The IP program (.aocx file) running on the FPGA should be written in OpenCL for the Intel platform.</p>
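<p>To set the plugin explicitly (a sketch that simply restates the default class from the table above), add the following to <code>yarn-site.xml</code>:</p>
<div class="source">
<div class="source">
<pre><property>
  <name>yarn.nodemanager.resource-plugins.fpga.vendor-plugin.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin</value>
</property>
</pre></div></div>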
<p><b>4) CGroups mount</b> FPGA isolation uses the CGroups <a class="externalLink" href="https://www.kernel.org/doc/Documentation/cgroup-v1/devices.txt">devices controller</a> to do per-FPGA device isolation. The following config should be added to <code>yarn-site.xml</code> to automatically mount the CGroups devices subsystem; otherwise the admin has to manually create the devices subfolder in order to use this feature.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th> Property </th>
<th> Default value </th></tr>
</thead><tbody>
<tr class="b">
<td> yarn.nodemanager.linux-container-executor.cgroups.mount </td>
<td> true </td></tr>
</tbody>
</table>
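<p>For example (a minimal sketch mirroring the property above), in <code>yarn-site.xml</code>:</p>
<div class="source">
<div class="source">
<pre><property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
  <value>true</value>
</property>
</pre></div></div>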
<p>For more details on YARN CGroups configuration, please refer to <a class="externalLink" href="https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups with YARN</a>.</p></section><section>
<h3><a name="In_container-executor.cfg"></a>In <code>container-executor.cfg</code></h3>
<p>In general, the following config needs to be added to <code>container-executor.cfg</code>. The fpga.major-device-number and fpga.allowed-device-minor-numbers entries are optional and restrict the allowed devices.</p>
<div class="source">
<div class="source">
<pre>[fpga]
module.enabled=true
fpga.major-device-number=## Major device number of the FPGA, 246 by default. Setting this explicitly is strongly recommended
fpga.allowed-device-minor-numbers=## Comma-separated allowed minor device numbers; empty means all FPGA devices are managed by YARN
</pre></div></div>
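<p>A filled-in sketch, assuming the default major number 246 and two FPGA devices with minor numbers 0 and 1 (adjust to the actual hardware):</p>
<div class="source">
<div class="source">
<pre>[fpga]
module.enabled=true
# Illustrative values; adjust the major/minor numbers to the actual hardware
fpga.major-device-number=246
fpga.allowed-device-minor-numbers=0,1
</pre></div></div>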
<p>When users need to run FPGA applications in a non-Docker environment:</p>
<div class="source">
<div class="source">
<pre>[cgroups]
# Root of system cgroups (cannot be empty or "/")
root=/cgroup
# Parent folder of YARN's CGroups
yarn-hierarchy=yarn
</pre></div></div>
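<p>The matching <code>yarn-site.xml</code> CGroups settings would then look roughly like the following (a sketch using the standard <code>yarn.nodemanager.linux-container-executor.cgroups.*</code> properties; see the CGroups guide linked above for the full set of options):</p>
<div class="source">
<div class="source">
<pre><!-- Values mirror the container-executor.cfg [cgroups] section above -->
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
  <value>yarn</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
  <value>/cgroup</value>
</property>
</pre></div></div>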
<h1>Use it</h1></section></section><section>
<h2><a name="Distributed-shell_.2B_FPGA"></a>Distributed-shell + FPGA</h2>
<p>Distributed shell currently supports specifying additional resource types other than memory and vcores.</p>
<p>Run the distributed shell without using a Docker container (the .bashrc contains some SDK-related environment variables):</p>
<div class="source">
<div class="source">
<pre>yarn jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
  -jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
  -shell_command "source /home/yarn/.bashrc && aocl diagnose" \
  -container_resources memory-mb=2048,vcores=2,yarn.io/fpga=1 \
  -num_containers 1
</pre></div></div>
<p>You should be able to see output like:</p>
<div class="source">
<div class="source">
<pre>aocl diagnose: Running diagnose from /home/fpga/intelFPGA_pro/17.0/hld/board/nalla_pcie/linux64/libexec
------------------------- acl0 -------------------------
Vendor: Nallatech ltd

Phys Dev Name   Status   Information

aclnalla_pcie0  Passed   nalla_pcie (aclnalla_pcie0)
                         PCIe dev_id = 2494, bus:slot.func = 02:00.00, Gen3 x8
                         FPGA temperature = 54.4 degrees C.
                         Total Card Power Usage = 32.4 Watts.
                         Device Power Usage = 0.0 Watts.

DIAGNOSTIC_PASSED
---------------------------------------------------------
</pre></div></div>
<p><b>Specify an IP that YARN should configure before launching the container</b></p>
<p>For the FPGA resource, the container can have an environment variable "REQUESTED_FPGA_IP_ID" to make YARN download and flash an IP for it before launch. For instance, REQUESTED_FPGA_IP_ID="matrix_mul" will lead to a search in the container's local directory for an IP file (".aocx" file) whose name contains "matrix_mul" (the application should distribute it first). We only support flashing one IP for all devices for now. If the user doesn't set this environment variable, we assume that the user's application can find the IP file by itself. Note that downloading and reprogramming the IP in advance in YARN is not strictly necessary, because the OpenCL application may find the IP file and reprogram the device on the fly. But having YARN do this for the containers achieves the quickest re-programming path.</p>
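<p>For instance, with the distributed shell the environment variable can be passed via its <code>-shell_env</code> option (a sketch; "matrix_mul" and the application binary are illustrative, and the ".aocx" file must be distributed with the job):</p>
<div class="source">
<div class="source">
<pre>yarn jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
  -jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
  -shell_env REQUESTED_FPGA_IP_ID=matrix_mul \
  -shell_command "source /home/yarn/.bashrc && ./my_opencl_app" \
  -container_resources memory-mb=2048,vcores=2,yarn.io/fpga=1 \
  -num_containers 1
</pre></div></div>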
</section>
</div>
</div>
</body>
</html>