<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
 | Generated by Apache Maven Doxia at 2023-05-26
 | Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.4.0-SNAPSHOT – HDFS Support for Multihomed Networks</title>
<style type="text/css" media="all">
  @import url("./css/maven-base.css");
  @import url("./css/maven-theme.css");
  @import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print"/>
<meta name="Date-Revision-yyyymmdd" content="20230526"/>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
</head>
<body class="composite">
<div id="bodyColumn">
<div id="contentBox">
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>HDFS Support for Multihomed Networks</h1>
<p>This document is targeted at cluster administrators deploying <code>HDFS</code> in multihomed networks. Similar support for <code>YARN</code>/<code>MapReduce</code> is work in progress and will be documented when available.</p>
<ul>
<li><a href="#Multihoming_Background">Multihoming Background</a></li>
<li><a href="#Fixing_Hadoop_Issues_In_Multihomed_Environments">Fixing Hadoop Issues In Multihomed Environments</a>
<ul>
<li><a href="#Ensuring_HDFS_Daemons_Bind_All_Interfaces">Ensuring HDFS Daemons Bind All Interfaces</a></li>
<li><a href="#Clients_use_Hostnames_when_connecting_to_DataNodes">Clients use Hostnames when connecting to DataNodes</a></li>
<li><a href="#DataNodes_use_HostNames_when_connecting_to_other_DataNodes">DataNodes use HostNames when connecting to other DataNodes</a></li></ul></li>
<li><a href="#Multihoming_and_Hadoop_Security">Multihoming and Hadoop Security</a>
<ul>
<li><a href="#Hostname_Lookup">Hostname Lookup</a></li></ul></li></ul>
<section>
<h2><a name="Multihoming_Background"></a>Multihoming Background</h2>
<p>In multihomed networks the cluster nodes have more than one network interface. There could be multiple reasons for doing so.</p>
<ol style="list-style-type: decimal">
<li>
<p><b>Security</b>: Security requirements may dictate that intra-cluster traffic be confined to a different network than the network used to transfer data in and out of the cluster.</p>
</li>
<li>
<p><b>Performance</b>: Intra-cluster traffic may use one or more high-bandwidth interconnects like Fibre Channel, InfiniBand or 10GbE.</p>
</li>
<li>
<p><b>Failover/Redundancy</b>: The nodes may have multiple network adapters connected to a single network to handle network adapter failure.</p>
</li>
</ol>
<p>Note that NIC Bonding (also known as NIC Teaming or Link Aggregation) is a related but separate topic. The following settings are usually not applicable to a NIC bonding configuration, which handles multiplexing and failover transparently while presenting a single ‘logical network’ to applications.</p></section><section>
<h2><a name="Fixing_Hadoop_Issues_In_Multihomed_Environments"></a>Fixing Hadoop Issues In Multihomed Environments</h2><section>
<h3><a name="Ensuring_HDFS_Daemons_Bind_All_Interfaces"></a>Ensuring HDFS Daemons Bind All Interfaces</h3>
<p>By default <code>HDFS</code> endpoints are specified as either hostnames or IP addresses. In either case <code>HDFS</code> daemons will bind to a single IP address, making the daemons unreachable from other networks.</p>
<p>The solution is to have separate settings for server endpoints that force binding to the wildcard IP address <code>INADDR_ANY</code>, i.e. <code>0.0.0.0</code>. Do NOT supply a port number with any of these settings.</p>
<p><b>NOTE:</b> Prefer using hostnames over IP addresses in master/slave configuration files.</p>
<div class="source">
<div class="source">
<pre><property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.namenode.rpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the service RPC server will bind to. If this optional
    address is set, it overrides only the hostname portion of
    dfs.namenode.servicerpc-address. It can also be specified per name node
    or name service for HA/Federation. This is useful for making the name
    node listen on all interfaces by setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the HTTP server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.http-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTP server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.https-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the HTTPS server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.https-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTPS server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>
</pre></div></div>
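<p>As a hedged illustration of how a bind-host setting interacts with its corresponding address setting, the sketch below pairs <code>dfs.namenode.rpc-address</code> with <code>dfs.namenode.rpc-bind-host</code>. The hostname <code>nn1.example.com</code> and port <code>8020</code> are placeholders, not recommendations.</p>
<div class="source">
<div class="source">
<pre><property>
  <name>dfs.namenode.rpc-address</name>
  <value>nn1.example.com:8020</value>
  <!-- The address advertised to clients; it also supplies the RPC port. -->
</property>
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
  <!-- Overrides only the hostname portion above: the RPC server listens on
       all interfaces, while clients keep connecting to nn1.example.com:8020. -->
</property>
</pre></div></div>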
</section><section>
<h3><a name="Clients_use_Hostnames_when_connecting_to_DataNodes"></a>Clients use Hostnames when connecting to DataNodes</h3>
<p>By default <code>HDFS</code> clients connect to DataNodes using the IP address provided by the NameNode. Depending on the network configuration this IP address may be unreachable by the clients. The fix is to let clients perform their own DNS resolution of the DataNode hostname. The following setting enables this behavior.</p>
<div class="source">
<div class="source">
<pre><property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
  <description>Whether clients should use datanode hostnames when
    connecting to datanodes.
  </description>
</property>
</pre></div></div>
</section><section>
<h3><a name="DataNodes_use_HostNames_when_connecting_to_other_DataNodes"></a>DataNodes use HostNames when connecting to other DataNodes</h3>
<p>Rarely, the NameNode-resolved IP address for a DataNode may be unreachable from other DataNodes. The fix is to force DataNodes to perform their own DNS resolution for inter-DataNode connections. The following setting enables this behavior.</p>
<div class="source">
<div class="source">
<pre><property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
  <description>Whether datanodes should use datanode hostnames when
    connecting to other datanodes for data transfer.
  </description>
</property>
</pre></div></div>
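<p>As a combined sketch (an illustration, not an official recipe), the two hostname settings can sit side by side in <code>hdfs-site.xml</code>; the first is read by clients, the second by DataNodes. In both cases DNS (or <code>/etc/hosts</code>) on the resolving side must be able to resolve the DataNode hostnames.</p>
<div class="source">
<div class="source">
<pre><!-- Read by HDFS clients -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
<!-- Read by DataNodes for inter-DataNode transfers -->
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
</property>
</pre></div></div>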
</section></section><section>
<h2><a name="Multihoming_and_Hadoop_Security"></a>Multihoming and Hadoop Security</h2>
<p>Configuring multihomed hosts with <a href="../hadoop-common/SecureMode.html">Hadoop in Secure Mode</a> may require additional configuration.</p><section>
<h3><a name="Hostname_Lookup"></a>Hostname Lookup</h3>
<p>Kerberos principals for Hadoop services are specified using the pattern <code>ServiceName/_HOST@REALM.TLD</code>, e.g. <code>nn/_HOST@REALM.TLD</code>. This allows the same configuration file to be used on all hosts. Services will substitute <code>_HOST</code> in the principal with their own hostname, looked up at runtime.</p>
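<p>For example, a NameNode principal configured with the <code>_HOST</code> pattern might look like the following sketch, where <code>REALM.TLD</code> is a placeholder realm (see <a href="../hadoop-common/SecureMode.html">Hadoop in Secure Mode</a> for the full set of principal settings).</p>
<div class="source">
<div class="source">
<pre><property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>nn/_HOST@REALM.TLD</value>
  <!-- _HOST is replaced at runtime with the hostname the NameNode looks up. -->
</property>
</pre></div></div>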
<p>When nodes are configured to have multiple hostnames in DNS or in <code>/etc/hosts</code> files, a service may look up a different hostname than the one expected by the server. E.g. intra-cluster traffic between two services may be routed over a private interface, but the client service may have looked up its public hostname. Kerberos authentication will fail since the hostname in the principal does not match the IP address over which the traffic arrived.</p>
<p>The following setting (available starting with Apache Hadoop 2.8.0) can be used to control the hostname looked up by the service.</p>
<div class="source">
<div class="source">
<pre><property>
  <name>hadoop.security.dns.interface</name>
  <description>
    The name of the network interface from which the service should determine
    its host name for Kerberos login, e.g. eth2. In a multihomed environment,
    the setting can be used to affect the _HOST substitution in the service
    Kerberos principal. If this configuration value is not set, the service
    will use its default hostname as returned by
    InetAddress.getLocalHost().getCanonicalHostName().
    Most clusters will not require this setting.
  </description>
</property>
</pre></div></div>
<p>Services can also be configured to use a specific DNS server for hostname lookups (rarely required).</p>
<div class="source">
<div class="source">
<pre><property>
  <name>hadoop.security.dns.nameserver</name>
  <description>
    The host name or IP address of the name server (DNS) which a service node
    should use to determine its own host name for Kerberos login. Requires
    hadoop.security.dns.interface.
    Most clusters will not require this setting.
  </description>
</property>
</pre></div></div>
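<p>Since <code>hadoop.security.dns.nameserver</code> requires <code>hadoop.security.dns.interface</code>, the two are typically set together. The following is a hedged sketch; <code>eth2</code> and <code>10.1.2.3</code> are placeholders for your interface and DNS server.</p>
<div class="source">
<div class="source">
<pre><property>
  <name>hadoop.security.dns.interface</name>
  <value>eth2</value>
  <!-- Determine the Kerberos login hostname from this interface. -->
</property>
<property>
  <name>hadoop.security.dns.nameserver</name>
  <value>10.1.2.3</value>
  <!-- DNS server consulted for that lookup; requires the interface setting. -->
</property>
</pre></div></div></section></section>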
</div>
</div>
</body>
</html>