<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
 | Generated by Apache Maven Doxia at 2023-02-22
 | Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop Dynamometer – Dynamometer Guide</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20230222" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="bodyColumn">
<div id="contentBox">
<!---
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<h1>Dynamometer Guide</h1>
<ul>
<li><a href="#Overview">Overview</a></li>
<li><a href="#Requirements">Requirements</a></li>
<li><a href="#Building">Building</a></li>
<li><a href="#Setup_Steps">Setup Steps</a>
<ul>
<li><a href="#Step_1:_Preparing_Requisite_Files">Step 1: Preparing Requisite Files</a></li>
<li><a href="#Step_2:_Prepare_FsImage_Files">Step 2: Prepare FsImage Files</a></li>
<li><a href="#Step_3:_Prepare_a_Hadoop_Binary">Step 3: Prepare a Hadoop Binary</a></li>
<li><a href="#Step_4:_Prepare_Configurations">Step 4: Prepare Configurations</a></li>
<li><a href="#Step_5:_Execute_the_Block_Generation_Job">Step 5: Execute the Block Generation Job</a></li>
<li><a href="#Step_6:_Prepare_Audit_Traces_.28Optional.29">Step 6: Prepare Audit Traces (Optional)</a>
<ul>
<li><a href="#Partitioning_the_Audit_Logs">Partitioning the Audit Logs</a></li></ul></li></ul></li>
<li><a href="#Running_Dynamometer">Running Dynamometer</a>
<ul>
<li><a href="#Manual_Workload_Launch">Manual Workload Launch</a></li>
<li><a href="#Integrated_Workload_Launch">Integrated Workload Launch</a></li></ul></li>
<li><a href="#Architecture">Architecture</a></li>
<li><a href="#External_Resources">External Resources</a></li></ul>
<section>
<h2><a name="Overview"></a>Overview</h2>
<p>Dynamometer is a tool for performance testing Hadoop’s HDFS NameNode. The intent is to provide a real-world environment by initializing the NameNode against a production file system image and replaying a production workload collected via e.g. the NameNode’s audit logs. This allows for replaying a workload which is not only similar in character to that experienced in production, but actually identical.</p>
<p>Dynamometer launches a YARN application which starts a single NameNode and a configurable number of DataNodes, simulating an entire HDFS cluster as a single application. There is an additional <code>workload</code> job, run as a MapReduce job, which accepts audit logs as input and uses the information contained within to submit matching requests to the NameNode, inducing load on the service.</p>
<p>Dynamometer can execute this same workload against different Hadoop versions or with different configurations, allowing for the testing of configuration tweaks and code changes at scale without the necessity of deploying to a real large-scale cluster.</p>
<p>Throughout this documentation, we will use “Dyno-HDFS”, “Dyno-NN”, and “Dyno-DN” to refer to the HDFS cluster, NameNode, and DataNodes (respectively) which are started <i>inside of</i> a Dynamometer application. Terms like HDFS, YARN, and NameNode used without qualification refer to the existing infrastructure on top of which Dynamometer is run.</p>
<p>For more details on how Dynamometer works, as opposed to how to use it, see the Architecture section at the end of this page.</p></section><section>
<h2><a name="Requirements"></a>Requirements</h2>
<p>Dynamometer is based around YARN applications, so an existing YARN cluster is required for execution. It also requires an accompanying HDFS instance to store some temporary files for communication.</p></section><section>
<h2><a name="Building"></a>Building</h2>
<p>Dynamometer consists of three main components, each in its own module:</p>
<ul>
<li>Infrastructure (<code>dynamometer-infra</code>): This is the YARN application which starts a Dyno-HDFS cluster.</li>
<li>Workload (<code>dynamometer-workload</code>): This is the MapReduce job which replays audit logs.</li>
<li>Block Generator (<code>dynamometer-blockgen</code>): This is a MapReduce job used to generate input files for each Dyno-DN; its execution is a prerequisite step to running the infrastructure application.</li>
</ul>
<p>The compiled versions of all of these components are included in a standard Hadoop distribution. You can find them in the packaged distribution within <code>share/hadoop/tools/dynamometer</code>.</p></section><section>
<h2><a name="Setup_Steps"></a>Setup Steps</h2>
<p>Before launching a Dynamometer application, a number of setup steps must be completed, telling Dynamometer which configurations to use, which Hadoop version to use, which fsimage to load, etc. These steps only need to be performed once to put everything in place; many Dynamometer executions can then be run against them with minor tweaks to measure variations.</p>
<p>Scripts discussed below can be found in the <code>share/hadoop/tools/dynamometer/dynamometer-{infra,workload,blockgen}/bin</code> directories of the distribution. The corresponding Java JAR files can be found in the <code>share/hadoop/tools/lib/</code> directory. References to bin files below assume that the current working directory is <code>share/hadoop/tools/dynamometer</code>.</p>
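<p>For example, assuming <code>HADOOP_HOME</code> points at the root of the extracted distribution, you can quickly verify that the scripts and JARs are in place (an optional sanity check, not a required step):</p>

<div class="source">
<div class="source">
<pre>cd $HADOOP_HOME/share/hadoop/tools/dynamometer
ls dynamometer-infra/bin dynamometer-workload/bin dynamometer-blockgen/bin
ls $HADOOP_HOME/share/hadoop/tools/lib/ | grep dynamometer
</pre></div></div>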
<section>
<h3><a name="Step_1:_Preparing_Requisite_Files"></a>Step 1: Preparing Requisite Files</h3>
<p>A number of steps are required in advance of starting your first Dyno-HDFS cluster:</p></section><section>
<h3><a name="Step_2:_Prepare_FsImage_Files"></a>Step 2: Prepare FsImage Files</h3>
<p>Collect an fsimage and related files from your NameNode. This will include the <code>fsimage_TXID</code> file which the NameNode creates as part of checkpointing, the <code>fsimage_TXID.md5</code> file containing the MD5 hash of the image, the <code>VERSION</code> file containing some metadata, and the <code>fsimage_TXID.xml</code> file which can be generated from the fsimage using the offline image viewer:</p>

<div class="source">
<div class="source">
<pre>hdfs oiv -i fsimage_TXID -o fsimage_TXID.xml -p XML
</pre></div></div>

<p>It is recommended that you collect these files from your Secondary/Standby NameNode, if you have one, to avoid placing additional load on your Active NameNode.</p>
<p>All of these files must be placed somewhere on HDFS where the various jobs will be able to access them. They should all be in the same folder, e.g. <code>hdfs:///dyno/fsimage</code>.</p>
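<p>If you prefer to stage these files by hand rather than using the helper script described next, a minimal sketch (with <code>fsimage_TXID</code> standing in for your actual transaction ID) is:</p>

<div class="source">
<div class="source">
<pre># Run from the directory containing the files collected from the NameNode.
hdfs dfs -mkdir -p hdfs:///dyno/fsimage
hdfs dfs -put fsimage_TXID fsimage_TXID.md5 fsimage_TXID.xml VERSION hdfs:///dyno/fsimage/
</pre></div></div>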
<p>All these steps can be automated with the <code>upload-fsimage.sh</code> script, e.g.:</p>

<div class="source">
<div class="source">
<pre>./dynamometer-infra/bin/upload-fsimage.sh 0001 hdfs:///dyno/fsimage
</pre></div></div>

<p>Where 0001 is the transaction ID of the desired fsimage. See the usage info of the script for more detail.</p></section><section>
<h3><a name="Step_3:_Prepare_a_Hadoop_Binary"></a>Step 3: Prepare a Hadoop Binary</h3>
<p>Collect the Hadoop distribution tarball to use to start the Dyno-NN and -DNs. For example, if testing against Hadoop 3.0.2, use <a class="externalLink" href="http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.0.2/hadoop-3.0.2.tar.gz">hadoop-3.0.2.tar.gz</a>. This distribution contains several components unnecessary for Dynamometer (e.g. YARN), so to reduce its size, you can optionally use the <code>create-slim-hadoop-tar.sh</code> script:</p>

<div class="source">
<div class="source">
<pre>./dynamometer-infra/bin/create-slim-hadoop-tar.sh hadoop-VERSION.tar.gz
</pre></div></div>

<p>The Hadoop tar can be present on HDFS or on the local filesystem of the machine from which the client will be run. Its path is supplied to the client via the <code>-hadoop_binary_path</code> argument.</p>
<p>Alternatively, if you use the <code>-hadoop_version</code> argument, you can simply specify which version you would like to run against (e.g. ‘3.0.2’) and the client will attempt to download it automatically from an Apache mirror. See the usage information of the client for more details.</p></section><section>
<h3><a name="Step_4:_Prepare_Configurations"></a>Step 4: Prepare Configurations</h3>
<p>Prepare a configuration directory. You will need to specify a configuration directory with the standard Hadoop configuration layout, e.g. it should contain <code>etc/hadoop/*-site.xml</code>. This determines the configuration with which the Dyno-NN and -DNs will be launched. Configurations that must be modified for Dynamometer to work properly (e.g. <code>fs.defaultFS</code> or <code>dfs.namenode.name.dir</code>) will be overridden at execution time. This can be a directory if it is available locally, else an archive file on local or remote (HDFS) storage.</p>
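<p>As an example, a minimal sketch of assembling such a directory from an existing client configuration (the source path <code>/etc/hadoop/conf</code> is an assumption; substitute wherever your cluster’s configuration lives):</p>

<div class="source">
<div class="source">
<pre>mkdir -p my-hadoop-conf/etc/hadoop
cp /etc/hadoop/conf/*-site.xml my-hadoop-conf/etc/hadoop/
# Optionally package it as an archive and stage it on HDFS:
tar czf my-hadoop-conf.tar.gz -C my-hadoop-conf .
hdfs dfs -put my-hadoop-conf.tar.gz hdfs:///dyno/
</pre></div></div>

<p>The resulting directory (or archive) is what is later passed via the <code>-conf_path</code> argument.</p>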
</section><section>
<h3><a name="Step_5:_Execute_the_Block_Generation_Job"></a>Step 5: Execute the Block Generation Job</h3>
<p>This will use the <code>fsimage_TXID.xml</code> file to generate the list of blocks that each Dyno-DN should advertise to the Dyno-NN. It runs as a MapReduce job.</p>

<div class="source">
<div class="source">
<pre>./dynamometer-blockgen/bin/generate-block-lists.sh \
    -fsimage_input_path hdfs:///dyno/fsimage/fsimage_TXID.xml \
    -block_image_output_dir hdfs:///dyno/blocks \
    -num_reducers R \
    -num_datanodes D
</pre></div></div>

<p>In this example, the XML file uploaded above is used to generate block listings into <code>hdfs:///dyno/blocks</code>. <code>R</code> reducers are used for the job, and <code>D</code> block listings are generated; this determines how many Dyno-DNs are started in the Dyno-HDFS cluster.</p>
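<p>As a concrete illustration (the values are arbitrary, not recommendations), the following generates listings for a 100-DataNode Dyno-HDFS cluster using 10 reducers and then lists the output for a quick sanity check:</p>

<div class="source">
<div class="source">
<pre>./dynamometer-blockgen/bin/generate-block-lists.sh \
    -fsimage_input_path hdfs:///dyno/fsimage/fsimage_TXID.xml \
    -block_image_output_dir hdfs:///dyno/blocks \
    -num_reducers 10 \
    -num_datanodes 100

# The output directory should now contain the block listings consumed by the Dyno-DNs.
hdfs dfs -ls hdfs:///dyno/blocks
</pre></div></div>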
</section><section>
<h3><a name="Step_6:_Prepare_Audit_Traces_.28Optional.29"></a>Step 6: Prepare Audit Traces (Optional)</h3>
<p>This step is only necessary if you intend to use the audit trace replay capabilities of Dynamometer; if you just intend to start a Dyno-HDFS cluster, you can skip to the next section.</p>
<p>The audit trace replay accepts one input file per mapper, and currently supports two input formats, configurable via the <code>auditreplay.command-parser.class</code> configuration. One mapper will automatically be created for every audit log file within the audit log directory specified at launch time.</p>
<p>The default is a direct format, <code>org.apache.hadoop.tools.dynamometer.workloadgenerator.audit.AuditLogDirectParser</code>. This accepts files in the format produced by the standard audit logger configuration, e.g. lines like:</p>

<div class="source">
<div class="source">
<pre>1970-01-01 00:00:42,000 INFO FSNamesystem.audit: allowed=true ugi=hdfs ip=/127.0.0.1 cmd=open src=/tmp/foo dst=null perm=null proto=rpc
</pre></div></div>

<p>When using this format you must also specify <code>auditreplay.log-start-time.ms</code>, which should be (in milliseconds since the Unix epoch) the start time of the audit traces. This is needed for all mappers to agree on a single start time. For example, if the above line was the first audit event, you would specify <code>auditreplay.log-start-time.ms=42000</code>. Within a file, the audit logs must be in order of ascending timestamp.</p>
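<p>For reference, these parser settings are supplied to the workload job as <code>-D</code> properties when it is launched (the full launch procedure is covered under Running Dynamometer below). A sketch using the example start time above and the same paths as the later examples:</p>

<div class="source">
<div class="source">
<pre>./dynamometer-workload/bin/start-workload.sh \
    -Dauditreplay.command-parser.class=org.apache.hadoop.tools.dynamometer.workloadgenerator.audit.AuditLogDirectParser \
    -Dauditreplay.log-start-time.ms=42000 \
    -Dauditreplay.input-path=hdfs:///dyno/audit_logs/ \
    -Dauditreplay.output-path=hdfs:///dyno/results/ \
    -Dauditreplay.num-threads=50 \
    -nn_uri hdfs://namenode_address:port/ \
    -start_time_offset 5m \
    -mapper_class_name AuditReplayMapper
</pre></div></div>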
<p>The other supported format is <code>org.apache.hadoop.tools.dynamometer.workloadgenerator.audit.AuditLogHiveTableParser</code>. This accepts files in the format produced by a Hive query with the following output fields, in order:</p>
<ul>
<li><code>relativeTimestamp</code>: event time offset, in milliseconds, from the start of the trace</li>
<li><code>ugi</code>: user information of the submitting user</li>
<li><code>command</code>: name of the command, e.g. ‘open’</li>
<li><code>source</code>: source path</li>
<li><code>dest</code>: destination path</li>
<li><code>sourceIP</code>: source IP of the event</li>
</ul>
<p>Assuming your audit logs are available in Hive, this can be produced via a Hive query looking like:</p>

<div class="source">
<div class="source">
<pre>INSERT OVERWRITE DIRECTORY '${outputPath}'
SELECT (timestamp - ${startTimestamp}) AS relativeTimestamp, ugi, command, source, dest, sourceIP
FROM '${auditLogTableLocation}'
WHERE timestamp >= ${startTimestamp} AND timestamp < ${endTimestamp}
DISTRIBUTE BY src
SORT BY relativeTimestamp ASC;
</pre></div></div>
<section>
<h4><a name="Partitioning_the_Audit_Logs"></a>Partitioning the Audit Logs</h4>
<p>You may notice that in the Hive query shown above, there is a <code>DISTRIBUTE BY src</code> clause which indicates that the output files should be partitioned by the source IP of the caller. This is done to try to maintain closer ordering of requests which originated from a single client. Dynamometer does not guarantee strict ordering of operations even within a partition, but ordering will typically be maintained more closely within a partition than across partitions.</p>
<p>Whether you use Hive or raw audit logs, it will be necessary to partition the audit logs based on the number of simultaneous clients you require for your workload replay. Using the source IP as a partition key is one approach with the potential advantages discussed above, but any partition scheme should work reasonably well.</p>
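<p>For raw audit logs, one rough sketch of such a partitioning (this is not part of Dynamometer itself, and it assumes the line format shown earlier, including the <code>ip=/...</code> field) is to split a combined log into one file per source IP and upload the result:</p>

<div class="source">
<div class="source">
<pre>mkdir -p audit_logs
awk '{
  # Find the ip= field, strip the "ip=" prefix and leading slash, and route the
  # line to a per-source-IP partition file.
  for (i = 1; i <= NF; i++) if ($i ~ /^ip=/) { ip = substr($i, 4); gsub("/", "", ip) }
  print > ("audit_logs/part-" ip)
}' hdfs-audit.log
hdfs dfs -put audit_logs hdfs:///dyno/audit_logs
</pre></div></div>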
</section></section></section><section>
<h2><a name="Running_Dynamometer"></a>Running Dynamometer</h2>
<p>After the setup steps above have been completed, you’re ready to start up a Dyno-HDFS cluster and replay some workload against it!</p>
<p>The client which launches the Dyno-HDFS YARN application can optionally launch the workload replay job once the Dyno-HDFS cluster has fully started. This makes each replay into a single execution of the client, enabling easy testing of various configurations. You can also launch the two separately to have more control. Similarly, it is possible to launch Dyno-DNs for an external NameNode which is not controlled by Dynamometer/YARN. This can be useful for testing NameNode configurations which are not yet supported by Dynamometer (e.g. HA NameNodes). You can do this by passing the <code>-namenode_servicerpc_addr</code> argument to the infrastructure application with a value that points to an external NameNode’s service RPC address.</p>
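<p>For example, a sketch of launching only Dyno-DNs against an external NameNode (the service RPC address is a placeholder, and the exact set of arguments your setup requires should be confirmed via <code>-help</code>):</p>

<div class="source">
<div class="source">
<pre>./dynamometer-infra/bin/start-dynamometer-cluster.sh \
    -hadoop_binary_path hadoop-3.0.2.tar.gz \
    -conf_path my-hadoop-conf \
    -block_list_path hdfs:///dyno/blocks \
    -namenode_servicerpc_addr namenode_address:port
</pre></div></div>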
<section>
<h3><a name="Manual_Workload_Launch"></a>Manual Workload Launch</h3>
<p>First launch the infrastructure application to begin the startup of the internal HDFS cluster, e.g.:</p>

<div class="source">
<div class="source">
<pre>./dynamometer-infra/bin/start-dynamometer-cluster.sh \
    -hadoop_binary_path hadoop-3.0.2.tar.gz \
    -conf_path my-hadoop-conf \
    -fs_image_dir hdfs:///dyno/fsimage \
    -block_list_path hdfs:///dyno/blocks
</pre></div></div>

<p>This demonstrates the required arguments. You can run this with the <code>-help</code> flag to see further usage information.</p>
<p>The client will track the Dyno-NN’s startup progress and how many Dyno-DNs it considers live. It will notify via logging when the Dyno-NN has exited safemode and is ready for use.</p>
<p>At this point, a workload job (a map-only MapReduce job) can be launched, e.g.:</p>
<div class="source">
<div class="source">
<pre>./dynamometer-workload/bin/start-workload.sh \
    -Dauditreplay.input-path=hdfs:///dyno/audit_logs/ \
    -Dauditreplay.output-path=hdfs:///dyno/results/ \
    -Dauditreplay.num-threads=50 \
    -nn_uri hdfs://namenode_address:port/ \
    -start_time_offset 5m \
    -mapper_class_name AuditReplayMapper
</pre></div></div>

<p>The type of workload generation is configurable; AuditReplayMapper replays an audit log trace as discussed previously. The AuditReplayMapper is configured via job properties: <code>auditreplay.input-path</code>, <code>auditreplay.output-path</code> and <code>auditreplay.num-threads</code> are required to specify the input path for audit log files, the output path for the results, and the number of threads per map task. A number of map tasks equal to the number of files in <code>input-path</code> will be launched; each task will read in one of these input files and use <code>num-threads</code> threads to replay the events contained within that file. A best effort is made to faithfully replay the audit log events at the same pace at which they originally occurred. Optionally, this pace can be adjusted by specifying <code>auditreplay.rate-factor</code>, a multiplicative factor applied to the replay rate; e.g. use 2.0 to replay the events at twice the original speed.</p>
<p>The AuditReplayMapper will output the benchmark results to a file <code>part-r-00000</code> in the output directory in CSV format. Each line is in the format <code>user,type,operation,numops,cumulativelatency</code>, e.g. <code>hdfs,WRITE,MKDIRS,2,150</code>.</p>
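<p>As a small example, the results can be summarized directly from that file, e.g. computing the average latency per operation (in whatever unit the cumulative latency column is reported):</p>

<div class="source">
<div class="source">
<pre>hdfs dfs -cat hdfs:///dyno/results/part-r-00000 \
  | awk -F, '{ ops[$1","$2","$3] += $4; lat[$1","$2","$3] += $5 }
      END { for (k in ops) printf "%s: %d ops, avg latency %.1f\n", k, ops[k], lat[k] / ops[k] }'
</pre></div></div>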
</section><section>
<h3><a name="Integrated_Workload_Launch"></a>Integrated Workload Launch</h3>
<p>To have the infrastructure application client launch the workload automatically, parameters for the workload job are passed to the infrastructure script. Only the AuditReplayMapper is supported in this fashion at this time. To launch an integrated application with the same parameters as were used above, the following can be used:</p>

<div class="source">
<div class="source">
<pre>./dynamometer-infra/bin/start-dynamometer-cluster.sh \
    -hadoop_binary_path hadoop-3.0.2.tar.gz \
    -conf_path my-hadoop-conf \
    -fs_image_dir hdfs:///dyno/fsimage \
    -block_list_path hdfs:///dyno/blocks \
    -workload_replay_enable \
    -workload_input_path hdfs:///dyno/audit_logs/ \
    -workload_output_path hdfs:///dyno/results/ \
    -workload_threads_per_mapper 50 \
    -workload_start_delay 5m
</pre></div></div>

<p>When run in this way, the client will automatically handle tearing down the Dyno-HDFS cluster once the workload has completed. To see the full list of supported parameters, run this with the <code>-help</code> flag.</p></section></section><section>
<h2><a name="Architecture"></a>Architecture</h2>
<p>Dynamometer is implemented as an application on top of YARN. There are three main actors in a Dynamometer application:</p>
<ul>
<li>Infrastructure is the simulated HDFS cluster.</li>
<li>Workload simulates HDFS clients to generate load on the simulated NameNode.</li>
<li>The driver coordinates the two other components.</li>
</ul>
<p>The logic encapsulated in the driver enables a user to perform a full test execution of Dynamometer with a single command, making it possible to do things like sweeping over different parameters to find optimal configurations.</p>
<p><img src="./images/dynamometer-architecture-infra.png" alt="Dynamometer Infrastructure Application Architecture" /></p>
<p>The infrastructure application is written as a native YARN application in which a single NameNode and numerous DataNodes are launched and wired together to create a fully simulated HDFS cluster. For Dynamometer to provide an extremely realistic scenario, it is necessary to have a cluster which contains, from the NameNode’s perspective, the same information as a production cluster. This is why the setup steps described above involve first collecting the FsImage file from a production NameNode and placing it onto the host HDFS cluster. To avoid having to copy an entire cluster’s worth of blocks, Dynamometer leverages the fact that the actual data stored in blocks is irrelevant to the NameNode, which is only aware of the block metadata. Dynamometer’s blockgen job first uses the Offline Image Viewer to turn the FsImage into XML, then parses this to extract the metadata for each block, and finally partitions this information before placing it on HDFS for the simulated DataNodes to consume. <code>SimulatedFSDataset</code> is used to bypass the DataNode storage layer and store only the block metadata, loaded from the information extracted in the previous step. This scheme allows Dynamometer to pack many simulated DataNodes onto each physical node, as the size of the metadata is many orders of magnitude smaller than the data itself.</p>
<p>To create a stress test that matches a production environment, Dynamometer needs a way to collect information about the production workload. For this the HDFS audit log is used, which contains a faithful record of all client-facing operations against the NameNode. By replaying this audit log to recreate the client load, and running simulated DataNodes to recreate the cluster management load, Dynamometer is able to provide a realistic simulation of the conditions of a production NameNode.</p>
<p><img src="./images/dynamometer-architecture-replay.png" alt="Dynamometer Replay Architecture" /></p>
<p>A heavily loaded NameNode can service tens of thousands of operations per second; to induce such a load, Dynamometer needs numerous clients to submit requests. In an effort to ensure that each request has the same effect and performance implications as its original submission, Dynamometer attempts to make related requests (for example, a directory creation followed by a listing of that directory) in such a way as to preserve their original ordering. It is for this reason that audit log files are suggested to be partitioned by source IP address, using the assumption that requests which originated from the same host have more tightly coupled causal relationships than those which originated from different hosts. In the interest of simplicity, the stress testing job is written as a map-only MapReduce job, in which each mapper consumes a partitioned audit log file and replays the commands contained within against the simulated NameNode. During execution, statistics are collected about the replay, such as latency for different types of requests.</p></section><section>
<h2><a name="External_Resources"></a>External Resources</h2>
<p>For more information on Dynamometer, you can read the <a class="externalLink" href="https://engineering.linkedin.com/blog/2018/02/dynamometer--scale-testing-hdfs-on-minimal-hardware-with-maximum">blog post announcing its initial release</a> or <a class="externalLink" href="https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-dynamometer-and-a-case-study-in-namenode-gc">this presentation</a>.</p></section>
</div>
</div>
</body>
</html>