<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
 | Generated by Apache Maven Doxia at 2023-04-27
 | Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop Amazon Web Services support &#8211; Working with IAM Assumed Roles</title>
<style type="text/css" media="all">
@import url("../../css/maven-base.css");
@import url("../../css/maven-theme.css");
@import url("../../css/site.css");
</style>
<link rel="stylesheet" href="../../css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20230427" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
 |
 <a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
 | Last Published: 2023-04-27
 | Version: 3.4.0-SNAPSHOT
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>Working with IAM Assumed Roles</h1>
<ul>
<li><a href="#Using_IAM_Assumed_Roles"> Using IAM Assumed Roles</a>
<ul>
<li><a href="#Before_You_Begin">Before You Begin</a></li>
<li><a href="#How_the_S3A_connector_supports_IAM_Assumed_Roles."> How the S3A connector supports IAM Assumed Roles.</a></li>
<li><a href="#Configuring_Assumed_Roles"> Configuring Assumed Roles</a></li>
<li><a href="#Assumed_Role_Configuration_Options">Assumed Role Configuration Options</a></li></ul></li>
<li><a href="#Restricting_S3A_operations_through_AWS_Policies"> Restricting S3A operations through AWS Policies</a>
<ul>
<li><a href="#Read_Access_Permissions"> Read Access Permissions</a></li>
<li><a href="#Write_Access_Permissions"> Write Access Permissions</a></li>
<li><a href="#SSE-KMS_Permissions"> SSE-KMS Permissions</a></li>
<li><a href="#Mixed_Permissions_in_a_single_S3_Bucket"> Mixed Permissions in a single S3 Bucket</a></li>
<li><a href="#Example:_Read_access_to_the_base.2C_R.2FW_to_the_path_underneath">Example: Read access to the base, R/W to the path underneath</a></li></ul></li>
<li><a href="#Troubleshooting_Assumed_Roles"> Troubleshooting Assumed Roles</a>
<ul>
<li><a href="#IOException:_.E2.80.9CUnset_property_fs.s3a.assumed.role.arn.E2.80.9D"> IOException: &#8220;Unset property fs.s3a.assumed.role.arn&#8221;</a></li>
<li><a href="#a.E2.80.9CNot_authorized_to_perform_sts:AssumeRole.E2.80.9D"> &#8220;Not authorized to perform sts:AssumeRole&#8221;</a></li>
<li><a href="#a.E2.80.9CRoles_may_not_be_assumed_by_root_accounts.E2.80.9D"> &#8220;Roles may not be assumed by root accounts&#8221;</a></li>
<li><a href="#Member_must_have_value_greater_than_or_equal_to_900"> Member must have value greater than or equal to 900</a></li>
<li><a href="#Error_.E2.80.9CThe_requested_DurationSeconds_exceeds_the_MaxSessionDuration_set_for_this_role.E2.80.9D"> Error &#8220;The requested DurationSeconds exceeds the MaxSessionDuration set for this role&#8221;</a></li>
<li><a href="#a.E2.80.9CValue_.E2.80.98345600.E2.80.99_at_.E2.80.98durationSeconds.E2.80.99_failed_to_satisfy_constraint:_Member_must_have_value_less_than_or_equal_to_43200.E2.80.9D">&#8220;Value &#8216;345600&#8217; at &#8216;durationSeconds&#8217; failed to satisfy constraint: Member must have value less than or equal to 43200&#8221;</a></li>
<li><a href="#MalformedPolicyDocumentException_.E2.80.9CThe_policy_is_not_in_the_valid_JSON_format.E2.80.9D"> MalformedPolicyDocumentException &#8220;The policy is not in the valid JSON format&#8221;</a></li>
<li><a href="#MalformedPolicyDocumentException_.E2.80.9CSyntax_errors_in_policy.E2.80.9D"> MalformedPolicyDocumentException &#8220;Syntax errors in policy&#8221;</a></li>
<li><a href="#IOException:_.E2.80.9CAssumedRoleCredentialProvider_cannot_be_in_fs.s3a.assumed.role.credentials.provider.E2.80.9D"> IOException: &#8220;AssumedRoleCredentialProvider cannot be in fs.s3a.assumed.role.credentials.provider&#8221;</a></li>
<li><a href="#AWSBadRequestException:_.E2.80.9Cnot_a_valid_key.3Dvalue_pair.E2.80.9D"> AWSBadRequestException: &#8220;not a valid key=value pair&#8221;</a></li>
<li><a href="#AccessDeniedException.2FInvalidClientTokenId:_.E2.80.9CThe_security_token_included_in_the_request_is_invalid.E2.80.9D"> AccessDeniedException/InvalidClientTokenId: &#8220;The security token included in the request is invalid&#8221;</a></li>
<li><a href="#AWSSecurityTokenServiceExceptiond:_.E2.80.9CMember_must_satisfy_regular_expression_pattern:_.5B.5Cw.2B.3D.2C..40-.5D.2A.E2.80.9D"> AWSSecurityTokenServiceExceptiond: &#8220;Member must satisfy regular expression pattern: [\w+=,.@-]*&#8221;</a></li>
<li><a href="#java.nio.file.AccessDeniedException_within_a_FileSystem_API_call"> java.nio.file.AccessDeniedException within a FileSystem API call</a></li>
<li><a href="#AccessDeniedException_When_working_with_KMS-encrypted_data"> AccessDeniedException When working with KMS-encrypted data</a></li>
<li><a href="#Error_Unable_to_execute_HTTP_request">Error Unable to execute HTTP request</a></li>
<li><a href="#Error_.E2.80.9CCredential_should_be_scoped_to_a_valid_region.E2.80.9D"> Error &#8220;Credential should be scoped to a valid region&#8221;</a></li></ul></li></ul>
<p>AWS <a class="externalLink" href="http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html">&#8220;IAM Assumed Roles&#8221;</a> allow an application to change the AWS role with which it authenticates to AWS services. An assumed role can have different rights from those of the main user login.</p>
<p>The S3A connector supports assumed roles for authentication with AWS. A full set of login credentials must be provided; these are used to obtain the assumed role and to refresh it regularly. Through per-filesystem configuration, different assumed roles can be used for different buckets.</p>
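<p>As a sketch of what such per-filesystem configuration can look like: any <code>fs.s3a.</code> option can be overridden for a single bucket via the <code>fs.s3a.bucket.BUCKETNAME.</code> prefix. The bucket name and role ARN below are placeholders for your own values.</p>
<div class="source">
<div class="source">
<pre><property>
  <name>fs.s3a.bucket.example-data.aws.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider</value>
</property>

<property>
  <name>fs.s3a.bucket.example-data.assumed.role.arn</name>
  <value>arn:aws:iam::123456789012:role/example-restricted-role</value>
</property>
</pre></div></div>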
<p><i>IAM Assumed Roles are unlikely to be supported by third-party systems supporting the S3 APIs.</i></p><section>
<h2><a name="Using_IAM_Assumed_Roles"></a><a name="using_assumed_roles"></a> Using IAM Assumed Roles</h2><section>
<h3><a name="Before_You_Begin"></a>Before You Begin</h3>
<p>This document assumes you are familiar with IAM Assumed Roles: what they are, how to configure their policies, and so on.</p>
<ul>
<li>You need a role to assume, and must know its &#8220;ARN&#8221;.</li>
<li>You need a pair of long-lived IAM User credentials, not the root account set.</li>
<li>Have the AWS CLI installed, and test that it works there.</li>
<li>Give the role access to S3.</li>
<li>For working with data encrypted with SSE-KMS, the role must have access to the appropriate KMS keys.</li>
</ul>
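<p>One way to confirm these prerequisites before involving Hadoop at all is to try to assume the role from the AWS CLI with the same long-lived credentials; the ARN here is a placeholder. If this call fails, the S3A client will fail too, only with a less helpful stack trace.</p>
<div class="source">
<div class="source">
<pre>aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/example-restricted-role \
  --role-session-name sanity-check
</pre></div></div>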
<p>Trying to learn how IAM Assumed Roles work by debugging stack traces from the S3A client is &#8220;suboptimal&#8221;.</p></section><section>
<h3><a name="How_the_S3A_connector_supports_IAM_Assumed_Roles."></a><a name="how_it_works"></a> How the S3A connector supports IAM Assumed Roles.</h3>
<p>The S3A connector supports IAM Assumed Roles in two ways:</p>
<ol style="list-style-type: decimal">
<li>Using the full credentials on the client to request credentials for a specific role, credentials which are then used for all the store operations. This can be used to verify that a specific role has the access permissions you need, or to &#8220;su&#8221; into a role which has permissions the full account does not directly hold, such as access to a KMS key.</li>
<li>Using the full credentials to request role credentials which are then propagated into a launched application as delegation tokens. This extends the previous use, as it allows jobs to be submitted to a shared cluster with the permissions of the requested role, rather than those of the VMs/containers of the deployed cluster.</li>
</ol>
<p>For Delegation Token integration, see <a href="delegation_tokens.html">Delegation Tokens</a>.</p>
<p>For Assumed Role authentication, the client must be configured to use the <i>Assumed Role Credential Provider</i>, <code>org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider</code>, in the configuration option <code>fs.s3a.aws.credentials.provider</code>.</p>
<p>This AWS credential provider reads in the <code>fs.s3a.assumed.role</code> options needed to connect to the Security Token Service <a class="externalLink" href="https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html">Assumed Role API</a>, first authenticating with the full credentials, then assuming the specified role. It then refreshes this login at the rate configured in <code>fs.s3a.assumed.role.session.duration</code>.</p>
<p>To authenticate with the <a class="externalLink" href="https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html">AWS STS service</a>, both for the initial credential retrieval and for background refreshes, a different credential provider must be created, one which uses long-lived credentials (secret keys, environment variables). Short-lived credentials (e.g. other session tokens, EC2 instance credentials) cannot be used.</p>
<p>A list of providers can be set in <code>fs.s3a.assumed.role.credentials.provider</code>; if unset, the standard <code>BasicAWSCredentialsProvider</code> credential provider is used, which uses <code>fs.s3a.access.key</code> and <code>fs.s3a.secret.key</code>.</p>
<p>Note: although you can list other AWS credential providers in the Assumed Role Credential Provider's configuration, doing so can only cause confusion.</p></section><section>
<h3><a name="Configuring_Assumed_Roles"></a><a name="using"></a> Configuring Assumed Roles</h3>
<p>To use assumed roles, the S3A client credentials provider must be set to the <code>AssumedRoleCredentialProvider</code>, and <code>fs.s3a.assumed.role.arn</code> to the previously created ARN.</p>
<div class="source">
<div class="source">
<pre><property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider</value>
</property>

<property>
  <name>fs.s3a.assumed.role.arn</name>
  <value>arn:aws:iam::90066806600238:role/s3-restricted</value>
</property>
</pre></div></div>
<p>The STS service itself needs the caller to be authenticated, <i>which can only be done with a set of long-lived credentials</i>. This means the normal <code>fs.s3a.access.key</code> and <code>fs.s3a.secret.key</code> pair, environment variables, or some other supplier of long-lived secrets.</p>
<p>The default is the <code>fs.s3a.access.key</code> and <code>fs.s3a.secret.key</code> pair. If you wish to use a different authentication mechanism, set it in the property <code>fs.s3a.assumed.role.credentials.provider</code>.</p>
<div class="source">
<div class="source">
<pre><property>
  <name>fs.s3a.assumed.role.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
</property>
</pre></div></div>
<p>Requirements for long-lived credentials notwithstanding, this option takes the same values as <code>fs.s3a.aws.credentials.provider</code>.</p>
<p>The safest way to manage AWS secrets is via <a href="index.html#hadoop_credential_providers">Hadoop Credential Providers</a>.</p></section><section>
<h3><a name="Assumed_Role_Configuration_Options"></a><a name="configuration"></a>Assumed Role Configuration Options</h3>
|
|
<p>Here are the full set of configuration options.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre><property>
|
|
<name>fs.s3a.assumed.role.arn</name>
|
|
<value />
|
|
<description>
|
|
AWS ARN for the role to be assumed.
|
|
Required if the fs.s3a.aws.credentials.provider contains
|
|
org.apache.hadoop.fs.s3a.AssumedRoleCredentialProvider
|
|
</description>
|
|
</property>
|
|
|
|
<property>
|
|
<name>fs.s3a.assumed.role.session.name</name>
|
|
<value />
|
|
<description>
|
|
Session name for the assumed role, must be valid characters according to
|
|
the AWS APIs.
|
|
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
|
|
If not set, one is generated from the current Hadoop/Kerberos username.
|
|
</description>
|
|
</property>
|
|
|
|
<property>
|
|
<name>fs.s3a.assumed.role.policy</name>
|
|
<value/>
|
|
<description>
|
|
JSON policy to apply to the role.
|
|
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
|
|
</description>
|
|
</property>
|
|
|
|
<property>
|
|
<name>fs.s3a.assumed.role.session.duration</name>
|
|
<value>30m</value>
|
|
<description>
|
|
Duration of assumed roles before a refresh is attempted.
|
|
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
|
|
Range: 15m to 1h
|
|
</description>
|
|
</property>
|
|
|
|
<property>
|
|
<name>fs.s3a.assumed.role.sts.endpoint</name>
|
|
<value/>
|
|
<description>
|
|
AWS Security Token Service Endpoint. If unset, uses the default endpoint.
|
|
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
|
|
</description>
|
|
</property>
|
|
|
|
<property>
|
|
<name>fs.s3a.assumed.role.sts.endpoint.region</name>
|
|
<value>us-west-1</value>
|
|
<description>
|
|
AWS Security Token Service Endpoint's region;
|
|
Needed if fs.s3a.assumed.role.sts.endpoint points to an endpoint
|
|
other than the default one and the v4 signature is used.
|
|
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
|
|
</description>
|
|
</property>
|
|
|
|
<property>
|
|
<name>fs.s3a.assumed.role.credentials.provider</name>
|
|
<value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
|
|
com.amazonaws.auth.EnvironmentVariableCredentialsProvider
|
|
</value>
|
|
<description>
|
|
List of credential providers to authenticate with the STS endpoint and
|
|
retrieve short-lived role credentials.
|
|
Used by AssumedRoleCredentialProvider and the S3A Session Delegation Token
|
|
and S3A Role Delegation Token bindings.
|
|
</description>
|
|
</property>
|
|
</pre></div></div>
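<p>Putting these options together, a minimal invocation which switches the S3A connector to an assumed role can be sketched on the command line; the role ARN and bucket name below are illustrative.</p>

<div class="source">
<div class="source">
<pre>hadoop fs \
  -D fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider \
  -D fs.s3a.assumed.role.arn=arn:aws:iam::123456789012:role/s3-restricted-role \
  -ls s3a://example-bucket/
</pre></div></div>

<p>The same settings are more usually placed in <code>core-site.xml</code> or in a per-bucket configuration.</p>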
|
|
</section></section><section>
|
|
<h2><a name="Restricting_S3A_operations_through_AWS_Policies"></a><a name="polices"></a> Restricting S3A operations through AWS Policies</h2>
|
|
<p>The S3A client needs to be granted specific permissions in order to work with a bucket. Here is a non-normative list of the permissions which must be granted for FileSystem operations to work.</p>
|
|
<p><i>Disclaimer</i> The specific set of actions which the S3A connector needs will change over time.</p>
|
|
<p>As more operations are added to the S3A connector, and as the means by which existing operations are implemented change, the AWS actions which are required by the client will change.</p>
|
|
<p>These lists represent the minimum set of actions which the client’s principal must be granted in order to work with a bucket.</p><section>
|
|
<h3><a name="Read_Access_Permissions"></a><a name="read-permissions"></a> Read Access Permissions</h3>
|
|
<p>Permissions which must be granted when reading from a bucket:</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>s3:Get*
|
|
s3:ListBucket
|
|
</pre></div></div>
|
|
|
|
<p>To use SSE-KMS encryption, the client needs the <a href="#sse-kms-permissions">SSE-KMS Permissions</a> to access the KMS key(s).</p></section><section>
|
|
<h3><a name="Write_Access_Permissions"></a><a name="write-permissions"></a> Write Access Permissions</h3>
|
|
<p>These permissions must all be granted for write access:</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>s3:Get*
|
|
s3:Delete*
|
|
s3:Put*
|
|
s3:ListBucket
|
|
s3:ListBucketMultipartUploads
|
|
s3:AbortMultipartUpload
|
|
</pre></div></div>
|
|
</section><section>
|
|
<h3><a name="SSE-KMS_Permissions"></a><a name="sse-kms-permissions"></a> SSE-KMS Permissions</h3>
|
|
<p>To read data encrypted using SSE-KMS, the client must have <code>kms:Decrypt</code> permission for the specific key a file was encrypted with.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>kms:Decrypt
|
|
</pre></div></div>
|
|
|
|
<p>To write data using SSE-KMS, the client must have all the following permissions.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>kms:Decrypt
|
|
kms:GenerateDataKey
|
|
</pre></div></div>
|
|
|
|
<p>This applies to renaming as well: the renamed file is encrypted with the encryption key of the current S3A client, which must decrypt the source file first.</p>
|
|
<p>If the caller doesn’t have these permissions, the operation will fail with an <code>AccessDeniedException</code>: the S3 Store does not provide the specifics of the cause of the failure.</p></section><section>
|
|
<h3><a name="Mixed_Permissions_in_a_single_S3_Bucket"></a><a name="mixed-permissions"></a> Mixed Permissions in a single S3 Bucket</h3>
|
|
<p>Mixing permissions down the “directory tree” is supported only to the extent of writeable directories under read-only parent paths.</p>
|
|
<p><i>Disclaimer:</i> When a client lacks write access up the entire directory tree, there are no guarantees of consistent filesystem views or operations.</p>
|
|
<p>Particular troublespots are “directory markers” and failures of non-atomic operations, particularly <code>rename()</code> and <code>delete()</code>.</p>
|
|
<p>A directory marker such as <code>/users/</code> will not be deleted if the user <code>alice</code> creates a directory <code>/users/alice</code> <i>and</i> she only has access to <code>/users/alice</code>.</p>
|
|
<p>When a path or directory is deleted, the parent directory may not exist afterwards. In the example above, if <code>alice</code> deletes <code>/users/alice</code> and there are no other entries under <code>/users/alice</code>, then the directory marker <code>/users/</code> cannot be created. The directory <code>/users</code> will not exist in listings, <code>getFileStatus("/users")</code> or similar.</p>
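<p>The sequence above can be reproduced from the command line; the bucket and paths are illustrative, and the outcome of the final listing depends on the caller's permissions, as just described.</p>

<div class="source">
<div class="source">
<pre>hadoop fs -mkdir -p s3a://example-bucket/users/alice
hadoop fs -rm -r s3a://example-bucket/users/alice
# if the caller has no write access outside /users/alice, the
# directory marker "/users/" cannot be recreated, so the parent
# directory may no longer appear in listings:
hadoop fs -ls s3a://example-bucket/users
</pre></div></div>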
|
|
<p>Rename will fail if it cannot delete the items it has just copied: <code>rename(read-only-source, writeable-dest)</code> will fail, but only after performing the COPY of the data. Even though the operation failed, for a single file copy, the destination file will exist. For a directory copy, only a partial copy of the source data may take place before the permission failure is raised.</p></section><section>
|
|
<h3><a name="Example:_Read_access_to_the_base.2C_R.2FW_to_the_path_underneath"></a>Example: Read access to the base, R/W to the path underneath</h3>
|
|
<p>This example has the base bucket read only, and a directory underneath, <code>/users/alice/</code> granted full R/W access.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>{
|
|
"Version" : "2012-10-17",
|
|
"Statement" : [ {
|
|
"Sid" : "4",
|
|
"Effect" : "Allow",
|
|
"Action" : [
|
|
"s3:ListBucket",
|
|
"s3:ListBucketMultipartUploads",
|
|
"s3:Get*"
|
|
],
|
|
"Resource" : "arn:aws:s3:::example-bucket/*"
|
|
}, {
|
|
"Sid" : "5",
|
|
"Effect" : "Allow",
|
|
"Action" : [
|
|
"s3:Get*",
|
|
"s3:PutObject",
|
|
"s3:DeleteObject",
|
|
"s3:AbortMultipartUpload",
|
|
"s3:ListMultipartUploadParts" ],
|
|
"Resource" : [
|
|
"arn:aws:s3:::example-bucket/users/alice/*",
|
|
"arn:aws:s3:::example-bucket/users/alice",
|
|
"arn:aws:s3:::example-bucket/users/alice/"
|
|
]
|
|
} ]
|
|
}
|
|
</pre></div></div>
|
|
|
|
<p>Note how three resources are provided to represent the path <code>/users/alice</code>:</p>
|
|
<table border="0" class="bodyTable">
|
|
<thead>
|
|
|
|
<tr class="a">
|
|
<th> Path </th>
|
|
<th> Matches </th></tr>
|
|
</thead><tbody>
|
|
|
|
<tr class="b">
|
|
<td> <code>/users/alice</code> </td>
|
|
<td> Any file <code>alice</code> created under <code>/users</code> </td></tr>
|
|
<tr class="a">
|
|
<td> <code>/users/alice/</code> </td>
|
|
<td> The directory marker <code>alice/</code> created under <code>/users</code> </td></tr>
|
|
<tr class="b">
|
|
<td> <code>/users/alice/*</code> </td>
|
|
<td> All files and directories under the path <code>/users/alice</code> </td></tr>
|
|
</tbody>
|
|
</table>
|
|
<p>Note that the resource <code>arn:aws:s3:::example-bucket/users/alice*</code> cannot be used to refer to all of these paths, because it would also cover adjacent paths like <code>/users/alice2</code> and <code>/users/alicebob</code>.</p></section></section><section>
|
|
<h2><a name="Troubleshooting_Assumed_Roles"></a><a name="troubleshooting"></a> Troubleshooting Assumed Roles</h2>
|
|
<ol style="list-style-type: decimal">
|
|
|
|
<li>Make sure the role works and the user trying to enter it can do so from the AWS command line before trying to use the S3A client.</li>
|
|
<li>Try to access the S3 bucket with reads and writes from the AWS CLI.</li>
|
|
<li>With the Hadoop configuration set to use the role, try to read data from the <code>hadoop fs</code> CLI: <code>hadoop fs -ls s3a://bucket/</code></li>
|
|
<li>With the hadoop CLI, try to create a new directory with a request such as <code>hadoop fs -mkdir -p s3a://bucket/path/p1/</code></li>
|
|
</ol><section>
|
|
<h3><a name="IOException:_.E2.80.9CUnset_property_fs.s3a.assumed.role.arn.E2.80.9D"></a><a name="no_role"></a> IOException: “Unset property fs.s3a.assumed.role.arn”</h3>
|
|
<p>The Assumed Role Credential Provider is enabled, but <code>fs.s3a.assumed.role.arn</code> is unset.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>java.io.IOException: Unset property fs.s3a.assumed.role.arn
|
|
at org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider.<init>(AssumedRoleCredentialProvider.java:76)
|
|
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
|
|
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
|
|
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
|
|
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:583)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
|
|
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
|
|
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
|
|
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
|
|
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
|
|
</pre></div></div>
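<p>The fix is to declare the ARN of the role to assume, either in <code>core-site.xml</code> or on the command line; the ARN below is illustrative.</p>

<div class="source">
<div class="source">
<pre>hadoop fs \
  -D fs.s3a.assumed.role.arn=arn:aws:iam::123456789012:role/s3-restricted-role \
  -ls s3a://example-bucket/
</pre></div></div>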
|
|
</section><section>
|
|
<h3><a name="a.E2.80.9CNot_authorized_to_perform_sts:AssumeRole.E2.80.9D"></a><a name="not_authorized_for_assumed_role"></a> “Not authorized to perform sts:AssumeRole”</h3>
|
|
<p>This can arise if the role ARN set in <code>fs.s3a.assumed.role.arn</code> is invalid or one to which the caller has no access.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>java.nio.file.AccessDeniedException: : Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
|
|
on : com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
Not authorized to perform sts:AssumeRole (Service: AWSSecurityTokenService; Status Code: 403;
|
|
Error Code: AccessDenied; Request ID: aad4e59a-f4b0-11e7-8c78-f36aaa9457f6):AccessDenied
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
|
|
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
|
|
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
|
|
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
|
|
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
|
|
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
|
|
</pre></div></div>
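<p>A quick way to check whether the caller can enter the role at all is to attempt the same operation with the AWS CLI, outside Hadoop; the ARN and session name below are illustrative.</p>

<div class="source">
<div class="source">
<pre>aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/s3-restricted-role \
  --role-session-name sanity-check
</pre></div></div>

<p>If this fails with the same error, the problem is in the role ARN or its trust policy, not in the S3A configuration.</p>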
|
|
</section><section>
|
|
<h3><a name="a.E2.80.9CRoles_may_not_be_assumed_by_root_accounts.E2.80.9D"></a><a name="root_account"></a> “Roles may not be assumed by root accounts”</h3>
|
|
<p>You can’t assume a role with the root account of an AWS account; you need to create a new user and give it the permission to change into the role.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>java.nio.file.AccessDeniedException: : Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
|
|
on : com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
Roles may not be assumed by root accounts. (Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied;
|
|
Request ID: e86dfd8f-e758-11e7-88e7-ad127c04b5e2):
|
|
No AWS Credentials provided by AssumedRoleCredentialProvider :
|
|
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
Roles may not be assumed by root accounts. (Service: AWSSecurityTokenService;
|
|
Status Code: 403; Error Code: AccessDenied; Request ID: e86dfd8f-e758-11e7-88e7-ad127c04b5e2)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
|
|
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
|
|
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
|
|
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
|
|
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
|
|
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
|
|
... 22 more
|
|
Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
Roles may not be assumed by root accounts.
|
|
(Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied;
|
|
Request ID: e86dfd8f-e758-11e7-88e7-ad127c04b5e2)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
|
|
</pre></div></div>
|
|
</section><section>
|
|
<h3><a name="Member_must_have_value_greater_than_or_equal_to_900"></a><a name="invalid_duration"></a> <code>Member must have value greater than or equal to 900</code></h3>
|
|
<p>The value of <code>fs.s3a.assumed.role.session.duration</code> is too low.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>org.apache.hadoop.fs.s3a.AWSBadRequestException: request role credentials:
|
|
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
1 validation error detected: Value '20' at 'durationSeconds' failed to satisfy constraint:
|
|
Member must have value greater than or equal to 900 (Service: AWSSecurityTokenService;
|
|
Status Code: 400; Error Code: ValidationError;
|
|
Request ID: b9a82403-d0a7-11e8-98ef-596679ee890d)
|
|
</pre></div></div>
|
|
|
|
<p>Fix: increase the value of <code>fs.s3a.assumed.role.session.duration</code> to at least 15 minutes.</p></section><section>
|
|
<h3><a name="Error_.E2.80.9CThe_requested_DurationSeconds_exceeds_the_MaxSessionDuration_set_for_this_role.E2.80.9D"></a><a name="duration_too_high"></a> Error “The requested DurationSeconds exceeds the MaxSessionDuration set for this role”</h3>
|
|
<p>The value of <code>fs.s3a.assumed.role.session.duration</code> is too high.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>org.apache.hadoop.fs.s3a.AWSBadRequestException: request role credentials:
|
|
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
The requested DurationSeconds exceeds the MaxSessionDuration set for this role.
|
|
(Service: AWSSecurityTokenService; Status Code: 400;
|
|
Error Code: ValidationError; Request ID: 17875165-d0a7-11e8-b85f-d15a599a7f6d)
|
|
</pre></div></div>
|
|
|
|
<p>There are two solutions to this:</p>
|
|
<ul>
|
|
|
|
<li>Decrease the duration value.</li>
|
|
<li>Increase the duration of a role in the <a class="externalLink" href="https://console.aws.amazon.com/iam/home#/roles">AWS IAM Console</a>.</li>
|
|
</ul></section><section>
|
|
<h3><a name="a.E2.80.9CValue_.E2.80.98345600.E2.80.99_at_.E2.80.98durationSeconds.E2.80.99_failed_to_satisfy_constraint:_Member_must_have_value_less_than_or_equal_to_43200.E2.80.9D"></a>“Value ‘345600’ at ‘durationSeconds’ failed to satisfy constraint: Member must have value less than or equal to 43200”</h3>
|
|
<p>Irrespective of the maximum duration of a role, the AWS role API only permits callers to request any role for up to 12h; attempting to use a larger number will fail.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
1 validation error detected:
|
|
Value '345600' at 'durationSeconds' failed to satisfy constraint:
|
|
Member must have value less than or equal to 43200
|
|
(Service: AWSSecurityTokenService;
|
|
Status Code: 400; Error Code:
|
|
ValidationError;
|
|
Request ID: dec1ca6b-d0aa-11e8-ac8c-4119b3ea9f7f)
|
|
</pre></div></div>
|
|
|
|
<p>For full sessions, the duration limit is 129600 seconds: 36h.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>org.apache.hadoop.fs.s3a.AWSBadRequestException: request session credentials:
|
|
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
1 validation error detected: Value '345600' at 'durationSeconds' failed to satisfy constraint:
|
|
Member must have value less than or equal to 129600
|
|
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: ValidationError;
|
|
Request ID: a6e73d44-d0aa-11e8-95ed-c5bba29f0635)
|
|
</pre></div></div>
|
|
|
|
<p>For both these errors, the sole fix is to request a shorter duration in <code>fs.s3a.assumed.role.session.duration</code>.</p></section><section>
|
|
<h3><a name="MalformedPolicyDocumentException_.E2.80.9CThe_policy_is_not_in_the_valid_JSON_format.E2.80.9D"></a><a name="malformed_policy"></a> <code>MalformedPolicyDocumentException</code> “The policy is not in the valid JSON format”</h3>
|
|
<p>The policy set in <code>fs.s3a.assumed.role.policy</code> is not valid according to the AWS specification of Role Policies.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>org.apache.hadoop.fs.s3a.AWSBadRequestException: Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on :
|
|
com.amazonaws.services.securitytoken.model.MalformedPolicyDocumentException:
|
|
The policy is not in the valid JSON format. (Service: AWSSecurityTokenService; Status Code: 400;
|
|
Error Code: MalformedPolicyDocument; Request ID: baf8cb62-f552-11e7-9768-9df3b384e40c):
|
|
MalformedPolicyDocument: The policy is not in the valid JSON format.
|
|
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: MalformedPolicyDocument;
|
|
Request ID: baf8cb62-f552-11e7-9768-9df3b384e40c)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:209)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
|
|
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
|
|
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
|
|
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
|
|
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
|
|
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
|
|
Caused by: com.amazonaws.services.securitytoken.model.MalformedPolicyDocumentException:
|
|
The policy is not in the valid JSON format.
|
|
(Service: AWSSecurityTokenService; Status Code: 400;
|
|
Error Code: MalformedPolicyDocument; Request ID: baf8cb62-f552-11e7-9768-9df3b384e40c)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
|
|
</pre></div></div>
|
|
</section><section>
|
|
<h3><a name="MalformedPolicyDocumentException_.E2.80.9CSyntax_errors_in_policy.E2.80.9D"></a><a name="policy_syntax_error"></a> <code>MalformedPolicyDocumentException</code> “Syntax errors in policy”</h3>
|
|
<p>The policy set in <code>fs.s3a.assumed.role.policy</code> is not valid JSON.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>org.apache.hadoop.fs.s3a.AWSBadRequestException:
|
|
Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on :
|
|
com.amazonaws.services.securitytoken.model.MalformedPolicyDocumentException:
|
|
Syntax errors in policy. (Service: AWSSecurityTokenService;
|
|
Status Code: 400; Error Code: MalformedPolicyDocument;
|
|
Request ID: 24a281e8-f553-11e7-aa91-a96becfb4d45):
|
|
MalformedPolicyDocument: Syntax errors in policy.
|
|
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: MalformedPolicyDocument;
|
|
Request ID: 24a281e8-f553-11e7-aa91-a96becfb4d45)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:209)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
|
|
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
|
|
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
|
|
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
|
|
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
|
|
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
|
|
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: MalformedPolicyDocument;
|
|
Request ID: 24a281e8-f553-11e7-aa91-a96becfb4d45)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
|
|
... 19 more
|
|
</pre></div></div>
|
|
</section><section>
|
|
<h3><a name="IOException:_.E2.80.9CAssumedRoleCredentialProvider_cannot_be_in_fs.s3a.assumed.role.credentials.provider.E2.80.9D"></a><a name="recursive_auth"></a> <code>IOException</code>: “AssumedRoleCredentialProvider cannot be in fs.s3a.assumed.role.credentials.provider”</h3>
|
|
<p>You can’t use the Assumed Role Credential Provider as the provider in <code>fs.s3a.assumed.role.credentials.provider</code>.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>java.io.IOException: AssumedRoleCredentialProvider cannot be in fs.s3a.assumed.role.credentials.provider
|
|
at org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider.<init>(AssumedRoleCredentialProvider.java:86)
|
|
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
|
|
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
|
|
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
|
|
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:583)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
|
|
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
|
|
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
|
|
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
|
|
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
|
|
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
|
|
</pre></div></div>
|
|
</section><section>
|
|
<h3><a name="AWSBadRequestException:_.E2.80.9Cnot_a_valid_key.3Dvalue_pair.E2.80.9D"></a><a name="invalid_keypair"></a> <code>AWSBadRequestException</code>: “not a valid key=value pair”</h3>
|
|
<p>There is a space or other typo in the <code>fs.s3a.access.key</code> or <code>fs.s3a.secret.key</code> values used for the inner authentication which is breaking signature creation.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre> org.apache.hadoop.fs.s3a.AWSBadRequestException: Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
|
|
on : com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
'valid/20180109/us-east-1/sts/aws4_request' not a valid key=value pair (missing equal-sign) in Authorization header:
|
|
'AWS4-HMAC-SHA256 Credential=not valid/20180109/us-east-1/sts/aws4_request,
|
|
SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;host;user-agent;x-amz-date.
|
|
(Service: AWSSecurityTokenService; Status Code: 400; Error Code:
|
|
IncompleteSignature; Request ID: c4a8841d-f556-11e7-99f9-af005a829416):IncompleteSignature:
|
|
'valid/20180109/us-east-1/sts/aws4_request' not a valid key=value pair (missing equal-sign)
|
|
in Authorization header: 'AWS4-HMAC-SHA256 Credential=not valid/20180109/us-east-1/sts/aws4_request,
|
|
SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;host;user-agent;x-amz-date,
|
|
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: IncompleteSignature;
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:209)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
|
|
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
|
|
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
|
|
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
|
|
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
|
|
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
|
|
|
|
Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
'valid/20180109/us-east-1/sts/aws4_request' not a valid key=value pair (missing equal-sign)
|
|
in Authorization header: 'AWS4-HMAC-SHA256 Credential=not valid/20180109/us-east-1/sts/aws4_request,
|
|
SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;host;user-agent;x-amz-date,
|
|
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: IncompleteSignature;
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
|
|
</pre></div></div>
|
|
</section><section>
|
|
<h3><a name="AccessDeniedException.2FInvalidClientTokenId:_.E2.80.9CThe_security_token_included_in_the_request_is_invalid.E2.80.9D"></a><a name="invalid_token"></a> <code>AccessDeniedException/InvalidClientTokenId</code>: “The security token included in the request is invalid”</h3>
|
|
<p>The credentials used to authenticate with the AWS Security Token Service are invalid.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>[ERROR] Failures:
|
|
[ERROR] java.nio.file.AccessDeniedException: : Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on :
|
|
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
The security token included in the request is invalid.
|
|
(Service: AWSSecurityTokenService; Status Code: 403; Error Code: InvalidClientTokenId;
|
|
Request ID: 74aa7f8a-f557-11e7-850c-33d05b3658d7):InvalidClientTokenId
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
|
|
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
|
|
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
|
|
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
|
|
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
|
|
|
|
Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
The security token included in the request is invalid.
|
|
(Service: AWSSecurityTokenService; Status Code: 403; Error Code: InvalidClientTokenId;
|
|
Request ID: 74aa7f8a-f557-11e7-850c-33d05b3658d7)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
|
|
... 25 more
|
|
</pre></div></div>
|
|
</section><section>
|
|
<h3><a name="AWSSecurityTokenServiceExceptiond:_.E2.80.9CMember_must_satisfy_regular_expression_pattern:_.5B.5Cw.2B.3D.2C..40-.5D.2A.E2.80.9D"></a><a name="invalid_session"></a> <code>AWSSecurityTokenServiceException</code>: “Member must satisfy regular expression pattern: <code>[\w+=,.@-]*</code>”</h3>
|
|
<p>The session name, as set in <code>fs.s3a.assumed.role.session.name</code>, must match the regular expression <code>[\w+=,.@-]*</code>.</p>
|
|
<p>If the property is unset, it is extracted from the current username and then sanitized to match these constraints. If set explicitly, it must be valid.</p>
|
|
|
|
<div class="source">
|
|
<div class="source">
|
|
<pre>org.apache.hadoop.fs.s3a.AWSBadRequestException:
|
|
Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on
|
|
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
1 validation error detected: Value 'Session Names cannot Hava Spaces!' at 'roleSessionName'
|
|
failed to satisfy constraint: Member must satisfy regular expression pattern: [\w+=,.@-]*
|
|
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: ValidationError;
|
|
Request ID: 7c437acb-f55d-11e7-9ad8-3b5e4f701c20):ValidationError:
|
|
1 validation error detected: Value 'Session Names cannot Hava Spaces!' at 'roleSessionName'
|
|
failed to satisfy constraint: Member must satisfy regular expression pattern: [\w+=,.@-]*
|
|
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: ValidationError;
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:209)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
|
|
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
|
|
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
|
|
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
|
|
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
|
|
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
|
|
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
|
|
|
|
Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
|
|
1 validation error detected: Value 'Session Names cannot Hava Spaces!' at 'roleSessionName'
|
|
failed to satisfy constraint:
|
|
Member must satisfy regular expression pattern: [\w+=,.@-]*
|
|
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: ValidationError;
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
|
|
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
|
|
</pre></div></div>
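<p>The fix is to set a session name which satisfies the pattern, either in configuration or on the command line; the value below is illustrative.</p>

<div class="source">
<div class="source">
<pre>hadoop fs \
  -D fs.s3a.assumed.role.session.name=alice-analytics \
  -ls s3a://example-bucket/
</pre></div></div>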
</section><section>
<h3><a name="java.nio.file.AccessDeniedException_within_a_FileSystem_API_call"></a><a name="access_denied"></a> <code>java.nio.file.AccessDeniedException</code> within a FileSystem API call</h3>
<p>If an operation fails with an <code>AccessDeniedException</code>, then the role does not have the permission for the S3 Operation invoked during the call.</p>
<div class="source">
<div class="source">
<pre>java.nio.file.AccessDeniedException: s3a://bucket/readonlyDir:
rename(s3a://bucket/readonlyDir, s3a://bucket/renameDest)
on s3a://bucket/readonlyDir:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 2805F2ABF5246BB1;
S3 Extended Request ID: iEXDVzjIyRbnkAc40MS8Sjv+uUQNvERRcqLsJsy9B0oyrjHLdkRKwJ/phFfA17Kjn483KSlyJNw=),
S3 Extended Request ID: iEXDVzjIyRbnkAc40MS8Sjv+uUQNvERRcqLsJsy9B0oyrjHLdkRKwJ/phFfA17Kjn483KSlyJNw=:AccessDenied
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:216)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:143)
at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:853)
...
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 2805F2ABF5246BB1;
S3 Extended Request ID: iEXDVzjIyRbnkAc40MS8Sjv+uUQNvERRcqLsJsy9B0oyrjHLdkRKwJ/phFfA17Kjn483KSlyJNw=),
S3 Extended Request ID: iEXDVzjIyRbnkAc40MS8Sjv+uUQNvERRcqLsJsy9B0oyrjHLdkRKwJ/phFfA17Kjn483KSlyJNw=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
</pre></div></div>
<p>This is the policy restriction behaving as intended: the caller is trying to perform an action which is forbidden.</p>
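<p>When <code>fs.s3a.assumed.role.policy</code> is in use, it must spell out every permission the workload needs. As a hedged sketch, a policy granting read-only access to a single bucket might look like the following (the bucket name is a placeholder):</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::example-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```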
<ol style="list-style-type: decimal">
<li>
<p>If a policy has been set in <code>fs.s3a.assumed.role.policy</code> then it must declare <i>all</i> permissions which the caller is allowed to perform. The existing role policies act as an outer constraint on what the caller can perform, but are not inherited.</p>
</li>
<li>
<p>If the policy for a bucket is set up with complex rules on different paths, check the path for the operation.</p>
</li>
<li>
<p>The policy may have omitted one or more required actions. Make sure that read and write permissions are allowed for every bucket/path to which data is written, and read permissions for every bucket read from.</p>
</li>
</ol></section><section>
<h3><a name="AccessDeniedException_When_working_with_KMS-encrypted_data"></a><a name="access_denied_kms"></a> <code>AccessDeniedException</code> When working with KMS-encrypted data</h3>
<p>If the bucket is using SSE-KMS to encrypt data:</p>
<ol style="list-style-type: decimal">
<li>The caller must have the <code>kms:Decrypt</code> permission to read the data.</li>
<li>The caller needs <code>kms:Decrypt</code> and <code>kms:GenerateDataKey</code> to write data.</li>
</ol>
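<p>Both permissions can be granted with a policy statement along the following lines (a sketch only; the key ARN, account and region are placeholders):</p>

```json
{
  "Effect": "Allow",
  "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
  "Resource": "arn:aws:kms:us-west-2:123456789012:key/EXAMPLE-KEY-ID"
}
```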
<p>Without these permissions, the request fails <i>and there is no explicit message indicating that this is an encryption-key issue</i>.</p>
<p>This problem is most obvious when a write fails in a “Writing Object” operation.</p>
<p>If the client does have write access to the bucket, verify that the caller has <code>kms:GenerateDataKey</code> permissions for the encryption key in use.</p>
<div class="source">
<div class="source">
<pre>java.nio.file.AccessDeniedException: test/testDTFileSystemClient: Writing Object on test/testDTFileSystemClient:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403;
Error Code: AccessDenied; Request ID: E86544FF1D029857)

at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:243)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:314)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:406)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:310)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:285)
at org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:150)
at org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:460)
at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:438)
at org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
at org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403;
Error Code: AccessDenied; Request ID: E86544FF1D029857)
</pre></div></div>
<p>Note: the ability to read encrypted data in the store does not guarantee that the caller can encrypt new data. It is a separate permission.</p></section><section>
<h3><a name="Error_Unable_to_execute_HTTP_request"></a>Error <code>Unable to execute HTTP request</code></h3>
<p>This is a low-level networking error. Possible causes include:</p>
<ul>
<li>The endpoint set in <code>fs.s3a.assumed.role.sts.endpoint</code> is invalid.</li>
<li>There are underlying network problems.</li>
</ul>
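<p>When this surfaces, it can help to rule out DNS and routing problems from outside the JVM. Below is a minimal sketch (an illustrative helper, not part of S3A) which distinguishes an endpoint name that does not resolve from one that resolves but cannot be connected to:</p>

```python
import socket

def probe_endpoint(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Classify basic TCP reachability of an STS endpoint hostname."""
    try:
        socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    except socket.gaierror:
        return "dns-failure"      # the endpoint name does not resolve
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "reachable"
    except OSError:
        return "connect-failure"  # resolves, but no TCP connection possible

# ".invalid" is a reserved TLD which should never resolve:
print(probe_endpoint("invalid.invalid"))
```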
<div class="source">
<div class="source">
<pre>org.apache.hadoop.fs.s3a.AWSClientIOException: request session credentials:
com.amazonaws.SdkClientException:

Unable to execute HTTP request: null: Unable to execute HTTP request: null
at com.amazonaws.thirdparty.apache.http.impl.conn.DefaultRoutePlanner.determineRoute(DefaultRoutePlanner.java:88)
at com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.determineRoute(InternalHttpClient.java:124)
at com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:183)
at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
</pre></div></div>
</section><section>
<h3><a name="Error_.E2.80.9CCredential_should_be_scoped_to_a_valid_region.E2.80.9D"></a><a name="credential_scope"></a> Error “Credential should be scoped to a valid region”</h3>
<p>This is caused by a conflict between the values of <code>fs.s3a.assumed.role.sts.endpoint</code> and <code>fs.s3a.assumed.role.sts.endpoint.region</code>. The message comes in two variants, differing in the quoted region.</p>
<p>Variant 1: <code>Credential should be scoped to a valid region, not 'us-west-1'</code> (or other string)</p>
<div class="source">
<div class="source">
<pre>java.nio.file.AccessDeniedException: : request session credentials:
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
Credential should be scoped to a valid region, not 'us-west-1'.
(Service: AWSSecurityTokenService; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: d9065cc4-e2b9-11e8-8b7b-f35cb8d7aea4):SignatureDoesNotMatch
</pre></div></div>
<p>One of:</p>
<ul>
<li>the value of <code>fs.s3a.assumed.role.sts.endpoint.region</code> is not a valid region</li>
<li>the value of <code>fs.s3a.assumed.role.sts.endpoint.region</code> is not the signing region of the endpoint set in <code>fs.s3a.assumed.role.sts.endpoint</code></li>
</ul>
<p>Variant 2: <code>Credential should be scoped to a valid region, not ''</code></p>
<div class="source">
<div class="source">
<pre>java.nio.file.AccessDeniedException: : request session credentials:
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
Credential should be scoped to a valid region, not ''. (
Service: AWSSecurityTokenService; Status Code: 403; Error Code: SignatureDoesNotMatch;
Request ID: bd3e5121-e2ac-11e8-a566-c1a4d66b6a16):SignatureDoesNotMatch
</pre></div></div>
<p>This should be intercepted earlier: an endpoint has been specified but not a region.</p>
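<p>Both variants are avoided by declaring the endpoint and its signing region together. A sketch pairing a regional STS endpoint with its region (substitute your own region):</p>

```xml
<property>
  <name>fs.s3a.assumed.role.sts.endpoint</name>
  <value>sts.eu-west-2.amazonaws.com</value>
</property>
<property>
  <name>fs.s3a.assumed.role.sts.endpoint.region</name>
  <value>eu-west-2</value>
</property>
```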
<p>There’s special handling for the central <code>sts.amazonaws.com</code> endpoint: when that is declared as the value of <code>fs.s3a.assumed.role.sts.endpoint</code>, then there is no need to declare a region; whatever value <code>fs.s3a.assumed.role.sts.endpoint.region</code> has is ignored.</p></section></section>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
© 2008-2023
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>