Commit Graph

521 Commits

Author SHA1 Message Date
Mehakmeet Singh abb367aec6
HADOOP-17817. HADOOP-17823. S3A to raise IOE if both S3-CSE and S3Guard enabled (#3239)
S3A S3Guard tests to skip if S3-CSE is enabled (#3263)

    Follow on to
    * HADOOP-13887. Encrypt S3A data client-side with AWS SDK (S3-CSE)

    If the S3A bucket is set up to use S3-CSE encryption, all tests which turn
    on S3Guard are skipped, so they don't raise any exceptions about
    incompatible configurations.

Contributed by Mehakmeet Singh

Change-Id: I9f4188109b56a1f4e5a31fae265d980c5795db1e
2021-10-05 11:38:57 +01:00
Mehakmeet Singh aee975a136
HADOOP-13887. Support S3 client side encryption (S3-CSE) using AWS-SDK (#2706)
This (big!) patch adds support for client side encryption in AWS S3,
with keys managed by AWS-KMS.

Read the documentation in encryption.md very, very carefully before
use and consider it unstable.

S3-CSE is enabled in the existing configuration option
"fs.s3a.server-side-encryption-algorithm":

fs.s3a.server-side-encryption-algorithm=CSE-KMS
fs.s3a.server-side-encryption.key=<KMS_KEY_ID>

You cannot enable CSE and SSE in the same client, although
you can still enable a default SSE option in the S3 console.
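
For illustration, a minimal Java sketch of setting these options programmatically before creating the filesystem; the bucket URI and KMS key ID are placeholders:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class EnableCse {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The existing SSE option is reused to select client-side encryption.
        conf.set("fs.s3a.server-side-encryption-algorithm", "CSE-KMS");
        // KMS key used for the data keys; placeholder value.
        conf.set("fs.s3a.server-side-encryption.key", "arn:aws:kms:EXAMPLE-KEY-ID");
        FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
        System.out.println("Created " + fs.getUri());
      }
    }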

* Filesystem list/get status operations subtract 16 bytes from the length
  of all files >= 16 bytes long to compensate for the padding which CSE
  adds.
* The SDK always warns that the specific algorithm chosen is
  deprecated. That algorithm is required for ranged
  GET requests (i.e. random IO) to work, so ignore the warning.
* Unencrypted files CANNOT BE READ.
  The entire bucket SHOULD be encrypted with S3-CSE.
* Uploading files may be a bit slower as blocks are now
  written sequentially.
* The Multipart Upload API is disabled when S3-CSE is active.

Contributed by Mehakmeet Singh

Change-Id: Ie1a27a036a39db66a67e9c6d33bc78d54ea708a0
2021-10-05 11:37:41 +01:00
Steve Loughran a2242df10a
HADOOP-17894. CredentialProviderFactory.getProviders() recursion loading JCEKS file from S3A (#3393)
* CredentialProviderFactory to detect and report on recursion.
* S3AFS to remove incompatible providers.
* Integration Test for this.

Contributed by Steve Loughran.

Change-Id: Ia247b3c9fe8488ffdb7f57b40eb6e37c57e522ef
2021-09-08 17:00:20 +01:00
Dongjoon Hyun 8606b2cddd
HADOOP-17869. `fs.s3a.connection.maximum` should be bigger than `fs.s3a.threads.max` (#3337).
The value of `fs.s3a.connection.maximum` has been increased to 96
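
A hedged sketch of keeping the two options consistent in code; the values below are examples, not recommendations:

    import org.apache.hadoop.conf.Configuration;

    public class ConnectionPoolSizing {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setInt("fs.s3a.threads.max", 64);        // example worker thread pool size
        conf.setInt("fs.s3a.connection.maximum", 96); // keep the HTTP connection pool bigger than the thread pool
        System.out.println(conf.get("fs.s3a.connection.maximum"));
      }
    }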

Contributed by Dongjoon Hyun

Change-Id: I9020a2bfd2a67fa7a2ec0598ed9d63e78ee99c73
2021-08-30 18:31:57 +01:00
Steve Loughran c1ad91e72d
HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature (#3249)
Fixes the regression caused by HADOOP-17511 by moving where the
option fs.s3a.acl.default is read, doing it before the RequestFactory
is created.

Adds

* A unit test in TestRequestFactory to verify the ACLs are set
  on all file write operations.
* A new ITestS3ACannedACLs test which verifies that ACLs really
  do get all the way through.
* S3A Assumed Role delegation tokens to include the IAM permission
  s3:PutObjectAcl in the generated role.

Contributed by Steve Loughran

Change-Id: I3abac6a1b9e150b6b6df0af7c2c70093f8f518cb
2021-08-02 15:33:34 +01:00
Steve Loughran 26514b6534 HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out. (#3240)
This patch cuts down the size of directory trees used for
distcp contract tests against object stores, so making
them much faster against distant/slow stores.

On abfs, the test only runs with -Dscale (as was the case for s3a already),
and has the larger scale test timeout.

After every test case, the FileSystem IOStatistics are logged,
to provide information about what IO is taking place and
what its performance is.

There are some test cases which upload files of 1+ MiB; you can
increase the size of the upload via the option
"scale.test.distcp.file.size.kb".
Set it to zero to skip the large file tests.

Contributed by Steve Loughran.
2021-08-02 12:58:37 +01:00
Bobby Wang 904cdd0b00
HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store (#3222)
This improves error handling after multiple failures reading data:
when the read fails and attempts to reconnect() also fail.

Contributed by Bobby Wang.

Change-Id: If17dee395ad6b9b7c738021bad20d0a13eb4011e
2021-08-02 12:58:25 +01:00
Petre Bogdan Stolojan f2cec5cb88
HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem (#3101)
This work
* Defines the behavior of FileSystem.copyFromLocal in filesystem.md
* Adds a high-performance implementation of copyFromLocalOperation
  for S3
* Adds a contract test for the operation: AbstractContractCopyFromLocalTest
* Implements the contract tests for Local and S3A FileSystems

Contributed by: Bogdan Stolojan

Change-Id: I25d502102775c3626c4264e5a14c649879730050
2021-08-02 11:58:36 +01:00
Petre Bogdan Stolojan e89d30b6b7
HADOOP-17458. S3A to treat "SdkClientException: Data read has a different length than the expected" as EOFException (#3040)
Some network exceptions can raise SdkClientException with message
`Data read has a different length than the expected`.

These should be recoverable.

Contributed by Bogdan Stolojan

Change-Id: Ia22fd77d90971e9e02b4f947398a4749eebe5909
2021-07-23 14:46:59 +01:00
Mehakmeet Singh 14a3e74c5c
HADOOP-17801. No error message reported when bucket doesn't exist in S3AFS (#3202)
Contributed by: Mehakmeet Singh.

Change-Id: I26c2a85ef6bbfd1b8269a23fc44d9a55d7fa091c
2021-07-16 15:36:54 +01:00
Mehakmeet Singh cd15b0cb8a HADOOP-17803. Remove WARN logging from LoggingAuditor when executing a request outside an audit span (#3207)
Followup to HADOOP-17511. "Add audit/telemetry logging to S3A connector"

Contributed by Mehakmeet Singh
2021-07-16 11:52:37 +01:00
Mehakmeet Singh f1a14df9e6
HADOOP-17774. S3A bytesRead FS statistic showing twice the correct value (#3144)
Contributed by: Mehakmeet Singh

Change-Id: I3302654ca36474a5f399aa848f88bce4587022d8
2021-07-02 14:13:26 +01:00
Zamil Majdy 80859d714d
HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt (#3109)
Contributed by Zamil Majdy.

Change-Id: I680d9c425c920ff1a7cd4764d62e10e6ac78bee4
2021-06-25 20:47:11 +01:00
Steve Loughran 39e6f2d191
HADOOP-17771. S3AFS creation fails "Unable to find a region via the region provider chain." (#3133)
This addresses the regression in Hadoop 3.3.1 where if no S3 endpoint
is set in fs.s3a.endpoint, S3A filesystem creation may fail on
non-EC2 deployments, depending on the local host environment setup.

* If fs.s3a.endpoint is empty/null, and fs.s3a.endpoint.region
  is null, the region is set to "us-east-1".
* If fs.s3a.endpoint.region is explicitly set to "" then the client
  falls back to the SDK region resolution chain; this works on EC2
* Details in troubleshooting.md, including a workaround for Hadoop 3.3.1+
* Also contains some minor restructuring of troubleshooting.md

Contributed by Steve Loughran.

Change-Id: Ife482cff513307cd52d59eec56beac0a33e031f5
2021-06-24 16:38:55 +01:00
Petre Bogdan Stolojan 254d943126
HADOOP-17547 Magic committer to downgrade abort in cleanup if list uploads fails with access denied (#3051)
Contributed by Bogdan Stolojan

Change-Id: I32d6dc4f72087783a3ea12473d11690ac14fe3cb
2021-06-12 17:46:11 +01:00
Steve Loughran 464bbd5b7c
HADOOP-17511. Add audit/telemetry logging to S3A connector (#2807)
The S3A connector supports
"an auditor", a plugin which is invoked
at the start of every filesystem API call,
and whose issued "audit span" provides a context
for all REST operations against the S3 object store.

The standard auditor sets the HTTP Referrer header
on the requests with information about the API call,
such as process ID, operation name, path,
and even job ID.

If the S3 bucket is configured to log requests, this
information will be preserved there and so can be used
to analyze and troubleshoot storage IO.

Contributed by Steve Loughran.

Change-Id: Ic0a105c194342ed2d529833ecc42608e8ba2f258
2021-05-25 12:55:38 +01:00
Mehakmeet Singh b82a0fa9e6
HADOOP-17705. S3A to add Config to set AWS region (#3020)
The option `fs.s3a.endpoint.region` can be used
to explicitly set the AWS region of a bucket.

This is needed when using AWS Private Link, as
the region cannot be automatically determined.
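
A minimal sketch of setting the new option, assuming a hypothetical bucket and an example region:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ExplicitRegion {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.s3a.endpoint.region", "eu-west-1"); // example region
        FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
        System.out.println(fs.getUri());
      }
    }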

Contributed by Mehakmeet Singh

Change-Id: I4b52f85d7af0ddd56b2b0505ac0124d4fcc67ca0
2021-05-24 13:13:37 +01:00
Mehakmeet Singh a786847b8f
HADOOP-17670. S3AFS and ABFS to log IOStats at DEBUG mode or optionally at INFO level in close() (#2963)
When the S3A and ABFS filesystems are closed,
their IOStatistics are logged at debug in the log:

org.apache.hadoop.fs.statistics.IOStatisticsLogging

Set `fs.iostatistics.logging.level` to `info` for the statistics
to be logged at info (or to `warn`/`error` for even higher
log levels).
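
A sketch of requesting INFO-level statistics logging, assuming an example s3a bucket:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class LogIOStatsOnClose {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.iostatistics.logging.level", "info");
        FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
        fs.close(); // the filesystem's IOStatistics are logged when it is closed
      }
    }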

Contributed by: Mehakmeet Singh

Change-Id: I56d44ad89fc1c0dd4baf701681834e7fd96c544f
2021-05-24 13:04:20 +01:00
Wei-Chiu Chuang fa4915fdbb
Preparing for 3.3.2 development 2021-05-19 21:52:37 +08:00
Steve Loughran e944cc0338
HADOOP-16742. NullPointerException in S3A MultiObjectDeleteSupport
Contributed by Tor Arvid Lund.

Change-Id: Iadfe9b2f355cf373031075bfbe681705a2c65bdc
2021-05-04 16:15:13 +01:00
Steve Loughran 8308aab658
HADOOP-17112. S3A committers can't handle whitespace in paths. (#2953)
Contributed by Krzysztof Adamski.

Change-Id: I2746aabcfeb0fbb138a80b02c4d5bbf2a8cf75da
2021-04-25 18:43:25 +01:00
Steve Loughran fb71e6c91e
HADOOP-17597. Optionally downgrade on S3A Syncable calls (#2801)
Followup to HADOOP-13327, which changed S3A output stream hsync/hflush calls
to raise an exception.

Adds a new option fs.s3a.downgrade.syncable.exceptions

When true, calls to Syncable hsync/hflush on S3A output streams will
log once at warn (for the entire process life, not just the stream), then
increment IOStats with the relevant operation counter.

With the downgrade option false (default):
* IOStats are incremented
* The UnsupportedOperationException currently raised includes a link to the
  JIRA.
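
A minimal sketch of the downgrade option in use, assuming an example bucket and path:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DowngradeSyncable {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setBoolean("fs.s3a.downgrade.syncable.exceptions", true);
        FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
        try (FSDataOutputStream out = fs.create(new Path("/tmp/example"))) {
          out.write("data".getBytes());
          // With the downgrade enabled this warns once and updates IOStats,
          // rather than raising UnsupportedOperationException.
          out.hsync();
        }
      }
    }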

Contributed by Steve Loughran.

Change-Id: I967e077eda1d1a1a3795b4d22e003fe7997b6679
2021-04-24 18:32:39 +01:00
Gabor Bota 1b2bc77923
HADOOP-17454. [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0 (#2593)
Also fixes HADOOP-16995. ITestS3AConfiguration proxy test failures when bucket probes == 0.
The improvement needs to include the fix, because the test would otherwise fail by default.

Change-Id: I9a7e4b5e6d4391ebba096c15e84461c038a2ec59
2021-04-24 18:28:21 +01:00
Mukund Thakur 33f9ceb3cc HADOOP-17136. ITestS3ADirectoryPerformance.testListOperations failing (#2153)
A regression caused by HADOOP-17022: the reduction in LIST calls broke an assertion.

Contributed by Mukund Thakur

Change-Id: Ib9725165906931634567fd1f62a81e3a6ea5620c
2021-04-24 18:28:14 +01:00
Ayush Saxena 9c9b16c957
HADOOP-17531. DistCp: Reduce memory usage on copying huge directories. (#2808). Contributed by Ayush Saxena.
* HADOOP-17531. DistCp: Reduce memory usage on copying huge directories. (#2732).

* HADOOP-17531.Addendum: DistCp: Reduce memory usage on copying huge directories. (#2820)

Signed-off-by: Steve Loughran <stevel@apache.org>
2021-03-27 09:25:25 +05:30
Steve Loughran a07e3c41ca
HADOOP-13551. AWS metrics wire-up (#2778)
Moves to the builder API for AWS S3 client creation, and
offers a similar style of API to the S3A FileSystem and tests, hiding
the details of which options are client, which are in AWS Conf,
and doing the wiring up of S3A statistics interfaces to the AWS
SDK internals. S3A Statistics, including IOStatistics, should now
count throttling events handled in the AWS SDK itself.

This patch restores endpoint determination by probes to US-East-1
if the client isn't configured with fs.s3a.endpoint.

Explicitly setting the endpoint will save the cost of these probe
HTTP requests.

Contributed by Steve Loughran.

Change-Id: Ifa6caa8ff56369ad30e4fd01a42bc74f7b8b3d6b
2021-03-25 13:59:33 +00:00
Mukund Thakur 4c3324ca1a
HADOOP-17305. Fix ITestCustomSigner to work with s3 compatible endpoints (#2395)
Contributed by Mukund Thakur

Change-Id: Ia5def405056691c349cf05530fd3172047d2058b
2021-03-25 13:59:33 +00:00
Steve Loughran 9bbf1f87c9
HADOOP-17476. ITestAssumeRole.testAssumeRoleBadInnerAuth failure. (#2777)
Contributed by Steve Loughran.

Change-Id: Ie96ca99f5d91e5a6aaea4cae4c2e850de9fddb01
2021-03-24 16:49:41 +00:00
Steve Loughran 469fcdaf8f HADOOP-16721. Improve S3A rename resilience (#2742)
The S3A connector's rename() operation now raises FileNotFoundException if
the source doesn't exist; a FileAlreadyExistsException if the destination
exists and is unsuitable for the source file/directory.

When renaming to a path which does not exist, the connector no longer checks
for the destination parent directory existing -instead it simply verifies
that there is no file immediately above the destination path.
This is needed to avoid race conditions with delete() and rename()
calls working on adjacent subdirectories.

Contributed by Steve Loughran.
2021-03-11 12:54:15 +00:00
Akira Ajisaka de2904f123
HADOOP-16870. Use spotbugs-maven-plugin instead of findbugs-maven-plugin (#2753)
Removed findbugs from the hadoop build images and added spotbugs instead.
Upgraded SpotBugs to 4.2.2 and spotbugs-maven-plugin to 4.2.0.

Reviewed-by: Masatake Iwasaki <iwasakims@apache.org>
(cherry picked from commit 23b343aed1)

 Conflicts:
	dev-support/docker/Dockerfile
	hadoop-project/pom.xml
2021-03-11 14:57:03 +09:00
Pierrick Hymbert 0e82d42d2c [HADOOP-17567] typo in MagicCommitTracker (#2749)
Contributed by Pierrick Hymbert
2021-03-10 15:41:21 +00:00
Steve Loughran 4423a7e736
HADOOP-16906. Abortable (#2684)
Adds an Abortable.abort() interface for streams to enable output streams to be terminated; this
is implemented by the S3A connector's output stream. It allows for commit protocols
to be implemented which commit/abort work by writing to the final destination and
using the abort() call to cancel any write which is not intended to be committed.
Consult the specification document for information about the interface and its use.
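
A minimal sketch of the interface in use, assuming FSDataOutputStream forwards abort() to the wrapped S3A stream as described above; bucket and path are placeholders:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AbortUpload {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/uncommitted"));
        out.write("work we decide not to commit".getBytes());
        out.abort();  // discard the upload; nothing becomes visible at the destination
        out.close();  // per the specification, a close() after abort() is expected to be harmless
      }
    }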

Contributed by Jungtaek Lim and Steve Loughran.

Change-Id: I7fcc25e9dd8c10ce6c29f383529f3a2642a201ae
2021-02-17 11:29:19 +00:00
Steve Loughran 98e4d516ea
HADOOP-13327 Output Stream Specification. (#2587)
This defines what output streams and especially those which implement
Syncable are meant to do, and documents where implementations (HDFS; S3)
don't. With tests.

The file:// FileSystem now supports Syncable if an application calls
FileSystem.setWriteChecksum(false) before creating a file: checksumming
and Syncable.hsync() are incompatible.
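
A minimal sketch of that sequence against the local filesystem; the output path is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LocalHsync {
      public static void main(String[] args) throws Exception {
        FileSystem local = FileSystem.getLocal(new Configuration());
        local.setWriteChecksum(false); // checksumming and hsync() are incompatible
        try (FSDataOutputStream out = local.create(new Path("/tmp/synced.txt"))) {
          out.write("durable bytes".getBytes());
          out.hsync(); // supported because checksums were disabled before create()
        }
      }
    }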

Contributed by Steve Loughran.

Change-Id: I892d768de6268f4dd6f175b3fe3b7e5bcaa91194
2021-02-10 10:31:22 +00:00
Steve Loughran 70411cb1f1
HADOOP-17337. S3A NetworkBinding has a runtime dependency on shaded httpclient. (#2599)
Contributed by Steve Loughran.

Change-Id: I0471322fc88d8bc3896ac439aefb31e6a856936c
2021-02-03 14:32:55 +00:00
Steve Loughran 2d124f2f5e HADOOP-17483. Magic committer is enabled by default. (#2656)
* core-default.xml updated so that fs.s3a.committer.magic.enabled = true
* CommitConstants updated to match
* All tests which previously enabled the magic committer now rely on
  default settings. This helps make sure it is enabled.
* Docs cover the switch, mention that it is enabled, and explain why you may
  want to disable it.
Note: this doesn't switch to using the committer; it just enables the path
rewriting magic which it depends on.
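
A sketch of the switch itself, should you want to restore the old default; the option name is taken from the text above:

    import org.apache.hadoop.conf.Configuration;

    public class MagicCommitterSwitch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Default is now true; set false to restore the previous behaviour.
        conf.setBoolean("fs.s3a.committer.magic.enabled", false);
        System.out.println(conf.getBoolean("fs.s3a.committer.magic.enabled", true));
      }
    }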

Contributed by Steve Loughran.
2021-01-27 19:05:07 +00:00
Steve Loughran 3e1eb16837
HADOOP-17493. Revert name of DELEGATION_TOKENS_ISSUED constant/statistic (#2649)
Follow-on to HADOOP-16830/HADOOP-17271.

Contributed by Steve Loughran.

Change-Id: I16db6e788c9fd628d3295671d7c2861c249d5ef1
2021-01-27 16:40:27 +00:00
Steve Loughran fb603e81f0
HADOOP-17414. Magic committer files don't have the count of bytes written collected by spark (#2530)
This needs SPARK-33739 in the matching spark branch in order to work

Contributed by Steve Loughran.

Change-Id: I4fe75b057159e35aacc072da3cb7343467c0c3f1
2021-01-26 19:42:16 +00:00
Steve Loughran bd85f6acea
HADOOP-17480. Document that AWS S3 is consistent and that S3Guard is not needed (#2636)
Contributed by Steve Loughran.

Change-Id: I775e3ee7b60665240ec621859c337b053f747a49
2021-01-25 13:24:34 +00:00
Maksim Bober 763157dd12
HADOOP-17484. Typo in hadop-aws index.md (#2634)
Contributed by Maksim Bober.

Change-Id: Ic5196a64abc68566a3542e9ff96042593f081bdd
2021-01-21 17:32:03 +00:00
Steve Loughran b645e58de2
HADOOP-17433. Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole. (#2600)
Contributed by Steve Loughran.

Change-Id: Iece617be78e80fc7e956074eddf171f7763a2e66
2021-01-19 17:20:28 +00:00
Steve Loughran 56576f080b
HADOOP-17451. IOStatistics test failures in S3A code. (#2594)
Caused by HADOOP-16830 and HADOOP-17271.

Fixes tests which fail intermittently based on configs; in
the case of the HugeFile tests, bulk runs with existing
FS instances meant statistic probes sometimes ended up probing those
of a previous FS.

Contributed by Steve Loughran.

Change-Id: I65ba3f44444e59d298df25ac5c8dc5a8781dfb7d
2021-01-14 13:21:20 +00:00
Steve Loughran 240b25310e
HADOOP-17271. S3A connector to support IOStatistics. (#2580)
S3A connector to support the IOStatistics API of HADOOP-16830,

This is a major rework of the S3A Statistics collection to

* Embrace the IOStatistics APIs
* Move from direct references of S3AInstrumentation statistics
  collectors to interface/implementation classes in new packages.
* Ubiquitous support of IOStatistics, including:
  S3AFileSystem, input and output streams, RemoteIterator instances
  provided in list calls.
* Adoption of new statistic names from hadoop-common

Regarding statistic collection, as well as all existing
statistics, the connector now records min/max/mean durations
of HTTP GET and HEAD requests, and those of LIST operations.
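
A hedged sketch of reading the statistics back, assuming the IOStatisticsSupport/IOStatisticsLogging helpers from HADOOP-16830; bucket and file are placeholders:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.statistics.IOStatistics;
    import org.apache.hadoop.fs.statistics.IOStatisticsLogging;
    import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

    public class ReadIOStats {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), new Configuration());
        try (FSDataInputStream in = fs.open(new Path("/data/example.csv"))) {
          in.read(new byte[4096]);
          // Streams, filesystems and list iterators are all IOStatistics sources.
          IOStatistics stats = IOStatisticsSupport.retrieveIOStatistics(in);
          System.out.println(IOStatisticsLogging.ioStatisticsToPrettyString(stats));
        }
      }
    }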

Contributed by Steve Loughran.

Change-Id: I182d34b6ac39e017a8b4a221dad8e930882b39cf
2021-01-14 13:21:01 +00:00
yzhangal adf6ca18b4
HADOOP-17338. Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc (#2497)
Contributed by Yongjun Zhang <yongjunzhang@pinterest.com>

Change-Id: Ibbc6a39afb82de1208e6ed6a63ede224cc425466
2020-12-19 12:24:16 +00:00
Chao Sun 81e533de8f
HADOOP-16080. hadoop-aws does not work with hadoop-client-api. Contributed by Chao Sun (#2522) 2020-12-12 09:37:13 -08:00
Mukund Thakur e4cab4b7a3
HADOOP-17186. Fixing javadoc in ListingOperationCallbacks (#2196)
(cherry picked from commit ac697571a1)
2020-12-10 18:32:22 +09:00
Ayush Saxena 8378ab9f92 HADOOP-17288. Use shaded guava from thirdparty. Contributed by Ayush Saxena. #2505 2020-12-10 05:50:55 +05:30
Mukund Thakur 3ef0e3d615 HADOOP-17398. Skipping network I/O in S3A getFileStatus(/) breaks some tests (#2493)
Follow-on to HADOOP-17323.

Contributed by Mukund Thakur.
2020-11-26 20:26:44 +00:00
Steve Loughran 1e59bf7394
HADOOP-17385. ITestS3ADeleteCost.testDirMarkersFileCreation failure (#2473).
Contributed by Steve Loughran

The addition of deprecated S3A configuration options in HADOOP-17318
triggered a reload of default (xml resource) configurations, which breaks
tests which fail if there's a per-bucket setting inconsistent with test
setup.

Creating an S3AFS instance before creating the Configuration() instance
for test runs gets that reload out of the way before test setup takes
place.

Along with the fix, extra changes in the failing test suite to fail
fast when marker policy isn't as expected, and to log FS state better.

Rather than create and discard an instance, add a new static method
to S3AFS and invoke it in test setup. This forces the load.

Change-Id: Id52b1c46912c6fedd2ae270e2b1eb2222a360329
2020-11-26 17:28:01 +00:00
Steve Loughran 1eeb9d9d67
HADOOP-17318. Support concurrent S3A commit jobs with same app attempt ID. (#2399)
See also [SPARK-33402]: Jobs launched in same second have duplicate MapReduce JobIDs

Contributed by Steve Loughran.

Change-Id: Iae65333cddc84692997aae5d902ad8765b45772a
2020-11-26 17:22:56 +00:00
Steve Loughran 1ef34d0819
HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients. (#2396)
This adds a semaphore to throttle the number of FileSystem instances which
can be created simultaneously, set in "fs.creation.parallel.count".

This is designed to reduce the impact of many threads in an application calling
FileSystem.get() on a filesystem which takes time to instantiate, for example
an object store where HTTPS connections are set up during initialization.
Many threads trying to do this may create spurious delays by conflicting
for access to synchronized blocks, when simply limiting the parallelism
diminishes the conflict, so speeds up all threads trying to access
the store.

The default value, 64, is larger than is likely to deliver any speedup, but
it does mean that there should be no adverse effects from the change.

If a service appears to be blocking on all threads initializing connections to
abfs, s3a or another store, try a smaller (possibly significantly smaller) value.
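
A sketch of tuning the new option; the value 8 is only an example:

    import org.apache.hadoop.conf.Configuration;

    public class FsCreationThrottle {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Default is 64; a much smaller value can help if many threads block
        // in FileSystem.get() against a slow-to-initialize store.
        conf.setInt("fs.creation.parallel.count", 8);
        System.out.println(conf.getInt("fs.creation.parallel.count", 64));
      }
    }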

Contributed by Steve Loughran.

Change-Id: I57161b026f28349e339dc8b9d74f6567a62ce196
2020-11-25 14:55:29 +00:00
Mukund Thakur 9dd74141a6
HADOOP-17323. S3A getFileStatus("/") to skip IO (#2479)
Contributed by Mukund Thakur.

Change-Id: I1709ad72b829999b6dd324f0755b51bc38918d30
2020-11-24 11:34:19 +00:00
Steve Loughran 38cc47d308
HADOOP-17332. S3A MarkerTool -min and -max are inverted. (#2425)
This patch
* fixes the inversion
* adds a precondition check
* if the commands are supplied inverted, swaps them with a warning.
  This is to stop breaking any tests written to cope with the existing
  behavior.

Contributed by Steve Loughran

Change-Id: I15c40863f0db0675c7d60db477cb3bf1693cae49
2020-11-23 21:49:33 +00:00
Steve Loughran e4bc64cce0 HADOOP-17343. Upgrade AWS SDK to 1.11.901 (#2468)
Contributed by Steve Loughran.
2020-11-23 14:09:14 +00:00
Jungtaek Lim 401cadbac5
HADOOP-17388. AbstractS3ATokenIdentifier to issue date in UTC. (#2477)
Followup to HADOOP-17379.

Contributed by Jungtaek Lim.

Change-Id: I7b2fce36028d297c1e095499691a08caba92d9fd
2020-11-20 10:56:57 +00:00
Steve Loughran 4687c25389 HADOOP-17244. S3A directory delete tombstones dir markers prematurely. (#2310)
This fixes the S3Guard/Directory Marker Retention integration so that when
fs.s3a.directory.marker.retention=keep, failures during multipart delete
are handled correctly, as are incremental deletes during
directory tree operations.

In both cases, when a directory marker with children is deleted from
S3, the directory entry in S3Guard is not deleted, because it is still
critical to representing the structure of the store.

Contributed by Steve Loughran.

Change-Id: I4ca133a23ea582cd42ec35dbf2dc85b286297d2f
2020-11-18 12:30:43 +00:00
Steve Loughran 4bb9d593da
HADOOP-17261. s3a rename() needs s3:deleteObjectVersion permission (#2303)
Contributed by Steve Loughran.

Change-Id: I8e89a402a24bd9fb958e0fa93d1a28191093851d
2020-11-18 12:20:12 +00:00
Jungtaek Lim 22039a14ff
HADOOP-17379. AbstractS3ATokenIdentifier to set issue date == now. (#2466)
Unless you explicitly set it, the issue date of a delegation token identifier is 0, which confuses spark renewal (SPARK-33440). This patch makes sure that all S3A DT identifiers have the current time as issue date, fixing the problem as far as S3A tokens are concerned.

Contributed by Jungtaek Lim.

Change-Id: Ic80ac7895612a1aa669459c73a78a9c17ecf0c0d
2020-11-17 14:56:58 +00:00
Doroszlai, Attila bf2ff35a04
HADOOP-17376. ITestS3AContractRename failing against stricter tests. (#2462)
Contributed by Attila Doroszlai.

Change-Id: Ie15624ec07b1c5e34ca7fde0a72a54431d79e746
2020-11-16 11:26:06 +00:00
Dongjoon Hyun 5032f8abba
HADOOP-17258. Magic S3Guard Committer to overwrite existing pendingSet file on task commit (#2371)
Contributed by Dongjoon Hyun and Steve Loughran

Change-Id: Ibaf8082e60eff5298ff4e6513edc386c5bae0274
2020-10-12 13:42:08 +01:00
Steve Loughran 963793dd48
HADOOP-17293. S3A to always probe S3 in S3A getFileStatus on non-auth paths
This reverts changes in HADOOP-13230 to use S3Guard TTL in choosing when
to issue a HEAD request; fixing tests to compensate.

New org.apache.hadoop.fs.s3a.performance.OperationCost cost,
S3GUARD_NONAUTH_FILE_STATUS_PROBE for use in cost tests.

Contributed by Steve Loughran.

Change-Id: I418d55d2d2562a48b2a14ec7dee369db49b4e29e
2020-10-08 15:38:32 +01:00
Mukund Thakur 475dba1ddf
HADOOP-17281 Implement FileSystem.listStatusIterator() in S3AFileSystem (#2354)
Contains HADOOP-17300: FileSystem.DirListingIterator.next() call should
return NoSuchElementException
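
A minimal sketch of the incremental listing call, with a placeholder bucket and directory:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class IncrementalListing {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), new Configuration());
        RemoteIterator<FileStatus> it = fs.listStatusIterator(new Path("/data"));
        while (it.hasNext()) {
          FileStatus status = it.next(); // next() past the end raises NoSuchElementException
          System.out.println(status.getPath() + " " + status.getLen());
        }
      }
    }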

Contributed by Mukund Thakur

Change-Id: I4e7e5c6e295525db9e2de6f416f32bbb81e146d3
2020-10-07 14:00:23 +01:00
Mukund Thakur 7e642ec5a3
HADOOP-17023. Tune S3AFileSystem.listStatus() (#2257)
S3AFileSystem.listStatus() is optimized for invocations
where the path supplied is a non-empty directory.
The number of S3 requests is significantly reduced, saving
time, money, and reducing the risk of S3 throttling.

Contributed by Mukund Thakur.

Change-Id: I7cc5f87aa16a4819e245e0fbd2aad226bd500f3f
2020-09-21 17:30:15 +01:00
Steve Loughran aa80bcb1ec
Revert "HADOOP-17244. S3A directory delete tombstones dir markers prematurely. (#2280)"
This reverts commit 0c82eb0324.

Change-Id: I6bd100d9de19660b0f28ee0ab16faf747d6d9f05
2020-09-11 18:07:05 +01:00
Steve Loughran 0c82eb0324
HADOOP-17244. S3A directory delete tombstones dir markers prematurely. (#2280)
This changes directory tree deletion so that only files are incrementally deleted
from S3Guard after the objects are deleted; the directories are left alone
until metadataStore.deleteSubtree(path) is invoked.

This avoids directory tombstones being added above files/child directories,
which stop the treewalk and delete phase from working.

Also:

* Callback to delete objects splits files and dirs so that
any problems deleting the dirs don't trigger S3Guard updates
* New statistic to measure the number of objects deleted, alongside request count.
* Callback listFilesAndEmptyDirectories renamed listFilesAndDirectoryMarkers
  to clarify behavior.
* Test enhancements to replicate the failure and verify the fix

Contributed by Steve Loughran

Change-Id: I0e6ea2c35e487267033b1664228c8837279a35c7
2020-09-10 17:29:33 +01:00
Mukund Thakur 5236c96ead
HADOOP-17167 ITestS3AEncryptionWithDefaultS3Settings failing (#2187)
Now skips ITestS3AEncryptionWithDefaultS3Settings.testEncryptionOverRename
when server side encryption is not set to sse:kms

Contributed by Mukund Thakur

Change-Id: Ifd83d353e9c7c6f7e1195a2c2f138d85cf876bb1
2020-09-04 15:00:30 +01:00
Steve Loughran 38354006f8 HADOOP-17227. S3A Marker Tool tuning (#2254)
Contributed by Steve Loughran.
2020-09-04 14:58:54 +01:00
Mukund Thakur 0840c0c1f3
HADOOP-17074. S3A Listing to be fully asynchronous. (#2207)
Contributed by Mukund Thakur.

Change-Id: I1b0574a0c9ebc0805f285dd5280a00e5add081f1
2020-08-25 11:30:42 +01:00
Steve Loughran 49f8ae965e
HADOOP-13230. S3A to optionally retain directory markers.
This adds an option to disable "empty directory" marker deletion,
to avoid throttling and other scale problems.

This feature is *not* backwards compatible.
Consult the documentation and use with care.
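
A sketch of opting in, using the retention option named in HADOOP-17244 above; read the documentation first:

    import org.apache.hadoop.conf.Configuration;

    public class MarkerRetention {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // "keep" retains directory markers, avoiding the delete-request storm;
        // stores written this way may not be readable by older clients.
        conf.set("fs.s3a.directory.marker.retention", "keep");
        System.out.println(conf.get("fs.s3a.directory.marker.retention"));
      }
    }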

Contributed by Steve Loughran.

Change-Id: I69a61e7584dc36e485d5e39ff25b1e3e559a1958
2020-08-15 20:19:49 +01:00
Mukund Thakur 571737f4ac HADOOP-17192. ITestS3AHugeFilesSSECDiskBlock failing (#2221)
Contributed by Mukund Thakur
2020-08-13 14:33:27 +01:00
Ayush Saxena 2943e6650f HDFS-15514. Remove useless dfs.webhdfs.enabled. Contributed by Fei Hui. 2020-08-07 22:20:42 +05:30
Mukund Thakur 251d2d1fa5
HADOOP-17131. Refactor S3A Listing code for better isolation. (#2148)
Contributed by Mukund Thakur.

Change-Id: I79160b236a92fdd67565a4b4974f1862e600c210
2020-08-04 17:13:06 +01:00
Mukund Thakur 8b601ad7e6 HADOOP-17022. Tune S3AFileSystem.listFiles() API.
Contributed by Mukund Thakur.

Change-Id: I17f5cfdcd25670ce3ddb62c13378c7e2dc06ba52
2020-07-14 15:28:27 +01:00
jimmy-zuber-amzn 79fc58def3
HADOOP-17105. S3AFS - Do not attempt to resolve symlinks in globStatus (#2113)
Contributed by Jimmy Zuber.

Change-Id: I2f247c2d2ab4f38214073e55f5cfbaa15aeaeb11
2020-07-13 19:09:50 +01:00
Steve Loughran a51d72f0c6 HDFS-13934. Multipart uploaders to be created through FileSystem/FileContext.
Contributed by Steve Loughran.

Change-Id: Iebd34140c1a0aa71f44a3f4d0fee85f6bdf123a3
2020-07-13 13:32:04 +01:00
Sebastian Nagel f9619b0b97
HADOOP-17117 Fix typos in hadoop-aws documentation (#2127)
(cherry picked from commit 5b1ed2113b)
2020-07-09 00:04:46 +09:00
Steve Loughran 7de1ac0547
HADOOP-16798. S3A Committer thread pool shutdown problems. (#1963)
Contributed by Steve Loughran.

Fixes a condition which can cause job commit to fail if a task was
aborted < 60s before the job commit commenced: the task abort
will shut down the thread pool with a hard exit after 60s; the
job commit POST requests would be scheduled through the same pool,
so be interrupted and fail. At present the access is synchronized,
but presumably the executor shutdown code is calling wait() and releasing
locks.

Task abort is triggered from the AM when task attempts succeed but
there are still active speculative task attempts running. Thus it
only surfaces when speculation is enabled and the final tasks are
speculating, which, given they are the stragglers, is not unheard of.

Note: this problem has never been seen in production; it has surfaced
in the hadoop-aws tests on a heavily overloaded desktop

Change-Id: I3b433356d01fcc50d88b4353dbca018484984bc8
2020-06-30 10:52:56 +01:00
Steve Loughran 5e290e702f
HADOOP-17050. S3A to support additional token issuers
Contributed by Steve Loughran.

S3A delegation token providers will be asked for any additional
token issuers; an array can be returned, and
each one will be asked for tokens when DelegationTokenIssuer collects
all the tokens for a filesystem.

Change-Id: I1bd3035bbff98cbd8e1d1ac7fc615d937e6bb7bb
2020-06-09 14:43:02 +01:00
Steve Loughran 8a642caca8
HADOOP-16568. S3A FullCredentialsTokenBinding fails if local credentials are unset. (#1441)
Contributed by Steve Loughran.

Move the loading to deployUnbonded (where they are required) and add a safety check when a new DT is requested

Change-Id: I03c69aa2e16accfccddca756b2771ff832e7dd58
2020-06-03 17:08:52 +01:00
Mukund Thakur b0c9e4f1b5
HADOOP-16900. Very large files can be truncated when written through the S3A FileSystem.
Contributed by Mukund Thakur and Steve Loughran.

This patch ensures that writes to S3A fail when more than 10,000 blocks are
written. That upper bound still exists. To write massive files, make sure
that the value of fs.s3a.multipart.size is set to a size which is large
enough to upload the files in fewer than 10,000 blocks.
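
A worked sketch of the sizing arithmetic: with the 10,000-block limit, the part size bounds the maximum file size, so for example 10,000 x 512M allows files of roughly 5 TB:

    import org.apache.hadoop.conf.Configuration;

    public class MultipartSizing {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // 10,000 parts x 512M per part allows files up to roughly 5 TB.
        conf.set("fs.s3a.multipart.size", "512M");
        System.out.println(conf.get("fs.s3a.multipart.size"));
      }
    }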

Change-Id: Icec604e2a357ffd38d7ae7bc3f887ff55f2d721a
2020-06-01 12:01:13 +01:00
Masatake Iwasaki 4d30c395f7 HADOOP-17040. Fix intermittent failure of ITestBlockingThreadPoolExecutorService. (#2020)
(cherry picked from commit 9685314633)
2020-05-22 18:53:53 +09:00
Masatake Iwasaki 6a0aeea60f HADOOP-17025. Fix invalid metastore configuration in S3GuardTool tests. (#1994)
(cherry picked from commit 99840aaba6)
2020-05-07 12:02:15 +09:00
Akira Ajisaka dfa7f160a5
Preparing for 3.3.1 development 2020-04-30 13:33:42 +09:00
Steve Loughran 0982f56f3a
HADOOP-16953. tuning s3guard disabled warnings (#1962)
Contributed by Steve Loughran.

The S3Guard absence warning of HADOOP-16484 has been changed
so that by default the S3A connector only logs at debug
when the connection to the S3 Store does not have S3Guard
enabled.

The option to control this log level is now
fs.s3a.s3guard.disabled.warn.level
and can be one of: silent, inform, warn, fail.

On a failure, an ExitException is raised with exit code 49.

For details on this safety feature, consult the s3guard documentation.
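
A sketch of selecting a level, using the option and values listed above:

    import org.apache.hadoop.conf.Configuration;

    public class S3GuardWarnLevel {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // One of: silent, inform, warn, fail (fail raises an ExitException with code 49).
        conf.set("fs.s3a.s3guard.disabled.warn.level", "silent");
        System.out.println(conf.get("fs.s3a.s3guard.disabled.warn.level"));
      }
    }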

Change-Id: If868671c9260977c2b03b3e475b9c9531c98ce79
2020-04-20 15:07:00 +01:00
Steve Loughran de9a6b4588
HADOOP-16986. S3A to not need wildfly on the classpath. (#1948)
Contributed by Steve Loughran.

This is a successor to HADOOP-16346, which enabled the S3A connector
to load the native openssl SSL libraries for better HTTPS performance.

That patch required wildfly.jar to be on the classpath. This
update:

* Makes wildfly.jar optional except in the special case that
"fs.s3a.ssl.channel.mode" is set to "openssl"

* Retains the declaration of wildfly.jar as a compile-time
dependency in the hadoop-aws POM. This means that unless
explicitly excluded, applications importing that published
maven artifact will, transitively, add the specified
wildfly JAR into their classpath for compilation/testing/
distribution.

This is done for packaging and to offer that optional
speedup. It is not mandatory: applications importing
the hadoop-aws POM can exclude it if they choose.

Change-Id: I7ed3e5948d1e10ce21276b3508871709347e113d
2020-04-20 14:42:36 +01:00
Mukund Thakur 96d7ceb39a
HADOOP-13873. log DNS addresses on s3a initialization.
Contributed by Mukund Thakur.

If you set the log org.apache.hadoop.fs.s3a.impl.NetworkBinding
to DEBUG, then when the S3A bucket probe is made -the DNS address
of the S3 endpoint is calculated and printed.

This is useful to see if a large set of processes are all using
the same IP address from the pool of load balancers to which AWS
directs clients when an AWS S3 endpoint is resolved.

This can have implications for performance: if all clients
access the same load balancer performance may be suboptimal.

Note: if bucket probes are disabled, fs.s3a.bucket.probe = 0,
the DNS logging does not take place.

Change-Id: I21b3ac429dc0b543f03e357fdeb94c2d2a328dd8
2020-04-17 14:20:54 +01:00
Mukund Thakur 94da630cd2
HADOOP-16465 listLocatedStatus() optimisation (#1943)
Contributed by Mukund Thakur

Optimize S3AFileSystem.listLocatedStatus() to perform list
operations directly and then fallback to head checks for files

Change-Id: Ia2c0fa6fcc5967c49b914b92f41135d07dab0464
2020-04-15 17:04:55 +01:00
Steve Loughran 68a9562848 HADOOP-16941. ITestS3GuardOutOfBandOperations.testListingDelete failing on versioned bucket (#1919)
Contributed by Steve Loughran.

Removed the failing probe, replacing it with two probes which will fail
on both versioned and unversioned buckets.
2020-04-14 10:58:13 +01:00
Steve Loughran eaaaba12b1
HADOOP-16939 fs.s3a.authoritative.path should support multiple FS URIs (#1914)
Adds a unit test and a new ITest, then fixes the issue: different scheme or bucket == skip.

Factored out the underlying logic for unit testing; also moved
maybeAddTrailingSlash to S3AUtils (while retaining/forwarding the existing method
in S3AFS).

tested: london, sole failure is
testListingDelete[auth=true](org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations)

filed HADOOP-16853

Change-Id: I4b8d0024469551eda0ec70b4968cba4abed405ed
2020-03-26 12:59:11 -06:00
Nicholas Chammas 25a03bfece
HADOOP-16930. Add hadoop-aws documentation for ProfileCredentialsProvider
Contributed by Nicholas Chammas.
2020-03-25 10:39:35 +00:00
Gabor Bota c91ff8c18f
HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries (#1851). Contributed by Gabor Bota.
Adding a new feature to S3GuardTool's fsck: -fix. 

Change-Id: I2cdb6601fea1d859b54370046b827ef06eb1107d
2020-03-18 12:48:52 +01:00
Steve Loughran 8d6373483e
HADOOP-16319. S3A Etag tests fail with default encryption enabled on bucket.
Contributed by Ben Roling.

ETag values are unpredictable with some S3 encryption algorithms.

Skip ITestS3AMiscOperations tests which make assertions about etags
when default encryption on a bucket is enabled.

When testing with an AWS account which lacks the privilege
for a call to getBucketEncryption(), we don't skip the tests.
In the event of failure, developers get to expand the
permissions of the account or relax default encryption settings.
2020-03-17 13:31:48 +00:00
Steve Loughran 0a9b3c98b1
HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard (#1646)
Contributed by Steve Loughran

* move qualify logic to S3AFileSystem.makeQualified()
* make S3AFileSystem.qualify() a private redirect to that
* ITestS3GuardFsShell turned off
2020-03-12 14:13:55 +00:00
Steve Loughran d4d4c37810
HADOOP-14630 Contract Tests to verify create, mkdirs and rename under a file is forbidden
Contributed by Steve Loughran.

Not all stores do complete validation here; in particular the S3A
Connector does not: checking up the entire directory tree to see
whether any ancestor of a path is a file significantly slows things down.

This check does take place in S3A mkdirs(), which walks backwards up the list of
parent paths until it finds a directory (success) or a file (failure).
In practice production applications invariably create destination directories
before writing 1+ files into them; restricting the check purely to the mkdirs()
call delivers a significant speedup while implicitly including the checks.

Change-Id: I2c9df748e92b5655232e7d888d896f1868806eb0
2020-03-09 14:44:28 +00:00
Gabor Bota edc2e9d2f1
HADOOP-14936. S3Guard: remove experimental from documentation.
Contributed by Gabor Bota.
2020-03-02 18:16:52 +00:00
Mukund Thakur f864ef7429
HADOOP-16794. S3A reverts KMS encryption to the bucket's default KMS key in rename/copy.
Contributed by Mukund Thakur.

This addresses an issue which surfaced with KMS encryption: the wrong
KMS key could be picked up in the S3 COPY operation, so
renamed files, while encrypted, would end up with the
bucket default key.

As well as adding tests in the new suite
ITestS3AEncryptionWithDefaultS3Settings,
AbstractSTestS3AHugeFiles has a new test method to
verify that the encryption settings also work
for large files copied via multipart operations.
2020-03-02 17:31:12 +00:00
spoganshev e553eda9cd
HADOOP-16767 Handle non-IO exceptions in reopen()
Contributed by Sergei Poganshev.

Catches Exception instead of IOException in closeStream()
and so handles exceptions such as SdkClientException by
aborting the wrapped stream. This will increase resilience
to failures, as any occurring during stream closure
will be caught. Furthermore, because the
underlying HTTP connection is aborted, rather than closed,
it will not be recycled to cause problems on subsequent
operations.
2020-03-02 17:17:54 +00:00
Steve Loughran 929004074f
HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets (#1840)
Contributed by Steve Loughran.

Signed-off-by: Mingliang Liu <liuml07@apache.org>
2020-02-24 10:45:34 -08:00
Mukund Thakur e77767bb1e
HADOOP-16711.
This adds a new option fs.s3a.bucket.probe, range (0-2) to
control which probe for a bucket existence to perform on startup.

0: no checks
1: v1 check (as has been performed until now)
2: v2 bucket check, which also includes a permission check. Default.

When set to 0, bucket existence checks won't be done
during initialization thus making it faster.
When the bucket is not available in S3,
or if fs.s3a.endpoint points to the wrong instance of a private S3 store
consecutive calls like listing, read, write etc. will fail with
an UnknownStoreException.
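
A minimal sketch of disabling the probe, with a placeholder bucket:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class SkipBucketProbe {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("fs.s3a.bucket.probe", 0); // 0 = no check, 1 = v1 check, 2 = v2 check (default)
        FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
        // A missing bucket now surfaces later, as an UnknownStoreException on first IO.
        System.out.println(fs.getUri());
      }
    }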

Contributed by:
  * Mukund Thakur (main patch and tests)
  * Rajesh Balamohan (v0 list and performance tests)
  * lqjacklee (HADOOP-15990/v2 list)
  * Steve Loughran (UnknownStoreException support)

       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
       new file:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/UnknownStoreException.java
       new file:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java
       modified:   hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
       modified:   hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
       modified:   hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
       new file:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java
       modified:   hadoop-tools/hadoop-aws/src/test/resources/core-site.xml

Change-Id: Ic174f803e655af172d81c1274ed92b51bdceb384
2020-02-21 13:44:46 +00:00
lqjacklee c77fc6971b
HADOOP-15961. S3A committers: make sure there's regular progress() calls.
Contributed by lqjacklee.

Change-Id: I13ca153e1e32b21dbe64d6fb25e260e0ff66154d
2020-02-17 22:06:34 +00:00
Steve Loughran 56dee66770
HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.
Contributed by Steve Loughran.

During S3A rename() and delete() calls, the list of objects to delete is
built up into batches of a thousand and then POSTed in a single large
DeleteObjects request.

But as the IO capacity allowed on an S3 partition may only be 3500 writes
per second *and* each entry in that POST counts as a single write, then
one of those posts alone can trigger throttling on an already loaded
S3 directory tree. Which can trigger backoff and retry, with the same
thousand entry post, and so recreate the exact same problem.

Fixes

* Page size for delete object requests is set in
  fs.s3a.bulk.delete.page.size; the default is 250 (see the sketch after this list).
* The property fs.s3a.experimental.aws.s3.throttling (default=true)
  can be set to false to disable throttle retry logic in the AWS
  client SDK -it is all handled in the S3A client. This
  gives more visibility in to when operations are being throttled
* Bulk delete throttling events are logged to the
  org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears
  often then choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete
  requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate" can track throttling
  load over time.
* DynamoDB metastore throttle resilience issues have also been
  identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling
  flag does not apply to DDB IO precisely because there may still be
  lurking issues there and it is safest to rely on the DynamoDB client
  SDK.
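
A sketch of the two tuning options named in the list above; the page size value is only an example:

    import org.apache.hadoop.conf.Configuration;

    public class BulkDeleteTuning {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setInt("fs.s3a.bulk.delete.page.size", 100);                // smaller pages if throttling is seen
        conf.setBoolean("fs.s3a.experimental.aws.s3.throttling", false); // let the S3A client handle S3 retries
        System.out.println(conf.getInt("fs.s3a.bulk.delete.page.size", 250));
      }
    }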

Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84
2020-02-13 19:09:49 +00:00