Commit Graph

1322 Commits

Author SHA1 Message Date
Steve Loughran 0a9b3c98b1
HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard (#1646)
Contributed by Steve Loughran

* move qualify logic to S3AFileSystem.makeQualified()
* make S3AFileSystem.qualify() a private redirect to that
* ITestS3GuardFsShell turned off
2020-03-12 14:13:55 +00:00
bilaharith 0b931f36ec
HADOOP-16890. Change in expiry calculation for MSI token provider.
Contributed by Bilahari T H
2020-03-11 20:39:10 +00:00
Steve Loughran d4d4c37810
HADOOP-14630. Contract Tests to verify that create, mkdirs and rename under a file are forbidden
Contributed by Steve Loughran.

Not all stores do complete validation here; in particular the S3A
Connector does not: checking every parent in the directory tree to see
whether it is a file would significantly slow things down.

This check does take place in S3A mkdirs(), which walks backwards up the list of
parent paths until it finds a directory (success) or a file (failure).
In practice, production applications invariably create destination directories
before writing one or more files into them; restricting the check to the mkdirs()
call delivers a significant speedup while implicitly covering the same cases.
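
A minimal sketch of that parent walk, assuming a hypothetical probePathStatus()
helper (illustrative only, not the actual S3AFileSystem.mkdirs() code):

  Path dir = path.getParent();
  while (dir != null && !dir.isRoot()) {
    FileStatus st = probePathStatus(dir);   // hypothetical helper: null if nothing exists
    if (st != null) {
      if (st.isFile()) {
        // an ancestor of the new directory is a file: fail
        throw new FileAlreadyExistsException(dir + " is a file");
      }
      break;   // found a directory: safe to create the new tree beneath it
    }
    dir = dir.getParent();
  }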

Change-Id: I2c9df748e92b5655232e7d888d896f1868806eb0
2020-03-09 14:44:28 +00:00
Sebastian Nagel 18050bc583
HADOOP-16909 Typo in distcp counters.
Contributed by Sebastian Nagel.
2020-03-09 14:37:08 +00:00
Weiwei Yang 6dfe00c71e HADOOP-16840. AliyunOSS: getFileStatus throws FileNotFoundException in versioning bucket. Contributed by wujinhu. 2020-03-08 21:01:34 -07:00
Gabor Bota edc2e9d2f1
HADOOP-14936. S3Guard: remove experimental from documentation.
Contributed by Gabor Bota.
2020-03-02 18:16:52 +00:00
Mukund Thakur f864ef7429
HADOOP-16794. S3A reverts KMS encryption to the bucket's default KMS key in rename/copy.
Contributed by Mukund Thakur.

This addresses an issue which surfaced with KMS encryption: the wrong
KMS key could be picked up in the S3 COPY operation, so
renamed files, while encrypted, would end up with the
bucket default key.

As well as adding tests in the new suite
ITestS3AEncryptionWithDefaultS3Settings,
AbstractSTestS3AHugeFiles has a new test method to
verify that the encryption settings also work
for large files copied via multipart operations.
2020-03-02 17:31:12 +00:00
spoganshev e553eda9cd
HADOOP-16767 Handle non-IO exceptions in reopen()
Contributed by Sergei Poganshev.

Catches Exception instead of IOException in closeStream()
and so handles exceptions such as SdkClientException by
aborting the wrapped stream. This increases resilience
to failures, as any exception occurring during stream closure
will be caught. Furthermore, because the
underlying HTTP connection is aborted, rather than closed,
it will not be recycled to cause problems on subsequent
operations.
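
A simplified sketch of the idea, assuming the wrapped stream is an AWS SDK
S3ObjectInputStream (illustrative; the real closeStream() has more logic):

  private void closeStream(S3ObjectInputStream wrapped) {
    try {
      wrapped.close();
    } catch (Exception e) {   // was catch (IOException e): now also covers SdkClientException
      // abort the underlying HTTP connection rather than return it to the pool
      wrapped.abort();
    }
  }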
2020-03-02 17:17:54 +00:00
Sneha Vijayarajan 791270a2e5
HADOOP-16730: ABFS: Support for Shared Access Signatures (SAS). Contributed by Sneha Vijayarajan. 2020-02-27 18:27:22 +00:00
Szilard Nemeth 8dc079455e YARN-8767. TestStreamingStatus fails. Contributed by Andras Bokor 2020-02-25 21:48:16 +01:00
Steve Loughran 929004074f
HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets (#1840)
Contributed by Steve Loughran.

Signed-off-by: Mingliang Liu <liuml07@apache.org>
2020-02-24 10:45:34 -08:00
Sahil Takiar 42dfd270a1
HADOOP-16859: ABFS: Add unbuffer support to ABFS connector.
Contributed by Sahil Takiar
2020-02-24 16:28:00 +00:00
Mukund Thakur e77767bb1e
HADOOP-16711.
This adds a new option, fs.s3a.bucket.probe, with range (0-2), to
control which bucket existence probe is performed on startup.

0: no checks
1: v1 check (as has been performed until now)
2: v2 bucket check, which also includes a permission check. Default.

When set to 0, bucket existence checks won't be done
during initialization, making startup faster.
When the bucket is not available in S3,
or if fs.s3a.endpoint points to the wrong instance of a private S3 store,
subsequent calls such as listing, read and write will fail with
an UnknownStoreException.
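
A sketch of skipping the probe from client code (class and bucket names are
placeholders; the same key can equally be set in core-site.xml):

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class BucketProbeExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      conf.setInt("fs.s3a.bucket.probe", 0);   // 0 = no existence check at startup
      FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
      // With the probe disabled, a missing bucket or wrong fs.s3a.endpoint only
      // surfaces on later calls such as this one, as an UnknownStoreException.
      fs.listStatus(new Path("/"));
    }
  }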

Contributed by:
  * Mukund Thakur (main patch and tests)
  * Rajesh Balamohan (v0 list and performance tests)
  * lqjacklee (HADOOP-15990/v2 list)
  * Steve Loughran (UnknownStoreException support)

       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
       new file:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/UnknownStoreException.java
       new file:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java
       modified:   hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
       modified:   hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
       modified:   hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
       new file:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java
       modified:   hadoop-tools/hadoop-aws/src/test/resources/core-site.xml

Change-Id: Ic174f803e655af172d81c1274ed92b51bdceb384
2020-02-21 13:44:46 +00:00
Steve Loughran e3bba5fa22
HADOOP-16706. ITestClientUrlScheme fails for accounts which don't support HTTP
Adds a new service code to recognise accounts without HTTP support; catches
that and considers such a response a successful validation of the ability of the
client to switch to HTTP when the test parameters expect that.

Contributed by Steve Loughran
2020-02-21 11:13:38 +00:00
lqjacklee c77fc6971b
HADOOP-15961. S3A committers: make sure there are regular progress() calls.
Contributed by lqjacklee.

Change-Id: I13ca153e1e32b21dbe64d6fb25e260e0ff66154d
2020-02-17 22:06:34 +00:00
Steve Loughran 56dee66770
HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.
Contributed by Steve Loughran.

During S3A rename() and delete() calls, the list of objects to delete is
built up into batches of a thousand and then POSTed in a single large
DeleteObjects request.

But as the IO capacity allowed on an S3 partition may only be 3500 writes
per second *and* each entry in that POST counts as a single write, one
of those POSTs alone can trigger throttling on an already loaded
S3 directory tree. That can trigger backoff and retry with the same
thousand-entry POST, and so recreate the exact same problem.

Fixes

* Page size for delete object requests is set in
  fs.s3a.bulk.delete.page.size; the default is 250 (see the
  configuration sketch after this list).
* The property fs.s3a.experimental.aws.s3.throttling (default=true)
  can be set to false to disable throttle retry logic in the AWS
  client SDK; it is all handled in the S3A client. This
  gives more visibility into when operations are being throttled.
* Bulk delete throttling events are logged to the
  org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears
  often, choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete
  requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate", can track throttling
  load over time.
* DynamoDB metastore throttle resilience issues have also been
  identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling
  flag does not apply to DDB IO, precisely because there may still be
  lurking issues there and it is safest to rely on the DynamoDB client
  SDK.
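
A sketch of how a client might tune these settings (the values are illustrative;
the keys are the ones named in the list above):

  Configuration conf = new Configuration();
  conf.setInt("fs.s3a.bulk.delete.page.size", 100);                  // smaller pages for heavily loaded trees
  conf.setBoolean("fs.s3a.experimental.aws.s3.throttling", false);   // let the S3A client own retry/backoff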

Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84
2020-02-13 19:09:49 +00:00
Masatake Iwasaki d5467d299d HADOOP-16739. Fix native build failure of hadoop-pipes on CentOS 8. 2020-02-10 13:13:11 +09:00
Vinayakumar B 7dac7e1d13
HADOOP-16596. [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency (#1635). Contributed by Vinayakumar B. 2020-02-07 14:51:24 +05:30
bilaharith 5944d28130
HADOOP-16825: ITestAzureBlobFileSystemCheckAccess failing.
Contributed by Bilahari T H.
2020-02-06 18:48:00 +00:00
Sneha Vijayarajan 55f2421580
HADOOP-16845: Disable ITestAbfsClient.testContinuationTokenHavingEqualSign due to ADLS Gen2 service bug.
Contributed by Sneha Vijayarajan.
2020-02-06 18:41:06 +00:00
Mukund Thakur 146ca0f545
HADOOP-16832. S3Guard testing doc: Add required parameters for S3Guard testing in IDE. (#1822). Contributed by Mukund Thakur. 2020-02-06 15:13:25 +01:00
Mustafa İman 5977360878
HADOOP-16801. S3Guard listFiles will not query S3 if all listings are authoritative (#1815). Contributed by Mustafa İman. 2020-01-30 11:16:51 +01:00
Yufei Gu 1643cfdfbb YARN-10015. Correct the sample command in SLS README file. Contributed by Aihua Xu. 2020-01-28 17:47:49 -08:00
Steve Loughran 7f40e6688a
HADOOP-16746. mkdirs and s3guard Authoritative mode.
Contributed by Steve Loughran.

This fixes two problems with S3Guard authoritative mode and
the auth directory flags which are stored in DynamoDB.

1. mkdirs was creating dir markers without the auth bit,
   forcing needless scans on newly created directories and
   files subsequently added; it was only with the first listStatus call
   on that directory that the dir would be marked as authoritative -even
   though it would be complete already.

2. listStatus(path) would reset the authoritative status bit of all
   child directories even if they were already marked as authoritative.

Issue #2 is possibly the most expensive, as any treewalk using listStatus
(e.g. globfiles) would clear the auth bit for all child directories before
listing them. And this would happen every single time...
essentially you weren't getting authoritative directory listings.

For the curious, the major bug was actually found during testing
-we'd all missed it during reviews.

A lesson there: the better the tests the fewer the bugs.

Maybe also: something obvious and significant can get by code reviews.

	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/BulkOperationState.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/LocalMetadataStore.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStore.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/NullMetadataStore.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestRestrictedReadAccess.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestPartialDeleteFailures.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreAuthoritativeMode.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardFsck.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java

Change-Id: Ic3ffda13f2af2430afedd50fd657b595c83e90a7
2020-01-25 18:35:02 +00:00
Mustafa Iman 839054754b
HADOOP-16792: Make S3 client request timeout configurable.
Contributed by Mustafa Iman.

This adds a new configuration option, fs.s3a.connection.request.timeout,
to declare the timeout on HTTP requests to the AWS service;
0 means no timeout.
Measured in seconds; the usual time suffixes are all supported.

Important: this is the maximum duration of any AWS service call,
including upload and copy operations. If non-zero, it must be larger
than the time to upload multi-megabyte blocks to S3 from the client,
and to rename many-GB files. Use with care.
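
For example (the value is illustrative; per the text above, 0 is the default and
means no timeout):

  Configuration conf = new Configuration();
  // Maximum duration of any AWS service call, including uploads and copies,
  // so it must comfortably exceed the slowest expected upload or rename.
  conf.set("fs.s3a.connection.request.timeout", "10m");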

Change-Id: I407745341068b702bf8f401fb96450a9f987c51c
2020-01-24 13:37:07 +00:00
Karthick Narendran 978c487672
HADOOP-16826. ABFS: update abfs.md to include config keys for identity transformation
Contributed by Karthick Narendran
2020-01-23 20:35:57 -08:00
Mingliang Liu 6c1fa24ac0 HADOOP-16732. S3Guard to support encrypted DynamoDB table (#1752). Contributed by Mingliang Liu. 2020-01-23 14:21:42 +01:00
Szilard Nemeth 9520b2ad79 YARN-10083. Provide utility to ask whether an application is in final status. Contributed by Adam Antal 2020-01-22 16:25:07 +01:00
Steve Loughran 5e2ce370a3 HADOOP-16759. Filesystem openFile() builder to take a FileStatus param (#1761). Contributed by Steve Loughran
* Enhanced builder + FS spec
* s3a FS to use this to skip HEAD on open
* and to use version/etag when opening the file

works with S3AFileStatus FS and S3ALocatedFileStatus
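
A sketch of the resulting usage, reusing a FileStatus from an earlier listing so
the open can skip its HEAD probe (illustrative; error handling omitted):

  FileStatus st = fs.getFileStatus(path);          // or an entry from listStatus()/listFiles()
  CompletableFuture<FSDataInputStream> future =
      fs.openFile(path)
        .withFileStatus(st)                        // new in this patch
        .build();
  try (FSDataInputStream in = future.get()) {
    in.read();
  }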
2020-01-21 14:31:51 -08:00
Sahil Takiar f206b736f0
HADOOP-16346. Stabilize S3A OpenSSL support.
Introduces `openssl` as an option for `fs.s3a.ssl.channel.mode`.
The new option is documented and marked as experimental.

For details on how to use this, consult the performance document
in the s3a documentation.
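
Enabling it is a one-line configuration change (a sketch; wildfly-openssl must be
on the classpath, as described in that document):

  Configuration conf = new Configuration();
  conf.set("fs.s3a.ssl.channel.mode", "openssl");   // experimental option introduced by this patch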

This patch is the successor to HADOOP-16050 "S3A SSL connections
should use OpenSSL" -which was reverted because of
incompatibilities between the wildfly OpenSSL client and the AWS
HTTPS servers (HADOOP-16347). With the Wildfly release moved up
to 1.0.7.Final (HADOOP-16405) everything should now work.

Related issues:

* HADOOP-15669. ABFS: Improve HTTPS Performance
* HADOOP-16050: S3A SSL connections should use OpenSSL
* HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
* HADOOP-16405: Upgrade Wildfly Openssl version to 1.0.7.Final

Contributed by Sahil Takiar

Change-Id: I80a4bc5051519f186b7383b2c1cea140be42444e
2020-01-21 16:37:51 +00:00
Akira Ajisaka f6d20daf40
HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode in the config of maven surefire plugin. Contributed by Xieming Li. 2020-01-21 18:03:24 +09:00
Steve Loughran 6a859d33aa
HADOOP-16785. followup to abfs close() fix.
Adds one extra test to the ABFS close logic, to explicitly
verify that the close sequence of FilterOutputStream is
not going to fail.

This is just a due-diligence patch, but it helps ensure
that no regressions creep in later.

Contributed by Steve Loughran.

Change-Id: Ifd33a8c322d32513411405b15f50a1aebcfa6e48
2020-01-20 16:23:41 +00:00
Clemens Wolff c36f09deb9
HADOOP-16005. NativeAzureFileSystem does not support setXAttr.
Contributed by Clemens Wolff.
2020-01-14 17:28:37 -08:00
Steve Loughran 49df838995
HADOOP-16697. Tune/audit S3A authoritative mode.
Contains:

HADOOP-16474. S3Guard ProgressiveRenameTracker to mark destination
              directory as authoritative on success.
HADOOP-16684. S3guard bucket info to list a bit more about
              authoritative paths.
HADOOP-16722. S3GuardTool to support FilterFileSystem.

This patch improves the marking of newly created/imported directory
trees in S3Guard DynamoDB tables as authoritative.

Specific changes:

 * Renamed directories are marked as authoritative if the entire
   operation succeeded (HADOOP-16474).
 * When updating parent table entries as part of any table write,
   there's no overwriting of their authoritative flag.

s3guard import changes:

* new -verbose flag to print out what is going on.

* The "s3guard import" command lets you declare that a directory tree
is to be marked as authoritative

  hadoop s3guard import -authoritative -verbose s3a://bucket/path

When importing a listing and a file is found, the import tool queries
the metastore and only updates the entry if the file is different from
before, where different == new timestamp, etag, or length. S3Guard can get
timestamp differences due to clock skew in PUT operations.

As the recursive list performed by the import command doesn't retrieve the
versionID, the existing entry may in fact be more complete.
When updating an existing entry due to clock skew, the existing version ID
is propagated to the new entry (note: the etags must match; this is needed
to deal with inconsistent listings).

There is a new s3guard command to audit a s3guard bucket/path's
authoritative state:

  hadoop s3guard authoritative -check-config s3a://bucket/path

This is primarily for testing/auditing.

The s3guard bucket-info command also provides some more details on the
authoritative state of a store (HADOOP-16684).

Change-Id: I58001341c04f6f3597fcb4fcb1581ccefeb77d91
2020-01-10 11:11:56 +00:00
Adam Antal 20a90c0b5b
YARN-10071. Sync Mockito version with other modules
Signed-off-by: Akira Ajisaka <aajisaka@apache.org>
2020-01-10 17:41:04 +09:00
Steve Loughran 52cc20e9ea
HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.
Contributed by Steve Loughran.

Change-Id: If9b4ebe937200c17d7fdfb9923e6ae0ab4c541ef
2020-01-08 14:28:20 +00:00
Steve Loughran 17aa8f6764
HADOOP-16785. Improve wasb and abfs resilience on double close() calls.
This hardens the wasb and abfs output streams' resilience to being invoked
in/after close().

wasb:
  Explicitly raise IOEs on operations invoked after close(),
  rather than implicitly raising NPEs.
  This ensures that invocations which catch and swallow IOEs will perform as
  expected.

abfs:
  When rethrowing an IOException in the close() call, explicitly wrap it
  with a new instance of the same subclass.
  This is needed to handle failures in try-with-resources clauses, where
  any exception in close() is added as a suppressed exception to the one
  thrown in the try {} clause
  *and you cannot attach the same exception to itself*
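
A simplified illustration of why the wrap is needed (not the actual ABFS code;
the helper name is hypothetical):

  @Override
  public void close() throws IOException {
    try {
      flushAndReleaseResources();   // hypothetical stand-in for the real close work
    } catch (IOException e) {
      // In try-with-resources, the exception thrown from the try {} block records
      // this one as suppressed, and an exception cannot suppress itself; so rethrow
      // a fresh wrapper (the real code re-creates the same IOException subclass).
      throw new IOException(e.getMessage(), e);
    }
  }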

Contributed by Steve Loughran.

Change-Id: Ic44b494ff5da332b47d6c198ceb67b965d34dd1b
2020-01-08 11:46:54 +00:00
Sneha Vijayarajan d1f5976c00
HADOOP-16699. Add verbose TRACE logging to ABFS.
Contributed by Sneha Vijayarajan.

Change-Id: Ic616a10406e6e9f11616c9cc05d8630ebbedaf65
2020-01-07 18:05:47 +00:00
Steve Loughran 2bbf73f1df HADOOP-16645. S3A Delegation Token extension point to use StoreContext.
Contributed by Steve Loughran.

This is part of the ongoing refactoring of the S3A codebase, with the
delegation token support (HADOOP-14556) no longer given a direct reference
to the owning S3AFileSystem. Instead it gets a StoreContext and a new
interface, DelegationOperations, to access those operations offered by S3AFS
which are specifically needed by the DT bindings.

The sole operation needed is listAWSPolicyRules(), which is used to allow
S3A FS and the S3Guard metastore to return the AWS policy rules needed to
access their specific services/buckets/tables, allowing the AssumedRole
delegation token to be locked down.
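
Very roughly, the shape of that extension point (names other than
DelegationOperations and listAWSPolicyRules() are illustrative; the real
signatures use S3A's own role/policy types):

  public interface DelegationOperations {
    // Policy statements a DT binding needs so an assumed-role session can be
    // locked down to exactly the buckets/tables the FS and its metastore use.
    List<String> listAWSPolicyRules(Set<String> accessLevels);
  }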

As further restructuring takes place, that interface's implementation
can be moved to wherever the new home for those operations ends up.

Although it changes the API of an extension point, that feature (S3
Delegation Tokens) has not shipped; backwards compatibility is not a
problem except for anyone who has implemented DT support against trunk.
To those developers: sorry.

Change-Id: I770f58b49ff7634a34875ba37b7d51c94d7c21da
2020-01-07 11:17:37 +00:00
Mukund Thakur 819159fa06
HDFS-14788. Use dynamic regex filter to ignore copy of source files in Distcp.
Contributed by Mukund Thakur.

Change-Id: I781387ddce95ee300c12a160dc9a0f7d602403c3
2020-01-06 19:10:39 +00:00
Steve Loughran b6dc00f481
HADOOP-16775. DistCp reuses the same temp file within the task for different files.
Contributed by Amir Shenavandeh.

This avoids overwrite consistency issues with S3 and other stores -though
given S3's copy operation is O(data), you are still best off using -direct
when distcp-ing to it.
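
For example (paths are placeholders):

  hadoop distcp -direct hdfs://namenode:8020/datasets/logs s3a://example-bucket/datasets/logs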

Change-Id: I8dc9f048ad0cc57ff01543b849da1ce4eaadf8c3
2020-01-02 15:36:33 +00:00
Akira Ajisaka f777cd398f
HADOOP-16771. Update checkstyle to 8.26 and maven-checkstyle-plugin to 3.1.0. Contributed by Andras Bokor. 2019-12-20 13:10:26 +09:00
Steve Loughran 382151670b HADOOP-16450. ITestS3ACommitterFactory to not use useInconsistentClient. (#1145)
Contributed by Steve Loughran.

Change-Id: Ifb9771a73a07f744e4ed5f5e6be72473179db439
2019-12-16 14:29:30 +01:00
Kengo Seki fd7de2b82a HADOOP-16764. Rewrite Python example codes using Python3 (#1762) 2019-12-16 11:04:20 +09:00
Mingliang Liu d12ad9e8ad
HADOOP-16757. Increase timeout unit test rule for MetadataStoreTestBase (#1757)
Contributed by Mingliang Liu.

Signed-off-by: Steve Loughran <stevel@apache.org>
2019-12-13 08:19:27 -08:00
Takanobu Asanuma c210cede5c HDFS-15044. [Dynamometer] Show the line of audit log when parsing it unsuccessfully. (#1749) 2019-12-12 07:48:14 -08:00
Mingliang Liu b56c08b2b7
HADOOP-16758. Refine testing.md to tell user better how to use auth-keys.xml (#1753)
Contributed by Mingliang Liu
2019-12-11 11:52:53 -08:00
Gabor Bota 875a3e97dd
HADOOP-16424. S3Guard fsck: Check internal consistency of the MetadataStore (#1691). Contributed by Gabor Bota. 2019-12-10 15:51:49 +01:00
aasha fccccc9703 HDFS-14869 Copy renamed files which are not excluded anymore by filter (#1530) 2019-12-06 17:41:25 +05:30
Mingliang Liu 19512b21e3
HADOOP-16735. Make it clearer in config default that EnvironmentVariableCredentialsProvider supports AWS_SESSION_TOKEN. Contributed by Mingliang Liu
This closes #1733
2019-12-05 17:37:17 -08:00