Commit Graph

1574 Commits

Author SHA1 Message Date
Josh Elser 6f2fa87fc8
HADOOP-17934 ABFS: Make sure the AbfsHttpOperation is non-null before using it (#3477)
Contributed by: Josh Elser
2021-09-30 13:38:13 +01:00
Steve Loughran d609f44aa0
HADOOP-17922. move to fs.s3a.encryption.algorithm - JCEKS integration (#3466)
The resolution order of new and deprecated S3A encryption options and secrets
is the same whether they are stored in JCEKS and other Hadoop credential stores
or in XML files: per-bucket settings always take priority over global values,
even when the bucket-level options use the old option names.
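
As a sketch of that precedence rule (the bucket name, class name and values below are illustrative, not part of the patch), a per-bucket override using the deprecated option names still wins over a global value set under the new name:

    import org.apache.hadoop.conf.Configuration;

    public class EncryptionPrecedenceSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Global value, using the new option name.
        conf.set("fs.s3a.encryption.algorithm", "SSE-S3");
        // Per-bucket override using the deprecated names; per-bucket settings
        // take priority even though the option names are the old ones.
        conf.set("fs.s3a.bucket.example-bucket.server-side-encryption-algorithm", "SSE-KMS");
        conf.set("fs.s3a.bucket.example-bucket.server-side-encryption.key", "example-kms-key-id");
        // The same ordering applies when these values are stored in a JCEKS
        // credential store instead of XML configuration files.
      }
    }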

Contributed by Mehakmeet Singh and Steve Loughran
2021-09-30 10:38:53 +01:00
Steve Loughran 2fda61fac6
HADOOP-17851. S3A to support user-specified content encoding (#3498)
The option fs.s3a.object.content.encoding declares the content encoding to be set on files when they are written; this is served up in the "Content-Encoding" HTTP header when reading objects back in.

This is useful for people loading the data into other tools in the AWS ecosystem which don't use file extensions to infer compression type (e.g. serving compressed files from S3 or importing into RDS)
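
A minimal sketch of the new option, assuming a hypothetical bucket and path; the class name and values are illustrative only:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ContentEncodingSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Content-Encoding header to attach to objects written by this client.
        conf.set("fs.s3a.object.content.encoding", "gzip");
        FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
        try (FSDataOutputStream out = fs.create(new Path("s3a://example-bucket/logs/app.log.gz"))) {
          out.write(new byte[0]); // already-compressed bytes would be written here
        }
      }
    }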

Contributed by: Holden Karau
2021-09-29 13:42:07 +01:00
Petre Bogdan Stolojan b7c2864613
HADOOP-17198. Support S3 Access Points (#3260)
Add support for S3 Access Points. This provides extra security as it
ensures applications are not working with buckets belonging to third parties.

To bind a bucket to an access point, set the access point (ap) ARN,
which must be done for each specific bucket, using the pattern

fs.s3a.bucket.$BUCKET.accesspoint.arn = ARN

* The global/bucket option `fs.s3a.accesspoint.required` can be set to
mandate that buckets declare their access point.
* This is not compatible with S3Guard.

Consult the documentation for further details.
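
A hedged sketch of the binding described above; the bucket name, ARN and class name are illustrative:

    import org.apache.hadoop.conf.Configuration;

    public class AccessPointSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Bind the bucket "example-bucket" to an S3 Access Point ARN.
        conf.set("fs.s3a.bucket.example-bucket.accesspoint.arn",
            "arn:aws:s3:eu-west-1:123456789012:accesspoint/example-ap");
        // Optionally require every bucket to declare an access point.
        conf.setBoolean("fs.s3a.accesspoint.required", true);
      }
    }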

Contributed by Bogdan Stolojan
2021-09-29 10:54:17 +01:00
Mehakmeet Singh acffe203b8
HADOOP-17195. ABFS: OutOfMemory error while uploading huge files (#3446)
Addresses the problem of processes running out of memory when
there are many ABFS output streams queuing data to upload,
especially when the network upload bandwidth is less than the rate
at which data is generated.

ABFS output streams now buffer their blocks of data to
"disk", "bytebuffer" or "array", as set in
"fs.azure.data.blocks.buffer".

When buffering via disk, the location for temporary storage
is set in "fs.azure.buffer.dir".

For safe scaling: use "disk" (default); for performance, when
confident that upload bandwidth will never be a bottleneck,
experiment with the memory options.

The number of blocks a single stream can have queued for uploading
is set in "fs.azure.block.upload.active.blocks".
The default value is 20.
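
A minimal sketch of these options together; the buffer directory and class name are illustrative, while "disk" and 20 restate the defaults described above:

    import org.apache.hadoop.conf.Configuration;

    public class AbfsUploadBufferSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // "disk" is the safe default; "bytebuffer" and "array" trade memory for speed.
        conf.set("fs.azure.data.blocks.buffer", "disk");
        // Where disk-buffered blocks are staged before upload.
        conf.set("fs.azure.buffer.dir", "/tmp/abfs-upload");
        // Maximum blocks a single stream may queue for upload (default 20).
        conf.setInt("fs.azure.block.upload.active.blocks", 20);
      }
    }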

Contributed by Mehakmeet Singh.
2021-09-21 12:48:06 +01:00
Mehakmeet Singh c54bf19978
HADOOP-17871. S3A CSE: minor tuning (#3412)
This migrates the fs.s3a server-side encryption configuration options
to names which cover client-side encryption too.

fs.s3a.server-side-encryption-algorithm becomes fs.s3a.encryption.algorithm
fs.s3a.server-side-encryption.key becomes fs.s3a.encryption.key

The existing keys remain valid, simply deprecated and remapped
to the new values. If you want server-side encryption options
to be picked up regardless of hadoop versions, use
the old keys.

(the old key also works for CSE, though as no version of Hadoop
with CSE support has shipped without this remapping, it's less
relevant)
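
A short sketch of the renaming, with illustrative values and a hypothetical class name; the commented-out lines show the deprecated keys, which remain valid:

    import org.apache.hadoop.conf.Configuration;

    public class EncryptionOptionRenameSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // New option names introduced by this change.
        conf.set("fs.s3a.encryption.algorithm", "SSE-KMS");
        conf.set("fs.s3a.encryption.key", "example-kms-key-id");
        // Deprecated names, still valid and remapped to the new ones; use these
        // if the same configuration must also work on older Hadoop releases.
        // conf.set("fs.s3a.server-side-encryption-algorithm", "SSE-KMS");
        // conf.set("fs.s3a.server-side-encryption.key", "example-kms-key-id");
      }
    }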


Contributed by: Mehakmeet Singh
2021-09-15 22:29:22 +01:00
Steve Loughran 10f3abeae7
Revert "HADOOP-17195. OutOfMemory error while performing hdfs CopyFromLocal to ABFS (#3406)" (#3443)
This reverts commit 52c024cc3a.
2021-09-15 22:27:49 +01:00
Mehakmeet Singh 52c024cc3a
HADOOP-17195. OutOfMemory error while performing hdfs CopyFromLocal to ABFS (#3406)
This migrates the fs.s3a server-side encryption configuration options
to names which cover client-side encryption too.

fs.s3a.server-side-encryption-algorithm becomes fs.s3a.encryption.algorithm
fs.s3a.server-side-encryption.key becomes fs.s3a.encryption.key

The existing keys remain valid, simply deprecated and remapped
to the new values. If you want server-side encryption options
to be picked up regardless of hadoop versions, use
the old keys.

(the old key also works for CSE, though as no version of Hadoop
with CSE support has shipped without this remapping, it's less
relevant)


Contributed by: Mehakmeet Singh
2021-09-15 22:27:28 +01:00
Steve Loughran 6e3aeb1544
HADOOP-17894. CredentialProviderFactory.getProviders() recursion loading JCEKS file from S3A (#3393)
* CredentialProviderFactory to detect and report on recursion.
* S3AFS to remove incompatible providers.
* Integration Test for this.
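
A sketch of one way to avoid the recursion, assuming the credential store lives outside S3A; the path and class name are illustrative:

    import org.apache.hadoop.conf.Configuration;

    public class CredentialProviderSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // A JCEKS store on the local filesystem (or HDFS) sidesteps the
        // recursive loading triggered when the store itself is on s3a://.
        conf.set("hadoop.security.credential.provider.path",
            "jceks://file/etc/hadoop/conf/s3a.jceks");
      }
    }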

Contributed by Steve Loughran.
2021-09-07 15:29:37 +01:00
Mukund Thakur 9b8f81a179
HADOOP-17156. ABFS: Release the byte buffers held by input streams in close() (#3285)
Contributed By: Mukund Thakur
2021-09-07 15:13:36 +05:30
Dongjoon Hyun 265a48e245
HADOOP-17869. `fs.s3a.connection.maximum` should be bigger than `fs.s3a.threads.max` (#3337).
The value of `fs.s3a.connection.maximum` has been increased to 96.
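
A small sketch of keeping the two pools aligned; the thread count shown is illustrative rather than a documented default:

    import org.apache.hadoop.conf.Configuration;

    public class ConnectionPoolSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Keep the HTTP connection pool at least as large as the thread pool
        // so worker threads never block waiting for a connection.
        conf.setInt("fs.s3a.threads.max", 64);
        conf.setInt("fs.s3a.connection.maximum", 96);
      }
    }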

Contributed by Dongjoon Hyun
2021-08-30 18:30:43 +01:00
sumangala-patki dcddc6a59f
HADOOP-17682. ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters (#2975) 2021-08-18 19:14:10 +05:30
Steve Loughran ee07b90286
HADOOP-17836. Improve logging on ABFS error reporting (#3281)
Contributed by Steve Loughran.
2021-08-18 11:39:17 +01:00
Mehakmeet Singh 8d6a686953
HADOOP-17823. S3A S3Guard tests to skip if S3-CSE are enabled (#3263)
Follow on to
* HADOOP-13887. Encrypt S3A data client-side with AWS SDK (S3-CSE)
* HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

If the S3A bucket is set up to use S3-CSE encryption, all tests which turn
on S3Guard are skipped, so they don't raise any exceptions about
incompatible configurations.

Contributed by: Mehakmeet Singh
2021-08-05 11:46:17 +01:00
Steve Loughran a67a0fd37a
YARN-10878. move TestNMSimulator off com.google (#3268)
Converting from a class to a lambda expression removes all need to reference the com.google classes.

Contributed by Steve Loughran
2021-08-05 11:34:10 +01:00
sumangala-patki 3450522c2f
HADOOP-17618. ABFS: Partially obfuscate SAS object IDs in Logs (#2845)
Contributed by Sumangala Patki
2021-08-04 19:45:57 +01:00
Steve Loughran 4627e9c7ef
HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature (#3249)
Fixes the regression caused by HADOOP-17511 by moving where the
option fs.s3a.acl.default is read: it is now read before the
RequestFactory is created.

Adds

* A unit test in TestRequestFactory to verify the ACLs are set
  on all file write operations.
* A new ITestS3ACannedACLs test which verifies that ACLs really
  do get all the way through.
* S3A Assumed Role delegation tokens to include the IAM permission
  s3:PutObjectAcl in the generated role.
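
For reference, a hedged sketch of the option whose handling is fixed here; the canned ACL value and class name are illustrative:

    import org.apache.hadoop.conf.Configuration;

    public class CannedAclSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Canned ACL applied to every object written through S3A.
        conf.set("fs.s3a.acl.default", "bucket-owner-full-control");
      }
    }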

Contributed by Steve Loughran
2021-08-02 15:26:56 +01:00
Steve Loughran ee466d4b40
HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out. (#3240)
This patch cuts down the size of directory trees used for
distcp contract tests against object stores, making
them much faster against distant/slow stores.

On abfs, the test only runs with -Dscale (as was the case for s3a already),
and has the larger scale test timeout.

After every test case, the FileSystem IOStatistics are logged
to provide information about what IO is taking place and
what its performance is.

There are some test cases which upload files of 1+ MiB; you can
increase the size of the upload with the option
"scale.test.distcp.file.size.kb".
Setting it to zero skips the large file tests.

Contributed by Steve Loughran.
2021-08-02 11:36:43 +01:00
Bobby Wang 266b1bd1bb
HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store (#3222)
This improves error handling after multiple failures reading data:
when the read fails and attempts to reconnect() also fail.

Contributed by Bobby Wang.
2021-07-30 20:04:11 +01:00
Petre Bogdan Stolojan a218038960
HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem (#3101)
This work
* Defines the behavior of FileSystem.copyFromLocal in filesystem.md
* Implements a high performance implementation of copyFromLocalOperation
  for S3 
* Adds a contract test for the operation: AbstractContractCopyFromLocalTest
* Implements the contract tests for Local and S3A FileSystems
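
A minimal usage sketch of the operation, with hypothetical paths and class name:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CopyFromLocalSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
        // copyFromLocal semantics are defined in filesystem.md; S3A now
        // provides an optimized implementation of this call.
        fs.copyFromLocalFile(
            new Path("file:///tmp/local-data.csv"),
            new Path("s3a://example-bucket/ingest/local-data.csv"));
      }
    }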

Contributed by: Bogdan Stolojan
2021-07-30 19:42:08 +01:00
Szilard Nemeth 74770c8a16 YARN-10663. Add runningApps stats in SLS. Contributed by Vadaga Ananyo Rao 2021-07-29 17:37:40 +02:00
Szilard Nemeth 54f9fff218 YARN-10628. Add node usage metrics in SLS. Contributed by Vadaga Ananyo Rao 2021-07-29 13:43:40 +02:00
Brian Loss 1d03c69963
HADOOP-17811: ABFS ExponentialRetryPolicy doesn't pick up configuration values (#3221)
Contributed by Brian Loss.
2021-07-28 20:22:58 +01:00
Mehakmeet Singh b19dae8db3
HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled (#3239)
Contributed by Mehakmeet Singh
2021-07-28 15:34:43 +01:00
bshashikant dac10fcc20
HDFS-16145. CopyListing fails with FNF exception with snapshot diff. (#3234) 2021-07-28 10:29:00 +05:30
sumangala-patki 10ba4cc892
HADOOP-17765. ABFS: Use Unique File Paths in Tests. (#3153)
Contributed by Sumangala Patki
2021-07-27 18:49:22 +01:00
Mehakmeet Singh f813554769
HADOOP-13887. Support S3 client side encryption (S3-CSE) using AWS-SDK (#2706)
This (big!) patch adds support for client side encryption in AWS S3,
with keys managed by AWS-KMS.

Read the documentation in encryption.md very, very carefully before
use and consider it unstable.

S3-CSE is enabled in the existing configuration option
"fs.s3a.server-side-encryption-algorithm":

fs.s3a.server-side-encryption-algorithm=CSE-KMS
fs.s3a.server-side-encryption.key=<KMS_KEY_ID>

You cannot enable CSE and SSE in the same client, although
you can still enable a default SSE option in the S3 console. 
  
* Filesystem list/get status operations subtract 16 bytes from the length
  of all files >= 16 bytes long to compensate for the padding which CSE
  adds.
* The SDK always warns that the specific algorithm chosen is
  deprecated. That algorithm is required for ranged GET requests
  (i.e. random IO) to work; ignore the warning.
* Unencrypted files CANNOT BE READ.
  The entire bucket SHOULD be encrypted with S3-CSE.
* Uploading files may be a bit slower as blocks are now
  written sequentially.
* The Multipart Upload API is disabled when S3-CSE is active.
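
A sketch of enabling S3-CSE through the existing options quoted above; the key id and class name are illustrative, and encryption.md remains the authoritative guide:

    import org.apache.hadoop.conf.Configuration;

    public class ClientSideEncryptionSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Client-side encryption with keys managed by AWS KMS.
        conf.set("fs.s3a.server-side-encryption-algorithm", "CSE-KMS");
        conf.set("fs.s3a.server-side-encryption.key", "example-kms-key-id");
      }
    }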

Contributed by Mehakmeet Singh
2021-07-27 11:08:51 +01:00
Anoop Sam John dd8e540670
Addendum HADOOP-17770 WASB : Support disabling buffered reads in positional reads - Added the invalid SpotBugs warning to findbugs-exclude.xml (#3223) 2021-07-25 13:10:27 +05:30
Petre Bogdan Stolojan 63dfd84947
HADOOP-17458. S3A to treat "SdkClientException: Data read has a different length than the expected" as EOFException (#3040)
Some network exceptions can raise SdkClientException with message
`Data read has a different length than the expected`.

These should be recoverable.

Contributed by Bogdan Stolojan
2021-07-23 14:44:29 +01:00
Eric Yin de41ce8a16
HDFS-16087. Fix stuck issue in rbfbalance tool (#3141). Contributed by Eric Yin. 2021-07-21 00:01:55 +08:00
Mehakmeet Singh 997d749f8a
HADOOP-17801. No error message reported when bucket doesn't exist in S3AFS (#3202)
Contributed by: Mehakmeet Singh.
2021-07-16 15:27:00 +01:00
Mehakmeet Singh f6f105c7de
HADOOP-17803. Remove WARN logging from LoggingAuditor when executing a request outside an audit span (#3207)
Followup to HADOOP-17511. "Add audit/telemetry logging to S3A connector"

Contributed by Mehakmeet Singh
2021-07-16 11:47:05 +01:00
Anoop Sam John 177d906a67
HADOOP-17770 WASB : Support disabling buffered reads in positional reads (#3149) 2021-07-13 10:37:12 +05:30
litao fef53aacc9
HDFS-16122. Fix DistCpContext#toString() (#3191). Contributed by tomscut.
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2021-07-10 13:55:11 +05:30
Mukund Thakur 93ad7c32f4
HADOOP-17250 Lot of short reads can be merged with readahead. (#3110)
Introduces the fs.azure.readahead.range parameter, which can be set by the user.
Data is now populated in the buffer for random reads as well, which leads to fewer
remote calls.

This patch also changes the seek implementation to perform a lazy seek: the actual
seek is done only when a read is initiated and the data is not present in the buffer;
otherwise data is returned from the buffer, reducing the number of remote storage calls.
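
A minimal sketch of setting the new parameter; the value and class name are illustrative, not the shipped default:

    import org.apache.hadoop.conf.Configuration;

    public class AbfsReadaheadSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Extra bytes to fetch and buffer around reads; larger values
        // mean fewer remote calls for random-read workloads.
        conf.setLong("fs.azure.readahead.range", 256 * 1024);
      }
    }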

Contributed By: Mukund Thakur
2021-07-05 15:49:13 +05:30
sumangala-patki 35570e414a
HADOOP-17290. ABFS: Add Identifiers to Client Request Header (#2520)
Contributed by Sumangala Patki.
2021-07-02 19:13:20 +05:30
Mehakmeet Singh ea259f236c
HADOOP-17774. S3A bytesRead FS statistic showing twice the correct value (#3144)
Contributed by: Mehakmeet Singh
2021-07-02 14:03:16 +01:00
Masatake Iwasaki 3788fe52da HDFS-13916. Distcp SnapshotDiff to support WebHDFS. Contributed by Xun REN.
Signed-off-by: Masatake Iwasaki <iwasakims@apache.org>
2021-06-26 21:04:56 +00:00
Zamil Majdy ed5d10ee48
HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt (#3109)
Contributed by Zamil Majdy.
2021-06-25 20:01:48 +01:00
Steve Loughran 5b7f68ac76
HADOOP-17771. S3AFS creation fails "Unable to find a region via the region provider chain." (#3133)
This addresses the regression in Hadoop 3.3.1 where if no S3 endpoint
is set in fs.s3a.endpoint, S3A filesystem creation may fail on
non-EC2 deployments, depending on the local host environment setup.

* If fs.s3a.endpoint is empty/null, and fs.s3a.endpoint.region
  is null, the region is set to "us-east-1".
* If fs.s3a.endpoint.region is explicitly set to "" then the client
  falls back to the SDK region resolution chain; this works on EC2
* Details in troubleshooting.md, including a workaround for Hadoop-3.3.1+
* Also contains some minor restructuring of troubleshooting.md
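
A hedged sketch of pinning the region explicitly on a non-EC2 deployment; the endpoint, region and class name are illustrative:

    import org.apache.hadoop.conf.Configuration;

    public class S3ARegionSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Pin endpoint and region so filesystem creation does not depend
        // on the SDK's region probe.
        conf.set("fs.s3a.endpoint", "s3.eu-west-1.amazonaws.com");
        conf.set("fs.s3a.endpoint.region", "eu-west-1");
        // On EC2, an explicit empty fs.s3a.endpoint.region instead falls
        // back to the SDK region resolution chain.
      }
    }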

Contributed by Steve Loughran.
2021-06-24 16:37:27 +01:00
Takanobu Asanuma 9e7c7ad129
HADOOP-17760. Delete hadoop.ssl.enabled and dfs.https.enable from docs and core-default.xml (#3099)
Reviewed-by: Ayush Saxena <ayushsaxena@apache.org>
2021-06-17 09:58:47 +09:00
snehavarma 35e4c31fff
HADOOP-17714 ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail when triggered with default configs (#3035) 2021-06-13 23:52:29 +05:30
Anoop Sam John 5970c632d4
HADOOP-17645 Fix test failures in org.apache.hadoop.fs.azure.ITestOutputStreamSemantics. (#2926) 2021-06-13 23:07:10 +05:30
Petre Bogdan Stolojan de9ca9f155
HADOOP-17547 Magic committer to downgrade abort in cleanup if list uploads fails with access denied (#3051)
Contributed by Bogdan Stolojan
2021-06-12 17:45:12 +01:00
Anoop Sam John 2cf952baf4
HADOOP-17643 WASB : Make metadata checks case insensitive (#2972) 2021-06-12 15:25:03 +05:30
Viraj Jasani 4ef27a596f
HADOOP-17753. Keep restrict-imports-enforcer-rule for Guava Lists in top level hadoop-main pom (#3087) 2021-06-11 12:15:52 +09:00
snehavarma 4c039fafeb
HADOOP-17715 ABFS: Append blob tests with non HNS accounts fail (#3028) 2021-06-09 10:54:10 +05:30
Viraj Jasani 00d372b663
HADOOP-17725. Improve error message for token providers in ABFS (#3041)
Contributed by Viraj Jasani.
2021-06-08 22:03:03 +01:00
Akira Ajisaka 57a3613e5d
HDFS-16050. Some dynamometer tests fail. (#3079)
Signed-off-by: Takanobu Asanuma <tasanuma@apache.org>
2021-06-07 14:37:30 +09:00
Viraj Jasani f4b24c68e7
HADOOP-17743. Replace Guava Lists usage by Hadoop's own Lists in hadoop-common, hadoop-tools and cloud-storage projects (#3072) 2021-06-07 13:24:09 +09:00