Mukund Thakur b0c9e4f1b5
HADOOP-16900. Very large files can be truncated when written through the S3A FileSystem.
Contributed by Mukund Thakur and Steve Loughran.

This patch ensures that writes through the S3A FileSystem fail once more than
10,000 blocks would be uploaded, instead of silently truncating the file.
That upper bound still exists. To write massive files, set
fs.s3a.multipart.size to a value large enough that the file uploads in fewer
than 10,000 blocks.
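Because the 10,000-part ceiling comes from S3's multipart-upload limit, the largest file that can be written is roughly fs.s3a.multipart.size multiplied by 10,000 (for example, 512 MB parts allow files up to about 5 TB). The sketch below shows one way to raise the part size before writing a very large file; the bucket URI and the 512M value are illustrative assumptions, not values taken from this patch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import java.net.URI;

    public class S3AMultipartSizing {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // S3 multipart uploads are capped at 10,000 parts, so the maximum
        // file size is fs.s3a.multipart.size * 10,000. With 512 MB parts
        // that is roughly 5 TB, which matches S3's object-size limit.
        conf.set("fs.s3a.multipart.size", "512M"); // illustrative value
        // "s3a://example-bucket/" is a placeholder bucket URI.
        FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
        // Streams opened via fs.create(...) will now upload 512 MB parts,
        // so a file would have to exceed ~5 TB before hitting the limit.
        fs.close();
      }
    }

The same setting can be applied in core-site.xml; the programmatic form is shown here only to keep the example self-contained.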

Change-Id: Icec604e2a357ffd38d7ae7bc3f887ff55f2d721a

For the latest information about Hadoop, please visit our website at:

   http://hadoop.apache.org/

and our wiki, at:

   https://cwiki.apache.org/confluence/display/HADOOP/