Commit b0c9e4f1b5
Contributed by Mukund Thakur and Steve Loughran. This patch ensures that writes to S3A fail when more than 10,000 blocks are written. That upper bound still exists: to write massive files, make sure that fs.s3a.multipart.size is set to a size large enough to upload the file in fewer than 10,000 blocks. A configuration sketch follows below.

Change-Id: Icec604e2a357ffd38d7ae7bc3f887ff55f2d721a
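As a worked example (the 10,000-part limit is the ceiling referenced above; the part sizes here are illustrative, not recommended defaults): with a 64 MB part size, the largest object that can be uploaded is 64 MB × 10,000 ≈ 640 GB, so multi-terabyte files need a larger fs.s3a.multipart.size. A minimal core-site.xml sketch raising the part size might look like this:

```xml
<!-- Illustrative only: pick a part size so that (file size / part size) stays below 10,000.
     A 512M part size, for example, covers objects up to roughly 5 TB. -->
<property>
  <name>fs.s3a.multipart.size</name>
  <value>512M</value>
</property>
```

The value accepts size suffixes (e.g. "512M"); the trade-off is that larger parts buffer more data per upload, so raise it only as far as the target file sizes require.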
Top-level directories and files:

.github
dev-support
hadoop-assemblies
hadoop-build-tools
hadoop-client-modules
hadoop-cloud-storage-project
hadoop-common-project
hadoop-dist
hadoop-hdfs-project
hadoop-mapreduce-project
hadoop-maven-plugins
hadoop-minicluster
hadoop-project
hadoop-project-dist
hadoop-tools
hadoop-yarn-project
licenses
licenses-binary
.gitattributes
.gitignore
BUILDING.txt
Jenkinsfile
LICENSE-binary
LICENSE.txt
NOTICE-binary
NOTICE.txt
README.txt
pom.xml
start-build-env.sh
README.txt
For the latest information about Hadoop, please visit our website at:

  http://hadoop.apache.org/

and our wiki, at:

  https://cwiki.apache.org/confluence/display/HADOOP/