HADOOP-18246. Reduce lower limit on fs.s3a.prefetch.block.size to 1 byte. (#5120)

The minimum value of fs.s3a.prefetch.block.size is now 1 byte.

Contributed by Ankit Saurabh
Ankit Saurabh 2023-02-02 18:45:21 +00:00 committed by GitHub
parent ad0cff2f97
commit 22f6d55b71
3 changed files with 6 additions and 2 deletions


@@ -531,8 +531,7 @@ public class S3AFileSystem extends FileSystem implements StreamCapabilities,
     this.prefetchEnabled = conf.getBoolean(PREFETCH_ENABLED_KEY, PREFETCH_ENABLED_DEFAULT);
     long prefetchBlockSizeLong =
-        longBytesOption(conf, PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE,
-            PREFETCH_BLOCK_DEFAULT_SIZE);
+        longBytesOption(conf, PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE, 1);
     if (prefetchBlockSizeLong > (long) Integer.MAX_VALUE) {
       throw new IOException("S3A prefatch block size exceeds int limit");
     }
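The substance of the change is the last argument to `longBytesOption`: the floor is now 1 byte instead of the 8MB default. Hadoop's real helper lives elsewhere in the S3A code; as a rough illustration only, a standalone sketch of reading a byte-size option with a unit suffix and a minimum (names and parsing details here are assumptions, not the actual implementation):

```java
import java.util.Map;

// Illustrative sketch only: approximates a byte-size option reader with a
// minimum, in the spirit of longBytesOption. Not Hadoop's actual code.
public class BlockSizeOptionSketch {
    static final String PREFETCH_BLOCK_SIZE_KEY = "fs.s3a.prefetch.block.size";
    static final long PREFETCH_BLOCK_DEFAULT_SIZE = 8L * 1024 * 1024; // 8MB

    // Parse "8M", "32K", "1", etc. into a byte count.
    static long parseBytes(String s) {
        s = s.trim().toUpperCase();
        long mult = 1;
        char last = s.charAt(s.length() - 1);
        if (last == 'K') { mult = 1L << 10; }
        else if (last == 'M') { mult = 1L << 20; }
        else if (last == 'G') { mult = 1L << 30; }
        if (mult != 1) { s = s.substring(0, s.length() - 1); }
        return Long.parseLong(s) * mult;
    }

    // Read an option, falling back to defVal, rejecting values below min.
    static long longBytesOption(Map<String, String> conf, String key,
                                long defVal, long min) {
        String raw = conf.get(key);
        long value = (raw == null) ? defVal : parseBytes(raw);
        if (value < min) {
            throw new IllegalArgumentException(
                key + " = " + value + " is below the minimum " + min);
        }
        return value;
    }
}
```

With a floor of 1, a configured value of `1` passes validation where it previously would have been rejected against the 8MB minimum.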


@@ -1108,6 +1108,7 @@ options are covered in [Testing](./testing.md).
   <value>8MB</value>
   <description>
     The size of a single prefetched block of data.
+    Decreasing this will increase the number of prefetches required, and may negatively impact performance.
   </description>
 </property>
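For anyone testing the new lower bound, an illustrative `core-site.xml` override might look like the following (the 1M value here is only an example, not a recommendation):

```xml
<property>
  <name>fs.s3a.prefetch.block.size</name>
  <!-- Any value of at least 1 byte is now accepted; sizes well below the
       8MB default will increase the number of prefetch requests. -->
  <value>1M</value>
</property>
```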


@@ -43,6 +43,10 @@ Multiple blocks may be read in parallel.
 |`fs.s3a.prefetch.block.size` |Size of a block |`8M` |
 |`fs.s3a.prefetch.block.count` |Number of blocks to prefetch |`8` |
+
+The default size of a block is 8MB, and the minimum allowed block size is 1 byte.
+Decreasing block size will increase the number of blocks to be read for a file.
+A smaller block size may negatively impact performance as the number of prefetches required will increase.
 
 ### Key Components
 `S3PrefetchingInputStream` - When prefetching is enabled, S3AFileSystem will return an instance of
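The documentation change above says the number of blocks grows as block size shrinks. Assuming straightforward ceiling division (a hypothetical helper for illustration, not part of the S3A code), that relationship is:

```java
public class PrefetchBlockMath {
    // Blocks needed to cover a file: the ceiling of fileLength / blockSize,
    // computed with the usual integer trick to avoid floating point.
    static long blocksForFile(long fileLength, long blockSize) {
        if (blockSize < 1) {
            throw new IllegalArgumentException("block size must be at least 1 byte");
        }
        return (fileLength + blockSize - 1) / blockSize;
    }
}
```

For example, a 100MB file needs 13 blocks at the default 8MB block size, but 104,857,600 blocks at the new 1-byte minimum, which is why very small block sizes can hurt performance.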