Stop Allocating Buffers in CopyBytesSocketChannel (#49825) (#49832)

* Stop Allocating Buffers in CopyBytesSocketChannel (#49825)

The way things currently work, we read up to 1M from the channel
and then potentially force all of it into the `ByteBuf` passed
by Netty. Since that `ByteBuf` tends to be `64k` in size by default,
large reads will force the buffer to grow, completely circumventing
the logic of `allocHandle`.
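
To illustrate, here is a minimal standalone sketch (not the Elasticsearch code itself; the class name and numbers are illustrative) of what happens when a read of up to `MAX_BYTES_PER_WRITE` (1M) is copied into a 64k `ByteBuf`: the write silently reallocates the buffer to fit, regardless of the size the allocator picked.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

import java.nio.ByteBuffer;

public class BufferGrowthSketch {
    public static void main(String[] args) {
        // Netty's typical initial read buffer guess, ~64k.
        ByteBuf byteBuf = ByteBufAllocator.DEFAULT.heapBuffer(64 * 1024);
        System.out.println("capacity before: " + byteBuf.capacity()); // 65536

        // Pretend a single read from the socket filled the shared IO buffer with 1M.
        ByteBuffer ioBuffer = ByteBuffer.allocate(1024 * 1024);

        // Copying everything into the ByteBuf forces it to reallocate to >= 1M,
        // no matter what allocHandle.attemptedBytesRead() was told.
        byteBuf.writeBytes(ioBuffer);
        System.out.println("capacity after: " + byteBuf.capacity()); // 1048576

        byteBuf.release();
    }
}
```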

This seems like it could break
`io.netty.channel.RecvByteBufAllocator.Handle#continueReading`
since that method for the fixed-size allocator does check
whether the last read was equal to the attempted read size.
So if we set the attempted read size to `64k` because that's the buffer's size,
but then write `1M` to the buffer, we will stop reading on the IO loop,
even though the channel may still have bytes that we can read right away.
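
For a concrete picture of why that check trips, here is a simplified stand-in for the fixed-size handle's bookkeeping (illustrative only; the real `continueReading` in Netty also considers auto-read, per-read message limits, and so on):

```java
public class ContinueReadingSketch {

    // Simplified version of the "did the last read fill the whole attempt?" test:
    // the fixed-size handle keeps reading only if the previous read used up the
    // entire attempted size, i.e. the channel probably has more data waiting.
    static boolean continueReading(int attemptedBytesRead, int lastBytesRead) {
        return lastBytesRead == attemptedBytesRead;
    }

    public static void main(String[] args) {
        int attempted = 64 * 1024;   // what allocHandle was told (the ByteBuf's writable bytes)
        int lastRead = 1024 * 1024;  // what was actually written after the buffer grew

        // false: the IO loop stops reading even though the channel may still have data.
        System.out.println(continueReading(attempted, lastRead));

        // With attempted and actual bounded by the same limit, a full read keeps the loop going.
        System.out.println(continueReading(attempted, attempted)); // true
    }
}
```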

More importantly though, this can lead to running OOM quite easily
under IO pressure as we are forcing the heap buffers passed to the read
to `reallocate`.

Closes #49699
Armin Braun 2019-12-04 19:36:52 +01:00 committed by GitHub
parent 42f902977d
commit 91ac87d75b
1 changed file with 3 additions and 1 deletion

@@ -119,8 +119,10 @@ public class CopyBytesSocketChannel extends NioSocketChannel {
     @Override
     protected int doReadBytes(ByteBuf byteBuf) throws Exception {
         final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
-        allocHandle.attemptedBytesRead(byteBuf.writableBytes());
+        int writeableBytes = Math.min(byteBuf.writableBytes(), MAX_BYTES_PER_WRITE);
+        allocHandle.attemptedBytesRead(writeableBytes);
         ByteBuffer ioBuffer = getIoBuffer();
+        ioBuffer.limit(writeableBytes);
         int bytesRead = readFromSocketChannel(javaChannel(), ioBuffer);
         ioBuffer.flip();
         if (bytesRead > 0) {