* Stop Allocating Buffers in CopyBytesSocketChannel (#49825)

The way things currently work, we read up to 1M from the channel and then potentially force all of it into the `ByteBuf` passed by Netty. Since that `ByteBuf` tends to be `64k` by default, large reads force the buffer to grow, completely circumventing the logic of `allocHandle`. This seems like it could break `io.netty.channel.RecvByteBufAllocator.Handle#continueReading`, since for the fixed-size allocator that method checks whether the last read was equal to the attempted read size. So if we report an attempted read of `64k` because that is the buffer's size, but then write `1M` into the buffer, we stop reading on the IO loop even though the channel may still have bytes we could read right away. More importantly, this can easily run us out of memory under IO pressure, because we force the heap buffers passed to the read to `reallocate`.

Closes #49699
commit 91ac87d75b
parent 42f902977d
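To make the reallocation concrete: a minimal standalone sketch (not part of the patch; plain Netty buffer API) showing that writing past a heap `ByteBuf`'s capacity silently grows it, allocating a fresh backing array and copying the old contents:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ByteBufGrowthSketch {
    public static void main(String[] args) {
        // A heap buffer the size Netty tends to hand us by default.
        ByteBuf byteBuf = Unpooled.buffer(64 * 1024);
        System.out.println("capacity before: " + byteBuf.capacity()); // 65536

        // Forcing a 1M read's worth of bytes into it grows the buffer:
        // a new backing array is allocated and the old bytes are copied over.
        byteBuf.writeBytes(new byte[1024 * 1024]);
        System.out.println("capacity after:  " + byteBuf.capacity()); // >= 1048576

        byteBuf.release();
    }
}
```

Paying that allocate-and-copy on every oversized read is what drives the memory pressure described above.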
```diff
@@ -119,8 +119,10 @@ public class CopyBytesSocketChannel extends NioSocketChannel {
     @Override
     protected int doReadBytes(ByteBuf byteBuf) throws Exception {
         final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
-        allocHandle.attemptedBytesRead(byteBuf.writableBytes());
+        int writeableBytes = Math.min(byteBuf.writableBytes(), MAX_BYTES_PER_WRITE);
+        allocHandle.attemptedBytesRead(writeableBytes);
         ByteBuffer ioBuffer = getIoBuffer();
+        ioBuffer.limit(writeableBytes);
         int bytesRead = readFromSocketChannel(javaChannel(), ioBuffer);
         ioBuffer.flip();
         if (bytesRead > 0) {
```
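For clarity, a condensed sketch of the capped-read pattern the patch introduces (assumptions: `CAP` stands in for `MAX_BYTES_PER_WRITE` with an assumed 1M value, and `ioBuffer` is the reused IO buffer returned by `getIoBuffer()`):

```java
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

final class CappedReadSketch {
    // Plays the role of MAX_BYTES_PER_WRITE from the patch (assumed value).
    private static final int CAP = 1 << 20;

    // Read at most min(writableBytes, CAP) bytes so the subsequent copy can
    // never exceed what the Netty ByteBuf can hold without growing. This keeps
    // lastBytesRead <= attemptedBytesRead, so the fixed-size allocHandle's
    // continueReading() heuristic remains meaningful.
    static int cappedRead(SocketChannel channel, ByteBuffer ioBuffer, int writableBytes) throws Exception {
        int attempted = Math.min(writableBytes, CAP);
        ioBuffer.clear();
        ioBuffer.limit(attempted); // cap the socket read itself, not just the reported attempt
        int bytesRead = channel.read(ioBuffer);
        ioBuffer.flip();           // ready to copy [0, bytesRead) into the ByteBuf
        return bytesRead;
    }
}
```

The key design point is that the cap is applied to the `ByteBuffer`'s limit before the read rather than after the fact, so `allocHandle` always sees consistent attempted/actual byte counts.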