The state machine now simply tracks whether the endpoint is selecting or has been selected.
The slight complexity is that any transition between these two states goes through a locked
state, where there is exclusive access to the interest ops and the selection key.
It was possible that updateKey() saw a SELECTING state and therefore
attempted to call setKeyInterests(), while changeInterests() also saw
the SELECTING state and moved to CHANGING, so that _interestOps was
accessed concurrently.
Also made the update task call updateKey() instead of calling
setKeyInterests() directly, in order to comply with the state
machine; this required onSelected() to handle the additional states
created by updateKey().
Finally, updateKey() now calls setKeyInterests() before updating the
state, so that the call is isolated in its own state.
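A minimal sketch of the locked-transition idea (hypothetical names, not the actual SelectChannelEndPoint code):

    import java.util.concurrent.atomic.AtomicReference;

    class SelectionStateSketch
    {
        private enum State { SELECTING, LOCKED, SELECTED }

        private final AtomicReference<State> state = new AtomicReference<>(State.SELECTING);
        private int interestOps; // only touched while in LOCKED

        void changeInterests(int newOps)
        {
            // Any transition between SELECTING and SELECTED goes through LOCKED.
            State current;
            while ((current = state.get()) == State.LOCKED || !state.compareAndSet(current, State.LOCKED))
                Thread.yield(); // another thread holds LOCKED: wait for it to leave
            interestOps = newOps;         // exclusive access to the interest ops
            setKeyInterests(interestOps); // ...and to the selection key
            state.set(State.SELECTED);    // leave the locked state
        }

        private void setKeyInterests(int ops)
        {
            // In the real code this would call SelectionKey.interestOps(ops).
        }
    }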
Fixed IllegalStateException by handling NEED_UNWRAP for the CLOSED
state in fill().
The EOFException does not seem to be an issue with the client.
Also removed an unneeded catch block and an empty if statement.
WebSocket and multiplexed protocols are always read interested.
It may happen that while the application is writing, the write
blocks, resulting in a call to changeInterests().
At the same time, the selector may detect data to read and call
onSelected(), so there is a possibility that onSelected() runs
concurrently with changeInterests().
The fix adds an additional state (PROCESSING) that isolates the
changes that onSelected() makes to _interestOps, spin-waiting if
changeInterests() is running concurrently.
Likewise, changeInterests() spin-waits while onSelected() is running
concurrently.
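A rough sketch of that isolation (hypothetical names, not the actual endpoint code):

    import java.util.concurrent.atomic.AtomicReference;

    class ProcessingStateSketch
    {
        private enum State { SELECTING, PROCESSING, CHANGING }

        private final AtomicReference<State> state = new AtomicReference<>(State.SELECTING);
        private int _interestOps;

        void onSelected(int readyOps)
        {
            // PROCESSING isolates the changes to _interestOps; spin while
            // changeInterests() is in CHANGING.
            while (!state.compareAndSet(State.SELECTING, State.PROCESSING))
                Thread.yield();
            _interestOps &= ~readyOps;
            state.set(State.SELECTING);
        }

        void changeInterests(int newOps)
        {
            // Likewise, spin while onSelected() is in PROCESSING.
            while (!state.compareAndSet(State.SELECTING, State.CHANGING))
                Thread.yield();
            _interestOps |= newOps;
            state.set(State.SELECTING);
        }
    }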
The race was happening when updateKey() lost the race with
changeInterests() to update the interests and subsequently the key.
In that case, the selector thread that called updateKey() was returning
while changeInterests() was executing in the CHANGING state.
Since updateKey() lost the race, the actual key interests were still
(typically) OP_READ, so the selector thread would select() and call
onSelected() while changeInterests() was still executing, causing the
IllegalStateException.
Introduced parameter "dispatchIO" in the relevant factories so that
they can be configured by users and connections will be created
taking into account this parameter.
For less configurable connection factories, this parameter is
currently hardcoded to either true or false depending on the case.
For example, ALPN and NPN connections have it set to false, since they
do not perform any blocking operations in onFillable().
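A sketch of how such a flag might be honored when a connection becomes fillable (illustrative names, not the actual factory API):

    import java.util.concurrent.Executor;

    class DispatchIOSketch
    {
        private final Executor executor;
        private final boolean dispatchIO;

        DispatchIOSketch(Executor executor, boolean dispatchIO)
        {
            this.executor = executor;
            this.dispatchIO = dispatchIO;
        }

        void fillable()
        {
            if (dispatchIO)
                executor.execute(this::onFillable); // connection may block: use a pooled thread
            else
                onFillable(); // ALPN/NPN style: never blocks, run on the selector thread
        }

        private void onFillable()
        {
            // Read and parse the available bytes.
        }
    }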
Previously, SCEP was skipping the update of the interestOps if it
was either input or output shutdown.
Now it always sets the interestOps as requested and leaves it to the
connection to decide what interestOps need to be set.
Introduced a state machine to handle the various scenarios (ST = selector thread, Tx = pooled thread):
ST: call to SCEP.onSelected() moves from SELECTING -> PENDING.
ST: call to SCEP.updateKey() moves from PENDING -> UPDATING -> SELECTING.
T1: call to SCEP.changeInterests() moves (SELECTING | PENDING) -> CHANGING -> SELECTING.
The race between ST and T1 to move from PENDING to either UPDATING or CHANGING will be won
by one thread only, which will then perform the call to SelectionKey.setInterestOps().
Preferably, this will be done by ST during an updateKey() call. If updateKey() has already
been invoked, then changeInterests() will perform the call to SelectionKey.setInterestOps().
However, if T1 loses, it still has to perform the key update, so it will spin until ST
moves back to SELECTING.
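A condensed sketch of that race resolution (hypothetical names, the real SCEP code is more involved):

    import java.util.concurrent.atomic.AtomicReference;

    class KeyUpdateRaceSketch
    {
        private enum State { SELECTING, PENDING, UPDATING, CHANGING }

        private final AtomicReference<State> state = new AtomicReference<>(State.SELECTING);

        // ST: the selector signalled readiness for this endpoint.
        void onSelected()
        {
            state.set(State.PENDING);
            // ... dispatch the fillable/writable callbacks ...
        }

        // ST: after processing the selected keys, try to update the key ourselves.
        void updateKey()
        {
            if (state.compareAndSet(State.PENDING, State.UPDATING))
            {
                setKeyInterests();
                state.set(State.SELECTING);
            }
            // else changeInterests() won the race and will update the key itself.
        }

        // T1: a pooled thread needs different interest ops.
        void changeInterests()
        {
            // Try to win from SELECTING or PENDING; if ST won (UPDATING),
            // spin until it moves back to SELECTING, then update the key here.
            while (!state.compareAndSet(State.SELECTING, State.CHANGING) &&
                   !state.compareAndSet(State.PENDING, State.CHANGING))
                Thread.yield();
            setKeyInterests();
            state.set(State.SELECTING);
        }

        private void setKeyInterests()
        {
            // In the real code: SelectionKey.interestOps(...).
        }
    }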
From discussion with Simone, the dispatched fillInterest.onFail()
needs to occur, so that AbstractConnection.onReadTimeout() can
handle the close conditions needed by SPDY and WebSocket.
The EndPoint close within onIdleExpired races with this desired
onReadTimeout behavior.
Introduced LeakDetector and utility classes LeakTrackingConnectionPool
and LeakTrackingByteBufferPool to track resource pool leakages.
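A possible usage sketch, assuming the Jetty client API of that period:

    import org.eclipse.jetty.client.HttpClient;
    import org.eclipse.jetty.io.LeakTrackingByteBufferPool;
    import org.eclipse.jetty.io.MappedByteBufferPool;

    public class LeakTrackingExample
    {
        public static void main(String[] args) throws Exception
        {
            HttpClient client = new HttpClient();
            // Wrap the real pool: buffers that are acquired but never released are reported.
            client.setByteBufferPool(new LeakTrackingByteBufferPool(new MappedByteBufferPool()));
            client.start();
            // ... use the client ...
            client.stop();
        }
    }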
Fixed ConnectionPool to be more precise in closing connections when
release() cannot recycle the connection.
Fixed a leak in the server's HttpConnection when a request arrives with
the Connection: close header: a ByteBuffer was allocated but never
released.
Removed code duplications, and also removed the close() method, which
was unnecessary since onClose() performed exactly the same code.
Also reviewed subclasses of IdleTimeout to make sure that they always
call onClose() when they are "closed", so that the timeout does not
fire and there are no memory leaks (the scheduler holding a reference
to the timeout task, which in turn holds a reference to the
IdleTimeout instance).
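A sketch of the pattern being enforced, assuming IdleTimeout's abstract isOpen()/onIdleExpired(TimeoutException) and its onClose() cancelling the scheduled task:

    import java.util.concurrent.TimeoutException;
    import org.eclipse.jetty.io.IdleTimeout;
    import org.eclipse.jetty.util.thread.Scheduler;

    class ClosableIdleResource extends IdleTimeout
    {
        private volatile boolean open = true;

        ClosableIdleResource(Scheduler scheduler)
        {
            super(scheduler);
        }

        @Override
        public boolean isOpen()
        {
            return open;
        }

        @Override
        protected void onIdleExpired(TimeoutException timeout)
        {
            close(); // idle for too long: close the resource
        }

        void close()
        {
            open = false;
            // Always end up in onClose(): it cancels the pending timeout task so it
            // cannot fire late and the scheduler no longer references this instance.
            onClose();
        }
    }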
Fixed by making sure that we completely decrypt read bytes.
Before the fix, it was possible that we returned after the decryption
of one TLS frame, while another was still present in the
_encryptedBuffer.
This led to non-clean closes of the connection, which hampered the
ability of clients to reuse the session.
Now we decrypt in a loop and only return if there is nothing more
that we can decrypt.
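A simplified sketch of the decrypt-until-empty loop, using the plain JDK SSLEngine API rather than the actual SslConnection code:

    import java.nio.ByteBuffer;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLEngineResult;
    import javax.net.ssl.SSLException;

    class DecryptLoopSketch
    {
        int fill(SSLEngine engine, ByteBuffer encrypted, ByteBuffer decrypted) throws SSLException
        {
            int produced = 0;
            while (true)
            {
                SSLEngineResult result = engine.unwrap(encrypted, decrypted);
                produced += result.bytesProduced();
                // More than one TLS frame may be sitting in the encrypted buffer;
                // keep unwrapping until nothing more can be decrypted.
                if (result.bytesConsumed() == 0 && result.bytesProduced() == 0)
                    return produced;
            }
        }
    }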
Big refactoring to allow for additional proxy schemes that work at a
lower level than HTTP.
Introduced a client-side ConnectionFactory and bound it to
HttpDestination, so that connections to that destination will use the
same ConnectionFactory.
The destination's ConnectionFactory is now initialized from the proxy
configuration and the transport, which is now itself a
ConnectionFactory.
The proxy configuration has also changed, becoming polymorphic with the
introduction of a new ProxyConfiguration.Proxy abstract class,
which is implemented by HTTPProxy and can be implemented in the future
by SOCKS4Proxy (and possibly others).
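A possible usage sketch of the polymorphic proxy configuration (using the HttpProxy/getProxies() names of later Jetty client versions; the text above calls the class HTTPProxy):

    import org.eclipse.jetty.client.HttpClient;
    import org.eclipse.jetty.client.HttpProxy;

    public class ProxyExample
    {
        public static void main(String[] args) throws Exception
        {
            HttpClient client = new HttpClient();
            // The proxy list is polymorphic: HttpProxy today, possibly SOCKS4Proxy later.
            client.getProxyConfiguration().getProxies().add(new HttpProxy("proxy.example.com", 8080));
            client.start();
            // ... requests to matching destinations go through the proxy ...
            client.stop();
        }
    }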
These were possible on a busy server, when many new connections are
created and each triggers read interest: one connection submits the
read interest change, then runs the changes, finds that another
connection has been created, runs it, which schedules another read
interest change, and so on.
Now the code is simpler, and while we always offer to the queue,
it may even be faster.
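A generic sketch of the always-offer-to-the-queue approach (hypothetical names, not the actual selector code):

    import java.nio.channels.Selector;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    class SelectorUpdatesSketch
    {
        private final Queue<Runnable> updates = new ConcurrentLinkedQueue<>();
        private final Selector selector;

        SelectorUpdatesSketch(Selector selector)
        {
            this.selector = selector;
        }

        // Any thread: never run the change inline (which can recurse on a busy
        // server), always queue it and wake up the selector.
        void submit(Runnable update)
        {
            updates.offer(update);
            selector.wakeup();
        }

        // Selector thread: drain the queue before selecting again.
        void runUpdates()
        {
            Runnable update;
            while ((update = updates.poll()) != null)
                update.run();
        }
    }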
There are cases, for example in WebSocket, where we want to allocate
small buffers to write frame headers and then do a gathered write.
ArrayByteBufferPool had a minimum size of 64 bytes,
which was too big and always led to allocation rather than pooling,
causing performance slowdowns.
Defaults are now minSize=0, increment=1024.
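A small usage sketch with the new defaults (assuming ArrayByteBufferPool's bucketed acquire/release API):

    import java.nio.ByteBuffer;
    import org.eclipse.jetty.io.ArrayByteBufferPool;
    import org.eclipse.jetty.io.ByteBufferPool;

    public class SmallBufferPoolExample
    {
        public static void main(String[] args)
        {
            // Default constructor now uses minSize=0, increment=1024.
            ByteBufferPool pool = new ArrayByteBufferPool();
            ByteBuffer header = pool.acquire(10, true); // small frame-header buffer, now pooled
            // ... fill the header and perform the gathered write ...
            pool.release(header);
        }
    }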