Improvements to the Jetty server documentation.

Using one-line-per-sentence.

Signed-off-by: Simone Bordet <simone.bordet@gmail.com>
Simone Bordet 2020-04-17 16:16:53 +02:00
parent d681f10853
commit bea09425c8
20 changed files with 455 additions and 1186 deletions

View File

@ -19,44 +19,32 @@
[[eg-client-io-arch]]
=== Client Libraries I/O Architecture
The Jetty client libraries provide the basic components and APIs to implement a network client.
They build on the common xref:eg-io-arch[Jetty I/O Architecture] and provide client specific concepts (such as establishing a connection to a server).
There are conceptually two layers that compose the Jetty client libraries:
. xref:eg-client-io-arch-network[The network layer], that handles the low level I/O and deals with buffers, threads, etc.
. xref:eg-client-io-arch-protocol[The protocol layer], that handles the parsing of bytes read from the network and the generation of bytes to write to the network.
[[eg-client-io-arch-network]]
==== Client Libraries Network Layer
The Jetty client libraries use the common I/O design described in link:#eg-io-arch[this section].
The main client-side component is the link:{JDURL}/org/eclipse/jetty/io/ClientConnector.html[`ClientConnector`].
The `ClientConnector` primarily wraps the link:{JDURL}/org/eclipse/jetty/io/SelectorManager.html[`SelectorManager`] and aggregates four other components:
* a thread pool (in the form of a `java.util.concurrent.Executor`)
* a scheduler (in the form of an `org.eclipse.jetty.util.thread.Scheduler`)
* a byte buffer pool (in the form of an `org.eclipse.jetty.io.ByteBufferPool`)
* a TLS factory (in the form of an `org.eclipse.jetty.util.ssl.SslContextFactory.Client`)
The `ClientConnector` is where you want to set those components after you have configured them.
If you don't explicitly set those components on the `ClientConnector`, then appropriate defaults will be chosen when the `ClientConnector` starts.
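For illustration only, the following sketch shows pre-configured components being set on a `ClientConnector`; it assumes the standard setter names (`setExecutor()`, `setScheduler()`, `setSslContextFactory()`) — check the `ClientConnector` javadocs for the exact API.

[source,java]
----
// A minimal sketch (not the documentation's own example): configure the
// ClientConnector components explicitly instead of relying on the defaults.
QueuedThreadPool threadPool = new QueuedThreadPool();
threadPool.setName("client");

ScheduledExecutorScheduler scheduler = new ScheduledExecutorScheduler("client-scheduler", false);

SslContextFactory.Client sslContextFactory = new SslContextFactory.Client();

ClientConnector clientConnector = new ClientConnector();
clientConnector.setExecutor(threadPool);
clientConnector.setScheduler(scheduler);
clientConnector.setSslContextFactory(sslContextFactory);
clientConnector.start();
----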
The simplest example that creates and starts a `ClientConnector` is the following:
[source,java,indent=0]
----
@ -70,119 +58,72 @@ A more typical example:
include::../{doc_code}/embedded/client/ClientConnectorDocs.java[tags=typical]
----
A more advanced example that customizes the `ClientConnector` by overriding some of its methods:
[source,java,indent=0]
----
include::../{doc_code}/embedded/client/ClientConnectorDocs.java[tags=advanced]
----
Since `ClientConnector` is the component that handles the low-level network, it is also the component where you configure the low-level network parameters.
The most common parameters are:
* `ClientConnector.selectors`: the number of ``java.nio.Selector``s components (defaults to `1`) that are present to handle the ``SocketChannel``s opened by the `ClientConnector`.
You typically want to increase the number of selectors only for those use cases where each selector should handle more than a few hundred _concurrent_ socket events.
For example, one selector typically runs well for `250` _concurrent_ socket events; as a rule of thumb, you can multiply that number by `10` to obtain the number of opened sockets a selector can handle (`2500`), based on the assumption that not all the `2500` sockets will be active _at the same time_.
* `ClientConnector.idleTimeout`: the duration of time after which `ClientConnector` closes a socket due to inactivity (defaults to `30` seconds).
This is an important parameter to configure, and you typically want the client idle timeout to be shorter than the server idle timeout, to avoid race conditions where the client attempts to use a socket just before the client-side idle timeout expires, but the server-side idle timeout has already expired and the server is already closing the socket.
* `ClientConnector.connectBlocking`: whether the operation of connecting a socket to the server (i.e. `SocketChannel.connect(SocketAddress)`) must be a blocking or a non-blocking operation (defaults to `false`).
For `localhost` or same datacenter hosts you want to set this parameter to `true` because DNS resolution will be immediate (and likely never fail).
For generic Internet hosts (e.g. when you are implementing a web spider) you want to set this parameter to `false`.
* `ClientConnector.connectTimeout`: the duration of time after which `ClientConnector` aborts a connection attempt to the server (defaults to `5` seconds).
This time includes the DNS lookup time _and_ the TCP connect time.
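For example, assuming the `ClientConnector` setters mirror the parameter names above (verify against the javadocs), the configuration could look like the following sketch:

[source,java]
----
// Illustrative sketch: tune the ClientConnector network parameters.
ClientConnector clientConnector = new ClientConnector();
clientConnector.setSelectors(1);
clientConnector.setIdleTimeout(Duration.ofSeconds(15));   // Shorter than the server idle timeout.
clientConnector.setConnectBlocking(false);
clientConnector.setConnectTimeout(Duration.ofSeconds(5)); // Includes DNS lookup + TCP connect.
----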
Please refer to the `ClientConnector` link:{JDURL}/org/eclipse/jetty/io/ClientConnector.html[javadocs] for the complete list of configurable parameters.
[[eg-client-io-arch-protocol]]
==== Client Libraries Protocol Layer
The protocol layer builds on top of the network layer to generate the bytes to be written to the network and to parse the bytes read from the network.
Recall from link:#eg-io-arch-connection[this section] that Jetty uses the `Connection` abstraction to produce and interpret the network bytes.
On the client side, a `ClientConnectionFactory` implementation is the component that creates `Connection` instances based on the protocol that the client wants to "speak" with the server.
Applications use `ClientConnector.connect(SocketAddress, Map<String, Object>)` to establish a TCP connection to the server, and must tell `ClientConnector` how to create the `Connection` for that particular TCP connection, and how to notify back the application when the connection creation succeeds or fails.
This is done by passing a link:{JDURL}/org/eclipse/jetty/io/ClientConnectionFactory.html[`ClientConnectionFactory`] (that creates `Connection` instances) and a link:{JDURL}/org/eclipse/jetty/util/Promise.html[`Promise`] (that is notified of connection creation success or failure) in the context `Map` as follows:
[source,java,indent=0]
----
include::../{doc_code}/embedded/client/ClientConnectorDocs.java[tags=connect]
----
When a `Connection` is created successfully, its `onOpen()` method is invoked, and then the promise is completed successfully.
It is now possible to write a super-simple `telnet` client that reads and writes string lines:
[source,java,indent=0]
----
include::../{doc_code}/embedded/client/ClientConnectorDocs.java[tags=telnet]
----
Note how a very basic "telnet" API that applications could use is implemented in the form of the `onLine(Consumer<String>)` for the non-blocking receiving side and `writeLine(String, Callback)` for the non-blocking sending side.
Note also how the `onFillable()` method implements some basic "parsing" by looking up the `\n` character in the buffer.
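To make the parsing idea concrete, here is a self-contained (non-Jetty) sketch of scanning a `ByteBuffer` for `\n` and emitting complete lines; the class and method names are illustrative, not part of the Jetty API.

[source,java]
----
// Illustrative only: accumulate bytes and emit a line whenever '\n' is found.
// Assumes java.nio.ByteBuffer and java.util.function.Consumer are imported.
class LineParser
{
    private final StringBuilder line = new StringBuilder();

    void parse(ByteBuffer buffer, Consumer<String> onLine)
    {
        while (buffer.hasRemaining())
        {
            char c = (char)buffer.get();
            if (c == '\n')
            {
                onLine.accept(line.toString());
                line.setLength(0);
            }
            else if (c != '\r')
            {
                line.append(c);
            }
        }
    }
}
----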
NOTE: The "telnet" client above looks like a super-simple HTTP client because HTTP/1.0 can be seen as a line-based protocol.
HTTP/1.0 was used just as an example, but we could have used any other line-based protocol such as link:https://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol[SMTP], provided that the server was able to understand it.
This is very similar to what the Jetty client implementation does for real network protocols.
Real network protocols are of course more complicated and so is the implementation code that handles them, but the general ideas are similar.
The Jetty client implementation provides a number of `ClientConnectionFactory` implementations that can be composed to produce and interpret the network bytes.
For example, it is simple to modify the above example to use the TLS protocol so that you will be able to connect to the server on port `443`, typically reserved for the encrypted HTTP protocol.
The differences between the clear-text version and the TLS encrypted version are minimal:
[source,java,indent=0]
----

View File

@ -19,18 +19,11 @@
[[eg-client]]
== Client Libraries
The Eclipse Jetty Project also provides client-side libraries that allow you to embed a client in your applications.
A typical example is a client application that needs to contact a third party service via HTTP (for example a REST service).
Another example is a proxy application that receives HTTP requests and forwards them as FCGI requests to a PHP application such as WordPress, or receives HTTP/1.1 requests and converts them to HTTP/2.
Yet another example is a client application that needs to receive events from a WebSocket server.
The client libraries are designed to be non-blocking and offer both synchronous and asynchronous APIs and come with a large number of configuration options.
These are the available client libraries:
@ -38,9 +31,7 @@ These are the available client libraries:
* xref:eg-client-http2[The HTTP/2 Client Library]
* xref:eg-client-websocket[The WebSocket client library]
If you are interested in the low-level details of how the Eclipse Jetty client libraries work, or are interested in writing a custom protocol, look at the xref:eg-client-io-arch[Client I/O Architecture].
include::http/client-http.adoc[]
include::http2/client-http2.adoc[]

View File

@ -182,8 +182,7 @@ While the request content is awaited and consequently uploaded by the client app
In this case, `Response.Listener` callbacks will be invoked before the request is fully sent.
This allows fine-grained control of the request/response conversation: for example the server may reject contents that are too big, send a response to the client, which in turn may stop the content upload.
Another way to provide request content is by using an `OutputStreamRequestContent`, which allows applications to write request content when it is available to the `OutputStream` provided by `OutputStreamRequestContent`:
[source,java,indent=0]
----
@ -222,74 +221,42 @@ include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tags=inputStr
Finally, let's look at the advanced usage of the response content handling.
The response content is provided by the `HttpClient` implementation to application listeners following a reactive model similar to that of `java.util.concurrent.Flow`.
The listener that follows this model is `Response.DemandedContentListener`.
After the response headers have been processed by the `HttpClient` implementation, `Response.DemandedContentListener.onBeforeContent(response, demand)` is invoked.
This allows the application to control whether to demand the first content or not.
The default implementation of this method calls `demand.accept(1)`, which demands one chunk of content to the implementation.
The implementation will deliver the chunk of content as soon as it is available.
The chunks of content are delivered to the application by invoking `Response.DemandedContentListener.onContent(response, demand, buffer, callback)`.
Applications implement this method to process the content bytes in the `buffer`.
Succeeding the `callback` signals to the implementation that the application has consumed the `buffer` so that the implementation can dispose/recycle the `buffer`.
Failing the `callback` signals to the implementation to fail the response (no more content will be delivered, and the _response failed_ event will be emitted).
IMPORTANT: Succeeding the `callback` must be done only after the `buffer` bytes have been consumed.
When the `callback` is succeeded, the `HttpClient` implementation may reuse the `buffer` and overwrite the bytes with different bytes; if the application looks at the `buffer` _after_ having succeeded the `callback`, it may see other, unrelated, bytes.
The application uses the `demand` object to demand more content chunks.
Applications will typically demand just one more chunk of content via `demand.accept(1)`, but may decide to demand more via `demand.accept(2)` or demand "infinitely" once via `demand.accept(Long.MAX_VALUE)`.
Applications that demand for more than 1 chunk of content must be prepared to receive all the content that they have demanded.
Demanding for content and consuming the content are orthogonal activities.
An application can demand "infinitely" and store aside the pairs `(buffer, callback)` to consume them later.
If not done carefully, this may lead to excessive memory consumption, since the ``buffer``s are not consumed.
Succeeding the ``callback``s will result in the ``buffer``s being disposed/recycled, and may be done at any time.
An application can also demand one chunk of content, consume it (by succeeding the associated `callback`) and then _not_ demand for more content until a later time.
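As an illustration, a listener following this model might look like the following sketch; the method signatures are paraphrased from the description above, so check the `Response.DemandedContentListener` javadocs before relying on them.

[source,java]
----
// Illustrative sketch of the demand/callback model described above.
Response.DemandedContentListener listener = new Response.DemandedContentListener()
{
    @Override
    public void onBeforeContent(Response response, LongConsumer demand)
    {
        // Demand the first chunk (this is also the default behavior).
        demand.accept(1);
    }

    @Override
    public void onContent(Response response, LongConsumer demand, ByteBuffer buffer, Callback callback)
    {
        process(buffer);      // Consume the bytes (application-specific, hypothetical method).
        callback.succeeded(); // The buffer may now be disposed/recycled by the implementation.
        demand.accept(1);     // Ask for one more chunk, now or at a later time.
    }
};
----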
Subclass `Response.AsyncContentListener` overrides the behavior of `Response.DemandedContentListener`; when an application implementing its `onContent(response, buffer, callback)` succeeds the `callback`, it will have _both_ the effect of disposing/recycling the `buffer` _and_ the effect of demanding one more chunk of content.
Subclass `Response.ContentListener` overrides the behavior of `Response.AsyncContentListener`; when an application implementing its `onContent(response, buffer)` returns from the method itself, it will have _both_ the effect of disposing/recycling the `buffer` _and_ the effect of demanding one more chunk of content.
Previous examples of response content handling were inefficient because they involved copying the `buffer` bytes, either to accumulate them aside so that the application could use them when the request was completed, or because they were provided to an API such as `InputStream` that made use of `byte[]` (and therefore a copy from `ByteBuffer` to `byte[]` is necessary).
An application that implements a forwarder between two servers can be implemented efficiently by handling the response content without copying the `buffer` bytes as in the following example:
[source,java,indent=0]
----

View File

@ -19,13 +19,9 @@
[[eg-client-http-authentication]]
=== HttpClient Authentication Support
Jetty's `HttpClient` supports the `BASIC` and `DIGEST` authentication mechanisms defined by link:https://tools.ietf.org/html/rfc7235[RFC 7235], as well as the SPNEGO authentication mechanism defined in link:https://tools.ietf.org/html/rfc4559[RFC 4559].
The HTTP _conversation_ - the sequence of related HTTP requests - for a request that needs authentication is the following:
[plantuml]
----
@ -43,59 +39,43 @@ HttpClient -> Server : GET + Authentication
Server -> Application : 200 OK
----
Upon receiving a HTTP 401 response code, `HttpClient` looks at the `WWW-Authenticate` response header (the server _challenge_) and then tries to match configured authentication credentials to produce an `Authentication` header that contains the authentication credentials to access the resource.
You can configure authentication credentials in the `HttpClient` instance as follows:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=addAuthentication]
----
``Authentication``s are matched against the server challenge first by mechanism (e.g. `BASIC` or `DIGEST`), then by realm and then by URI.
If an `Authentication` match is found, the application does not receive events related to the HTTP 401 response.
These events are handled internally by `HttpClient` which produces another (internal) request similar to the original request but with an additional `Authorization` header.
If the authentication is successful, the server responds with a HTTP 200 and `HttpClient` caches the `Authentication.Result` so that subsequent requests for a matching URI will not incur the additional roundtrip caused by the HTTP 401 response.
It is possible to clear ``Authentication.Result``s in order to force authentication again:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=clearResults]
----
Authentication results may be preempted to avoid the additional roundtrip due to the server challenge in this way:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=preemptedResult]
----
In this way, requests for the given URI are enriched immediately with the `Authorization` header, and the server should respond with HTTP 200 (and the resource content) rather than with the 401 and the challenge.
It is also possible to preempt the authentication for a single request only, in this way:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=requestPreemptedResult]
----
See also the xref:eg-client-http-proxy-authentication[proxy authentication section] for further information about how authentication works with HTTP proxies.

View File

@ -20,54 +20,39 @@
=== HttpClient Configuration
`HttpClient` has a quite large number of configuration parameters.
Please refer to the `HttpClient` link:{JDURL}/org/eclipse/jetty/client/HttpClient.html[javadocs] for the complete list of configurable parameters.
The most common parameters are:
* `HttpClient.idleTimeout`: same as `ClientConnector.idleTimeout` described in xref:eg-client-io-arch-network[this section].
* `HttpClient.connectBlocking`: same as `ClientConnector.connectBlocking` described in xref:eg-client-io-arch-network[this section].
* `HttpClient.connectTimeout`: same as `ClientConnector.connectTimeout` described in xref:eg-client-io-arch-network[this section].
* `HttpClient.maxConnectionsPerDestination`: the max number of TCP connections that are opened for a particular destination (defaults to 64).
* `HttpClient.maxRequestsQueuedPerDestination`: the max number of requests queued (defaults to 1024).
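For example, a sketch of programmatic configuration, with setter names assumed to mirror the parameter names above (see the javadocs for the authoritative API):

[source,java]
----
// Illustrative sketch: configure the most common HttpClient parameters.
HttpClient httpClient = new HttpClient();
httpClient.setIdleTimeout(30_000);     // Milliseconds.
httpClient.setConnectBlocking(false);
httpClient.setConnectTimeout(5_000);   // Milliseconds.
httpClient.setMaxConnectionsPerDestination(64);
httpClient.setMaxRequestsQueuedPerDestination(1024);
httpClient.start();
----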
[[eg-client-http-configuration-tls]]
==== HttpClient TLS Configuration
`HttpClient` supports HTTPS requests out-of-the-box like a browser does.
The support for HTTPS requests is provided by a `SslContextFactory.Client`, typically configured in the `ClientConnector`.
If not explicitly configured, the `ClientConnector` will allocate a default one when started.
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tags=tlsExplicit]
----
The default `SslContextFactory.Client` verifies the certificate sent by the server by verifying the certificate chain.
This means that requests to public websites that have a valid certificate (such as ``https://google.com``) will work out-of-the-box.
However, requests made to sites (typically ``localhost``) that have an invalid (for example, expired or with a wrong host) or self-signed certificate will fail (like they will in a browser).
Certificate validation is performed at two levels: at the TLS implementation level (in the JDK) and - optionally - at the application level.
By default, certificate validation at the TLS level is enabled, while certificate validation at the application level is disabled.
You can configure the `SslContextFactory.Client` to skip certificate validation at the TLS level:
[source,java,indent=0]
----
@ -81,6 +66,4 @@ You can enable certificate validation at the application level:
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tags=tlsAppValidation]
----
Please refer to the `SslContextFactory.Client` link:{JDURL}/org/eclipse/jetty/util/ssl/SslContextFactory.Client.html[javadocs] for the complete list of configurable parameters.

View File

@ -51,9 +51,8 @@ You can remove cookies that you do not want to be sent in future HTTP requests:
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=removeCookie]
----
If you want to totally disable cookie handling, you can install a `HttpCookieStore.Empty`.
This must be done when `HttpClient` is used in a proxy application, in this way:
[source,java,indent=0]
----
@ -71,6 +70,7 @@ The example above will retain only cookies that come from the `google.com` domai
// TODO: move this section to server-side
==== Special Characters in Cookies
Jetty is compliant with link:https://tools.ietf.org/html/rfc6265[RFC6265], and as such care must be taken when setting a cookie value that includes special characters such as `;`.
Previously, Version=1 cookies defined in link:https://tools.ietf.org/html/rfc2109[RFC2109] (and continued in link:https://tools.ietf.org/html/rfc2965[RFC2965]) allowed for special/reserved characters to be enclosed within double quotes when declared in a `Set-Cookie` response header:

View File

@ -24,13 +24,10 @@ The Jetty HTTP client module provides easy-to-use APIs and utility classes to pe
Jetty's HTTP client is non-blocking and asynchronous.
It offers an asynchronous API that never blocks for I/O, making it very efficient in thread utilization and well suited for high performance scenarios such as load testing or parallel computation.
However, when all you need to do is to perform a `GET` request to a resource, Jetty's HTTP client also offers a synchronous API: a programming interface where the thread that issued the request blocks until the request/response conversation is complete.
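For example, a blocking `GET` is a one-liner (a sketch; it assumes an `HttpClient` instance that has already been started):

[source,java]
----
// Synchronous API: the calling thread blocks until the response arrives.
ContentResponse response = httpClient.GET("http://domain.com/path");
System.out.println(response.getStatus());
----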
Jetty's HTTP client supports xref:#eg-client-http-transport[different transports]: HTTP/1.1, FastCGI and HTTP/2.
This means that the semantic of a HTTP request (that is, " `GET` me the resource `/index.html` ") can be carried over the network in different formats.
The most common and default format is HTTP/1.1.
That said, Jetty's HTTP client can carry the same request using the FastCGI format or the HTTP/2 format.
The FastCGI transport is heavily used in Jetty's link:#fastcgi[FastCGI support] that allows Jetty to work as a reverse proxy to PHP (exactly like Apache or Nginx do) and therefore be able to serve - for example - WordPress websites.
@ -61,8 +58,7 @@ The Maven artifact coordinates are the following:
The main class is named `org.eclipse.jetty.client.HttpClient`.
You can think of a `HttpClient` instance as a browser instance.
Like a browser, it can make requests to different domains; it manages redirects, cookies and authentication; you can configure it with a proxy; and it provides you with the responses to the requests you make.
In order to use `HttpClient`, you must instantiate it, configure it, and then start it:
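For example, a minimal sketch of that sequence:

[source,java]
----
// Minimal sketch: instantiate, configure, then start the client.
HttpClient httpClient = new HttpClient();
httpClient.setFollowRedirects(true); // Example of a configuration call.
httpClient.start();
----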
@ -78,11 +74,8 @@ There are several reasons for having multiple `HttpClient` instances including,
* You want the two instances to behave like two different browsers and hence have different cookies, different authentication credentials, etc.
* You want to use link:#eg-client-http-transport[different transports].
Like browsers, HTTPS requests are supported out-of-the-box, as long as the server provides a valid certificate.
In case the server does not provide a valid certificate (or in case it is self-signed) you want to customize ``HttpClient``'s TLS configuration as described in xref:eg-client-http-configuration-tls[this section].
[[eg-client-http-stop]]
==== Stopping HttpClient
@ -99,64 +92,39 @@ Stopping `HttpClient` makes sure that the memory it holds (for example, authenti
[[eg-client-http-arch]]
==== HttpClient Architecture
A `HttpClient` instance can be thought of as a browser instance, and it manages the following components:
* a `CookieStore` (see xref:eg-client-http-cookie[this section]).
* an `AuthenticationStore` (see xref:eg-client-http-authentication[this section]).
* a `ProxyConfiguration` (see xref:eg-client-http-proxy[this section]).
* a set of _destinations_.
A _destination_ is the client-side component that represents an _origin_ on a server, and manages a queue of requests for that origin, and a xref:eg-client-http-connection-pool[pool of connections] to that origin.
An _origin_ may be simply thought of as the tuple `(scheme, host, port)` and it is where the client connects in order to communicate with the server.
However, this is not enough.
If you use `HttpClient` to write a proxy you may have different clients that want to contact the same server.
In this case, you may not want to use the same proxy-to-server connection to proxy requests for both clients, for example for authentication reasons: the server may associate the connection with authentication credentials and you do not want to use the same connection for two different users that have different credentials.
Instead, you want to use different connections for different clients and this can be achieved by "tagging" a destination with a tag object that represents the remote client (for example, it could be the remote client IP address).
Two origins with the same `(scheme, host, port)` but different `tag` create two different destinations and therefore two different connection pools.
However, this is still not enough.
It is possible that a server speaks different protocols on the same `port`.
A connection may start by speaking one protocol, for example HTTP/1.1, but then be upgraded to speak a different protocol, for example HTTP/2.
After a connection has been upgraded to a second protocol, it cannot speak the first protocol anymore, so it can only be used to communicate using the second protocol.
Two origins with the same `(scheme, host, port)` but different `protocol` create two different destinations and therefore two different connection pools.
Therefore an origin is identified by the tuple `(scheme, host, port, tag, protocol)`.
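For example, a proxy application might tag requests with the remote client address, so that requests from different clients use different connection pools; the sketch below assumes `Request.tag(Object)` is the relevant API (verify against the javadocs).

[source,java]
----
// Illustrative sketch: requests with different tags go to different destinations,
// and therefore to different connection pools.
String clientAddress = "10.0.0.7"; // Hypothetical remote client identifier.
ContentResponse response = httpClient.newRequest("http://server:8080/path")
    .tag(clientAddress)
    .send();
----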
[[eg-client-http-connection-pool]]
==== HttpClient Connection Pooling
A destination manages a `org.eclipse.jetty.client.ConnectionPool`, where connections to a particular origin are pooled for performance reasons:
opening a connection is a costly operation and it's better to reuse them for multiple requests.
NOTE: Remember that to select a specific destination you must select a specific origin, and that an origin is identified by the tuple `(scheme, host, port, tag, protocol)`, so you can have multiple destinations for the same `host` and `port`.
You can access the `ConnectionPool` in this way:
@ -167,17 +135,11 @@ include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tags=getConne
Jetty's client library provides the following `ConnectionPool` implementations:
* `DuplexConnectionPool`, historically the first implementation, only used by the HTTP/1.1 transport.
* `MultiplexConnectionPool`, the generic implementation valid for any transport where connections are reused with an MRU (most recently used) algorithm (that is, the connections most recently returned to the connection pool are the most likely to be used again).
* `RoundRobinConnectionPool`, similar to `MultiplexConnectionPool` but where connections are reused with a round-robin algorithm.
The `ConnectionPool` implementation can be customized for each destination by setting a `ConnectionPool.Factory` on the `HttpClientTransport`:
[source,java,indent=0]
----
@ -214,40 +176,22 @@ Destination -> Destination : dequeue(Request)
Destination -> Connection : send(Request)
----
When a request is sent, an origin is computed from the request; `HttpClient` uses that origin to find (or create if it does not exist) the corresponding destination.
The request is then queued onto the destination, and this causes the destination to ask its connection pool for a free connection.
If a connection is available, it is returned, otherwise a new connection is created.
Once the destination has obtained the connection, it dequeues the request and sends it over the connection.
The first request to a destination triggers the opening of the first connection.
A second request with the same origin sent _after_ the first request/response cycle is completed will reuse the same connection.
A second request with the same origin sent _concurrently_ with the first request will cause the opening of a second connection.
The configuration parameter `HttpClient.maxConnectionsPerDestination` (see also the xref:eg-client-http-configuration[configuration section]) controls the max number of connections that can be opened for a destination.
NOTE: If opening connections to a given origin takes a long time, then requests for that origin will queue up in the corresponding destination.
Each connection can handle a limited number of concurrent requests.
For HTTP/1.1, this number is always `1`: there can only be one outstanding request for each connection.
For HTTP/2 this number is determined by the server `max_concurrent_streams` setting (typically around `100`, i.e. there can be up to `100` outstanding requests for every connection).
When a destination has maxed out its number of connections, and all connections have maxed out their number of outstanding requests, more requests sent to that destination will be queued.
When the request queue is full, the request will be failed.
The configuration parameter `HttpClient.maxRequestsQueuedPerDestination` (see also the xref:eg-client-http-configuration[configuration section]) controls the max number of requests that can be queued for a destination.

View File

@ -21,9 +21,7 @@
Jetty's `HttpClient` can be configured to use proxies to connect to destinations.
Two types of proxies are available out of the box: a HTTP proxy (provided by class `org.eclipse.jetty.client.HttpProxy`) and a SOCKS 4 proxy (provided by class `org.eclipse.jetty.client.Socks4Proxy`).
Other implementations may be written by subclassing `ProxyConfiguration.Proxy`.
The following is a typical configuration:
@ -33,32 +31,25 @@ The following is a typical configuration:
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=proxy]
----
You specify the proxy host and proxy port, and optionally also the addresses that you do not want to be proxied, and then add the proxy configuration on the `ProxyConfiguration` instance.
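For example, a sketch of excluding certain hosts from being proxied; the method names are assumed from the `ProxyConfiguration` API, so check the javadocs.

[source,java]
----
// Illustrative sketch: configure an HTTP proxy and exclude an internal host.
HttpProxy proxy = new HttpProxy("proxyHost", 8888);
proxy.getExcludedAddresses().add("internal.example.com");
httpClient.getProxyConfiguration().getProxies().add(proxy);
----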
Configured in this way, `HttpClient` makes requests to the HTTP proxy (for plain-text HTTP requests) or establishes a tunnel via HTTP `CONNECT` (for encrypted HTTPS requests).
Proxying is supported for both HTTP/1.1 and HTTP/2.
[[eg-client-http-proxy-authentication]]
==== Proxy Authentication Support
Jetty's `HttpClient` supports proxy authentication in the same way it supports xref:eg-client-http-authentication[server authentication].
In the example below, the proxy requires `BASIC` authentication, but the server requires `DIGEST` authentication, and therefore:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=proxyAuthentication]
----
The HTTP conversation for successful authentications on both the proxy and the server is the following:
[plantuml]
----
@ -84,9 +75,6 @@ Proxy -> HttpClient : 200 OK
HttpClient -> Application : 200 OK
----
The application does not receive events related to the responses with code 407 and 401 since they are handled internally by `HttpClient`.
Similarly to the xref:eg-client-http-authentication[authentication section], the proxy authentication result and the server authentication result can be preempted to avoid, respectively, the 407 and 401 roundtrips.

View File

@ -19,20 +19,13 @@
[[eg-client-http-transport]]
=== HttpClient Pluggable Transports
Jetty's `HttpClient` can be configured to use different transports to carry the semantic of HTTP requests and responses.
This means that the intention of a client to request resource `/index.html` using the `GET` method can be carried over the network in different formats.
A `HttpClient` transport is the component that is in charge of converting a high-level, semantic HTTP request such as "`GET` resource ``/index.html``" into the specific format understood by the server (for example, HTTP/2), and of converting the server response from the specific format (HTTP/2) into high-level, semantic objects that can be used by applications.
The most common protocol format is HTTP/1.1, a textual protocol with lines separated by `\r\n`:
[source,screen]
----
@ -56,27 +49,17 @@ x0C x0B D O C U M E
...
----
Similarly, HTTP/2 is a binary protocol that transports the same information in yet another format.
A protocol may be _negotiated_ between client and server.
A request for a resource may be sent using one protocol (for example, HTTP/1.1), but the response may arrive in a different protocol (for example, HTTP/2).
`HttpClient` supports 3 static transports, each speaking only one protocol: xref:eg-client-http-transport-http11[HTTP/1.1], xref:eg-client-http-transport-http2[HTTP/2] and xref:eg-client-http-transport-fcgi[FastCGI], all of them with 2 variants: clear-text and TLS encrypted.
`HttpClient` also supports one xref:eg-client-http-transport-dynamic[dynamic transport], that can speak different protocols and can select the right protocol by negotiating it with the server or by explicit indication from applications.
Applications are typically not aware of the actual protocol being used.
This allows them to write their logic against a high-level API that hides the details of the specific protocol being used over the network.
[[eg-client-http-transport-http11]]
==== HTTP/1.1 Transport
@ -88,8 +71,7 @@ HTTP/1.1 is the default transport.
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=defaultTransport]
----
If you want to customize the HTTP/1.1 transport, you can explicitly configure it in this way:
[source,java,indent=0]
----
@ -106,12 +88,9 @@ The HTTP/2 transport can be configured in this way:
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=http2Transport]
----
`HTTP2Client` is the lower-level client that provides an API based on HTTP/2 concepts such as _sessions_, _streams_ and _frames_ that are specific to HTTP/2.
See xref:eg-client-http2[the HTTP/2 client section] for more information.
`HttpClientTransportOverHTTP2` uses `HTTP2Client` to format high-level semantic HTTP requests (like "GET resource /index.html") into the HTTP/2 specific format.
[[eg-client-http-transport-fcgi]]
==== FastCGI Transport
The FastCGI transport can be configured in this way:

[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=fcgiTransport]
----
In order to make requests using the FastCGI transport, you need to have a FastCGI server such as https://en.wikipedia.org/wiki/PHP#PHPFPM[PHP-FPM] (see also http://php.net/manual/en/install.fpm.php).
The FastCGI transport is primarily used by Jetty's link:#fastcgi[FastCGI support] to serve PHP pages (WordPress for example).
[[eg-client-http-transport-dynamic]]
==== Dynamic Transport
The static transports work well if you know in advance the protocol you want to speak with the server, or if the server only supports one protocol (such as FastCGI).
With the advent of HTTP/2, however, servers are now able to support multiple protocols, at least both HTTP/1.1 and HTTP/2.
The HTTP/2 protocol is typically negotiated between client and server.
This negotiation can happen via ALPN, a TLS extension that allows the client to tell the server the list of protocols that the client supports, so that the server can pick one of the client-supported protocols that it also supports; or via HTTP/1.1 upgrade by means of the `Upgrade` header.
Applications can configure the dynamic transport with one or more _application_ protocols such as HTTP/1.1 or HTTP/2.
The implementation will take care of using TLS for HTTPS URIs, using ALPN, negotiating protocols, upgrading from one protocol to another, etc.
By default, the dynamic transport only speaks HTTP/1.1:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=dynamicDefault]
----
The dynamic transport can be configured with just one protocol, making it equivalent to the corresponding static transport:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=dynamicOneProtocol]
----
The dynamic transport, however, has been implemented to support multiple transports, in particular both HTTP/1.1 and HTTP/2:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=dynamicH1H2]
----
IMPORTANT: The order in which the protocols are specified to `HttpClientTransportDynamic` indicates what is the client preference.
If the protocol is negotiated via ALPN, it is the server that decides what is the protocol to use for the communication, regardless of the client preference.
If the protocol is not negotiated, the client preference is honored.
Provided that the server supports both HTTP/1.1 and HTTP/2 clear-text, client applications can explicitly hint the version they want to use:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http/HTTPClientDocs.java[tag=dynamicClearText]
----
In case of TLS encrypted communication using the HTTPS scheme, things are a little more complicated.
If the client application explicitly specifies the HTTP version, then ALPN is not used on the client.
By specifying the HTTP version explicitly, the client application has prior-knowledge of what HTTP version the server supports, and therefore ALPN is not needed.
If the server does not support the HTTP version chosen by the client, then the communication will fail.
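
For example, a client application with prior knowledge that the server speaks HTTP/2 could force the version on an `https` request (a minimal sketch, where `httpClient` is a started `HttpClient` and the URI is just a placeholder):

[source,java,indent=0]
----
ContentResponse response = httpClient.newRequest("https://host/path")
    .version(HttpVersion.HTTP_2)
    .send();
----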
If the client application does not explicitly specify the HTTP version, then ALPN will be used on the client.
If the server also supports ALPN, then the protocol will be negotiated via ALPN and the server will choose the protocol to use.
If the server does not support ALPN, the client will try to use the first protocol configured in `HttpClientTransportDynamic`, and the communication may succeed or fail depending on whether the server supports the protocol chosen by the client.
[[eg-client-http2]]
=== HTTP/2 Client Library
In the vast majority of cases, client applications should use the generic, high-level, xref:eg-client-http[HTTP client library] that also provides HTTP/2 support via the pluggable xref:eg-client-http-transport-http2[HTTP/2 transport] or the xref:eg-client-http-transport-dynamic[dynamic transport].
The high-level HTTP library supports cookies, authentication, redirection, connection pooling and a number of other features that are absent in the low-level HTTP/2 library.
The HTTP/2 client library has been designed for those applications that need low-level access to HTTP/2 features such as _sessions_, _streams_ and _frames_, and this is quite a rare use case.
See also the corresponding xref:eg-server-http2[HTTP/2 server library].
The Maven artifact coordinates for the HTTP/2 client library are `org.eclipse.jetty.http2:http2-client`.
The main class is named `org.eclipse.jetty.http2.client.HTTP2Client`, and must be created, configured and started before use:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http2/HTTP2ClientDocs.java[tags=start]
----
When your application stops, or otherwise does not need `HTTP2Client` anymore, it should stop the `HTTP2Client` instance (or instances) that were started:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http2/HTTP2ClientDocs.java[tags=stop]
----
`HTTP2Client` allows client applications to connect to an HTTP/2 server.
A _session_ represents a single TCP connection to an HTTP/2 server and is defined by class `org.eclipse.jetty.http2.api.Session`.
A _session_ typically has a long life - once the TCP connection is established, it remains open until it is not used anymore (and therefore it is closed by the idle timeout mechanism), until a fatal error occurs (for example, a network failure), or if one of the peers decides unilaterally to close the TCP connection.
include::../../http2.adoc[tag=multiplex]
@ -81,14 +66,12 @@ include::../../http2.adoc[tag=multiplex]
include::../../http2.adoc[tag=flowControl]
How a client application should handle HTTP/2 flow control is discussed in detail in xref:eg-client-http2-response[this section].
[[eg-client-http2-connect]]
==== Connecting to the Server
The first thing an application should do is to connect to the server and obtain a `Session`.
The following example connects to the server on a clear-text port:
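
A minimal sketch of such a call, using `HTTP2Client.connect(SocketAddress, Session.Listener, Promise)` and `Promise.Completable`, may look like this:

[source,java,indent=0]
----
HTTP2Client http2Client = new HTTP2Client();
http2Client.start();

// Connect to the server on a clear-text port; the Promise is completed with the Session.
SocketAddress serverAddress = new InetSocketAddress("localhost", 8080);
Promise.Completable<Session> sessionPromise = new Promise.Completable<>();
http2Client.connect(serverAddress, new Session.Listener.Adapter(), sessionPromise);
Session session = sessionPromise.get(5, TimeUnit.SECONDS);
----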
The following example connects to the server on an encrypted port:

[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http2/HTTP2ClientDocs.java[tags=encryptedConnect]
----
IMPORTANT: Applications must know in advance whether they want to connect to a clear-text or encrypted port, and pass the `SslContextFactory` parameter accordingly to the `connect(...)` method.
[[eg-client-http2-configure]]
===== Configuring the Session
The `connect(...)` method takes a `Session.Listener` parameter.
This listener's `onPreface(...)` method is invoked just before establishing the connection to the server to gather the client configuration to send to the server.
Client applications can override this method to change the default configuration:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http2/HTTP2ClientDocs.java[tags=configure]
----
The `Session.Listener` is notified of session events originated by the server such as receiving a `SETTINGS` frame from the server, or the server closing the connection, or the client timing out the connection due to idleness.
Please refer to the `Session.Listener` link:{JDURL}/org/eclipse/jetty/http2/api/Session.Listener.html[javadocs] for the complete list of events.
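
For example, a sketch of a `Session.Listener` that reacts to a few of these events:

[source,java,indent=0]
----
Session.Listener sessionListener = new Session.Listener.Adapter()
{
    @Override
    public void onSettings(Session session, SettingsFrame frame)
    {
        // The server sent its configuration.
    }

    @Override
    public boolean onIdleTimeout(Session session)
    {
        // Return true to close the session when it is idle, false to keep it open.
        return true;
    }

    @Override
    public void onClose(Session session, GoAwayFrame frame)
    {
        // The server closed the connection.
    }
};
----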
Once a `Session` has been established, the communication with the server happens by exchanging _frames_, as specified in the link:https://tools.ietf.org/html/rfc7540#section-4[HTTP/2 specification].
[[eg-client-http2-request]]
==== Sending a Request
Sending an HTTP request to the server, and receiving a response, creates a _stream_ that encapsulates the exchange of HTTP/2 frames that compose the request and the response.
In order to send an HTTP request to the server, the client must send a `HEADERS` frame.
`HEADERS` frames carry the request method, the request URI and the request headers.
Sending the `HEADERS` frame opens the `Stream`:
[source,java,indent=0,subs=normal]
----
include::../../{doc_code}/embedded/client/http2/HTTP2ClientDocs.java[tags=newStream]
----
Note how `Session.newStream(...)` takes a `Stream.Listener` parameter.
This listener is notified of stream events originated by the server such as receiving `HEADERS` or `DATA` frames that are part of the response, discussed in more detail in the xref:eg-client-http2-response[section below].
Please refer to the `Stream.Listener` link:{JDURL}/org/eclipse/jetty/http2/api/Stream.Listener.html[javadocs] for the complete list of events.
HTTP requests may have content, which is sent using the `Stream` APIs:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http2/HTTP2ClientDocs.java[tags=newStreamWithData]
----
IMPORTANT: When sending two `DATA` frames consecutively, the second call to `Stream.data(...)` must be done only when the first is completed, or a `WritePendingException` will be thrown.
Use the `Callback` APIs or `CompletableFuture` APIs to ensure that the second `Stream.data(...)` call is performed when the first completed successfully.
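
For example, a sketch that sends two consecutive `DATA` frames using the `Callback` APIs (here `stream`, `bytes1` and `bytes2` are assumed to exist):

[source,java,indent=0]
----
ByteBuffer chunk1 = ByteBuffer.wrap(bytes1);
ByteBuffer chunk2 = ByteBuffer.wrap(bytes2);

stream.data(new DataFrame(stream.getId(), chunk1, false), new Callback()
{
    @Override
    public void succeeded()
    {
        // The first DATA frame has completed; it is now safe to send the second one.
        stream.data(new DataFrame(stream.getId(), chunk2, true), Callback.NOOP);
    }

    @Override
    public void failed(Throwable failure)
    {
        // The first DATA frame failed; handle the failure, for example by resetting the stream.
    }
});
----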
[[eg-client-http2-response]]
==== Receiving a Response
Response events are delivered to the `Stream.Listener` passed to `Session.newStream(...)`.
An HTTP response is typically composed of a `HEADERS` frame containing the HTTP status code and the response headers, and optionally one or more `DATA` frames containing the response content bytes.
The HTTP/2 protocol also supports response trailers (that is, headers that are sent after the response content) that also are sent using a `HEADERS` frame.
A client application can therefore receive the HTTP/2 frames sent by the server by implementing the relevant methods in `Stream.Listener`:
[source,java,indent=0]
----
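// Illustrative sketch, not the original documentation snippet: a Stream.Listener
// that handles the response HEADERS and DATA frames.
// Pass this listener to Session.newStream(...) when sending the request.
Stream.Listener responseListener = new Stream.Listener.Adapter()
{
    @Override
    public void onHeaders(Stream stream, HeadersFrame frame)
    {
        // The HEADERS frame carries the response status code and headers
        // (or the trailers, if it is not the first HEADERS frame received).
        MetaData metaData = frame.getMetaData();
        if (metaData.isResponse())
        {
            MetaData.Response response = (MetaData.Response)metaData;
            // Inspect response.getStatus() and response.getFields().
        }
    }

    @Override
    public void onData(Stream stream, DataFrame frame, Callback callback)
    {
        // Consume the response content bytes.
        ByteBuffer buffer = frame.getData();
        // ... use the bytes, then succeed the callback to demand more DATA frames.
        callback.succeeded();
    }
};
----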
include::../../http2.adoc[tag=apiFlowControl]
[[eg-client-http2-reset]]
==== Resetting a Request or Response
In HTTP/2, clients and servers have the ability to tell to the other peer that they are not interested anymore in either the request or the response, using a `RST_STREAM` frame.
The `HTTP2Client` APIs allow client applications to send and receive this "reset" frame:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http2/HTTP2ClientDocs.java[tags=reset]
----
[[eg-client-http2-push]]
==== Receiving HTTP/2 Pushes
HTTP/2 servers have the ability to push resources related to a primary resource.
When an HTTP/2 server pushes a resource, it sends to the client a `PUSH_PROMISE` frame that contains the request URI and headers that a client would use to request explicitly that resource.
Client applications can be configured to tell the server to never push resources, see xref:eg-client-http2-configure[this section].
Client applications can listen to the push events, and act accordingly:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/client/http2/HTTP2ClientDocs.java[tags=push]
----
If a client application does not want to handle a particular HTTP/2 push, it can just reset the pushed stream to tell the server to stop sending bytes for the pushed stream:
[source,java,indent=0]
----
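// Illustrative sketch, not the original documentation snippet: refuse a pushed
// stream by resetting it from within Stream.Listener.onPush().
// Pass this listener to Session.newStream(...) when sending the request.
Stream.Listener streamListener = new Stream.Listener.Adapter()
{
    @Override
    public Stream.Listener onPush(Stream pushedStream, PushPromiseFrame frame)
    {
        // Reset the pushed stream to tell the server to stop sending bytes for it.
        pushedStream.reset(new ResetFrame(pushedStream.getId(), ErrorCode.REFUSED_STREAM_ERROR.code), Callback.NOOP);
        // Return null to indicate no interest in the pushed stream events.
        return null;
    }
};
----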
// Snippets of HTTP/2 documentation that are common between client and server.
tag::multiplex[]
HTTP/2 is a multiplexed protocol: it allows multiple HTTP/2 requests to be sent on the same TCP connection.
Each request/response cycle is represented by a _stream_.
Therefore, a single _session_ manages multiple concurrent _streams_.
A _stream_ has typically a very short life compared to the _session_: a _stream_ only exists for the duration of the request/response cycle and then disappears.
end::multiplex[]
tag::flowControl[]
The HTTP/2 protocol is _flow controlled_ (see link:https://tools.ietf.org/html/rfc7540#section-5.2[the specification]).
This means that a sender and a receiver maintain a _flow control window_ that tracks the number of data bytes sent and received, respectively.
When a sender sends data bytes, it reduces its flow control window.
When a receiver receives data bytes, it also reduces its flow control window, and then passes the received data bytes to the application.
The application consumes the data bytes and notifies the receiver that it has consumed them.
The receiver then enlarges the flow control window, and arranges to send a message to the sender with the number of bytes consumed, so that the sender can enlarge its flow control window.
A sender can send data bytes up to its whole flow control window, then it must stop sending until it receives a message from the receiver that the data bytes have been consumed, which enlarges the flow control window, which allows the sender to send more data bytes.
HTTP/2 defines _two_ flow control windows: one for each _session_, and one for each _stream_.
Let's see with an example how they interact, assuming that in this example the session flow control window is 120 bytes and the stream flow control window is 100 bytes.
The sender opens a session, and then opens `stream_1` on that session, and sends `80` data bytes.
At this point the session flow control window is `40` bytes (`120 - 80`), and ``stream_1``'s flow control window is `20` bytes (`100 - 80`).
The sender now opens `stream_2` on the same session and sends `40` data bytes.
At this point, the session flow control window is `0` bytes (`40 - 40`), while ``stream_2``'s flow control window is `60` (`100 - 40`).
Since the session flow control window is now `0`, the sender cannot send more data bytes on either `stream_1` or `stream_2`, even though both have their stream flow control windows greater than `0`.
The receiver consumes ``stream_2``'s `40` data bytes and sends a message to the sender with this information.
At this point, the session flow control window is `40` (`0 + 40`), ``stream_1``'s flow control window is still `20` and ``stream_2``'s flow control window is `100` (`60 + 40`).
If the sender opens `stream_3` and would like to send 50 data bytes, it would only be able to send `40` because that is the maximum allowed by the session flow control window at this point.
It is therefore very important that applications notify, as soon as possible, that they have consumed the data bytes, so that the implementation (the receiver) can send a message to the sender (in the form of a `WINDOW_UPDATE` frame) to enlarge the flow control window, therefore reducing the possibility that the sender stalls due to the flow control windows being reduced to `0`.
end::flowControl[]
tag::apiFlowControl[]
NOTE: Returning from the `onData(...)` method implicitly demands for more `DATA` frames (unless the one just delivered was the last).
Additional `DATA` frames may be delivered immediately if they are available or later, asynchronously, when they arrive.
Applications that consume the content buffer within `onData(...)` (for example, writing it to a file, or copying the bytes to another storage) should succeed the callback as soon as they have consumed the content buffer.
This allows the implementation to reuse the buffer, reducing the memory requirements needed to handle the content buffers.
Alternatively, a client application may store away _both_ the buffer and the callback to consume the buffer bytes later, or pass _both_ the buffer and the callback to another asynchronous API (this is typical in proxy applications).
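
A sketch of this deferred-consumption pattern (the queue and the component that consumes it are assumed to be provided by the application):

[source,java,indent=0]
----
Queue<Map.Entry<ByteBuffer, Callback>> contentQueue = new ConcurrentLinkedQueue<>();

Stream.Listener listener = new Stream.Listener.Adapter()
{
    @Override
    public void onData(Stream stream, DataFrame frame, Callback callback)
    {
        // Store away both the buffer and the callback, without completing the callback yet.
        contentQueue.offer(Map.entry(frame.getData(), callback));
        // Another application component will poll the queue, consume the buffer,
        // and only then complete the callback.
    }
};
----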
IMPORTANT: Completing the `Callback` is very important not only to allow the implementation to reuse the buffer, but also tells the implementation to enlarge the stream and session flow control windows so that the sender will be able to send more `DATA` frames without stalling.
Applications can also precisely control _when_ to demand more `DATA` frames, by implementing the `onDataDemanded(...)` method instead of `onData(...)`:
[source,java,indent=0]
----
include::{doc_code}/embedded/HTTP2Docs.java[tags=dataDemanded]
----
IMPORTANT: Applications that implement `onDataDemanded(...)` must remember to call `Stream.demand(...)`.
If they don't, the implementation will not deliver `DATA` frames and the application will stall threadlessly until an idle timeout fires to close the stream or the session.
end::apiFlowControl[]
[[eg-io-arch]]
== Jetty I/O Architecture
Jetty libraries (both client and server) use Java NIO to handle I/O, so that at its core Jetty I/O is completely non-blocking.
[[eg-io-arch-selector-manager]]
=== Jetty I/O: `SelectorManager`
The core class of Jetty I/O is link:{JDURL}/org/eclipse/jetty/io/SelectorManager.html[`SelectorManager`].
`SelectorManager` manages internally a configurable number of link:{JDURL}/org/eclipse/jetty/io/ManagedSelector.html[`ManagedSelector`]s.
Each `ManagedSelector` wraps an instance of `java.nio.channels.Selector` that in turn manages a number of `java.nio.channels.SocketChannel` instances.
NOTE: TODO: add image
`SocketChannel` instances can be created by network clients when connecting to a server and by a network server when accepting connections from network clients.
In both cases the `SocketChannel` instance is passed to `SelectorManager` (which passes it to `ManagedSelector` and eventually to `java.nio.channels.Selector`) to be registered for use within Jetty.
It is possible for an application to create the `SocketChannel` instances outside Jetty, even perform some initial network traffic also outside Jetty (for example for authentication purposes), and then pass the `SocketChannel` instance to `SelectorManager` for use within Jetty.
This example shows how a server can accept a `SocketChannel` and pass it to the `SelectorManager`:

[source,java,indent=0]
----
include::{doc_code}/embedded/SelectorManagerDocs.java[tags=accept]
----
[[eg-io-arch-endpoint-connection]]
=== Jetty I/O: `EndPoint` and `Connection`
``SocketChannel``s that are passed to `SelectorManager` are wrapped into two related components: an link:{JDURL}/org/eclipse/jetty/io/EndPoint.html[`EndPoint`] and a link:{JDURL}/org/eclipse/jetty/io/Connection.html[`Connection`].
`EndPoint` is the Jetty abstraction for a `SocketChannel`: you can read bytes from an `EndPoint` via `EndPoint.fill(ByteBuffer)`, you can write bytes to an `EndPoint` via `EndPoint.flush(ByteBuffer...)` and `EndPoint.write(Callback, ByteBuffer...)`, you can close an `EndPoint` via `EndPoint.close()`, etc.
`Connection` is the Jetty abstraction that is responsible to read bytes from the `EndPoint` and to deserialize the read bytes into objects.
For example, a HTTP/1.1 server-side `Connection` implementation is responsible to deserialize HTTP/1.1 request bytes into a HTTP request object.
Conversely, a HTTP/1.1 client-side `Connection` implementation is responsible to deserialize HTTP/1.1 response bytes into a HTTP response object.
`Connection` is the abstraction that implements the reading side of a specific protocol such as HTTP/1.1, or HTTP/2, or WebSocket: it is able to read incoming communication in that protocol.
The writing side for a specific protocol _may_ be implemented in the `Connection` but may also be implemented in other components, although eventually the bytes to be written will be written through the `EndPoint`.
While there is primarily just one implementation of `EndPoint`, link:{JDURL}/org/eclipse/jetty/io/SocketChannelEndPoint.html[`SocketChannelEndPoint`] (used both on the client-side and on the server-side), there are many implementations of `Connection`, typically two for each protocol (one for the client-side and one for the server-side).
The `EndPoint` and `Connection` pairs can be chained, for example in case of encrypted communication using the TLS protocol.
There is an `EndPoint` and `Connection` TLS pair where the `EndPoint` reads the encrypted bytes from the network and the `Connection` decrypts them; next in the chain there is an `EndPoint` and `Connection` pair where the `EndPoint` "reads" decrypted bytes (provided by the previous `Connection`) and the `Connection` deserializes them into specific protocol objects (for example HTTP/2 frame objects).
Certain protocols, such as WebSocket, start the communication with the server using one protocol (e.g. HTTP/1.1), but then change the communication to use another protocol (e.g. WebSocket).
`EndPoint` supports changing the `Connection` object on-the-fly via `EndPoint.upgrade(Connection)`.
This makes it possible to use the HTTP/1.1 `Connection` during the initial communication and later replace it with a WebSocket `Connection`.
NOTE: TODO: add a section on `UpgradeFrom` and `UpgradeTo`?
`SelectorManager` is an abstract class because while it knows how to create concrete `EndPoint` instances, it does not know how to create protocol specific `Connection` instances.
Creating `Connection` instances is performed on the server-side by link:{JDURL}/org/eclipse/jetty/server/ConnectionFactory.html[`ConnectionFactory`]s and on the client-side by link:{JDURL}/org/eclipse/jetty/io/ClientConnectionFactory.html[`ClientConnectionFactory`]s.
On the server-side, the component that aggregates a `SelectorManager` with a set of ``ConnectionFactory``s is link:{JDURL}/org/eclipse/jetty/server/ServerConnector.html[`ServerConnector`], see xref:eg-server-io-arch[].
On the client-side, the components that aggregate a `SelectorManager` with a set of ``ClientConnectionFactory``s are link:{JDURL}/org/eclipse/jetty/client/HttpClientTransport.html[`HttpClientTransport`] subclasses, see xref:eg-client-io-arch[].
[[eg-io-arch-endpoint]]
=== Jetty I/O: `EndPoint`
The Jetty I/O library uses Java NIO to handle I/O, so that I/O is non-blocking.
At the Java NIO level, in order to be notified when a `SocketChannel` has data to be read, the `SelectionKey.OP_READ` flag must be set.
In the Jetty I/O library, you can call `EndPoint.fillInterested(Callback)` to declare interest in the "read" (or "fill") event, and the `Callback` parameter is the object that is notified when such event occurs.
At the Java NIO level, a `SocketChannel` is always writable, unless it becomes TCP congested.
In order to be notified when a `SocketChannel` becomes uncongested and therefore writable again, the `SelectionKey.OP_WRITE` flag must be set.
In the Jetty I/O library, you can call `EndPoint.write(Callback, ByteBuffer...)` to write the ``ByteBuffer``s and the `Callback` parameter is the object that is notified when the whole write is finished (i.e. _all_ ``ByteBuffer``s have been fully written, even if they are delayed by TCP congestion/uncongestion).
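
For example, a sketch of a non-blocking write (the `endPoint`, `bytes1` and `bytes2` variables are assumed to exist):

[source,java,indent=0]
----
endPoint.write(new Callback()
{
    @Override
    public void succeeded()
    {
        // All the buffers have been fully written.
    }

    @Override
    public void failed(Throwable failure)
    {
        // The write failed; close the EndPoint.
        endPoint.close();
    }
}, ByteBuffer.wrap(bytes1), ByteBuffer.wrap(bytes2));
----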
The `EndPoint` APIs abstract out the Java NIO details by providing non-blocking APIs based on `Callback` objects for I/O operations.
The `EndPoint` APIs are typically called by `Connection` implementations, see xref:eg-io-arch-connection[this section].
[[eg-io-arch-connection]]
=== Jetty I/O: `Connection`
`Connection` is the abstraction that deserializes incoming bytes into objects, for example a HTTP request object or a WebSocket frame object, that can be used by more abstract layers.
`Connection` instances have two lifecycle methods:
* `Connection.onOpen()`, invoked when the `Connection` is associated with the `EndPoint`
* `Connection.onClose(Throwable)`, invoked when the `Connection` is disassociated from the `EndPoint`, where the `Throwable` parameter indicates whether the disassociation was normal (when the parameter is `null`) or was due to an error (when the parameter is not `null`)
When a `Connection` is first created, it is not registered for any Java NIO event.
It is therefore typical to implement `onOpen()` to call `EndPoint.fillInterested(Callback)` so that the `Connection` declares interest for read events and it is invoked (via the `Callback`) when the read event happens.
Abstract class `AbstractConnection` partially implements `Connection` and provides simpler APIs.
The example below shows a typical implementation that extends `AbstractConnection`:
[source,java,indent=0]
----
include::{doc_code}/embedded/SelectorManagerDocs.java[tags=connection]
----
// TODO: Introduce Connection.Listener
[[eg-io-arch-echo]]
=== Jetty I/O: Network Echo
With the concepts above it is now possible to write a simple, fully non-blocking, `Connection` implementation that simply echoes the bytes that it reads back to the other peer.
A naive, but wrong, implementation may be the following:
[source,java,indent=0]
----
include::{doc_code}/embedded/SelectorManagerDocs.java[tags=echo-wrong]
----
WARNING: The implementation above is wrong and leads to `StackOverflowError`.
The problem with this implementation is that if the writes always complete synchronously (i.e. without being delayed by TCP congestion), you end up with this sequence of calls:
----
Connection.onFillable()
  EndPoint.write()
    Callback.succeeded()
      Connection.onFillable()
        EndPoint.write()
          Callback.succeeded()
          ...
----
which leads to `StackOverflowError`.
This is a typical side effect of asynchronous programming using non-blocking APIs, and happens in the Jetty I/O library as well.
NOTE: The callback is invoked synchronously for efficiency reasons.
Submitting the invocation of the callback to an `Executor` to be invoked in a different thread would cause a context switch and make simple writes extremely inefficient.
A correct implementation is the following:
[source,java,indent=0]
----
include::{doc_code}/embedded/SelectorManagerDocs.java[tags=echo-correct]
----
The correct implementation performs consecutive reads in a loop (rather than recursively), but _only_ if the corresponding write is completed successfully.
In order to detect whether the write is completed, a concurrent state machine is used.
This is necessary because the notification of the completion of the write may happen in a different thread, while the original writing thread may still be changing the state.
The original writing thread moves the state from `IDLE` to `WRITING`, then issues the actual `write()` call.
The original writing thread then assumes that the `write()` did not complete and tries to move to the `PENDING` state just after the `write()`.
If it fails to move from the `WRITING` state to the `PENDING` state, it means that the write was completed.
Otherwise, the write is now `PENDING` and waiting for the callback to be notified of the completion at a later time.
When the callback is notified of the `write()` completion, it checks whether the `write()` was `PENDING`, and if it was it resumes reading.
NOTE: TODO: Introduce IteratingCallback?
[[eg-server-http-connector]]
=== Server Connectors
A `Connector` is the component that handles incoming requests from clients, and works in conjunction with `ConnectionFactory` instances.
The primary implementation is `org.eclipse.jetty.server.ServerConnector`.
`ServerConnector` uses a `java.nio.channels.ServerSocketChannel` to listen to a TCP port and to accept TCP connections.
Since `ServerConnector` wraps a `ServerSocketChannel`, it can be configured in a similar way, for example the port to listen to, the network address to bind to, etc.:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=configureConnector]
----
The _acceptors_ are threads (typically only one) that compete to accept TCP connections on the listening port.
When a connection is accepted, `ServerConnector` wraps the accepted `SocketChannel` and passes it to the xref:eg-io-arch-selector-manager[`SelectorManager`].
Therefore, there is a brief moment during which the acceptor thread is not accepting new connections because it is busy wrapping the just accepted connection to pass it to the `SelectorManager`.
Connections that are ready to be accepted but are not accepted yet are queued in a bounded queue (at the OS level) whose capacity can be configured with the `ServerConnector.acceptQueueSize` parameter.
If your application must withstand a very high rate of connections opened, configuring more than one acceptor thread may be beneficial: when one acceptor thread accepts one connection, another acceptor thread can take over accepting connections.
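
For example, a sketch that configures 2 acceptor threads, 1 selector thread and a bounded accept queue (the `server` variable is assumed to be an existing `Server` instance):

[source,java,indent=0]
----
ServerConnector connector = new ServerConnector(server, 2, 1, new HttpConnectionFactory());
connector.setPort(8080);
// The accept queue (the TCP backlog) holds connections that are ready to be
// accepted but have not been accepted yet.
connector.setAcceptQueueSize(128);
server.addConnector(connector);
----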
The _selectors_ are components that manage a set of connected sockets, implemented by xref:eg-io-arch-selector-manager[`ManagedSelector`].
Each selector requires one thread and uses the Java NIO mechanism to efficiently handle the set of connected sockets.
As a rule of thumb, a single selector can easily manage up to 1000-5000 sockets, although the number may vary greatly depending on the application.
For example, web site applications tend to use sockets for one or more HTTP requests to retrieve resources and then the socket is idle for most of the time.
In this case a single selector may be able to manage many sockets because chances are that they will be idle most of the time.
On the contrary, web messaging applications tend to send many small messages at a very high frequency so that the socket is rarely idle.
In this case a single selector may be able to manage fewer sockets because chances are that many of them will be active at the same time.
It is possible to configure more than one `ServerConnector`, each listening on a different port:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=configureConnectors]
----
[[eg-server-http-connector-protocol]]
==== Configuring Protocols
For each accepted TCP connection, `ServerConnector` asks a `ConnectionFactory` to create a `Connection` object that handles the network traffic on that TCP connection, parsing and generating bytes for a specific protocol (see xref:eg-io-arch[this section] for more details about `Connection` objects).
A `ServerConnector` can be configured with one or more ``ConnectionFactory``s.
If no `ConnectionFactory` is specified then `HttpConnectionFactory` is implicitly configured.
[[eg-server-http-connector-protocol-http11]]
===== Configuring HTTP/1.1
`HttpConnectionFactory` creates `HttpConnection` objects that parse bytes and generate bytes for the HTTP/1.1 protocol.
This is how you configure Jetty to support clear-text HTTP/1.1:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=http11]
----
Encrypted HTTP/1.1 (that is, requests with the HTTPS scheme) is supported by configuring an `SslContextFactory` that has access to the keyStore containing the private server key and public server certificate, in this way:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=tlsHttp11]
----
[[eg-server-http-connector-protocol-proxy-http11]]
===== Configuring Jetty behind a Load Balancer
It is often the case that Jetty receives connections from a load balancer configured to distribute the load among many Jetty backend servers.
From the Jetty point of view, all the connections arrive from the load balancer rather than from the real clients, but it is possible to configure the load balancer to forward the real client IP address and port to the backend Jetty server using the link:https://www.haproxy.org/download/2.1/doc/proxy-protocol.txt[PROXY protocol].
NOTE: The PROXY protocol is widely supported by load balancers such as link:http://cbonte.github.io/haproxy-dconv/2.2/configuration.html#5.2-send-proxy[HAProxy] (via its `send-proxy` directive), link:https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol[Nginx] (via its `proxy_protocol on` directive) and others.
To support this case, Jetty can be configured in this way:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=proxyHTTP]
----
Note how the ``ConnectionFactory``s passed to `ServerConnector` are in order: first PROXY, then HTTP/1.1.
Note also how the PROXY `ConnectionFactory` needs to know its _next_ protocol (in this example, HTTP/1.1).
Each `ConnectionFactory` is asked to create a `Connection` object for each accepted TCP connection; the `Connection` objects will be chained together to handle the bytes, each for its own protocol.
Therefore, the `ProxyConnection` will handle the PROXY protocol bytes and `HttpConnection` will handle the HTTP/1.1 bytes, producing a request object and a response object that will be processed by ``Handler``s.
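As a minimal sketch (the port and the `HttpConfiguration` are illustrative), the chaining described above can be expressed like this:

[source,java]
----
Server server = new Server();

HttpConnectionFactory http11 = new HttpConnectionFactory(new HttpConfiguration());
// The PROXY ConnectionFactory is first and must know its next protocol, HTTP/1.1.
ProxyConnectionFactory proxy = new ProxyConnectionFactory(http11.getProtocol());

ServerConnector connector = new ServerConnector(server, proxy, http11);
connector.setPort(8080);
server.addConnector(connector);

server.start();
----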
[[eg-server-http-connector-protocol-http2]]
===== Configuring HTTP/2
It is well known that the HTTP ports are `80` (for clear-text HTTP) and `443` (for encrypted HTTP).
By using those ports, a client had _prior knowledge_ that the server would speak, respectively, the HTTP/1.x protocol and the TLS protocol (and, after decryption, the HTTP/1.x protocol).
HTTP/2 was designed to be a smooth transition from HTTP/1.1 for users and as such the HTTP ports were not changed.
However, the HTTP/2 protocol is, on the wire, a binary protocol, completely different from HTTP/1.1.
Therefore, with HTTP/2, clients that connect to port `80` may speak either HTTP/1.1 or HTTP/2, and the server must figure out which version of the HTTP protocol the client is speaking.
Jetty can support both HTTP/1.1 and HTTP/2 on the same clear-text port by configuring both the HTTP/1.1 and the HTTP/2 ``ConnectionFactory``s:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=http11H2C]
----
Note how the ``ConnectionFactory``s passed to `ServerConnector` are in order: first HTTP/1.1, then HTTP/2.
This is necessary to support both protocols on the same port: Jetty will start parsing the incoming bytes as HTTP/1.1, but then realize that they are HTTP/2 bytes and will therefore _upgrade_ from HTTP/1.1 to HTTP/2.
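A minimal sketch of such a clear-text configuration (the port is illustrative):

[source,java]
----
Server server = new Server();

HttpConfiguration httpConfig = new HttpConfiguration();
// HTTP/1.1 is first, so incoming bytes are initially parsed as HTTP/1.1.
HttpConnectionFactory http11 = new HttpConnectionFactory(httpConfig);
// Clear-text HTTP/2, also known as h2c.
HTTP2CServerConnectionFactory h2c = new HTTP2CServerConnectionFactory(httpConfig);

ServerConnector connector = new ServerConnector(server, http11, h2c);
connector.setPort(8080);
server.addConnector(connector);

server.start();
----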
This configuration is also typical when Jetty is installed in backend servers behind a load balancer that also takes care of offloading TLS.
When Jetty is behind a load balancer, you can always prepend the PROXY protocol as described in xref:eg-server-http-connector-protocol-proxy-http11[this section].
When using encrypted HTTP/2, the unencrypted protocol is negotiated by client and server using an extension to the TLS protocol called ALPN.
Jetty supports ALPN and encrypted HTTP/2 with this configuration:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=tlsALPNHTTP]
----
Note how the ``ConnectionFactory``s passed to `ServerConnector` are in order: TLS, ALPN, HTTP/1.1, HTTP/2.
Jetty starts parsing TLS bytes so that it can obtain the ALPN extension.
With the ALPN extension information, Jetty can negotiate a protocol and pick, among the ``ConnectionFactory``s supported by the `ServerConnector`, the `ConnectionFactory` corresponding to the negotiated protocol.
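A minimal sketch of such an encrypted connector (the keyStore path, password and port are illustrative):

[source,java]
----
Server server = new Server();

HttpConfiguration httpConfig = new HttpConfiguration();
httpConfig.addCustomizer(new SecureRequestCustomizer());
HttpConnectionFactory http11 = new HttpConnectionFactory(httpConfig);
HTTP2ServerConnectionFactory h2 = new HTTP2ServerConnectionFactory(httpConfig);

// ALPN negotiates the application protocol; fall back to HTTP/1.1 if the client does not use ALPN.
ALPNServerConnectionFactory alpn = new ALPNServerConnectionFactory();
alpn.setDefaultProtocol(http11.getProtocol());

SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath("/path/to/keystore");
sslContextFactory.setKeyStorePassword("secret");

// TLS is first, then ALPN, then the application protocols.
SslConnectionFactory tls = new SslConnectionFactory(sslContextFactory, alpn.getProtocol());

ServerConnector connector = new ServerConnector(server, tls, alpn, http11, h2);
connector.setPort(8443);
server.addConnector(connector);

server.start();
----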

The `target` parameter is an identifier for the resource.
This is normally the URI that is parsed from an HTTP request.
However, a request could be forwarded to either a named resource, in which case `target` will be the name of the resource, or to a different URI, in which case `target` will be the new URI.
Applications may wrap the request or response (or both) and forward the wrapped request or response to a different URI (which may possibly be handled by a different `Handler`).
This is the reason why there are two request parameters in the `Handler` APIs: the first is the unwrapped, original request, while the second is the application-wrapped request.
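For reference, a minimal sketch of the `Handler` callback showing the two request parameters (the class name and body are illustrative):

[source,java]
----
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class ExampleHandler extends AbstractHandler
{
    @Override
    public void handle(String target, Request jettyRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
    {
        // jettyRequest is the unwrapped, original request; request may be application-wrapped.
        jettyRequest.setHandled(true);
    }
}
----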
[[eg-server-http-handler-impl-hello]]
===== Hello World Handler
A simple "Hello World" `Handler` is the following:

[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=handlerHello]
----
Such a simple `Handler` extends from `AbstractHandler` and can access the request and response main features, such as reading request headers and content, or writing response headers and content.
[[eg-server-http-handler-impl-filter]]
===== Filtering Handler
A filtering `Handler` is a handler that performs some modification to the request or response, and then either forwards the request to another `Handler` or produces an error response:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=handlerFilter]
----
Note how a filtering `Handler` extends from `HandlerWrapper` and as such needs another handler to forward the request processing to, and how the two ``Handler``s need to be linked together to work properly.
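A minimal sketch of how the two ``Handler``s might be linked (`FilterHandler` and `AppHandler` are illustrative names):

[source,java]
----
Server server = new Server();

FilterHandler filter = new FilterHandler();   // Extends HandlerWrapper.
filter.setHandler(new AppHandler());          // The Handler the request is forwarded to.

server.setHandler(filter);
server.start();
----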

[[eg-server-http-handler-use]]
==== Using Provided Handlers
Web applications are the unit of deployment in an HTTP server or Servlet container such as Jetty.
Two different web applications are typically deployed on different __context path__s, where a _context path_ is the initial segment of the URI path.
For example, web application `webappA` that implements a web user interface for an e-commerce site may be deployed to context path `/shop`, while web application `webappB` that implements a REST API for the e-commerce business may be deployed to `/api`.
A client making a request to URI `/shop/cart` is directed by Jetty to `webappA`, while a request to URI `/api/products` is directed to `webappB`.
An alternative way to deploy the two web applications of the example above is to use _virtual hosts_.
A _virtual host_ is a subdomain of the primary domain that shares the same IP address with the primary domain.
If the e-commerce business primary domain is `domain.com`, then a virtual host for `webappA` could be `shop.domain.com`, while a virtual host for `webappB` could be `api.domain.com`.
Web application `webappA` can now be deployed to virtual host `shop.domain.com` and context path `/`, while web application `webappB` can be deployed to virtual host `api.domain.com` and context path `/`.
Both applications have the same context path `/`, but they can be distinguished by the subdomain.
A client making a request to `+https://shop.domain.com/cart+` is directed by Jetty to `webappA`, while a request to `+https://api.domain.com/products+` is directed to `webappB`.
Therefore, in general, a web application is deployed to a _context_ which can be seen as the pair `(virtual_host, context_path)`.
In the first case the contexts were `(domain.com, /shop)` and `(domain.com, /api)`, while in the second case the contexts were `(shop.domain.com, /)` and `(api.domain.com, /)`.
Server applications using the Jetty Server Libraries create and configure a _context_ for each web application.
[[eg-server-http-handler-use-context]]
===== ContextHandler
`ContextHandler` is a `Handler` that represents a _context_ for a web application.
It is a `HandlerWrapper` that performs some action before and after delegating to the nested `Handler`.
// TODO: expand on what the ContextHandler does, e.g. ServletContext.
The simplest use of `ContextHandler` is the following:
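A minimal sketch (the context path and the nested `ShopHandler` are illustrative):

[source,java]
----
Server server = new Server();

ContextHandler context = new ContextHandler();
context.setContextPath("/shop");
// The nested Handler that processes requests for this context.
context.setHandler(new ShopHandler());

server.setHandler(context);
server.start();
----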
Server applications may need to deploy to Jetty more than one web application.
Recall from the xref:eg-server-http-handler[introduction] that Jetty offers `HandlerCollection` and `HandlerList` that may contain a sequence of children ``Handler``s.
However, both of these have no knowledge of the concept of _context_ and just iterate through the sequence of ``Handler``s.
A better choice for multiple web applications is `ContextHandlerCollection`, which matches a _context_ from either its _context path_ or _virtual host_, without iterating through the ``Handler``s.
If `ContextHandlerCollection` does not find a match, it just returns.
What happens next depends on the `Handler` tree structure: other ``Handler``s may be invoked after `ContextHandlerCollection`, for example `DefaultHandler` (see xref:eg-server-http-handler-use-util-default-handler[this section]).
Eventually, if `Request.setHandled(true)` is not called, Jetty returns an HTTP `404` response to the client.
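A minimal sketch that deploys two contexts (context paths and nested ``Handler``s are illustrative):

[source,java]
----
Server server = new Server();

ContextHandler shopContext = new ContextHandler("/shop");
shopContext.setHandler(new ShopHandler());

ContextHandler apiContext = new ContextHandler("/api");
apiContext.setHandler(new RESTHandler());

// Dispatches to the matching context without iterating through the Handlers.
ContextHandlerCollection contexts = new ContextHandlerCollection();
contexts.addHandler(shopContext);
contexts.addHandler(apiContext);

server.setHandler(contexts);
server.start();
----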
[[eg-server-http-handler-use-servlet-context]]
===== ServletContextHandler
``Handler``s are easy to write, but often web applications have already been written using the Servlet APIs, using ``Servlet``s and ``Filter``s.
`ServletContextHandler` is a `ContextHandler` that provides support for the Servlet APIs and implements the behaviors required by the Servlet specification.
The Maven artifact coordinates are:
Note how the Servlet components (they are not ``Handler``s) are represented in _italic_.
Note also how adding a `Servlet` or a `Filter` returns a _holder_ object that can be used to specify additional configuration for that particular `Servlet` or `Filter`.
When a request arrives at `ServletContextHandler`, the request URI is matched against the ``Filter`` and ``Servlet`` mappings, and only those that match will process the request, as dictated by the Servlet specification.
IMPORTANT: `ServletContextHandler` is a terminal `Handler`, that is, it always calls `Request.setHandled(true)` when invoked.
Server applications must be careful when creating the `Handler` tree to put ``ServletContextHandler``s as the last ``Handler``s in a `HandlerList` or as children of `ContextHandlerCollection`.
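A minimal sketch of a `ServletContextHandler` with a `Servlet` and a `Filter` (class names and mappings are illustrative):

[source,java]
----
Server server = new Server();

ServletContextHandler context = new ServletContextHandler();
context.setContextPath("/shop");

// Adding a Servlet or a Filter returns a holder that carries additional configuration.
ServletHolder servletHolder = context.addServlet(ShopCartServlet.class, "/cart/*");
servletHolder.setInitOrder(1);
FilterHolder filterHolder = context.addFilter(CrossOriginFilter.class, "/*", EnumSet.of(DispatcherType.REQUEST));

server.setHandler(context);
server.start();
----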
[[eg-server-http-handler-use-webapp-context]]
===== WebAppContext
`WebAppContext` is a `ServletContextHandler` that auto-configures itself by reading a `web.xml` Servlet configuration file.
Server applications can specify a `+*.war+` file or a directory with the structure of a `+*.war+` file to `WebAppContext` to deploy a standard Servlet web application packaged as a `war` (as defined by the Servlet specification).
Where server applications using `ServletContextHandler` must manually invoke methods to add ``Servlet``s and ``Filter``s, `WebAppContext` reads `WEB-INF/web.xml` to add ``Servlet``s and ``Filter``s.
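A minimal sketch (the `*.war` path and context path are illustrative):

[source,java]
----
Server server = new Server();

WebAppContext webapp = new WebAppContext();
webapp.setContextPath("/shop");
// Points to a *.war file or to a directory with the *.war structure.
webapp.setWar("/path/to/shop.war");

server.setHandler(webapp);
server.start();
----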
[[eg-server-http-handler-use-resource-handler]]
===== ResourceHandler -- Static Content
Static content such as images or files (HTML, JavaScript, CSS) can be sent by Jetty very efficiently because Jetty can write the content asynchronously, using direct ``ByteBuffer``s to minimize data copy, and using a memory cache for faster access to the data to send.
Being able to write content asynchronously means that if the network gets congested (for example, the client reads the content very slowly) and the server stalls the send of the requested data, then Jetty will wait to resume the send _without_ blocking a thread to finish the send.
`ResourceHandler` supports the following features:
* Welcome files, for example serving `/index.html` for request URI `/`
* Precompressed resources, serving a precompressed `/document.txt.gz` for request URI `/document.txt`
* link:https://tools.ietf.org/html/rfc7233[Range requests], for requests containing the `Range` header, which allows clients to pause and resume downloads of large files
* Directory listing, serving an HTML page with the file list of the requested directory
* Conditional headers, for requests containing the `If-Match`, `If-None-Match`, `If-Modified-Since`, `If-Unmodified-Since` headers.
The number of features supported and the efficiency in sending static content are on the same level as those of common front-end servers used to serve static content such as Nginx or Apache.
Therefore, the traditional architecture where Nginx/Apache was the front-end server used only to send static content and Jetty was the back-end server used only to send dynamic content is somewhat obsolete, as Jetty can perform both tasks efficiently.
This leads to simpler systems (fewer components to configure and manage) and better performance (no need to proxy dynamic requests from front-end servers to back-end servers).
NOTE: It is common to use Nginx/Apache as load balancers, or as rewrite/redirect servers.
We typically recommend link:https://haproxy.org[HAProxy] as load balancer, and Jetty has xref:eg-server-http-handler-use-util-rewrite-handler[rewrite/redirect features] as well.
This is how you configure a `ResourceHandler` to create a simple file server:
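A minimal sketch (the directory path and port are illustrative):

[source,java]
----
Server server = new Server();
ServerConnector connector = new ServerConnector(server);
connector.setPort(8080);
server.addConnector(connector);

// Serve files from the given directory.
ResourceHandler handler = new ResourceHandler();
handler.setBaseResource(Resource.newResource("/path/to/static/files"));

server.setHandler(handler);
server.start();
----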
If you need to serve static resources from multiple directories:

[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=multipleResourcesHandler]
----
If the resource is not found, `ResourceHandler` will not call `Request.setHandled(true)` so what happens next depends on the `Handler` tree structure.
See also xref:eg-server-http-handler-use-util-default-handler[how to use] `DefaultHandler`.
[[eg-server-http-handler-use-default-servlet]]
===== DefaultServlet -- Static Content for Servlets
If you have a xref:eg-server-http-handler-use-servlet-context[Servlet web application], you may want to use a `DefaultServlet` instead of `ResourceHandler`.
The features are similar, but `DefaultServlet` is more commonly used to serve static files for Servlet web applications.
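A minimal sketch (the context path and directory path are illustrative):

[source,java]
----
ServletContextHandler context = new ServletContextHandler();
context.setContextPath("/app");

// DefaultServlet serves the static files of this Servlet context.
ServletHolder holder = context.addServlet(DefaultServlet.class, "/");
holder.setInitParameter("resourceBase", "/path/to/static/files");
holder.setInitParameter("dirAllowed", "true");
----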
[[eg-server-http-handler-use-util-gzip-handler]]
===== GzipHandler
`GzipHandler` provides support for automatic decompression of compressed request content and automatic compression of response content.
`GzipHandler` is a `HandlerWrapper` that inspects the request and, if the request matches the `GzipHandler` configuration, just installs the required components to eventually perform decompression of the request content or compression of the response content.
The decompression/compression is not performed until the web application reads request content or writes response content.
`GzipHandler` can be configured at the server level in this way:
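A minimal sketch of a server-level configuration (the nested `Handler` and the parameters are illustrative):

[source,java]
----
Server server = new Server();

GzipHandler gzipHandler = new GzipHandler();
gzipHandler.setMinGzipSize(1024);
gzipHandler.addIncludedMimeTypes("text/plain", "text/html");
// The Handler (or Handler tree) whose content may be decompressed/compressed.
gzipHandler.setHandler(new ContextHandlerCollection());

server.setHandler(gzipHandler);
server.start();
----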
However, in less common cases, you can configure `GzipHandler` on a per-context basis, for example because you want to configure `GzipHandler` with different parameters for each context, or because you want only some contexts to have compression support:
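A sketch (the context and the nested `ShopHandler` are illustrative):

[source,java]
----
ContextHandler context = new ContextHandler("/shop");

// Compression support only for this context.
GzipHandler gzipHandler = new GzipHandler();
gzipHandler.setHandler(new ShopHandler());
context.setHandler(gzipHandler);
----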
[[eg-server-http-handler-use-util-rewrite-handler]]
===== RewriteHandler
`RewriteHandler` provides support for URL rewriting, very similarly to link:https://httpd.apache.org/docs/current/mod/mod_rewrite.html[Apache's mod_rewrite] or link:https://nginx.org/en/docs/http/ngx_http_rewrite_module.html[Nginx rewrite module].
The Maven artifact coordinates are:
`RewriteHandler` can be configured with a set of __rule__s; a _rule_ inspects the request and when it matches it performs some change to the request (for example, changes the URI path, adds/removes headers, etc.).
The Jetty Server Libraries provide rules for the most common usages, but you can write your own rules by extending the `org.eclipse.jetty.rewrite.handler.Rule` class.
Please refer to the `jetty-rewrite` module link:{JDURL}/org/eclipse/jetty/rewrite/handler/package-summary.html[javadocs] for the complete list of available rules.
You typically want to configure `RewriteHandler` at the server level, although it is possible to configure it on a per-context basis.
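A minimal sketch of a server-level configuration (the rule pattern and replacement are illustrative):

[source,java]
----
Server server = new Server();

RewriteHandler rewriteHandler = new RewriteHandler();
// Rewrites obsolete URI paths to the new ones.
RewriteRegexRule rule = new RewriteRegexRule();
rule.setRegex("/old/(.*)");
rule.setReplacement("/new/$1");
rewriteHandler.addRule(rule);
// The Handler (or Handler tree) that processes the rewritten requests.
rewriteHandler.setHandler(new ContextHandlerCollection());

server.setHandler(rewriteHandler);
server.start();
----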
[[eg-server-http-handler-use-util-stats-handler]]
===== StatisticsHandler
`StatisticsHandler` gathers and exposes a number of statistic values related to request processing such as:
* Total number of requests
* Current number of concurrent requests
* Minimum, maximum, average and standard deviation of request processing times
* Number of responses grouped by HTTP code (i.e. how many `2xx` responses, how many `3xx` responses, etc.)
* Total response content bytes
Server applications can read these values and use them internally, or expose them via some service, or export them via JMX.
// TODO: xref to the JMX section.
`StatisticsHandler` can be configured at the server level or at the context level.
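A minimal sketch of a server-level configuration (the nested `Handler` is illustrative):

[source,java]
----
Server server = new Server();

StatisticsHandler statsHandler = new StatisticsHandler();
// The Handler (or Handler tree) whose requests are measured.
statsHandler.setHandler(new ContextHandlerCollection());

server.setHandler(statsHandler);
server.start();
----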
[[eg-server-http-handler-use-util-secure-handler]]
===== SecuredRedirectHandler -- Redirect from HTTP to HTTPS
`SecuredRedirectHandler` redirects requests made with the `http` scheme (and therefore to the clear-text port) to the `https` scheme (and therefore to the encrypted port).
For example a request to `+http://domain.com:8080/path?param=value+` is redirected to `+https://domain.com:8443/path?param=value+`.
Server applications must configure an `HttpConfiguration` object with the secure scheme and secure port so that `SecuredRedirectHandler` can build the redirect URI.
`SecuredRedirectHandler` is typically configured at the server level, although it can be configured on a per-context basis.
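A minimal sketch (the ports are illustrative):

[source,java]
----
Server server = new Server();

// HttpConfiguration must know the secure scheme and port to build the redirect URI.
HttpConfiguration httpConfig = new HttpConfiguration();
httpConfig.setSecureScheme("https");
httpConfig.setSecurePort(8443);

ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(httpConfig));
connector.setPort(8080);
server.addConnector(connector);

SecuredRedirectHandler securedHandler = new SecuredRedirectHandler();
// The Handler (or Handler tree) that processes requests that are already secure.
securedHandler.setHandler(new ContextHandlerCollection());

server.setHandler(securedHandler);
server.start();
----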
[[eg-server-http-handler-use-util-default-handler]]
===== DefaultHandler
`DefaultHandler` is a terminal `Handler` that always calls `Request.setHandled(true)` and performs the following:
* Serves the `favicon.ico` Jetty icon when it is requested
* Sends an HTTP `404` response for any other request
* The HTTP `404` response content nicely shows an HTML table with all the contexts deployed on the `Server` instance
`DefaultHandler` is best used as the last `Handler` of a `HandlerList`, for example:
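A minimal sketch:

[source,java]
----
Server server = new Server();

// Try the contexts first; fall back to DefaultHandler if no context matches.
HandlerList handlerList = new HandlerList();
handlerList.addHandler(new ContextHandlerCollection());
handlerList.addHandler(new DefaultHandler());

server.setHandler(handlerList);
server.start();
----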
In the example above, `ContextHandlerCollection` will try to match a request to one of the contexts; if the match fails, `HandlerList` will call the next `Handler`, which is `DefaultHandler`, and that will return an HTTP `404` with an HTML page showing the existing contexts deployed on the `Server`.
NOTE: `DefaultHandler` just sends a nicer HTTP `404` response in case of wrong requests from clients.
Jetty will send an HTTP `404` response anyway if `DefaultHandler` is not used.

[[eg-server-http-handler]]
=== Server Handlers
An `org.eclipse.jetty.server.Handler` is the component that processes incoming HTTP requests and eventually produces HTTP responses.
``Handler``s can be organized in different ways:
* in a sequence, where ``Handler``s are invoked one after the other
** `HandlerCollection` invokes _all_ ``Handler``s one after the other
** `HandlerList` invokes ``Handler``s until one calls `Request.setHandled(true)` to indicate that the request has been handled and no further `Handler` should be invoked
* nested, where one `Handler` invokes the next, nested, `Handler`
** `HandlerWrapper` implements this behavior
The `HandlerCollection` behavior (invoking _all_ handlers) is useful when, for example, the last `Handler` is a logging `Handler` that logs the request (which may have been modified by previous handlers).
The `HandlerList` behavior (invoking handlers up to the first that calls `Request.setHandled(true)`) is useful when each handler processes a different URI or a different virtual host: ``Handler``s are invoked one after the other until one matches the URI or virtual host.
The nested behavior is useful to enrich the request with additional services such as HTTP session support (`SessionHandler`), or with specific behaviors dictated by the Servlet specification (`ServletHandler`).
``Handler``s can be organized in a tree by composing them together:
----
HandlerCollection
├── HandlerList
│   ├── App1Handler
│   └── App2Handler
│       └── ServletHandler
└── LoggingHandler
----
Server applications should rarely write custom ``Handler``s, preferring instead to use existing ``Handler``s provided by the Jetty Server Libraries for managing web application contexts, security, HTTP sessions and Servlet support.
Refer to xref:eg-server-http-handler-use[this section] for more information about how to use the ``Handler``s provided by the Jetty Server Libraries.
However, in some cases the additional features are not required, or additional constraints on memory footprint, or performance, or just simplicity must be met.
In these cases, implementing your own `Handler` may be a better solution.
Refer to xref:eg-server-http-handler-implement[this section] for more information about how to write your own ``Handler``s.
include::server-http-handler-use.adoc[]
include::server-http-handler-implement.adoc[]

[[eg-server-http]]
=== HTTP Server Libraries
The Eclipse Jetty Project has historically provided libraries to embed an HTTP server and a Servlet Container.
The Maven artifact coordinates are:
An `org.eclipse.jetty.server.Server` instance is the central component that links together a collection of ``Connector``s and a collection of ``Handler``s, with threads from a `ThreadPool` doing the work.
The components that accept connections from clients are `org.eclipse.jetty.server.Connector` implementations.
When a Jetty server interprets the HTTP protocol (both HTTP/1.1 and HTTP/2), it uses `org.eclipse.jetty.server.Handler` instances to process incoming requests and eventually produce responses.
A `Server` must be created, configured and started:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http/HTTPServerDocs.java[tags=simple]
----
The example above shows the simplest HTTP/1.1 server; it has no support for HTTP sessions, for HTTP authentication, or for any of the features required by the Servlet specification.
All these features are provided by the Jetty Server Libraries; server applications only need to put the required components together, as discussed in detail in xref:eg-server-http-handler-use[this section].
[[eg-server-http-request-processing]]
==== Server Request Processing
The Jetty HTTP request processing is outlined in the diagram below.
Request handling is slightly different for each protocol; in HTTP/2 Jetty takes into account multiplexing, something that is not present in HTTP/1.1.
However, the diagram below captures the essence of request handling that is common among all protocols that carry HTTP requests.
First, the Jetty I/O layer emits an event that a socket has data to read.
This event is converted to a call to `AbstractConnection.onFillable()`, where the `Connection` first reads from the `EndPoint` into a `ByteBuffer`, and then calls a protocol specific parser to parse the bytes in the `ByteBuffer`.
The parser emits protocol specific events; the HTTP/2 parser, for example, emits events for each HTTP/2 frame that has been parsed.
The parser events are then converted to protocol independent events such as _"request start"_, _"request headers"_, _"request content chunk"_, etc.
that in turn are converted into method calls to `HttpChannel`.
When enough of the HTTP request has arrived, the `Connection` calls `HttpChannel.handle()`, which calls the `Handler` chain, which eventually calls the server application code.
===== HttpChannel Events
The central component processing HTTP requests is `HttpChannel`.
There is a 1-to-1 relationship between an HTTP request/response and an `HttpChannel`, no matter what specific protocol carries the HTTP request over the network (HTTP/1.1, HTTP/2 or FastCGI).
Advanced server applications may be interested in the progress of the processing of an HTTP request/response by `HttpChannel`.
A typical case is to know exactly _when_ the HTTP request/response processing is complete, for example to monitor processing times.
NOTE: A `Handler` or a Servlet `Filter` may not report precisely when an HTTP request/response processing is finished.
A server application may write content small enough that it is aggregated by Jetty for efficiency reasons; the write returns immediately, but nothing has been written to the network yet.
`HttpChannel` notifies ``HttpChannel.Listener``s of the progress of the HTTP request/response handling.
Currently, the following events are available:
* `requestBegin`
* `responseEnd`
* `complete`
Please refer to the `HttpChannel.Listener` link:{JDURL}/org/eclipse/jetty/server/HttpChannel.Listener.html[javadocs] for the complete list of events.
Server applications can register ``HttpChannel.Listener``s by adding them as beans to the `Connector`:
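A minimal sketch (the listener class and its body are illustrative):

[source,java]
----
// Measures how long the processing of a request/response took.
class TimingListener implements HttpChannel.Listener
{
    @Override
    public void onComplete(Request request)
    {
        long elapsedMillis = System.currentTimeMillis() - request.getTimeStamp();
        // Use the elapsed time, for example to update a metric.
    }
}

Server server = new Server();
ServerConnector connector = new ServerConnector(server);
// Register the listener as a bean on the Connector.
connector.addBean(new TimingListener());
server.addConnector(connector);
----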

[[eg-server-http2]]
=== HTTP/2 Server Library
In the vast majority of cases, server applications should use the generic, high-level, xref:eg-server-http[HTTP server library] that also provides HTTP/2 support via the HTTP/2 ``ConnectionFactory``s as described in detail xref:eg-server-http-connector-protocol-http2[here].
The low-level HTTP/2 server library has been designed for those applications that need low-level access to HTTP/2 features such as _sessions_, _streams_ and _frames_, and this is quite a rare use case.
See also the corresponding xref:eg-client-http2[HTTP/2 client library].
include::../../http2.adoc[tag=multiplex]
include::../../http2.adoc[tag=flowControl]
How a server application should handle HTTP/2 flow control is discussed in detail in xref:eg-server-http2-request[this section].
[[eg-server-http2-setup]]
==== Server Setup
The low-level HTTP/2 support is provided by `org.eclipse.jetty.http2.server.RawHTTP2ServerConnectionFactory` and `org.eclipse.jetty.http2.api.server.ServerSessionListener`:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http2/HTTP2ServerDocs.java[tags=setup]
----
Where server applications using the xref:eg-server-http[high-level server library] deal with HTTP requests and responses in ``Handler``s, server applications using the low-level HTTP/2 server library deal directly with HTTP/2 __session__s, __stream__s and __frame__s in a `ServerSessionListener` implementation.
The `ServerSessionListener` interface defines a number of methods that are invoked by the implementation upon the occurrence of HTTP/2 events, and that server applications can override to react to those events.
Please refer to the `ServerSessionListener` link:{JDURL}/org/eclipse/jetty/http2/api/server/ServerSessionListener.html[javadocs] for the complete list of events.
The first event is the _accept_ event and happens when a client opens a new TCP connection to the server and the server accepts the connection.
This is the first occasion where server applications have access to the HTTP/2 `Session` object:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http2/HTTP2ServerDocs.java[tags=accept]
----
After connecting to the server, a compliant HTTP/2 client must send the link:https://tools.ietf.org/html/rfc7540#section-3.5[HTTP/2 client preface], and when the server receives it, it generates the _preface_ event on the server.
This is where server applications can customize the connection settings by returning a map of settings that the implementation will send to the client:
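A minimal sketch (the settings value is illustrative):

[source,java]
----
ServerSessionListener listener = new ServerSessionListener.Adapter()
{
    @Override
    public Map<Integer, Integer> onPreface(Session session)
    {
        // Customize the settings that will be sent to the client.
        Map<Integer, Integer> settings = new HashMap<>();
        settings.put(SettingsFrame.MAX_CONCURRENT_STREAMS, 1024);
        return settings;
    }
};
----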
[[eg-server-http2-request]]
==== Receiving a Request
Receiving an HTTP request from the client, and sending a response, creates a _stream_ that encapsulates the exchange of HTTP/2 frames that compose the request and the response.
An HTTP request is made of a `HEADERS` frame, which carries the request method, the request URI and the request headers, and optional `DATA` frames that carry the request content.
Receiving the `HEADERS` frame opens the `Stream`:
[source,java,indent=0]
----
include::../../{doc_code}/embedded/server/http2/HTTP2ServerDocs.java[tags=request]
----
Server applications should return a `Stream.Listener` implementation from `onNewStream(...)` to be notified of events generated by the client, such as `DATA` frames carrying request content, or a `RST_STREAM` frame indicating that the client wants to _reset_ the request, or an idle timeout event indicating that the client was supposed to send more frames but it did not.
The example below shows how to receive request content:
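A minimal sketch, assuming a `Stream.Listener.Adapter` returned from `onNewStream(...)` (the content handling is illustrative):

[source,java]
----
Stream.Listener listener = new Stream.Listener.Adapter()
{
    @Override
    public void onData(Stream stream, DataFrame frame, Callback callback)
    {
        // Consume the content bytes.
        ByteBuffer data = frame.getData();
        // Completing the callback tells the implementation that the bytes have been
        // consumed, which in turn drives flow control.
        callback.succeeded();
    }
};
----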
include::../../http2.adoc[tag=apiFlowControl]
[[eg-server-http2-response]]
==== Sending a Response
After receiving an HTTP request, a server application must send an HTTP response.
An HTTP response is typically composed of a `HEADERS` frame containing the HTTP status code and the response headers, and optionally one or more `DATA` frames containing the response content bytes.
The HTTP/2 protocol also supports response trailers (that is, headers that are sent after the response content) that also are sent using a `HEADERS` frame.
A server application can send a response in this way:
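A minimal sketch, assuming an open `Stream` (status code, headers and content are illustrative):

[source,java]
----
// The response HEADERS frame carries the HTTP status code and the response headers.
MetaData.Response metaData = new MetaData.Response(HttpVersion.HTTP_2, HttpStatus.OK_200, HttpFields.EMPTY);
// The last parameter tells whether this frame ends the stream (i.e. there is no content).
HeadersFrame responseFrame = new HeadersFrame(stream.getId(), metaData, null, false);
stream.headers(responseFrame, Callback.NOOP);

// The response content is sent with a DATA frame that ends the stream.
ByteBuffer content = StandardCharsets.UTF_8.encode("hello");
DataFrame dataFrame = new DataFrame(stream.getId(), content, true);
stream.data(dataFrame, Callback.NOOP);
----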
==== Resetting a Request
A server application may decide that it does not want to accept the request.
For example, it may throttle the client because it sent too many requests in a time window, or the request is invalid (and does not deserve a proper HTTP response), etc.
A request can be reset in this way:
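A minimal sketch, assuming an open `Stream` (the error code is illustrative):

[source,java]
----
// Resetting the stream sends a RST_STREAM frame; no HTTP response is produced.
ResetFrame resetFrame = new ResetFrame(stream.getId(), ErrorCode.REFUSED_STREAM_ERROR.code);
stream.reset(resetFrame, Callback.NOOP);
----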
[[eg-server-http2-push]]
==== HTTP/2 Push of Resources
A server application may _push_ secondary resources related to a primary resource.
A client may inform the server that it does not accept pushed resources (see link:https://tools.ietf.org/html/rfc7540#section-8.2[this section] of the specification) via a `SETTINGS` frame.
Server applications must track `SETTINGS` frames and verify whether the client supports HTTP/2 push, and only push if the client supports it:

[[eg-server-io-arch]]
=== Server Libraries I/O Architecture
The Jetty server libraries provide the basic components and APIs to implement a network server.
They build on the common xref:eg-io-arch[Jetty I/O Architecture] and provide server specific concepts.
The main I/O server-side class is `org.eclipse.jetty.server.ServerConnector`.

[[eg-server]]
== Server Libraries
The Eclipse Jetty Project provides server-side libraries that allow you to embed an HTTP or WebSocket server in your applications.
A typical example is an HTTP server that needs to expose a REST endpoint.
Another example is a proxy application that receives HTTP requests and forwards them to third party services, possibly also using the Jetty xref:eg-client[client libraries].
While historically Jetty is an HTTP server, it is possible to use the Jetty server-side libraries to write a generic network server that interprets any network protocol (not only HTTP).
If you are interested in the low-level details of how the Eclipse Jetty server libraries work, or are interested in writing a custom protocol, look at the xref:eg-server-io-arch[Server I/O Architecture].
The Jetty server-side libraries provide:
* HTTP support for HTTP/1.0, HTTP/1.1, HTTP/2, clear-text or encrypted, for applications that want to embed Jetty as a generic HTTP server or proxy, via the xref:eg-server-http[HTTP libraries]
* HTTP/2 low-level support, for applications that want to explicitly handle low-level HTTP/2 _sessions_, _streams_ and _frames_, via the xref:eg-server-http2[HTTP/2 libraries]
* WebSocket support, for applications that want to embed a WebSocket server, via the xref:eg-server-websocket[WebSocket libraries]
// TODO: add a section on lifecycle and the component tree.