Merged branch 'jetty-9.4.x' into 'master'.

This commit is contained in:
Simone Bordet 2016-09-05 23:12:44 +02:00
commit bdf26da0c0
33 changed files with 1608 additions and 553 deletions

View File

@ -33,7 +33,7 @@ The fourth step is to create a Jetty base directory (see xref:startup-base-and-h
....
$ mkdir -p /usr/jetty/wordpress
$ cd /usr/jetty/wordpress
$ java -jar $JETTY_HOME/start.jar --add-to-startd=fcgi,http,deploy
$ java -jar $JETTY_HOME/start.jar --add-to-start=fcgi,http,deploy
....
Therefore `$JETTY_BASE=/usr/jetty/wordpress`.
@ -152,7 +152,7 @@ Enabling the `http2` is easy; in additions to the modules you have enabled above
[source, screen, subs="{sub-order}"]
....
$ cd $JETTY_BASE
$ java -jar $JETTY_HOME/start.jar --add-to-startd=http2
$ java -jar $JETTY_HOME/start.jar --add-to-start=http2
....
The command above adds the `http2` module (and its dependencies) to the existing modules and uses the default Jetty keystore to provide the key material required by TLS.

View File

@ -25,7 +25,7 @@ A demo Jetty base that supports HTTP/1, HTTPS/1 and deployment from a webapps di
$ JETTY_BASE=http2-demo
$ mkdir $JETTY_BASE
$ cd $JETTY_BASE
$ java -jar $JETTY_HOME/start.jar --add-to-startd=http,https,deploy
$ java -jar $JETTY_HOME/start.jar --add-to-start=http,https,deploy
....
The commands above create a `$JETTY_BASE` directory called `http2-demo`, and initialize the `http`, `https` and `deploy` modules (and their dependencies) to run a typical Jetty Server on port 8080 (for HTTP/1) and 8443 (for HTTPS/1).
@ -35,7 +35,7 @@ To add HTTP/2 to this demo base, it is just a matter of enabling the `http2` mod
[source, screen, subs="{sub-order}"]
....
$ java -jar $JETTY_HOME/start.jar --add-to-startd=http2
$ java -jar $JETTY_HOME/start.jar --add-to-start=http2
....
This command does not create a new connector, but instead simply adds the HTTP/2 protocol to the existing HTTPS/1 connector, so that it now supports both protocols on port 8443.
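For comparison, the sketch below shows roughly what this corresponds to in embedded code: a single `ServerConnector` whose connection factories are negotiated via ALPN, so TLS, HTTP/2 and HTTP/1.1 all share port 8443. The keystore path, password and exact wiring are illustrative assumptions based on the Jetty 9.3/9.4 embedded APIs, not values taken from the modules above.

[source, java, subs="{sub-order}"]
----
import org.eclipse.jetty.alpn.server.ALPNServerConnectionFactory;
import org.eclipse.jetty.http2.server.HTTP2ServerConnectionFactory;
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.SecureRequestCustomizer;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.SslConnectionFactory;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class Http2ConnectorSketch
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        // Shared HTTP configuration, marked secure because it is served over TLS.
        HttpConfiguration config = new HttpConfiguration();
        config.setSecureScheme("https");
        config.setSecurePort(8443);
        config.addCustomizer(new SecureRequestCustomizer());

        // HTTP/1.1 and HTTP/2 on the same connector; ALPN negotiates which one is used.
        HttpConnectionFactory http1 = new HttpConnectionFactory(config);
        HTTP2ServerConnectionFactory http2 = new HTTP2ServerConnectionFactory(config);
        ALPNServerConnectionFactory alpn = new ALPNServerConnectionFactory();
        alpn.setDefaultProtocol(http1.getProtocol());

        // TLS in front of ALPN; keystore path and password are placeholders.
        SslContextFactory sslContextFactory = new SslContextFactory();
        sslContextFactory.setKeyStorePath("etc/keystore");
        sslContextFactory.setKeyStorePassword("storepwd");
        SslConnectionFactory ssl = new SslConnectionFactory(sslContextFactory, alpn.getProtocol());

        ServerConnector connector = new ServerConnector(server, ssl, alpn, http2, http1);
        connector.setPort(8443);
        server.addConnector(connector);
        server.start();
    }
}
----
On Java 8, ALPN support must also be available to the JVM; the distribution's `alpn` module arranges this via the ALPN boot jar.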
@ -61,7 +61,7 @@ HTTP/2 can be enabled on the plain text connector and the server restarted with
[source,screen]
....
$ java -jar $JETTY_HOME/start.jar --add-to-startd=http2c
$ java -jar $JETTY_HOME/start.jar --add-to-start=http2c
$ java -jar $JETTY_HOME/start.jar
..
2015-06-17 14:16:12.549:INFO:oejs.ServerConnector:main: Started ServerConnector@6f32cd1e{HTTP/1.1,[http/1.1, h2c, h2c-17]}{0.0.0.0:8080}

View File

@ -21,13 +21,12 @@ If you are using the standard distribution of Jetty, you must enable the _JNDI_
As the _plus_ module depends on the _JNDI_ module, you only need to enable the _plus_ module to enable both.
Assuming you have Jetty installed in `/opt/jetty`, and you have made a link:#startup-base-and-home[jetty base] in `/opt/jetty/my-base`, do:
[source,bash]
----
[source, screen, subs="{sub-order}"]
....
cd /opt/jetty
cd my-base
java -jar $JETTY_HOME/start.jar --add-to-startd=plus
----
java -jar $JETTY_HOME/start.jar --add-to-start=plus
....
You can now start Jetty and use JNDI within your webapps.
See link:#using-jndi[Using JNDI] for information on how to add entries to the JNDI environment that Jetty can look up within webapps.
@ -36,10 +35,9 @@ If you have extra jars associated with your JNDI resources, for example a databa
You will then need to enable the _ext_ module to ensure the jars in the `ext/` directory are on the classpath.
Assuming you have Jetty installed in `/opt/jetty`, and you have made a link:#startup-base-and-home[jetty base] in `/opt/jetty/my-base`, do:
[source,bash]
----
[source, screen, subs="{sub-order}"]
....
cd /opt/jetty
cd my-base
java -jar $JETTY_HOME/start.jar --add-to-startd=ext
----
java -jar $JETTY_HOME/start.jar --add-to-start=ext
....

View File

@ -49,14 +49,14 @@ To enable the Request Log module for the entire server via the Jetty distributio
[source, screen, subs="{sub-order}"]
----
$ java -jar ../start.jar --add-to-startd=requestlog
$ java -jar ../start.jar --add-to-start=requestlog
INFO: requestlog initialised in ${jetty.base}/start.d/requestlog.ini
MKDIR: ${jetty.base}/logs
INFO: Base directory was modified
----
The above command will add a new `requestlog.ini` file to your `${jetty.base}/start.d` directory.
The above command will add a new `requestlog.ini` file to your link:#start-vs-startd[`${jetty.base}/start.d` directory].
If you used `--add-to-start` it will append the configuration options for the module to the `start.ini` file located in your `${jetty.base}` directory.
The equivalent code for embedded usages of Jetty is:
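As an illustrative sketch (not necessarily the exact snippet used in the guide), an embedded server can install an NCSA-format request log roughly as follows; the log file name and options are assumptions, not values from the `requestlog` module:

[source, java, subs="{sub-order}"]
----
import org.eclipse.jetty.server.NCSARequestLog;
import org.eclipse.jetty.server.Server;

public class RequestLogSketch
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server(8080);

        // Roll a new NCSA-format log file each day under logs/ (path is an assumption).
        NCSARequestLog requestLog = new NCSARequestLog("logs/yyyy_mm_dd.request.log");
        requestLog.setAppend(true);
        requestLog.setExtended(false);
        requestLog.setLogTimeZone("GMT");
        requestLog.setRetainDays(90);

        // Since Jetty 9.3 the request log can be set directly on the Server.
        server.setRequestLog(requestLog);

        server.start();
        server.join();
    }
}
----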

View File

@ -17,12 +17,12 @@
[[session-clustering-gcloud-datastore]]
=== Session Clustering with Google Cloud Datastore
Jetty can support session clustering by persisting sessions to https://cloud.google.com/datastore/docs/concepts/overview[Google Cloud Datastore].
Each Jetty instance locally caches sessions for which it has received requests, writing any changes to the session through to the Datastore as the request exits the server.
Sessions must obey the Serialization contract, and servlets must call the `Session.setAttribute()` method to ensure that changes are persisted.
The persistent session mechanism works in conjunction with a load balancer that supports stickiness.
Stickiness can be based on various data items, such as source IP address or characteristics of the session ID or a load-balancer specific mechanism.
For those load balancers that examine the session ID, the Jetty persistent session mechanism appends a node ID to the session ID, which can be used for routing.
==== Configuration
@ -36,16 +36,16 @@ These managers also cooperate and collaborate with the `org.eclipse.jetty.server
==== The gcloud-sessions Module
When using the jetty distribution, to enable Cloud Datastore session persistence, you will first need to enable the `gcloud-sessions` link:#startup-modules[module] for your link:#creating-jetty-base[base] using the `--add-to-start` or `--add-to-startd` argument to the link:#startup-overview[start.jar].
When using the jetty distribution, to enable Cloud Datastore session persistence, you will first need to enable the `gcloud-sessions` link:#startup-modules[module] for your link:#creating-jetty-base[base] using the `--add-to-start` argument to the link:#startup-overview[start.jar].
As part of the module installation, the necessary jars will be dynamically downloaded and installed to your `${jetty.base}/lib/gcloud` directory.
If you need to up or downgrade the version of the jars, then you can delete the jars that were automatically installed and replace them.
Once you've done that, you will need to prevent jetty's startup checks from detecting the missing jars.
To do that, you can use `--skip-file-validation=glcoud-sessions` argument to start.jar on the command line, or place that line inside `${jetty.base}/start.ini` to ensure it is used for every start.
===== Configuring the GCloudSessionIdManager
The gcloud-sessions module will have installed a file called `${jetty.home}/etc/jetty-gcloud-sessions.xml`.
This file configures an instance of the `GCloudSessionIdManager` that will be shared across all webapps deployed on that server. It looks like this:
[source, xml, subs="{sub-order}"]
@ -53,13 +53,13 @@ This file configures an instance of the `GCloudSessionIdManager` that will be sh
include::{SRCDIR}/jetty-gcloud/jetty-gcloud-session-manager/src/main/config/etc/jetty-gcloud-sessions.xml[]
----
You configure it by setting values for properties.
The properties will either be inserted as commented out in your `start.ini`, or your `start.d/gcloud-sessions.ini` file, depending on how you enabled the module.
The only property you always need to set is the name of the node in the cluster:
jetty.gcloudSession.workerName::
The name that uniquely identifies this node in the cluster.
This value will also be used by the sticky load balancer to identify the node.
Don't forget to change the value of this property on *each* node on which you enable gcloud datastore session clustering.
@ -95,7 +95,7 @@ Follow the instructions on the https://cloud.google.com/datastore/docs/tools/dat
===== Configuring the GCloudSessionManager
As mentioned elsewhere, there must be one `SessionManager` per context (e.g. webapp).
Each SessionManager needs to reference the single `GCloudSessionIdManager`.
The way you configure a `GCloudSessionManager` depends on whether you're configuring from a context xml file, a `jetty-web.xml` file or code.
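If you are configuring directly in code, a minimal sketch is shown below; the context xml and `jetty-web.xml` approaches follow. The `GCloudSessionIdManager(Server)` constructor, the `setScavengeIntervalSec` setter and the `SessionHandler(SessionManager)` wiring are assumptions based on the Jetty 9.3 session API and the setters listed later in this section, so verify them against the javadoc for your version.

[source, java, subs="{sub-order}"]
----
import org.eclipse.jetty.gcloud.session.GCloudSessionIdManager;
import org.eclipse.jetty.gcloud.session.GCloudSessionManager;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.session.SessionHandler;
import org.eclipse.jetty.webapp.WebAppContext;

public class GCloudSessionSketch
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server(8080);

        // Assumed constructor and setter, mirroring etc/jetty-gcloud-sessions.xml.
        GCloudSessionIdManager idManager = new GCloudSessionIdManager(server);
        idManager.setWorkerName("node1");
        server.setSessionIdManager(idManager);

        WebAppContext context = new WebAppContext();
        context.setWar("webapps/my-app.war");   // placeholder
        context.setContextPath("/my-app");      // placeholder

        // One session manager per context, referencing the shared id manager.
        GCloudSessionManager sessionManager = new GCloudSessionManager();
        sessionManager.setSessionIdManager(server.getSessionIdManager());
        sessionManager.setScavengeIntervalSec(600);  // assumed setter for scavengeIntervalSec
        context.setSessionHandler(new SessionHandler(sessionManager));

        server.setHandler(context);
        server.start();
        server.join();
    }
}
----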
@ -104,7 +104,7 @@ The basic difference is how you get a reference to the Jetty `org.eclipse.jetty.
From a context xml file, you reference the Server instance as a Ref:
[source, xml, subs="{sub-order}"]
----
<!-- Get a reference to the GCloudSessionIdManager -->
<Ref id="Server">
<Call id="idMgr" name="getSessionIdManager"/>
@ -152,23 +152,23 @@ From a `WEB-INF/jetty-web.xml` file, you can reference the Server instance direc
The `GCloudSessionManager` supports the following configuration setters:
scavengeIntervalSec::
Time in seconds between runs of a scavenger task that looks for expired old sessions to delete.
The default is 10 minutes.
If set to 0, no scavenging is done.
staleIntervalSec::
The length of time a session can be in memory without being checked against the cluster.
A value of 0 indicates that the session is never checked against the cluster - the current node is considered to be the master for the session.
maxQueryResults::
The maximum number of results to return for a query to find expired sessions.
For efficiency it is important to limit the size of the result.
The default is 100.
If 0 or negative numbers are set, the default is used instead.
===== The gcloud-memcached-sessions module
As an optimization, you can have Jetty store your session data into GCloud Datastore but also cache it into memcached. This serves two purposes: faster read-accesses and also better support for non-sticky load balancers (although using a non-sticky load balancer is highly undesirable and not recommended).
You will need to enable the `gcloud-memcached-sessions` link:#startup-modules[module] for your link:#creating-jetty-base[base] using the `--add-to-start` or `--add-to-startd` argument to the link:#startup-overview[start.jar].
You will need to enable the `gcloud-memcached-sessions` link:#startup-modules[module] for your link:#creating-jetty-base[base] using the `--add-to-start` argument to the link:#startup-overview[start.jar].
If you already enabled the gcloud-sessions module, that's fine as the gcloud-memcached-sessions module depends on it anyway.
@ -191,7 +191,7 @@ If you have installed memcached on a host and port other than the defaults of `l
*Note that* you will be configuring a `GCloudMemcachedSessionManager` 'instead of' a `GCloudSessionManager`.
As usual, there must be only one per context (e.g. webapp).
Each GCloudMemcachedSessionManager needs to reference the single `GCloudSessionIdManager`.
@ -201,7 +201,7 @@ The basic difference is how you get a reference to the Jetty `org.eclipse.jetty.
From a context xml file, you reference the Server instance as a Ref:
[source, xml, subs="{sub-order}"]
----
<!-- Get a reference to the GCloudSessionIdManager -->
<Ref id="Server">
<Call id="idMgr" name="getSessionIdManager"/>
@ -255,16 +255,16 @@ From a `WEB-INF/jetty-web.xml` file, you can reference the Server instance direc
The `GCloudMemcachedSessionManager` supports the following configuration setters:
scavengeIntervalSec::
Time in seconds between runs of a scavenger task that looks for expired old sessions to delete.
The default is 10 minutes.
If set to 0, no scavenging is done.
staleIntervalSec::
The length of time a session can be in memory without being checked against the cluster.
A value of 0 indicates that the session is never checked against the cluster - the current node is considered to be the master for the session.
maxQueryResults::
The maximum number of results to return for a query to find expired sessions.
For efficiency it is important to limit the size of the result.
The default is 100.
If 0 or negative numbers are set, the default is used instead.
host::
The address of the host where the memcached server is running. Defaults to "localhost".

View File

@ -36,7 +36,7 @@ These managers also cooperate and collaborate with the `org.eclipse.jetty.server
==== The Infinispan Module
When using the jetty distribution, to enable Infinispan session persistence, you will first need to enable the Infinispan link:#startup-modules[module] for your link:#creating-jetty-base[base] using the `--add-to-start` or `--add-to-startd` argument to the link:#startup-overview[start.jar].
When using the jetty distribution, to enable Infinispan session persistence, you will first need to enable the Infinispan link:#startup-modules[module] for your link:#creating-jetty-base[base] using the `--add-to-start` argument to the link:#startup-overview[start.jar].
As part of the module installation, the necessary Infinispan jars will be dynamically downloaded and installed to your `${jetty.base}/lib/infinispan` directory.
If you need to up or downgrade the version of the Infinispan jars, then you can delete the jars that were automatically installed and replace them.

View File

@ -17,16 +17,16 @@
[[session-clustering-jdbc]]
=== Session Clustering with a Database
Jetty can support session clustering by persisting sessions to a shared database.
Each Jetty instance locally caches sessions for which it has received requests, writing any changes to the session through to the database as the request exits the server.
Sessions must obey the Serialization contract, and servlets must call the `Session.setAttribute()` method to ensure that changes are persisted.
The persistent session mechanism works in conjunction with a load balancer that supports stickiness.
Stickiness can be based on various data items, such as source IP address or characteristics of the session ID or a load-balancer specific mechanism.
For those load balancers that examine the session ID, the Jetty persistent session mechanism appends a node ID to the session ID, which can be used for routing.
In this type of solution, the database can become both a bottleneck and a single point of failure.
Jetty takes steps to reduce the load on the database (discussed below), but in a heavily loaded environment you might need to investigate other optimization strategies such as local caching and database replication.
You should also consult your database vendor's documentation for information on how to ensure high availability and failover of your database.
==== Configuration
@ -40,7 +40,7 @@ These managers also cooperate and collaborate with the `org.eclipse.jetty.server
==== The jdbc-session Module
When using the jetty distribution, to enable jdbc session persistence, you will first need to enable the jdbc-session link:#startup-modules[module] for your link:#creating-jetty-base[base] using the `--add-to-start` or `--add-to-startd` argument to the link:#startup-overview[start.jar].
When using the jetty distribution, to enable jdbc session persistence, you will first need to enable the jdbc-session link:#startup-modules[module] for your link:#creating-jetty-base[base] using the `--add-to-start` argument to the link:#startup-overview[start.jar].
You will also find the following properties, either in your base's start.d/jdbc-session.ini file or appended to your start.ini, depending on how you enabled the module:
@ -56,26 +56,26 @@ jetty.jdbcSession.connectionURL=jdbc:derby:sessions;create=true
----
jetty.jdbcSession.workerName::
The name that uniquely identifies this node in the cluster.
This value will also be used by the sticky load balancer to identify the node.
Don't forget to change the value of this property on *each* node on which you enable jdbc session clustering.
jetty.jdbcSession.scavenge::
The time in seconds between sweeps of a task which scavenges old expired sessions.
The default is 10 minutes.
Increasing the frequency is not recommended as doing so increases the load on the database with very little gain.
jetty.jdbcSession.datasource::
The name of a `javax.sql.DataSource` that gives access to the database that holds the session information.
You should configure *either* this or the jdbc driver information described next.
jetty.jdbcSession.datasource and jetty.jdbcSession.connectionURL::
This is the name of the jdbc driver class, and a jdbc connection url suitable for that driver.
You should configure *either* this or the jdbc datasource name described above.
These properties are applied to the `JDBCSessionIdManager` described below.
===== Configuring the JDBCSessionIdManager
The jdbc-session module will have installed a file called `$\{jetty.home}/etc/jetty-jdbc-sessions.xml`.
This file configures an instance of the `JDBCSessionIdManager` that will be shared across all webapps deployed on that server.
It looks like this:
[source, xml, subs="{sub-order}"]
@ -88,7 +88,7 @@ As well as uncommenting and setting up appropriate values for the properties dis
As Jetty configuration files are direct mappings of XML to Java, it is straightforward to do this in code:
[source, java, subs="{sub-order}"]
----
Server server = new Server();
...
JDBCSessionIdManager idMgr = new JDBCSessionIdManager(server);
@ -96,7 +96,7 @@ idMgr.setWorkerName("node1");
idMgr.setDriverInfo("com.mysql.jdbc.Driver", "jdbc:mysql://127.0.0.1:3306/sessions?user=janb");
idMgr.setScavengeInterval(600);
server.setSessionIdManager(idMgr);
----
====== Configuring the Database Schema
@ -108,7 +108,7 @@ The defaults used are:
[options="header"]
|===========================
|table name |JettySessionIds
|columns |id
|===========================
.Default Values for Session Table
@ -121,10 +121,10 @@ accessTime, lastAccessTime, createTime, cookieTime, lastSavedTime,
expiryTime, maxInterval, map
|=======================================================================
To change these values, use the link:{JDURL}/org/eclipse/jetty/server/session/SessionIdTableSchema.html[org.eclipse.jetty.server.session.SessionIdTableSchema] and link:{JDURL}/org/eclipse/jetty/server/session/SessionTableSchema.html[org.eclipse.jetty.server.session.SessionTableSchema] classes.
These classes have getter/setter methods for the table name and all columns.
Here's an example of changing the name of the `JettySessionsId` table and its single column.
This example will use java code, but as explained above, you may also do this via a Jetty xml configuration file:
[source, java, subs="{sub-order}"]
@ -137,7 +137,7 @@ idTableSchema.setIdColumn("theid");
idManager.setSessionIdTableSchema(idTableSchema);
----
In a similar fashion, you can change the names of the table and columns for the `JettySessions` table.
*Note* that both the `SessionIdTableSchema` and the `SessionTableSchema` instances are set on the `JDBCSessionIdManager` class.
[source, java, subs="{sub-order}"]
@ -156,13 +156,13 @@ sessionTableSchema.setLastAccessTimeColumn("latime");
sessionTableSchema.setLastNodeColumn("lnode");
sessionTableSchema.setLastSavedTimeColumn("lstime");
sessionTableSchema.setMapColumn("mo");
sessionTableSchema.setMaxIntervalColumn("mi");
idManager.setSessionTableSchema(sessionTableSchema);
----
===== Configuring the JDBCSessionManager
As mentioned elsewhere, there should be one `JDBCSessionManager` per context (e.g. webapp).
It will need to reference the single `JDBCSessionIdManager` configured previously for the Server.
The way you configure a `JDBCSessionManager` depends on whether you're configuring from a context xml file, a `jetty-web.xml` file or code.
@ -192,7 +192,7 @@ From a `WEB-INF/jetty-web.xml` file, you can reference the Server instance direc
[source, xml, subs="{sub-order}"]
----
<Get name="server">
<Get id="idMgr" name="sessionIdManager"/>
</Get>
@ -216,7 +216,7 @@ If you're embedding this in code:
//assuming you have already set up the JDBCSessionIdManager as shown earlier
//and have a reference to the Server instance:
WebAppContext wac = new WebAppContext();
... //configure your webapp context
JDBCSessionManager jdbcMgr = new JDBCSessionManager();
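//A possible continuation (sketch; Jetty 9.3 session API assumed):
//link the manager to the shared id manager and install it on the context
jdbcMgr.setSessionIdManager(server.getSessionIdManager());
wac.setSessionHandler(new SessionHandler(jdbcMgr));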

View File

@ -17,17 +17,17 @@
[[session-clustering-mongodb]]
=== Session Clustering with MongoDB
Jetty can support session clustering by persisting sessions into http://www.mongodb.org[MongoDB].
Each Jetty instance locally caches sessions for which it has received requests, writing any changes to the session through to the cluster as the request exits the server.
Sessions must obey the Serialization contract, and servlets must call the `Session.setAttribute()` method to ensure that changes are persisted.
The session persistence mechanism works in conjunction with a load balancer that supports stickiness.
Stickiness can be based on various data items, such as source IP address or characteristics of the session ID or a load-balancer specific mechanism.
For those load balancers that examine the session ID, the Jetty persistent session mechanism appends a node ID to the session ID, which can be used for routing.
In this type of solution, the traffic on the network needs to be carefully watched and tends to be the bottleneck.
You are probably investigating this solution in order to scale to a large number of users and sessions, so careful attention should be paid to your usage scenario.
Applications with a heavy write profile to their sessions will consume more network bandwidth than profiles that are predominantly read-oriented.
We recommend using this session manager with largely read based session scenarios.
==== Configuration
@ -41,12 +41,12 @@ These managers also cooperate and collaborate with the `org.eclipse.jetty.server
==== The nosql Module
When using the jetty distribution, to enable the MongoDB session persistence mechanism, you will first need to enable the nosql link:#startup-modules[module] for your link:#creating-jetty-base[base] using the `--add-to-start` or `--add-to-startd` argument to the link:#startup-overview[start.jar].
When using the jetty distribution, to enable the MongoDB session persistence mechanism, you will first need to enable the nosql link:#startup-modules[module] for your link:#creating-jetty-base[base] using the `--add-to-start` argument to the link:#startup-overview[start.jar].
This module will automatically download the `mongodb-java-driver` and install it to your base's `lib/nosql` directory.
As part of the module installation, the necessary mongo java driver jars will be dynamically downloaded and installed to your `${jetty.base}/lib/nosql` directory.
If you need to up or downgrade the version of these jars, then you can delete the jars that were automatically installed and replace them.
Once you've done that, you will need to prevent Jetty's startup checks from detecting the missing jars.
To do that, you can use `--skip-file-validation=nosql` argument to start.jar on the command line, or place that line inside `${jetty.base}/start.ini` to ensure it is used for every start.
You will also find the following properties, either in your base's `start.d/nosql.ini` file or appended to your `start.ini`, depending on how you enabled the module:
@ -61,8 +61,8 @@ jetty.nosqlSession.workerName=node1
jetty.nosqlSession.scavenge=1800
----
The `jetty.nosqlSession.workerName` is the unique name for this Jetty Server instance.
It will be used by the sticky load balancer to uniquely identify the node.
You should change this value on *each* node to which you install MongoDB session management.
The `jetty.nosqlSession.scavenge` property defines the time in seconds between runs of the scavenger: the scavenger is a task which runs periodically to clean out sessions that have expired but become stranded in the database for whatever reason.
@ -71,8 +71,8 @@ These properties are substituted into the configuration of the `MongoDBSessionId
===== Configuring the MongoSessionIdManager
The nosql module will have installed a file called `$\{jetty.home}/etc/jetty-nosql.xml`.
This file configures an instance of the `MongoSessionIdManager` that will be shared across all webapps deployed on that server.
It looks like this:
[source, xml, subs="{sub-order}"]
@ -80,8 +80,8 @@ It looks like this:
include::{SRCDIR}/jetty-nosql/src/main/config/etc/jetty-nosql.xml[]
----
The `MongoSessionIdManager` needs access to a MongoDB cluster, and the `jetty-nosql.xml` file assumes the defaults of localhost and default MongoDB port.
If you need to configure something else, you will need to edit this file.
Here's an example of a more complex setup to use a remote MongoDB instance:
[source, xml, subs="{sub-order}"]
@ -122,31 +122,31 @@ Here's an example of a more complex setup to use a remote MongoDB instance:
<Set name="scavengePeriod"><Property name="jetty.nosqlSession.scavenge" default="1800"/></Set>
</New>
</Set>
----
As Jetty configuration files are direct mappings of XML to Java, it is straightforward to do this in code:
[source, java, subs="{sub-order}"]
----
Server server = new Server();
...
MongoSessionIdManager idMgr = new MongoSessionIdManager(server);
idMgr.setWorkerName("node1");
idMgr.setScavengePeriod(1800);
server.setSessionIdManager(idMgr);
----
The `MongoSessionIdManager` has slightly different options than our more traditional session id managers.
It has the same scavenge timers, which govern the setting of a valid session to invalid after a certain period of inactivity.
New to this session id manager is the extra purge setting, which governs removal from the MongoDB cluster.
This can be configured through the 'purge' option. Purge is set to true by default and by default runs daily for each node on the cluster.
You can also configure the age for which an invalid session will be retained; this is set to 1 day by default.
This means that invalid sessions will be removed after lingering in the MongoDB instance for a day.
There is also an option for purging valid sessions that have not been used recently.
The default time for this is 1 week. You can disable these behaviors by setting purge to false.
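As a rough sketch, tuning a few of these options in code might look like the following. Apart from `setWorkerName` and `setScavengePeriod`, which appear in the snippet above, the setter names and units are assumptions derived from the option names listed below, so verify them against the `MongoSessionIdManager` javadoc:

[source, java, subs="{sub-order}"]
----
MongoSessionIdManager idMgr = new MongoSessionIdManager(server);
idMgr.setWorkerName("node1");
idMgr.setScavengePeriod(1800);    // seconds between scavenge runs, as above
idMgr.setPurge(true);             // assumed setter for the 'purge' option
idMgr.setPurgeInvalidAge(86400);  // assumed setter; retain invalid sessions for ~1 day
idMgr.setPurgeValidAge(604800);   // assumed setter; purge unused valid sessions after ~1 week
server.setSessionIdManager(idMgr);
----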
scavengeDelay::
@ -154,8 +154,8 @@ scavengeDelay::
scavengePeriod::
How much time after a scavenge has completed should you wait before doing it again?
scavengeBlockSize::
Number of session ids to which to limit each scavenge query.
If you have a very large number of sessions in memory then setting this to a non-zero value may help speed up scavenging by breaking the scavenge into multiple queries.
The default is 0, which means that all session ids are considered in a single query.
purge (Boolean)::
Do you want to purge (delete) sessions that are invalid from the session store completely?
@ -164,11 +164,11 @@ purgeDelay::
purgeInvalidAge::
How old should an invalid session be before it is eligible to be purged?
purgeValidAge::
How old should a valid session be before it is eligible to be marked invalid and purged?
Should this occur at all?
purgeLimit::
Integer value that represents how many items to return from a purge query.
The default is 0, which is unlimited.
If you have a lot of old expired orphaned sessions then setting this value may speed up the purge process.
preserveOnStop::
Whether or not to retain all sessions when the session manager stops.
@ -176,16 +176,16 @@ preserveOnStop::
===== Configuring a MongoSessionManager
As mentioned elsewhere, there should be one `MongoSessionManager` per context (e.g. webapp).
It will need to reference the single `MongoSessionIdManager` configured previously for the Server.
The way you configure a link:{JDURL}/org/eclipse/jetty/nosql/MongoSessionManager.html[org.eclipse.jetty.nosql.mongodb.MongoSessionManager] depends on whether you're configuring from a link:#deployable-descriptor-file[context xml] file or a link:#jetty-web-xml-config[jetty-web.xml] file or code.
The basic difference is how you get a reference to the Jetty `org.eclipse.jetty.server.Server` instance.
From a context xml file, you reference the Server instance as a Ref:
[source, xml, subs="{sub-order}"]
----
<Ref name="Server" id="Server">
<Call id="mongoIdMgr" name="getSessionIdManager"/>
</Ref>
@ -229,7 +229,7 @@ If you're embedding this in code:
----
//assuming you have already set up the MongoSessionIdManager as shown earlier
//and have a reference to the Server instance:
WebAppContext wac = new WebAppContext();
... //configure your webapp context
MongoSessionManager mongoMgr = new MongoSessionManager();
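//A possible continuation (sketch; Jetty 9.3 session API assumed):
//link the manager to the shared id manager and install it on the context
mongoMgr.setSessionIdManager(server.getSessionIdManager());
wac.setSessionHandler(new SessionHandler(mongoMgr));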

View File

@ -20,8 +20,8 @@
include::startup-overview.adoc[]
include::start-jar.adoc[]
include::startup-base-vs-home.adoc[]
include::startup-classpath.adoc[]
include::startup-modules.adoc[]
include::startup-xml-config.adoc[]
include::startup-unix-service.adoc[]
include::startup-windows-service.adoc[]

View File

@ -95,40 +95,31 @@ Enables debugging output of the startup procedure.
*Note*: This does not set up debug logging for Jetty itself.
For information on logging, please see the section on <<configuring-jetty-logging, Configuring Jetty Logging.>>
--start-log-file=<filename>::
Sends all startup output to the filename specified.
+
Filename is relative to `${jetty.base}`.
This is useful for capturing startup issues where the Jetty-specific logger has not yet kicked in due to a possible startup configuration error.
--list-modules::
Lists all the modules defined by the system.
+
Looks for module files using the link:#startup-base-and-home[normal `${jetty.base}` and `${jetty.home}` resolution logic].
+
Also lists enabled state based on information present on the command line, and all active startup INI files.
--module=<name>,(<name>)*::
Enables one or more modules by name (use `--list-modules` to see the list of available modules).
+
This enables all transitive (dependent) modules from the module system as well.
+
If you use this from the shell command line, it is considered a temporary effect, useful for testing out a scenario.
If you want this module to always be enabled, add this command to your `${jetty.base}/start.ini`.
--create-startd::
Creates a `${jetty.base}/start.d/` directory.
If a `${jetty.base}/start.ini` file already exists, it is copied to the `${jetty.base}/start.d` directory.
--add-to-start=<name>,(<name>)*::
Enables a module by appending lines to the `${jetty.base}/start.ini` file.
+
The lines that are added are provided by the module-defined INI templates.
+
Note: Transitive modules are also appended.
--add-to-startd=<name>,(<name>)*::
Enables a module via creation of a module-specific INI file in the `${jetty.base}/start.d/` directory.
+
The content of the new INI is provided by the module-defined ini templates.
+
Note: Transitive modules are also created in the same directory as their own INI files.
[NOTE]
--
With respect to `start.ini` and `start.d/*.ini` files, only *one* of these methods should be implemented.
Mixing a `start.ini` with module-specific ini files in the `${jetty.base}/start.d` directory can lead to server issues unless great care is taken.
Please see link:#start-vs-startd[Start.ini vs. Start.d] for more information.
--
--write-module-graph=<filename>::

View File

@ -17,7 +17,7 @@
[[startup-modules]]
=== Managing Startup Modules
Starting with Jetty 9.1, a new Module system was introduced, replacing the previous `start.config` + `OPTIONS` techniques from past Jetty Distributions.
The standard Jetty Distribution ships with several modules defined in `${jetty.home}/modules/`.
@ -43,7 +43,7 @@ List of Jetty IoC XML Configurations::
If the default XML is not sufficient to satisfy your needs, you can override this XML by making your own in the `${jetty.base}/etc/` directory, with the same name.
The resolution steps for Jetty Base and Jetty Home will ensure that your copy from `${jetty.base}` will be picked up over the default one in `${jetty.home}`.
Jetty INI Template::
Each module can optionally declare a startup ini template that is used to insert/append/inject sample configuration elements into the `start.ini` or `start.d/*.ini` files when using the `--add-to-start=<name>` or `--add-to-startd=<name>` command line arguments in `start.jar`.
Each module can optionally declare a startup ini template that is used to insert/append/inject sample configuration elements into the `start.ini` or `start.d/*.ini` files when using the `--add-to-start=<name>` command line argument in `start.jar`.
Commonly used to present some of the parameterized property options from the Jetty IoC XML configuration files also referenced in the same module.
The `[ini-template]` section declares this section of sample configuration.
Required Files and Directories::
@ -62,15 +62,15 @@ Download File;;
[[enabling-modules]]
==== Enabling Modules
Jetty ships with many modules defined, and a small subset predefined in the `start.ini` found in the jetty distribution.
____
[TIP]
The default distribution has a co-mingled `${jetty.home}` and `${jetty.base}`, where the directories for `${jetty.home}` and `${jetty.base}` point to the same location.
It is highly encouraged that you learn about the differences in link:#startup-base-and-home[Jetty Base vs Jetty Home] and take full advantage of this setup.
____
When you want to enable a module, you can use the `--module=<modulename>` syntax on the command line to enable that module and all of its dependent modules.
Jetty ships with many modules defined in `${jetty.home}/modules`.
Enabling a module is a simple process: simply use the `--add-to-start=<module name>` argument on the command line.
Doing this will enable the module and any dependent modules.
Below is an example of this, using a new, empty base directory.
We can see from this output that the directory is new.
@ -95,14 +95,95 @@ include::screen-http-webapp-deploy-listconfig.adoc[]
You now have a configured and functional server, albeit with no webapps deployed.
At this point you can place a webapp (war file) in the `mybase/webapps/` directory and start Jetty.
[[start-vs-startd]]
==== Start.ini vs. Start.d
In the above example, when a module is activated the contents of that module file are added to `${jetty.base}/start.ini`.
As additional modules are added, their contents are appended to this file.
This can be beneficial if you want all of your module configurations in a single file, but for large server instances with lots of modules it can pose a challenge to quickly find and make changes or to remove a module.
As an alternative to a single `start.ini` file you can opt to house modules in a `${jetty.base}/start.d` directory.
Modules activated when a `start.d` directory exists will be stored as a single file per module.
Below is an example of a fresh `${jetty.base}` that will create a `start.d` directory and activate several modules.
[source, screen, subs="{sub-order}"]
....
[jetty.home]$ mkdir mybase
[jetty.home]$ cd mybase/
[mybase]$ java -jar ../start.jar --create-startd
INFO : Base directory was modified
[mybase]$ ls -all
total 0
drwxr-xr-x 3 staff staff 102 Aug 29 15:16 .
drwxr-xr-x@ 26 staff staff 884 Aug 29 15:16 ..
drwxr-xr-x 6 staff staff 204 Aug 29 15:19 start.d
[mybase]$ java -jar ../start.jar --add-to-start=server,client,webapp,websocket
INFO : webapp initialised in ${jetty.base}/start.d/webapp.ini
INFO : server initialised in ${jetty.base}/start.d/server.ini
INFO : websocket initialised in ${jetty.base}/start.d/websocket.ini
INFO : client initialised in ${jetty.base}/start.d/client.ini
INFO : Base directory was modified
[mybase]$ cd start.d/
[mybase]$ ls -all
total 32
drwxr-xr-x 6 staff staff 204 Aug 29 15:19 .
drwxr-xr-x 3 staff staff 102 Aug 29 15:16 ..
-rw-r--r-- 1 staff staff 175 Aug 29 15:19 client.ini
-rw-r--r-- 1 staff staff 2250 Aug 29 15:19 server.ini
-rw-r--r-- 1 staff staff 265 Aug 29 15:19 webapp.ini
-rw-r--r-- 1 staff staff 177 Aug 29 15:19 websocket.ini
....
In the example, we first create a new `${jetty.base}` and then create the `start.d` directory with the `--create-startd` command.
Next, we use the `--add-to-start` command which activates the modules and creates their respective ini files in the `start.d` directory.
If you have an existing `start.ini` file but would like to use the `start.d` structure for additional modules, you can use the `--create-startd` command as well.
Doing this will create the `start.d` directory and copy your existing `start.ini` file into it.
Any new modules added to the server will have their own `<module name>.ini` file created in the `start.d` directory.
[source, screen, subs="{sub-order}"]
....
[mybase]$ java -jar ../start.jar --add-to-start=server,client,webapp,websocket
INFO : webapp initialised in ${jetty.base}/start.ini
INFO : server initialised in ${jetty.base}/start.ini
INFO : websocket initialised in ${jetty.base}/start.ini
INFO : client initialised in ${jetty.base}/start.ini
INFO : Base directory was modified
[mybase]$ java -jar ../start.jar --create-startd
INFO : Base directory was modified
[mybase]$ tree
.
└── start.d
└── start.ini
[mybase]$ java -jar ../start.jar --add-to-start=ssl
INFO : ssl initialised in ${jetty.base}/start.d/ssl.ini
INFO : Base directory was modified
[mybase]$ tree
.
├── etc
│   └── keystore
└── start.d
├── ssl.ini
└── start.ini
....
[NOTE]
--
It is *not* recommended to use both a `${jetty.base}/start.ini` file and a `${jetty.base}/start.d` directory at the same time, as doing so can cause issues.
--
[[startup-configuring-modules]]
==== Configuring Modules
Once a module has been enabled for the server, it can be further configured to meet your needs.
This is done by editing the associated ini file for the module.
If your server setup is using a centralized ini configuration, you will edit the `${jetty.base}/start.ini` file.
If you have elected to manage each module within its own ini file, you can find these files in the `${jetty.base}/start.d` directory.
When a module is activated, a number of properties are set by default.
To view these defaults, open up the associated ini file.

View File

@ -81,8 +81,8 @@ For more information on the alternatives see the section on link:#startup-module
____
. Edit the configuration for the `setuid` module to substitute the `userid` and `groupid` of the user to switch to after starting.
If you used the `--add-to-start` command, this configuration is in the `start.ini` file.
If you used the `--add-to-startd` command instead, this configuration is in the `start.d/setuid.ini` file instead.
If your server instance has a `${jetty.base}/start.d` directory, this configuration is in the `start.d/setuid.ini` file instead.
Otherwise, this configuration is in the `${jetty.base}/start.ini` file.
Below are the lines to configure:
+

View File

@ -40,7 +40,7 @@ In a standard Jetty distribution it can be configured with the following command
[source, screen, subs="{sub-order}"]
----
$ java -jar $JETTY_HOME/start.jar --add-to-startd=quickstart
$ java -jar $JETTY_HOME/start.jar --add-to-start=quickstart
----
Deployed webapps need to be instances of link:{JDURL}/org/eclipse/jetty/quickstart/QuickStartWebApp.html[`org.eclipse.jetty.quickstart.QuickStartWebApp`] rather than the normal `org.eclipse.jetty.webapp.WebAppContext`.
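For embedded deployments, a rough sketch follows; `setAutoPreconfigure` and the war/context paths are assumptions based on the Jetty 9.3 quickstart API rather than configuration taken from this guide.

[source, java, subs="{sub-order}"]
----
import org.eclipse.jetty.quickstart.QuickStartWebApp;
import org.eclipse.jetty.server.Server;

public class QuickStartSketch
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server(8080);

        // Use QuickStartWebApp instead of a plain WebAppContext.
        QuickStartWebApp webapp = new QuickStartWebApp();
        webapp.setAutoPreconfigure(true);     // assumed setter: generate the quickstart descriptor on first run
        webapp.setWar("webapps/my-app.war");  // placeholder path
        webapp.setContextPath("/my-app");

        server.setHandler(webapp);
        server.start();
        server.join();
    }
}
----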

View File

@ -148,10 +148,10 @@ There are 2 aspects to this:
To accomplish the above, use the Jetty link:#startup-overview[startup] link:#startup-modules[modules mechanism] to add the JAAS link:#startup-modules[module]:
[source,bash]
----
java -jar start.jar --add-to-startd=jaas
----
[source, screen, subs="{sub-order}"]
....
java -jar start.jar --add-to-start=jaas
....
____
[NOTE]

View File

@ -28,7 +28,7 @@ For example:
[source, screen, subs="{sub-order}"]
....
$ java -jar start.jar --add-to-startd=spring
$ java -jar start.jar --add-to-start=spring
....
This command creates a `${jetty.home}/lib/spring` directory and populates it with the jetty-spring integration jar.
@ -42,7 +42,7 @@ The following is an example mimicking the default jetty startup configuration.
[source, xml, subs="{sub-order}"]
----
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
@ -98,5 +98,5 @@ The following is an example mimicking the default jetty startup configuration.
</bean>
</beans>
----

View File

@ -32,9 +32,9 @@ To start Jetty on the default port of 8080, run the following command:
2015-06-04 10:50:45.030:INFO:oejs.Server:main: Started @558ms
----
You can point a browser at this server at link:http://localhost:8080[].
However, as there are no webapps deployed in the $JETTY_HOME directory, you will see a 404 error page served by Jetty.
*Note* the HomeBase warning - it is _not_ recommended to run Jetty from the $JETTY_HOME directory.
Instead, see how to link:#creating-jetty-base[create a Jetty Base] below.
[[demo-webapps-base]]
@ -89,15 +89,15 @@ You can see the configuration of the demo-base by using the following commands:
...
----
The `--list-modules` command will return a complete list of available and enabled modules for the server.
It will also display the location of the modules, how and in what order they are implemented, dependent modules, and associated jar files.
The `--list-config` command displays a trove of information about the server including the Java and Jetty environments, the configuration order, any JVM arguments or System Properties set, general server properties, a full listing of the Jetty server class path, and active Jetty XML files.
[[creating-jetty-base]]
==== Creating a new Jetty Base
The `demo-base` directory described above is an example of the link:#startup-base-and-home[jetty.base] mechanism added in Jetty 9.1.
A Jetty base directory allows the configuration and web applications of a server instance to be stored separately from the Jetty distribution, so that upgrades can be done with minimal disruption.
Jetty's default configuration is based on two properties:
jetty.home::
@ -127,7 +127,9 @@ WARNING: Nothing to start, exiting ...
Usage: java -jar start.jar [options] [properties] [configs]
java -jar start.jar --help # for more information
> java -jar $JETTY_HOME/start.jar --add-to-startd=http,deploy
> java -jar $JETTY_HOME/start.jar --create-startd
INFO : Base directory was modified
> java -jar $JETTY_HOME/start.jar --add-to-start=http,deploy
INFO: server initialised (transitively) in ${jetty.base}/start.d/server.ini
INFO: http initialised in ${jetty.base}/start.d/http.ini
@ -163,7 +165,7 @@ You can configure Jetty to run on a different port by setting the `jetty.http.po
...
----
Alternatively, property values can be added to the effective command line built from either the `start.ini` file or `start.d/http.ini` files.
By default, the Jetty distribution defines the `jetty.http.port` property in the `start.d/http.ini` file, which may be edited to set another value.
____
@ -186,7 +188,7 @@ To add HTTPS and HTTP2 connectors to a Jetty configuration, the modules can be a
[source, screen, subs="{sub-order}"]
----
> java -jar $JETTY_HOME/start.jar --add-to-startd=https,http2
> java -jar $JETTY_HOME/start.jar --add-to-start=https,http2
[...]
> java -jar $JETTY_HOME/start.jar
@ -196,7 +198,7 @@ To add HTTPS and HTTP2 connectors to a Jetty configuration, the modules can be a
[...]
----
The `--add-to-startd` command sets up the effective command line in the ini files to run an ssl connection that supports the HTTPS and HTTP2 protocols as follows:
The `--add-to-start` command sets up the effective command line in the ini files to run an ssl connection that supports the HTTPS and HTTP2 protocols as follows:
* creates `start.d/ssl.ini` that configures an SSL connector (eg port, keystore etc.) by adding `etc/jetty-ssl.xml` and `etc/jetty-ssl-context.xml` to the effective command line.
* creates `start.d/alpn.ini` that configures protocol negotiation on the SSL connector by adding `etc/jetty-alpn.xml` to the effective command line.
@ -204,11 +206,6 @@ The `--add-to-startd` command sets up the effective command line in the ini file
* creates `start.d/http2.ini` that configures the HTTP/2 protocol on the SSL connector by adding `etc/jetty-http2.xml` to the effective command line.
* checks for the existence of a `etc/keystore` file and if not present, downloads a demonstration keystore file.
____
[NOTE]
If a single `start.ini` file is preferred over individual `start.d/*.ini` files, then the option --add-to-start=module may be used to append the module activation to the start.ini file rather than create a file in start.d
____
[[quickstart-changing-https-port]]
===== Changing the Jetty HTTPS Port
@ -220,13 +217,12 @@ You can configure the SSL connector to run on a different port by setting the `j
> java -jar $JETTY_HOME/start.jar jetty.ssl.port=8444
----
Alternatively, property values can be added to the effective command line built from the `start.ini` file and `start.d/*.ini` files.
If you used the `--add-to-startd` command to enable HTTPS , then you can edit this property in the `start.d/https.ini` file.
If you used `--add-to-start` command, then you can edit this property in the `start.ini` file.
Alternatively, property values can be added to the effective command line built from the `start.ini` file or `start.d/*.ini` files, depending on your setup.
Please see the section on link:#start-vs-startd[Start.ini vs. Start.d] for more information.
==== More start.jar options
The job of the `start.jar` is to interpret the command line, `start.ini` and `start.d` directory (and associated .ini files) to build a Java classpath and list of properties and configuration files to pass to the main class of the Jetty XML configuration mechanism.
The `start.jar` mechanism has many options which are documented in the xref:startup[] administration section and you can see them in summary by using the command:
[source, screen, subs="{sub-order}"]

View File

@ -18,7 +18,6 @@
package org.eclipse.jetty.http;
import java.io.EOFException;
import java.io.IOException;
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

View File

@ -35,8 +35,6 @@ import org.eclipse.jetty.util.annotation.ManagedAttribute;
import org.eclipse.jetty.util.annotation.ManagedObject;
import org.eclipse.jetty.util.annotation.Name;
import org.eclipse.jetty.util.component.LifeCycle;
import org.eclipse.jetty.util.thread.ExecutionStrategy;
import org.eclipse.jetty.util.thread.strategy.ProduceExecuteConsume;
@ManagedObject
public abstract class AbstractHTTP2ServerConnectionFactory extends AbstractConnectionFactory
@ -49,7 +47,7 @@ public abstract class AbstractHTTP2ServerConnectionFactory extends AbstractConne
private int maxConcurrentStreams = 128;
private int maxHeaderBlockFragment = 0;
private FlowControlStrategy.Factory flowControlStrategyFactory = () -> new BufferingFlowControlStrategy(0.5F);
private ExecutionStrategy.Factory executionStrategyFactory = new ProduceExecuteConsume.Factory();
private long streamIdleTimeout;
public AbstractHTTP2ServerConnectionFactory(@Name("config") HttpConfiguration httpConfiguration)
{
@ -99,26 +97,6 @@ public abstract class AbstractHTTP2ServerConnectionFactory extends AbstractConne
this.initialStreamRecvWindow = initialStreamRecvWindow;
}
/**
* @deprecated use {@link #getInitialStreamRecvWindow()} instead,
* since "send" is meant on the client, but this is the server configuration
*/
@Deprecated
public int getInitialStreamSendWindow()
{
return getInitialStreamRecvWindow();
}
/**
* @deprecated use {@link #setInitialStreamRecvWindow(int)} instead,
* since "send" is meant on the client, but this is the server configuration
*/
@Deprecated
public void setInitialStreamSendWindow(int initialStreamSendWindow)
{
setInitialStreamRecvWindow(initialStreamSendWindow);
}
@ManagedAttribute("The max number of concurrent streams per session")
public int getMaxConcurrentStreams()
{
@ -150,6 +128,17 @@ public abstract class AbstractHTTP2ServerConnectionFactory extends AbstractConne
this.flowControlStrategyFactory = flowControlStrategyFactory;
}
@ManagedAttribute("The stream idle timeout in milliseconds")
public long getStreamIdleTimeout()
{
return streamIdleTimeout;
}
public void setStreamIdleTimeout(long streamIdleTimeout)
{
this.streamIdleTimeout = streamIdleTimeout;
}
public HttpConfiguration getHttpConfiguration()
{
return httpConfiguration;
@ -168,8 +157,11 @@ public abstract class AbstractHTTP2ServerConnectionFactory extends AbstractConne
// For a single stream in a connection, there will be a race between
// the stream idle timeout and the connection idle timeout. However,
// the typical case is that the connection will be busier and the
// stream idle timeout will expire earlier that the connection's.
session.setStreamIdleTimeout(endPoint.getIdleTimeout());
// stream idle timeout will expire earlier than the connection's.
long streamIdleTimeout = getStreamIdleTimeout();
if (streamIdleTimeout <= 0)
streamIdleTimeout = endPoint.getIdleTimeout();
session.setStreamIdleTimeout(streamIdleTimeout);
session.setInitialSessionRecvWindow(getInitialSessionRecvWindow());
ServerParser parser = newServerParser(connector, session);
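The change above makes the per-stream idle timeout configurable, falling back to the connection's idle timeout when the value is zero or negative. A minimal sketch of setting it on a concrete factory follows; the connector wiring and the chosen values are illustrative, not part of this change.

[source, java]
----
import org.eclipse.jetty.http2.server.HTTP2CServerConnectionFactory;
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class StreamIdleTimeoutSketch
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();
        HttpConfiguration config = new HttpConfiguration();

        // Clear-text HTTP/2 factory; setStreamIdleTimeout() is inherited from
        // AbstractHTTP2ServerConnectionFactory (added in the hunk above).
        HTTP2CServerConnectionFactory h2c = new HTTP2CServerConnectionFactory(config);
        // A value <= 0 falls back to the connection (EndPoint) idle timeout.
        h2c.setStreamIdleTimeout(30_000);

        ServerConnector connector = new ServerConnector(server, h2c);
        connector.setPort(8080);
        server.addConnector(connector);
        server.start();
    }
}
----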

View File

@ -154,9 +154,9 @@ public class HTTP2ServerConnection extends HTTP2Connection implements Connection
public boolean onStreamTimeout(IStream stream, Throwable failure)
{
HttpChannelOverHTTP2 channel = (HttpChannelOverHTTP2)stream.getAttribute(IStream.CHANNEL_ATTRIBUTE);
boolean result = !channel.isRequestHandled();
boolean result = channel.onStreamTimeout(failure);
if (LOG.isDebugEnabled())
LOG.debug("{} idle timeout on {}: {}", result ? "Processing" : "Ignoring", stream, failure);
LOG.debug("{} idle timeout on {}: {}", result ? "Processed" : "Ignored", stream, failure);
return result;
}
@ -178,7 +178,7 @@ public class HTTP2ServerConnection extends HTTP2Connection implements Connection
result &= !channel.isRequestHandled();
}
if (LOG.isDebugEnabled())
LOG.debug("{} idle timeout on {}: {}", result ? "Processing" : "Ignoring", session, failure);
LOG.debug("{} idle timeout on {}: {}", result ? "Processed" : "Ignored", session, failure);
return result;
}

View File

@ -72,6 +72,18 @@ public class HttpChannelOverHTTP2 extends HttpChannel
return _expect100Continue;
}
@Override
public void setIdleTimeout(long timeoutMs)
{
getStream().setIdleTimeout(timeoutMs);
}
@Override
public long getIdleTimeout()
{
return getStream().getIdleTimeout();
}
public Runnable onRequest(HeadersFrame frame)
{
try
@ -256,11 +268,11 @@ public class HttpChannelOverHTTP2 extends HttpChannel
handle);
}
boolean delayed = _delayedUntilContent;
boolean wasDelayed = _delayedUntilContent;
_delayedUntilContent = false;
if (delayed)
if (wasDelayed)
_handled = true;
return handle || delayed ? this : null;
return handle || wasDelayed ? this : null;
}
public boolean isRequestHandled()
@ -268,6 +280,21 @@ public class HttpChannelOverHTTP2 extends HttpChannel
return _handled;
}
public boolean onStreamTimeout(Throwable failure)
{
if (!_handled)
return true;
HttpInput input = getRequest().getHttpInput();
boolean readFailed = input.failed(failure);
if (readFailed)
handle();
boolean writeFailed = getHttpTransport().onStreamTimeout(failure);
return readFailed || writeFailed;
}
public void onFailure(Throwable failure)
{
onEarlyEOF();

View File

@ -37,13 +37,13 @@ import org.eclipse.jetty.util.Callback;
import org.eclipse.jetty.util.Promise;
import org.eclipse.jetty.util.log.Log;
import org.eclipse.jetty.util.log.Logger;
import org.eclipse.jetty.util.thread.Invocable.InvocationType;
public class HttpTransportOverHTTP2 implements HttpTransport
{
private static final Logger LOG = Log.getLogger(HttpTransportOverHTTP2.class);
private final AtomicBoolean commit = new AtomicBoolean();
private final TransportCallback transportCallback = new TransportCallback();
private final Connector connector;
private final HTTP2ServerConnection connection;
private IStream stream;
@ -100,35 +100,22 @@ public class HttpTransportOverHTTP2 implements HttpTransport
{
if (hasContent)
{
commit(info, false, new Callback()
Callback commitCallback = new Callback.Nested(callback)
{
@Override
public InvocationType getInvocationType()
{
// TODO is this dependent on the callback itself?
return InvocationType.NON_BLOCKING;
}
@Override
public void succeeded()
{
if (LOG.isDebugEnabled())
LOG.debug("HTTP2 Response #{}/{} committed", stream.getId(), Integer.toHexString(stream.getSession().hashCode()));
send(content, lastContent, callback);
if (transportCallback.start(callback, false))
send(content, lastContent, transportCallback);
}
@Override
public void failed(Throwable x)
{
if (LOG.isDebugEnabled())
LOG.debug("HTTP2 Response #" + stream.getId() + "/" + Integer.toHexString(stream.getSession().hashCode()) + " failed to commit", x);
callback.failed(x);
}
});
};
if (transportCallback.start(commitCallback, true))
commit(info, false, transportCallback);
}
else
{
commit(info, lastContent, callback);
if (transportCallback.start(callback, false))
commit(info, lastContent, transportCallback);
}
}
else
@ -140,7 +127,8 @@ public class HttpTransportOverHTTP2 implements HttpTransport
{
if (hasContent || lastContent)
{
send(content, lastContent, callback);
if (transportCallback.start(callback, false))
send(content, lastContent, transportCallback);
}
else
{
@ -211,6 +199,11 @@ public class HttpTransportOverHTTP2 implements HttpTransport
stream.data(frame, callback);
}
public boolean onStreamTimeout(Throwable failure)
{
return transportCallback.onIdleTimeout(failure);
}
@Override
public void onCompleted()
{
@ -239,4 +232,105 @@ public class HttpTransportOverHTTP2 implements HttpTransport
if (stream != null)
stream.reset(new ResetFrame(stream.getId(), ErrorCode.INTERNAL_ERROR.code), Callback.NOOP);
}
private class TransportCallback implements Callback
{
private State state = State.IDLE;
private Callback callback;
private boolean commit;
public boolean start(Callback callback, boolean commit)
{
State state;
synchronized (this)
{
state = this.state;
if (state == State.IDLE)
{
this.state = State.WRITING;
this.callback = callback;
this.commit = commit;
return true;
}
}
callback.failed(new IllegalStateException("Invalid transport state: " + state));
return false;
}
@Override
public void succeeded()
{
boolean commit;
Callback callback = null;
synchronized (this)
{
commit = this.commit;
if (state != State.TIMEOUT)
{
callback = this.callback;
this.state = State.IDLE;
}
}
if (LOG.isDebugEnabled())
LOG.debug("HTTP2 Response #{} {}", stream.getId(), commit ? "committed" : "flushed content");
if (callback != null)
callback.succeeded();
}
@Override
public void failed(Throwable x)
{
boolean commit;
Callback callback = null;
synchronized (this)
{
commit = this.commit;
if (state != State.TIMEOUT)
{
callback = this.callback;
this.state = State.FAILED;
}
}
if (LOG.isDebugEnabled())
LOG.debug("HTTP2 Response #" + stream.getId() + " failed to " + (commit ? "commit" : "flush"), x);
if (callback != null)
callback.failed(x);
}
@Override
public InvocationType getInvocationType()
{
Callback callback;
synchronized (this)
{
callback = this.callback;
}
return callback.getInvocationType();
}
private boolean onIdleTimeout(Throwable failure)
{
boolean result;
Callback callback = null;
synchronized (this)
{
result = state == State.WRITING;
if (result)
{
callback = this.callback;
this.state = State.TIMEOUT;
}
}
if (LOG.isDebugEnabled())
LOG.debug("HTTP2 Response #" + stream.getId() + " idle timeout", failure);
if (result)
callback.failed(failure);
return result;
}
}
private enum State
{
IDLE, WRITING, FAILED, TIMEOUT
}
}

View File

@ -79,6 +79,7 @@ public class HttpChannel implements Runnable, HttpOutput.Interceptor
private final Response _response;
private MetaData.Response _committedMetaData;
private RequestLog _requestLog;
private long _oldIdleTimeout;
/** Bytes written after interception (eg after compression) */
private long _written;
@ -550,6 +551,11 @@ public class HttpChannel implements Runnable, HttpOutput.Interceptor
if (_configuration.getSendDateHeader() && !fields.contains(HttpHeader.DATE))
fields.put(_connector.getServer().getDateField());
long idleTO=_configuration.getIdleTimeout();
_oldIdleTimeout=getIdleTimeout();
if (idleTO>=0 && _oldIdleTimeout!=idleTO)
setIdleTimeout(idleTO);
_request.setMetaData(request);
if (LOG.isDebugEnabled())
@ -581,6 +587,10 @@ public class HttpChannel implements Runnable, HttpOutput.Interceptor
if (_requestLog!=null )
_requestLog.log(_request, _response);
long idleTO=_configuration.getIdleTimeout();
if (idleTO>=0 && getIdleTimeout()!=_oldIdleTimeout)
setIdleTimeout(_oldIdleTimeout);
_transport.onCompleted();
}

View File

@ -367,6 +367,8 @@ public class HttpChannelState
protected Action unhandle()
{
Action action;
boolean read_interested=false;
try(Locker.Lock lock= _locker.lock())
{
if(DEBUG)
@ -424,8 +426,8 @@ public class HttpChannelState
_state=State.ASYNC_WAIT;
action=Action.WAIT;
if (_asyncReadUnready)
_channel.asyncReadFillInterested();
Scheduler scheduler = _channel.getScheduler();
read_interested=true;
Scheduler scheduler=_channel.getScheduler();
if (scheduler!=null && _timeoutMs>0)
_event.setTimeoutTask(scheduler.schedule(_event,_timeoutMs,TimeUnit.MILLISECONDS));
}
@ -463,6 +465,9 @@ public class HttpChannelState
}
}
if (read_interested)
_channel.asyncReadFillInterested();
return action;
}
@ -537,7 +542,7 @@ public class HttpChannelState
}
final AtomicReference<Throwable> error=new AtomicReference<Throwable>();
final AtomicReference<Throwable> error=new AtomicReference<>();
if (listeners!=null)
{
Runnable task=new Runnable()

View File

@ -56,6 +56,7 @@ public class HttpConfiguration
private int _responseHeaderSize=8*1024;
private int _headerCacheSize=512;
private int _securePort;
private long _idleTimeout=-1;
private long _blockingTimeout=-1;
private String _secureScheme = HttpScheme.HTTPS.asString();
private boolean _sendServerVersion = true;
@ -65,6 +66,7 @@ public class HttpConfiguration
private boolean _persistentConnectionsEnabled = true;
private int _maxErrorDispatches = 10;
private boolean _useDirectByteBuffers = false;
private long _minRequestDataRate;
/* ------------------------------------------------------------ */
/**
@ -114,6 +116,7 @@ public class HttpConfiguration
_headerCacheSize=config._headerCacheSize;
_secureScheme=config._secureScheme;
_securePort=config._securePort;
_idleTimeout=config._idleTimeout;
_blockingTimeout=config._blockingTimeout;
_sendDateHeader=config._sendDateHeader;
_sendServerVersion=config._sendServerVersion;
@ -207,6 +210,31 @@ public class HttpConfiguration
return _persistentConnectionsEnabled;
}
/* ------------------------------------------------------------ */
/** Get the max idle time in ms.
* <p>The max idle time is applied to a HTTP request for IO operations and
* delayed dispatch.
* @return the max idle time in ms or if == 0 implies an infinite timeout, &lt;0
* implies no HTTP channel timeout and the connection timeout is used instead.
*/
public long getIdleTimeout()
{
return _idleTimeout;
}
/* ------------------------------------------------------------ */
/** Set the max idle time in ms.
* <p>The max idle time is applied to a HTTP request for IO operations and
* delayed dispatch.
* @param timeoutMs the max idle time in ms or if == 0 implies an infinite timeout, &lt;0
* implies no HTTP channel timeout and the connection timeout is used instead.
*/
public void setIdleTimeout(long timeoutMs)
{
_idleTimeout=timeoutMs;
}
/* ------------------------------------------------------------ */
/** Get the timeout applied to blocking operations.
* <p>This timeout is in addition to the {@link Connector#getIdleTimeout()}, and applies
@ -496,4 +524,22 @@ public class HttpConfiguration
{
_maxErrorDispatches=max;
}
/* ------------------------------------------------------------ */
/**
* @return The minimum request data rate in bytes per second; or &lt;=0 for no limit
*/
public long getMinRequestDataRate()
{
return _minRequestDataRate;
}
/* ------------------------------------------------------------ */
/**
* @param bytesPerSecond The minimum request data rate in bytes per second; or &lt;=0 for no limit
*/
public void setMinRequestDataRate(long bytesPerSecond)
{
_minRequestDataRate=bytesPerSecond;
}
}
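A minimal sketch of using the new `HttpConfiguration` properties added above; the timeout and rate values are illustrative.

[source, java]
----
import org.eclipse.jetty.server.HttpConfiguration;

public class HttpConfigurationSketch
{
    public static void main(String[] args)
    {
        HttpConfiguration config = new HttpConfiguration();

        // Per-request idle timeout in ms: 0 means infinite, a negative value
        // means no HTTP channel timeout (the connector's timeout applies).
        config.setIdleTimeout(30_000);

        // Minimum request data rate in bytes/second; <= 0 disables the check.
        config.setMinRequestDataRate(1_000);

        System.out.println("idleTimeout=" + config.getIdleTimeout()
                + ", minRequestDataRate=" + config.getMinRequestDataRate());
    }
}
----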

View File

@ -122,9 +122,7 @@ public class HttpConnection extends AbstractConnection implements Runnable, Http
protected HttpChannelOverHttp newHttpChannel()
{
HttpChannelOverHttp httpChannel = new HttpChannelOverHttp(this, _connector, _config, getEndPoint(), this);
return httpChannel;
return new HttpChannelOverHttp(this, _connector, _config, getEndPoint(), this);
}
protected HttpParser newHttpParser(HttpCompliance compliance)
@ -283,9 +281,8 @@ public class HttpConnection extends AbstractConnection implements Runnable, Http
while (_parser.inContentState())
{
int filled = fillRequestBuffer();
boolean handle = parseRequestBuffer();
handled|=handle;
if (handle || filled<=0 || _channel.getRequest().getHttpInput().hasContent())
handled = parseRequestBuffer();
if (handled || filled<=0 || _channel.getRequest().getHttpInput().hasContent())
break;
}
return handled;

View File

@ -25,11 +25,14 @@ import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Objects;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import org.eclipse.jetty.http.BadMessageException;
import org.eclipse.jetty.http.HttpStatus;
import org.eclipse.jetty.io.EofException;
import org.eclipse.jetty.io.RuntimeIOException;
import org.eclipse.jetty.util.BufferUtil;
@ -57,14 +60,14 @@ public class HttpInput extends ServletInputStream implements Runnable
private final HttpChannelState _channelState;
private ReadListener _listener;
private State _state = STREAM;
private long _firstByteTimeStamp = -1;
private long _contentArrived;
private long _contentConsumed;
private long _blockingTimeoutAt = -1;
private long _blockUntil;
public HttpInput(HttpChannelState state)
{
_channelState=state;
if (_channelState.getHttpChannel().getHttpConfiguration().getBlockingTimeout()>0)
_blockingTimeoutAt=0;
_channelState = state;
}
protected HttpChannelState getHttpChannelState()
@ -84,33 +87,36 @@ public class HttpInput extends ServletInputStream implements Runnable
}
_listener = null;
_state = STREAM;
_contentArrived = 0;
_contentConsumed = 0;
_firstByteTimeStamp = -1;
_blockUntil = 0;
}
}
@Override
public int available()
{
int available=0;
boolean woken=false;
int available = 0;
boolean woken = false;
synchronized (_inputQ)
{
Content content = _inputQ.peek();
if (content==null)
if (content == null)
{
try
{
produceContent();
}
catch(IOException e)
catch (IOException e)
{
woken=failed(e);
woken = failed(e);
}
content = _inputQ.peek();
}
if (content!=null)
available= remaining(content);
if (content != null)
available = remaining(content);
}
if (woken)
@ -125,12 +131,16 @@ public class HttpInput extends ServletInputStream implements Runnable
executor.execute(channel);
}
private long getBlockingTimeout()
{
return getHttpChannelState().getHttpChannel().getHttpConfiguration().getBlockingTimeout();
}
@Override
public int read() throws IOException
{
int read = read(_oneByteBuffer, 0, 1);
if (read==0)
if (read == 0)
throw new IllegalStateException("unready read=0");
return read < 0 ? -1 : _oneByteBuffer[0] & 0xFF;
}
@ -140,17 +150,36 @@ public class HttpInput extends ServletInputStream implements Runnable
{
synchronized (_inputQ)
{
if (_blockingTimeoutAt>=0 && !isAsync())
_blockingTimeoutAt=System.currentTimeMillis()+getHttpChannelState().getHttpChannel().getHttpConfiguration().getBlockingTimeout();
if (!isAsync())
{
if (_blockUntil == 0)
{
long blockingTimeout = getBlockingTimeout();
if (blockingTimeout > 0)
_blockUntil = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(blockingTimeout);
}
}
while(true)
long minRequestDataRate = _channelState.getHttpChannel().getHttpConfiguration().getMinRequestDataRate();
if (minRequestDataRate > 0 && _firstByteTimeStamp != -1)
{
long period = System.nanoTime() - _firstByteTimeStamp;
if (period > 0)
{
long minimum_data = minRequestDataRate * TimeUnit.NANOSECONDS.toMillis(period) / TimeUnit.SECONDS.toMillis(1);
if (_contentArrived < minimum_data)
throw new BadMessageException(HttpStatus.REQUEST_TIMEOUT_408, String.format("Request data rate < %d B/s", minRequestDataRate));
}
}
while (true)
{
Content item = nextContent();
if (item!=null)
if (item != null)
{
int l = get(item, b, off, len);
if (LOG.isDebugEnabled())
LOG.debug("{} read {} from {}",this,l,item);
LOG.debug("{} read {} from {}", this, l, item);
consumeNonContent();
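As a worked example of the rate check added above: with a minimum rate of 1000 B/s and two seconds elapsed since the first byte, at least 2000 bytes must have arrived, otherwise a 408 is generated. The sketch below repeats the same arithmetic with illustrative numbers.

[source, java]
----
// Same arithmetic as the check above, with illustrative numbers.
public class MinRequestDataRateSketch
{
    public static void main(String[] args)
    {
        long minRequestDataRate = 1_000;        // bytes per second
        long periodNanos = 2_000_000_000L;      // 2 s since the first byte arrived
        long periodMillis = periodNanos / 1_000_000;
        long minimumData = minRequestDataRate * periodMillis / 1_000;  // 2_000 bytes
        long contentArrived = 1_500;
        boolean tooSlow = contentArrived < minimumData;  // true -> 408 Request Timeout
        System.out.println("minimumData=" + minimumData + " tooSlow=" + tooSlow);
    }
}
----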
@ -168,6 +197,7 @@ public class HttpInput extends ServletInputStream implements Runnable
* produce more Content and add it via {@link #addContent(Content)}.
* For protocols that are constantly producing (eg HTTP2) this can
* be left as a noop;
*
* @throws IOException if unable to produce content
*/
protected void produceContent() throws IOException
@ -184,7 +214,7 @@ public class HttpInput extends ServletInputStream implements Runnable
protected Content nextContent() throws IOException
{
Content content = pollContent();
if (content==null && !isFinished())
if (content == null && !isFinished())
{
produceContent();
content = pollContent();
@ -192,9 +222,11 @@ public class HttpInput extends ServletInputStream implements Runnable
return content;
}
/** Poll the inputQ for Content.
/**
* Poll the inputQ for Content.
* Consumed buffers and {@link PoisonPillContent}s are removed and
* EOF state updated if need be.
*
* @return Content or null
*/
protected Content pollContent()
@ -209,20 +241,20 @@ public class HttpInput extends ServletInputStream implements Runnable
if (LOG.isDebugEnabled())
LOG.debug("{} consumed {}", this, content);
if (content==EOF_CONTENT)
if (content == EOF_CONTENT)
{
if (_listener==null)
_state=EOF;
if (_listener == null)
_state = EOF;
else
{
_state=AEOF;
_state = AEOF;
boolean woken = _channelState.onReadReady(); // force callback?
if (woken)
wake();
}
}
else if (content==EARLY_EOF_CONTENT)
_state=EARLY_EOF;
else if (content == EARLY_EOF_CONTENT)
_state = EARLY_EOF;
content = _inputQ.peek();
}
@ -262,7 +294,7 @@ public class HttpInput extends ServletInputStream implements Runnable
protected Content nextReadable() throws IOException
{
Content content = pollReadable();
if (content==null && !isFinished())
if (content == null && !isFinished())
{
produceContent();
content = pollReadable();
@ -270,9 +302,11 @@ public class HttpInput extends ServletInputStream implements Runnable
return content;
}
/** Poll the inputQ for Content or EOF.
/**
* Poll the inputQ for Content or EOF.
* Consumed buffers and non EOF {@link PoisonPillContent}s are removed.
* EOF state is not updated.
*
* @return Content, EOF or null
*/
protected Content pollReadable()
@ -283,7 +317,7 @@ public class HttpInput extends ServletInputStream implements Runnable
// Skip consumed items at the head of the queue except EOF
while (content != null)
{
if (content==EOF_CONTENT || content==EARLY_EOF_CONTENT || remaining(content)>0)
if (content == EOF_CONTENT || content == EARLY_EOF_CONTENT || remaining(content) > 0)
return content;
_inputQ.poll();
@ -308,17 +342,17 @@ public class HttpInput extends ServletInputStream implements Runnable
/**
* Copies the given content into the given byte buffer.
*
* @param content the content to copy from
* @param buffer the buffer to copy into
* @param offset the buffer offset to start copying from
* @param length the space available in the buffer
* @param content the content to copy from
* @param buffer the buffer to copy into
* @param offset the buffer offset to start copying from
* @param length the space available in the buffer
* @return the number of bytes actually copied
*/
protected int get(Content content, byte[] buffer, int offset, int length)
{
int l = Math.min(content.remaining(), length);
content.getContent().get(buffer, offset, l);
_contentConsumed+=l;
_contentConsumed += l;
return l;
}
@ -326,16 +360,16 @@ public class HttpInput extends ServletInputStream implements Runnable
* Consumes the given content.
* Calls the content succeeded if all content consumed.
*
* @param content the content to consume
* @param length the number of bytes to consume
* @param content the content to consume
* @param length the number of bytes to consume
*/
protected void skip(Content content, int length)
{
int l = Math.min(content.remaining(), length);
ByteBuffer buffer = content.getContent();
buffer.position(buffer.position()+l);
_contentConsumed+=l;
if (l>0 && !content.hasContent())
buffer.position(buffer.position() + l);
_contentConsumed += l;
if (l > 0 && !content.hasContent())
pollContent(); // hungry succeed
}
@ -349,23 +383,26 @@ public class HttpInput extends ServletInputStream implements Runnable
{
try
{
long timeout=0;
if (_blockingTimeoutAt>=0)
long timeout = 0;
if (_blockUntil != 0)
{
timeout=_blockingTimeoutAt-System.currentTimeMillis();
if (timeout<=0)
timeout = TimeUnit.NANOSECONDS.toMillis(_blockUntil - System.nanoTime());
if (timeout <= 0)
throw new TimeoutException();
}
if (LOG.isDebugEnabled())
LOG.debug("{} blocking for content timeout={}", this,timeout);
if (timeout>0)
LOG.debug("{} blocking for content timeout={}", this, timeout);
if (timeout > 0)
_inputQ.wait(timeout);
else
_inputQ.wait();
if (_blockingTimeoutAt>0 && System.currentTimeMillis()>=_blockingTimeoutAt)
throw new TimeoutException();
// TODO: cannot return unless there is content or timeout,
// TODO: so spurious wakeups are not handled correctly.
if (_blockUntil != 0 && TimeUnit.NANOSECONDS.toMillis(_blockUntil - System.nanoTime()) <= 0)
throw new TimeoutException(String.format("Blocking timeout %d ms", getBlockingTimeout()));
}
catch (Throwable e)
{
@ -378,23 +415,24 @@ public class HttpInput extends ServletInputStream implements Runnable
* <p>Typically used to push back content that has
* been read, perhaps mutated. The bytes prepended are
* deducted for the contentConsumed total</p>
*
* @param item the content to add
* @return true if content channel woken for read
*/
public boolean prependContent(Content item)
{
boolean woken=false;
boolean woken = false;
synchronized (_inputQ)
{
_inputQ.push(item);
_contentConsumed-=item.remaining();
_contentConsumed -= item.remaining();
if (LOG.isDebugEnabled())
LOG.debug("{} prependContent {}", this, item);
if (_listener==null)
if (_listener == null)
_inputQ.notify();
else
woken=_channelState.onReadPossible();
woken = _channelState.onReadPossible();
}
return woken;
@ -408,17 +446,20 @@ public class HttpInput extends ServletInputStream implements Runnable
*/
public boolean addContent(Content item)
{
boolean woken=false;
boolean woken = false;
synchronized (_inputQ)
{
if (_firstByteTimeStamp == -1)
_firstByteTimeStamp = System.nanoTime();
_contentArrived += item.remaining();
_inputQ.offer(item);
if (LOG.isDebugEnabled())
LOG.debug("{} addContent {}", this, item);
if (_listener==null)
if (_listener == null)
_inputQ.notify();
else
woken=_channelState.onReadPossible();
woken = _channelState.onReadPossible();
}
return woken;
@ -428,7 +469,7 @@ public class HttpInput extends ServletInputStream implements Runnable
{
synchronized (_inputQ)
{
return _inputQ.size()>0;
return _inputQ.size() > 0;
}
}
@ -454,6 +495,7 @@ public class HttpInput extends ServletInputStream implements Runnable
* <p>
* Typically this will result in an EOFException being thrown
* from a subsequent read rather than a -1 return.
*
* @return true if content channel woken for read
*/
public boolean earlyEOF()
@ -464,11 +506,12 @@ public class HttpInput extends ServletInputStream implements Runnable
/**
* This method should be called to signal that all the expected
* content arrived.
*
* @return true if content channel woken for read
*/
public boolean eof()
{
return addContent(EOF_CONTENT);
return addContent(EOF_CONTENT);
}
public boolean consumeAll()
@ -507,7 +550,7 @@ public class HttpInput extends ServletInputStream implements Runnable
{
synchronized (_inputQ)
{
return _state==ASYNC;
return _state == ASYNC;
}
}
@ -520,7 +563,6 @@ public class HttpInput extends ServletInputStream implements Runnable
}
}
@Override
public boolean isReady()
{
@ -528,18 +570,18 @@ public class HttpInput extends ServletInputStream implements Runnable
{
synchronized (_inputQ)
{
if (_listener == null )
if (_listener == null)
return true;
if (_state instanceof EOFState)
return true;
if (nextReadable()!=null)
if (nextReadable() != null)
return true;
_channelState.onReadUnready();
}
return false;
}
catch(IOException e)
catch (IOException e)
{
LOG.ignore(e);
return true;
@ -550,7 +592,7 @@ public class HttpInput extends ServletInputStream implements Runnable
public void setReadListener(ReadListener readListener)
{
readListener = Objects.requireNonNull(readListener);
boolean woken=false;
boolean woken = false;
try
{
synchronized (_inputQ)
@ -558,11 +600,11 @@ public class HttpInput extends ServletInputStream implements Runnable
if (_listener != null)
throw new IllegalStateException("ReadListener already set");
if (_state != STREAM)
throw new IllegalStateException("State "+STREAM+" != " + _state);
throw new IllegalStateException("State " + STREAM + " != " + _state);
_state = ASYNC;
_listener = readListener;
boolean content=nextContent()!=null;
boolean content = nextContent() != null;
if (content)
woken = _channelState.onReadReady();
@ -570,7 +612,7 @@ public class HttpInput extends ServletInputStream implements Runnable
_channelState.onReadUnready();
}
}
catch(IOException e)
catch (IOException e)
{
throw new RuntimeIOException(e);
}
@ -581,7 +623,7 @@ public class HttpInput extends ServletInputStream implements Runnable
public boolean failed(Throwable x)
{
boolean woken=false;
boolean woken = false;
synchronized (_inputQ)
{
if (_state instanceof ErrorState)
@ -589,16 +631,15 @@ public class HttpInput extends ServletInputStream implements Runnable
else
_state = new ErrorState(x);
if (_listener==null)
if (_listener == null)
_inputQ.notify();
else
woken=_channelState.onReadPossible();
woken = _channelState.onReadPossible();
}
return woken;
}
/* ------------------------------------------------------------ */
/*
* <p>
* While this class is-a Runnable, it should never be dispatched in its own thread. It is a
@ -611,26 +652,26 @@ public class HttpInput extends ServletInputStream implements Runnable
{
final Throwable error;
final ReadListener listener;
boolean aeof=false;
boolean aeof = false;
synchronized (_inputQ)
{
if (_state==EOF)
if (_state == EOF)
return;
if (_state==AEOF)
if (_state == AEOF)
{
_state=EOF;
aeof=true;
_state = EOF;
aeof = true;
}
listener = _listener;
error = _state instanceof ErrorState?((ErrorState)_state).getError():null;
error = _state instanceof ErrorState ? ((ErrorState)_state).getError() : null;
}
try
{
if (error!=null)
if (error != null)
{
_channelState.getHttpChannel().getResponse().getHttpFields().add(HttpConnection.CONNECTION_CLOSE);
listener.onError(error);
@ -650,7 +691,7 @@ public class HttpInput extends ServletInputStream implements Runnable
LOG.debug(e);
try
{
if (aeof || error==null)
if (aeof || error == null)
{
_channelState.getHttpChannel().getResponse().getHttpFields().add(HttpConnection.CONNECTION_CLOSE);
listener.onError(e);
@ -674,10 +715,10 @@ public class HttpInput extends ServletInputStream implements Runnable
Content content;
synchronized (_inputQ)
{
state=_state;
consumed=_contentConsumed;
q=_inputQ.size();
content=_inputQ.peekFirst();
state = _state;
consumed = _contentConsumed;
q = _inputQ.size();
content = _inputQ.peekFirst();
}
return String.format("%s@%x[c=%d,q=%d,[0]=%s,s=%s]",
getClass().getSimpleName(),
@ -691,10 +732,11 @@ public class HttpInput extends ServletInputStream implements Runnable
public static class PoisonPillContent extends Content
{
private final String _name;
public PoisonPillContent(String name)
{
super(BufferUtil.EMPTY_BUFFER);
_name=name;
_name = name;
}
@Override
@ -718,7 +760,7 @@ public class HttpInput extends ServletInputStream implements Runnable
public Content(ByteBuffer content)
{
_content=content;
_content = content;
}
@Override
@ -745,7 +787,7 @@ public class HttpInput extends ServletInputStream implements Runnable
@Override
public String toString()
{
return String.format("Content@%x{%s}",hashCode(),BufferUtil.toDetailString(_content));
return String.format("Content@%x{%s}", hashCode(), BufferUtil.toDetailString(_content));
}
}
@ -770,9 +812,10 @@ public class HttpInput extends ServletInputStream implements Runnable
protected class ErrorState extends EOFState
{
final Throwable _error;
ErrorState(Throwable error)
{
_error=error;
_error = error;
}
public Throwable getError()
@ -791,7 +834,7 @@ public class HttpInput extends ServletInputStream implements Runnable
@Override
public String toString()
{
return "ERROR:"+_error;
return "ERROR:" + _error;
}
}

View File

@ -54,75 +54,78 @@ import org.eclipse.jetty.util.log.Logger;
* close the stream, to be reopened after the inclusion ends.</p>
*/
public class HttpOutput extends ServletOutputStream implements Runnable
{
{
/**
* The HttpOutput.Inteceptor is a single intercept point for all
* output written to the HttpOutput: via writer; via output stream;
* The HttpOutput.Interceptor is a single intercept point for all
* output written to the HttpOutput: via writer; via output stream;
* asynchronously; or blocking.
* <p>
* The Interceptor can be used to implement translations (eg Gzip) or
* additional buffering that acts on all output. Interceptors are
* The Interceptor can be used to implement translations (eg Gzip) or
* additional buffering that acts on all output. Interceptors are
* created in a chain, so that multiple concerns may intercept.
* <p>
* The {@link HttpChannel} is an {@link Interceptor} and is always the
* The {@link HttpChannel} is an {@link Interceptor} and is always the
* last link in any Interceptor chain.
* <p>
* Responses are committed by the first call to
* Responses are committed by the first call to
* {@link #write(ByteBuffer, boolean, Callback)}
* and closed by a call to {@link #write(ByteBuffer, boolean, Callback)}
* with the last boolean set true. If no content is available to commit
* and closed by a call to {@link #write(ByteBuffer, boolean, Callback)}
* with the last boolean set true. If no content is available to commit
* or close, then a null buffer is passed.
*/
public interface Interceptor
{
/**
/**
* Write content.
* The response is committed by the first call to write and is closed by
* a call with last == true. Empty content buffers may be passed to
* a call with last == true. Empty content buffers may be passed to
* force a commit or close.
* @param content The content to be written or an empty buffer.
* @param last True if this is the last call to write
* @param callback The callback to use to indicate {@link Callback#succeeded()}
* or {@link Callback#failed(Throwable)}.
*
* @param content The content to be written or an empty buffer.
* @param last True if this is the last call to write
* @param callback The callback to use to indicate {@link Callback#succeeded()}
* or {@link Callback#failed(Throwable)}.
*/
void write(ByteBuffer content, boolean last, Callback callback);
/**
* @return The next Interceptor in the chain or null if this is the
* @return The next Interceptor in the chain or null if this is the
* last Interceptor in the chain.
*/
Interceptor getNextInterceptor();
/**
* @return True if the Interceptor is optimized to receive direct
* @return True if the Interceptor is optimized to receive direct
* {@link ByteBuffer}s in the {@link #write(ByteBuffer, boolean, Callback)}
* method. If false is returned, then passing direct buffers may cause
* method. If false is returned, then passing direct buffers may cause
* inefficiencies.
*/
boolean isOptimizedForDirectBuffers();
/**
* Reset the buffers.
* <p>If the Interceptor contains buffers then reset them.
* @throws IllegalStateException Thrown if the response has been
* committed and buffers and/or headers cannot be reset.
*
* @throws IllegalStateException Thrown if the response has been
* committed and buffers and/or headers cannot be reset.
*/
default void resetBuffer() throws IllegalStateException
{
Interceptor next = getNextInterceptor();
if (next!=null)
if (next != null)
next.resetBuffer();
};
}
}
private static Logger LOG = Log.getLogger(HttpOutput.class);
private final HttpChannel _channel;
private final SharedBlockingCallback _writeBlock;
private final SharedBlockingCallback _writeBlocker;
private Interceptor _interceptor;
/** Bytes written via the write API (excludes bytes written via sendContent). Used to autocommit once content length is written. */
/**
* Bytes written via the write API (excludes bytes written via sendContent). Used to autocommit once content length is written.
*/
private long _written;
private ByteBuffer _aggregate;
@ -140,33 +143,25 @@ public class HttpOutput extends ServletOutputStream implements Runnable
isReady() OPEN:true READY:true READY:true UNREADY:false UNREADY:false CLOSED:true
write completed - - - ASYNC READY->owp -
*/
private enum OutputState { OPEN, ASYNC, READY, PENDING, UNREADY, ERROR, CLOSED }
private final AtomicReference<OutputState> _state=new AtomicReference<>(OutputState.OPEN);
private enum OutputState
{
OPEN, ASYNC, READY, PENDING, UNREADY, ERROR, CLOSED
}
private final AtomicReference<OutputState> _state = new AtomicReference<>(OutputState.OPEN);
public HttpOutput(HttpChannel channel)
{
_channel = channel;
_interceptor = channel;
_writeBlock = new SharedBlockingCallback()
{
@Override
protected long getIdleTimeout()
{
long bto = getHttpChannel().getHttpConfiguration().getBlockingTimeout();
if (bto>0)
return bto;
if (bto<0)
return -1;
return _channel.getIdleTimeout();
}
};
_writeBlocker = new WriteBlocker(channel);
HttpConfiguration config = channel.getHttpConfiguration();
_bufferSize = config.getOutputBufferSize();
_commitSize = config.getOutputAggregationSize();
if (_commitSize>_bufferSize)
if (_commitSize > _bufferSize)
{
LOG.warn("OutputAggregationSize {} exceeds bufferSize {}",_commitSize,_bufferSize);
_commitSize=_bufferSize;
LOG.warn("OutputAggregationSize {} exceeds bufferSize {}", _commitSize, _bufferSize);
_commitSize = _bufferSize;
}
}
@ -182,7 +177,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
public void setInterceptor(Interceptor filter)
{
_interceptor=filter;
_interceptor = filter;
}
public boolean isWritten()
@ -202,7 +197,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
private boolean isLastContentToWrite(int len)
{
_written+=len;
_written += len;
return _channel.getResponse().isAllContentWritten(_written);
}
@ -213,12 +208,12 @@ public class HttpOutput extends ServletOutputStream implements Runnable
protected Blocker acquireWriteBlockingCallback() throws IOException
{
return _writeBlock.acquire();
return _writeBlocker.acquire();
}
private void write(ByteBuffer content, boolean complete) throws IOException
{
try (Blocker blocker = _writeBlock.acquire())
try (Blocker blocker = _writeBlocker.acquire())
{
write(content, complete, blocker);
blocker.block();
@ -250,14 +245,13 @@ public class HttpOutput extends ServletOutputStream implements Runnable
{
while(true)
{
OutputState state=_state.get();
OutputState state = _state.get();
switch (state)
{
case CLOSED:
{
return;
}
case ASYNC:
{
// A close call implies a write operation, thus in asynchronous mode
@ -269,7 +263,6 @@ public class HttpOutput extends ServletOutputStream implements Runnable
continue;
break;
}
case UNREADY:
case PENDING:
{
@ -279,7 +272,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
// complete is called. Because the prior write has not yet completed
// and/or isReady has not been called, this close is allowed, but will
// abort the response.
if (!_state.compareAndSet(state,OutputState.CLOSED))
if (!_state.compareAndSet(state, OutputState.CLOSED))
continue;
IOException ex = new IOException("Closed while Pending/Unready");
LOG.warn(ex.toString());
@ -287,17 +280,16 @@ public class HttpOutput extends ServletOutputStream implements Runnable
_channel.abort(ex);
return;
}
default:
{
if (!_state.compareAndSet(state,OutputState.CLOSED))
if (!_state.compareAndSet(state, OutputState.CLOSED))
continue;
// Do a normal close by writing the aggregate buffer or an empty buffer. If we are
// not including, then indicate this is the last write.
try
{
write(BufferUtil.hasContent(_aggregate)?_aggregate:BufferUtil.EMPTY_BUFFER, !_channel.getResponse().isIncluding());
write(BufferUtil.hasContent(_aggregate) ? _aggregate : BufferUtil.EMPTY_BUFFER, !_channel.getResponse().isIncluding());
}
catch (IOException x)
{
@ -320,9 +312,9 @@ public class HttpOutput extends ServletOutputStream implements Runnable
*/
void closed()
{
while(true)
while (true)
{
OutputState state=_state.get();
OutputState state = _state.get();
switch (state)
{
case CLOSED:
@ -331,8 +323,8 @@ public class HttpOutput extends ServletOutputStream implements Runnable
}
case UNREADY:
{
if (_state.compareAndSet(state,OutputState.ERROR))
_writeListener.onError(_onError==null?new EofException("Async closed"):_onError);
if (_state.compareAndSet(state, OutputState.ERROR))
_writeListener.onError(_onError == null ? new EofException("Async closed") : _onError);
break;
}
default:
@ -372,12 +364,12 @@ public class HttpOutput extends ServletOutputStream implements Runnable
public boolean isClosed()
{
return _state.get()==OutputState.CLOSED;
}
return _state.get() == OutputState.CLOSED;
}
public boolean isAsync()
{
switch(_state.get())
switch (_state.get())
{
case ASYNC:
case READY:
@ -388,16 +380,16 @@ public class HttpOutput extends ServletOutputStream implements Runnable
return false;
}
}
@Override
public void flush() throws IOException
{
while(true)
while (true)
{
switch(_state.get())
switch (_state.get())
{
case OPEN:
write(BufferUtil.hasContent(_aggregate)?_aggregate:BufferUtil.EMPTY_BUFFER, false);
write(BufferUtil.hasContent(_aggregate) ? _aggregate : BufferUtil.EMPTY_BUFFER, false);
return;
case ASYNC:
@ -431,9 +423,9 @@ public class HttpOutput extends ServletOutputStream implements Runnable
public void write(byte[] b, int off, int len) throws IOException
{
// Async or Blocking ?
while(true)
while (true)
{
switch(_state.get())
switch (_state.get())
{
case OPEN:
// process blocking below
@ -448,7 +440,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
// Should we aggregate?
boolean last = isLastContentToWrite(len);
if (!last && len<=_commitSize)
if (!last && len <= _commitSize)
{
if (_aggregate == null)
_aggregate = _channel.getByteBufferPool().acquire(getBufferSize(), _interceptor.isOptimizedForDirectBuffers());
@ -457,7 +449,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
int filled = BufferUtil.fill(_aggregate, b, off, len);
// return if we are not complete, not full and filled all the content
if (filled==len && !BufferUtil.isFull(_aggregate))
if (filled == len && !BufferUtil.isFull(_aggregate))
{
if (!_state.compareAndSet(OutputState.PENDING, OutputState.ASYNC))
throw new IllegalStateException();
@ -465,12 +457,12 @@ public class HttpOutput extends ServletOutputStream implements Runnable
}
// adjust offset/length
off+=filled;
len-=filled;
off += filled;
len -= filled;
}
// Do the asynchronous writing from the callback
new AsyncWrite(b,off,len,last).iterate();
new AsyncWrite(b, off, len, last).iterate();
return;
case PENDING:
@ -494,7 +486,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
// Should we aggregate?
int capacity = getBufferSize();
boolean last = isLastContentToWrite(len);
if (!last && len<=_commitSize)
if (!last && len <= _commitSize)
{
if (_aggregate == null)
_aggregate = _channel.getByteBufferPool().acquire(capacity, _interceptor.isOptimizedForDirectBuffers());
@ -503,21 +495,21 @@ public class HttpOutput extends ServletOutputStream implements Runnable
int filled = BufferUtil.fill(_aggregate, b, off, len);
// return if we are not complete, not full and filled all the content
if (filled==len && !BufferUtil.isFull(_aggregate))
if (filled == len && !BufferUtil.isFull(_aggregate))
return;
// adjust offset/length
off+=filled;
len-=filled;
off += filled;
len -= filled;
}
// flush any content from the aggregate
if (BufferUtil.hasContent(_aggregate))
{
write(_aggregate, last && len==0);
write(_aggregate, last && len == 0);
// should we fill aggregate again from the buffer?
if (len>0 && !last && len<=_commitSize && len<=BufferUtil.space(_aggregate))
if (len > 0 && !last && len <= _commitSize && len <= BufferUtil.space(_aggregate))
{
BufferUtil.append(_aggregate, b, off, len);
return;
@ -525,26 +517,26 @@ public class HttpOutput extends ServletOutputStream implements Runnable
}
// write any remaining content in the buffer directly
if (len>0)
if (len > 0)
{
// write a buffer capacity at a time to avoid JVM pooling large direct buffers
// http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6210541
ByteBuffer view = ByteBuffer.wrap(b, off, len);
while (len>getBufferSize())
while (len > getBufferSize())
{
int p=view.position();
int l=p+getBufferSize();
view.limit(p+getBufferSize());
write(view,false);
len-=getBufferSize();
view.limit(l+Math.min(len,getBufferSize()));
int p = view.position();
int l = p + getBufferSize();
view.limit(p + getBufferSize());
write(view, false);
len -= getBufferSize();
view.limit(l + Math.min(len, getBufferSize()));
view.position(l);
}
write(view,last);
write(view, last);
}
else if (last)
{
write(BufferUtil.EMPTY_BUFFER,true);
write(BufferUtil.EMPTY_BUFFER, true);
}
if (last)
@ -554,9 +546,9 @@ public class HttpOutput extends ServletOutputStream implements Runnable
public void write(ByteBuffer buffer) throws IOException
{
// Async or Blocking ?
while(true)
while (true)
{
switch(_state.get())
switch (_state.get())
{
case OPEN:
// process blocking below
@ -571,7 +563,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
// Do the asynchronous writing from the callback
boolean last = isLastContentToWrite(buffer.remaining());
new AsyncWrite(buffer,last).iterate();
new AsyncWrite(buffer, last).iterate();
return;
case PENDING:
@ -590,17 +582,16 @@ public class HttpOutput extends ServletOutputStream implements Runnable
break;
}
// handle blocking write
int len=BufferUtil.length(buffer);
int len = BufferUtil.length(buffer);
boolean last = isLastContentToWrite(len);
// flush any content from the aggregate
if (BufferUtil.hasContent(_aggregate))
write(_aggregate, last && len==0);
write(_aggregate, last && len == 0);
// write any remaining content in the buffer directly
if (len>0)
if (len > 0)
write(buffer, last);
else if (last)
write(BufferUtil.EMPTY_BUFFER, true);
@ -612,13 +603,13 @@ public class HttpOutput extends ServletOutputStream implements Runnable
@Override
public void write(int b) throws IOException
{
_written+=1;
boolean complete=_channel.getResponse().isAllContentWritten(_written);
_written += 1;
boolean complete = _channel.getResponse().isAllContentWritten(_written);
// Async or Blocking ?
while(true)
while (true)
{
switch(_state.get())
switch (_state.get())
{
case OPEN:
if (_aggregate == null)
@ -692,7 +683,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
public void sendContent(ByteBuffer content) throws IOException
{
if (LOG.isDebugEnabled())
LOG.debug("sendContent({})",BufferUtil.toDetailString(content));
LOG.debug("sendContent({})", BufferUtil.toDetailString(content));
write(content, true);
closed();
@ -706,7 +697,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
*/
public void sendContent(InputStream in) throws IOException
{
try(Blocker blocker = _writeBlock.acquire())
try (Blocker blocker = _writeBlocker.acquire())
{
new InputStreamWritingCB(in, blocker).iterate();
blocker.block();
@ -728,7 +719,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
*/
public void sendContent(ReadableByteChannel in) throws IOException
{
try(Blocker blocker = _writeBlock.acquire())
try (Blocker blocker = _writeBlocker.acquire())
{
new ReadableByteChannelWritingCB(in, blocker).iterate();
blocker.block();
@ -750,7 +741,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
*/
public void sendContent(HttpContent content) throws IOException
{
try(Blocker blocker = _writeBlock.acquire())
try (Blocker blocker = _writeBlocker.acquire())
{
sendContent(content, blocker);
blocker.block();
@ -766,13 +757,14 @@ public class HttpOutput extends ServletOutputStream implements Runnable
/**
* Asynchronous send of whole content.
* @param content The whole content to send
*
* @param content The whole content to send
* @param callback The callback to use to notify success or failure
*/
public void sendContent(ByteBuffer content, final Callback callback)
{
if (LOG.isDebugEnabled())
LOG.debug("sendContent(buffer={},{})",BufferUtil.toDetailString(content),callback);
LOG.debug("sendContent(buffer={},{})", BufferUtil.toDetailString(content), callback);
write(content, true, new Callback.Nested(callback)
{
@ -796,13 +788,13 @@ public class HttpOutput extends ServletOutputStream implements Runnable
* Asynchronous send of stream content.
* The stream will be closed after reading all content.
*
* @param in The stream content to send
* @param in The stream content to send
* @param callback The callback to use to notify success or failure
*/
public void sendContent(InputStream in, Callback callback)
{
if (LOG.isDebugEnabled())
LOG.debug("sendContent(stream={},{})",in,callback);
LOG.debug("sendContent(stream={},{})", in, callback);
new InputStreamWritingCB(in, callback).iterate();
}
@ -811,13 +803,13 @@ public class HttpOutput extends ServletOutputStream implements Runnable
* Asynchronous send of channel content.
* The channel will be closed after reading all content.
*
* @param in The channel content to send
* @param in The channel content to send
* @param callback The callback to use to notify success or failure
*/
public void sendContent(ReadableByteChannel in, Callback callback)
{
if (LOG.isDebugEnabled())
LOG.debug("sendContent(channel={},{})",in,callback);
LOG.debug("sendContent(channel={},{})", in, callback);
new ReadableByteChannelWritingCB(in, callback).iterate();
}
@ -826,12 +818,12 @@ public class HttpOutput extends ServletOutputStream implements Runnable
* Asynchronous send of HTTP content.
*
* @param httpContent The HTTP content to send
* @param callback The callback to use to notify success or failure
* @param callback The callback to use to notify success or failure
*/
public void sendContent(HttpContent httpContent, Callback callback)
{
if (LOG.isDebugEnabled())
LOG.debug("sendContent(http={},{})",httpContent,callback);
LOG.debug("sendContent(http={},{})", httpContent, callback);
if (BufferUtil.hasContent(_aggregate))
{
@ -846,7 +838,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
while (true)
{
switch(_state.get())
switch (_state.get())
{
case OPEN:
if (!_state.compareAndSet(OutputState.OPEN, OutputState.PENDING))
@ -867,37 +859,36 @@ public class HttpOutput extends ServletOutputStream implements Runnable
break;
}
ByteBuffer buffer = _channel.useDirectBuffers() ? httpContent.getDirectBuffer() : null;
if (buffer == null)
buffer = httpContent.getIndirectBuffer();
if (buffer!=null)
if (buffer != null)
{
sendContent(buffer,callback);
sendContent(buffer, callback);
return;
}
try
{
ReadableByteChannel rbc=httpContent.getReadableByteChannel();
if (rbc!=null)
ReadableByteChannel rbc = httpContent.getReadableByteChannel();
if (rbc != null)
{
// Close of the rbc is done by the async sendContent
sendContent(rbc,callback);
sendContent(rbc, callback);
return;
}
InputStream in = httpContent.getInputStream();
if (in!=null)
if (in != null)
{
sendContent(in,callback);
sendContent(in, callback);
return;
}
throw new IllegalArgumentException("unknown content for "+httpContent);
throw new IllegalArgumentException("unknown content for " + httpContent);
}
catch(Throwable th)
catch (Throwable th)
{
abort(th);
callback.failed(th);
@ -917,7 +908,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
public void recycle()
{
_interceptor=_channel;
_interceptor = _channel;
if (BufferUtil.hasContent(_aggregate))
BufferUtil.clear(_aggregate);
_written = 0;
@ -949,15 +940,12 @@ public class HttpOutput extends ServletOutputStream implements Runnable
throw new IllegalStateException();
}
/**
* @see javax.servlet.ServletOutputStream#isReady()
*/
@Override
public boolean isReady()
{
while (true)
{
switch(_state.get())
switch (_state.get())
{
case OPEN:
return true;
@ -993,31 +981,31 @@ public class HttpOutput extends ServletOutputStream implements Runnable
@Override
public void run()
{
loop: while (true)
while (true)
{
OutputState state = _state.get();
if(_onError!=null)
if (_onError != null)
{
switch(state)
switch (state)
{
case CLOSED:
case ERROR:
{
_onError=null;
break loop;
_onError = null;
return;
}
default:
{
if (_state.compareAndSet(state, OutputState.ERROR))
{
Throwable th=_onError;
_onError=null;
Throwable th = _onError;
_onError = null;
if (LOG.isDebugEnabled())
LOG.debug("onError",th);
LOG.debug("onError", th);
_writeListener.onError(th);
close();
break loop;
return;
}
}
}
@ -1042,7 +1030,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
try
{
_writeListener.onWritePossible();
break loop;
break;
}
catch (Throwable e)
{
@ -1066,25 +1054,25 @@ public class HttpOutput extends ServletOutputStream implements Runnable
@Override
public String toString()
{
return String.format("%s@%x{%s}",this.getClass().getSimpleName(),hashCode(),_state.get());
return String.format("%s@%x{%s}", this.getClass().getSimpleName(), hashCode(), _state.get());
}
private abstract class AsyncICB extends IteratingCallback
{
final boolean _last;
AsyncICB(boolean last)
{
_last=last;
_last = last;
}
@Override
protected void onCompleteSuccess()
{
while(true)
while (true)
{
OutputState last=_state.get();
switch(last)
OutputState last = _state.get();
switch (last)
{
case PENDING:
if (!_state.compareAndSet(OutputState.PENDING, OutputState.ASYNC))
@ -1113,7 +1101,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
@Override
public void onCompleteFailure(Throwable e)
{
_onError=e==null?new IOException():e;
_onError = e == null ? new IOException() : e;
if (_channel.getState().onWritePossible())
_channel.execute(_channel);
}
@ -1133,15 +1121,15 @@ public class HttpOutput extends ServletOutputStream implements Runnable
{
if (BufferUtil.hasContent(_aggregate))
{
_flushed=true;
_flushed = true;
write(_aggregate, false, this);
return Action.SCHEDULED;
}
if (!_flushed)
{
_flushed=true;
write(BufferUtil.EMPTY_BUFFER,false,this);
_flushed = true;
write(BufferUtil.EMPTY_BUFFER, false, this);
return Action.SCHEDULED;
}
@ -1159,23 +1147,23 @@ public class HttpOutput extends ServletOutputStream implements Runnable
public AsyncWrite(byte[] b, int off, int len, boolean last)
{
super(last);
_buffer=ByteBuffer.wrap(b, off, len);
_len=len;
_buffer = ByteBuffer.wrap(b, off, len);
_len = len;
// always use a view for large byte arrays to avoid JVM pooling large direct buffers
_slice=_len<getBufferSize()?null:_buffer.duplicate();
_slice = _len < getBufferSize() ? null : _buffer.duplicate();
}
public AsyncWrite(ByteBuffer buffer, boolean last)
{
super(last);
_buffer=buffer;
_len=buffer.remaining();
_buffer = buffer;
_len = buffer.remaining();
// Use a slice buffer for large indirect to avoid JVM pooling large direct buffers
if (_buffer.isDirect()||_len<getBufferSize())
_slice=null;
if (_buffer.isDirect() || _len < getBufferSize())
_slice = null;
else
{
_slice=_buffer.duplicate();
_slice = _buffer.duplicate();
}
}
@ -1185,16 +1173,16 @@ public class HttpOutput extends ServletOutputStream implements Runnable
// flush any content from the aggregate
if (BufferUtil.hasContent(_aggregate))
{
_completed=_len==0;
_completed = _len == 0;
write(_aggregate, _last && _completed, this);
return Action.SCHEDULED;
}
// Can we just aggregate the remainder?
if (!_last && _len<BufferUtil.space(_aggregate) && _len<_commitSize)
if (!_last && _len < BufferUtil.space(_aggregate) && _len < _commitSize)
{
int position = BufferUtil.flipToFill(_aggregate);
BufferUtil.put(_buffer,_aggregate);
BufferUtil.put(_buffer, _aggregate);
BufferUtil.flipToFlush(_aggregate, position);
return Action.SUCCEEDED;
}
@ -1203,21 +1191,21 @@ public class HttpOutput extends ServletOutputStream implements Runnable
if (_buffer.hasRemaining())
{
// if there is no slice, just write it
if (_slice==null)
if (_slice == null)
{
_completed=true;
_completed = true;
write(_buffer, _last, this);
return Action.SCHEDULED;
}
// otherwise take a slice
int p=_buffer.position();
int l=Math.min(getBufferSize(),_buffer.remaining());
int pl=p+l;
int p = _buffer.position();
int l = Math.min(getBufferSize(), _buffer.remaining());
int pl = p + l;
_slice.limit(pl);
_buffer.position(pl);
_slice.position(p);
_completed=!_buffer.hasRemaining();
_completed = !_buffer.hasRemaining();
write(_slice, _last && _completed, this);
return Action.SCHEDULED;
}
@ -1226,13 +1214,13 @@ public class HttpOutput extends ServletOutputStream implements Runnable
// need to do so
if (_last && !_completed)
{
_completed=true;
_completed = true;
write(BufferUtil.EMPTY_BUFFER, true, this);
return Action.SCHEDULED;
}
if (LOG.isDebugEnabled() && _completed)
LOG.debug("EOF of {}",this);
LOG.debug("EOF of {}", this);
return Action.SUCCEEDED;
}
}
@ -1254,7 +1242,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
public InputStreamWritingCB(InputStream in, Callback callback)
{
super(callback);
_in=in;
_in = in;
_buffer = _channel.getByteBufferPool().acquire(getBufferSize(), false);
}
@ -1266,7 +1254,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
if (_eof)
{
if (LOG.isDebugEnabled())
LOG.debug("EOF of {}",this);
LOG.debug("EOF of {}", this);
// Handle EOF
_in.close();
closed();
@ -1275,20 +1263,20 @@ public class HttpOutput extends ServletOutputStream implements Runnable
}
// Read until buffer full or EOF
int len=0;
while (len<_buffer.capacity() && !_eof)
int len = 0;
while (len < _buffer.capacity() && !_eof)
{
int r=_in.read(_buffer.array(),_buffer.arrayOffset()+len,_buffer.capacity()-len);
if (r<0)
_eof=true;
int r = _in.read(_buffer.array(), _buffer.arrayOffset() + len, _buffer.capacity() - len);
if (r < 0)
_eof = true;
else
len+=r;
len += r;
}
// write what we have
_buffer.position(0);
_buffer.limit(len);
write(_buffer,_eof,this);
write(_buffer, _eof, this);
return Action.SCHEDULED;
}
@ -1302,8 +1290,8 @@ public class HttpOutput extends ServletOutputStream implements Runnable
}
}
/* ------------------------------------------------------------ */
/** An iterating callback that will take content from a
/**
* An iterating callback that will take content from a
* ReadableByteChannel and write it to the {@link HttpChannel}.
* A {@link ByteBuffer} of size {@link HttpOutput#getBufferSize()} is used that will be direct if
* {@link HttpChannel#useDirectBuffers()} is true.
@ -1320,7 +1308,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
public ReadableByteChannelWritingCB(ReadableByteChannel in, Callback callback)
{
super(callback);
_in=in;
_in = in;
_buffer = _channel.getByteBufferPool().acquire(getBufferSize(), _channel.useDirectBuffers());
}
@ -1332,7 +1320,7 @@ public class HttpOutput extends ServletOutputStream implements Runnable
if (_eof)
{
if (LOG.isDebugEnabled())
LOG.debug("EOF of {}",this);
LOG.debug("EOF of {}", this);
_in.close();
closed();
_channel.getByteBufferPool().release(_buffer);
@ -1342,11 +1330,11 @@ public class HttpOutput extends ServletOutputStream implements Runnable
// Read from stream until buffer full or EOF
BufferUtil.clearToFill(_buffer);
while (_buffer.hasRemaining() && !_eof)
_eof = (_in.read(_buffer)) < 0;
_eof = (_in.read(_buffer)) < 0;
// write what we have
BufferUtil.flipToFlush(_buffer, 0);
write(_buffer,_eof,this);
write(_buffer, _eof, this);
return Action.SCHEDULED;
}
@ -1361,4 +1349,22 @@ public class HttpOutput extends ServletOutputStream implements Runnable
}
}
private static class WriteBlocker extends SharedBlockingCallback
{
private final HttpChannel _channel;
private WriteBlocker(HttpChannel channel)
{
_channel = channel;
}
@Override
protected long getIdleTimeout()
{
long blockingTimeout = _channel.getHttpConfiguration().getBlockingTimeout();
if (blockingTimeout == 0)
return _channel.getIdleTimeout();
return blockingTimeout;
}
}
}

View File

@ -73,7 +73,12 @@ public abstract class ConnectorTimeoutTest extends HttpServerTestFixture
{
super.before();
if (_httpConfiguration!=null)
{
_httpConfiguration.setBlockingTimeout(-1L);
_httpConfiguration.setMinRequestDataRate(-1);
_httpConfiguration.setIdleTimeout(-1);
}
}
@Test(timeout=60000)
@ -732,41 +737,6 @@ public abstract class ConnectorTimeoutTest extends HttpServerTestFixture
int offset=in.indexOf("Hello World");
Assert.assertTrue(offset > 0);
}
@Test(timeout=60000)
public void testMaxIdleWithDelayedDispatch() throws Exception
{
configureServer(new EchoHandler());
Socket client=newSocket(_serverURI.getHost(),_serverURI.getPort());
client.setSoTimeout(10000);
Assert.assertFalse(client.isClosed());
OutputStream os=client.getOutputStream();
InputStream is=client.getInputStream();
String content="Wibble";
byte[] contentB=content.getBytes("utf-8");
os.write((
"POST /echo HTTP/1.1\r\n"+
"host: "+_serverURI.getHost()+":"+_serverURI.getPort()+"\r\n"+
"content-type: text/plain; charset=utf-8\r\n"+
"content-length: "+contentB.length+"\r\n"+
"\r\n").getBytes("utf-8"));
os.flush();
long start = System.currentTimeMillis();
IO.toString(is);
Thread.sleep(sleepTime);
Assert.assertEquals(-1, is.read());
Assert.assertTrue(System.currentTimeMillis() - start > minimumTestRuntime);
Assert.assertTrue(System.currentTimeMillis() - start < maximumTestRuntime);
}
protected static class SlowResponseHandler extends AbstractHandler
{

View File

@ -18,27 +18,37 @@
package org.eclipse.jetty.server;
import static org.junit.Assert.assertTrue;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UnsupportedEncodingException;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.Locale;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.handler.AbstractHandler;
import org.eclipse.jetty.server.session.SessionHandler;
import org.eclipse.jetty.util.IO;
import org.eclipse.jetty.util.log.StacklessLogging;
import org.hamcrest.Matchers;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import static org.hamcrest.Matchers.containsString;
import static org.junit.Assert.assertTrue;
public class ServerConnectorTimeoutTest extends ConnectorTimeoutTest
{
@Before
public void init() throws Exception
{
ServerConnector connector = new ServerConnector(_server,1,1);
connector.setIdleTimeout(MAX_IDLE_TIME); // 250 msec max idle
connector.setIdleTimeout(MAX_IDLE_TIME);
startServer(connector);
}
@ -113,4 +123,49 @@ public class ServerConnectorTimeoutTest extends ConnectorTimeoutTest
return response;
}
}
@Test
public void testHttpWriteIdleTimeout() throws Exception
{
_httpConfiguration.setBlockingTimeout(500);
configureServer(new AbstractHandler()
{
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
baseRequest.setHandled(true);
IO.copy(request.getInputStream(), response.getOutputStream());
}
});
Socket client=newSocket(_serverURI.getHost(),_serverURI.getPort());
client.setSoTimeout(10000);
Assert.assertFalse(client.isClosed());
OutputStream os=client.getOutputStream();
InputStream is=client.getInputStream();
try (StacklessLogging scope = new StacklessLogging(HttpChannel.class))
{
os.write((
"POST /echo HTTP/1.0\r\n"+
"host: "+_serverURI.getHost()+":"+_serverURI.getPort()+"\r\n"+
"content-type: text/plain; charset=utf-8\r\n"+
"content-length: 20\r\n"+
"\r\n").getBytes("utf-8"));
os.flush();
os.write("123456789\n".getBytes("utf-8"));
os.flush();
Thread.sleep(1000);
os.write("=========\n".getBytes("utf-8"));
os.flush();
Thread.sleep(2000);
String response = IO.toString(is);
Assert.assertThat(response, containsString(" 500 "));
Assert.assertThat(response, Matchers.not(containsString("=========")));
}
}
}

View File

@ -1,3 +1,3 @@
org.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.StdErrLog
org.eclipse.jetty.LEVEL=INFO
#org.eclipse.jetty.LEVEL=DEBUG
#org.eclipse.jetty.server.LEVEL=DEBUG

View File

@ -60,16 +60,16 @@ public class SharedBlockingCallback
private final Condition _idle = _lock.newCondition();
private final Condition _complete = _lock.newCondition();
private Blocker _blocker = new Blocker();
protected long getIdleTimeout()
{
return -1;
}
public Blocker acquire() throws IOException
{
_lock.lock();
long idle = getIdleTimeout();
_lock.lock();
try
{
while (_blocker._state != IDLE)
@ -84,8 +84,9 @@ public class SharedBlockingCallback
_idle.await();
}
_blocker._state = null;
return _blocker;
}
catch (final InterruptedException e)
catch (InterruptedException x)
{
throw new InterruptedIOException();
}
@ -93,7 +94,6 @@ public class SharedBlockingCallback
{
_lock.unlock();
}
return _blocker;
}
protected void notComplete(Blocker blocker)
@ -161,8 +161,15 @@ public class SharedBlockingCallback
_state=cause;
_complete.signalAll();
}
else
else if (_state instanceof BlockerTimeoutException)
{
// Failure arrived late, block() already
// modified the state, nothing more to do.
}
else
{
throw new IllegalStateException(_state);
}
}
finally
{
@ -179,19 +186,24 @@ public class SharedBlockingCallback
*/
public void block() throws IOException
{
_lock.lock();
long idle = getIdleTimeout();
_lock.lock();
try
{
while (_state == null)
{
if (idle>0 && (idle < Long.MAX_VALUE/2))
if (idle > 0)
{
// Wait a little bit longer than expected callback idle timeout
if (!_complete.await(idle+idle/2,TimeUnit.MILLISECONDS))
// The callback has not arrived in sufficient time.
// We will synthesize a TimeoutException
_state=new BlockerTimeoutException();
// Waiting here may compete with the idle timeout mechanism,
// so here we wait a little bit longer to favor the normal
// idle timeout mechanism that will call failed(Throwable).
long excess = Math.min(idle / 2, 1000);
if (!_complete.await(idle + excess, TimeUnit.MILLISECONDS))
{
// Method failed(Throwable) has not been called yet,
// so we will synthesize a special TimeoutException.
_state = new BlockerTimeoutException();
}
}
else
{
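For context, SharedBlockingCallback is used by acquiring a Blocker, handing it to an asynchronous operation as its Callback, and then calling block() to wait for completion; with this change block() gives up after getIdleTimeout() plus a small margin and records a BlockerTimeoutException. Below is a minimal usage sketch; the AsyncSink interface is purely illustrative and not a Jetty API:

import java.io.IOException;
import java.nio.ByteBuffer;

import org.eclipse.jetty.util.Callback;
import org.eclipse.jetty.util.SharedBlockingCallback;
import org.eclipse.jetty.util.SharedBlockingCallback.Blocker;

class BlockingCallSketch
{
    // Hypothetical asynchronous sink, for illustration only.
    interface AsyncSink
    {
        void write(ByteBuffer buffer, Callback callback);
    }

    private final SharedBlockingCallback blocking = new SharedBlockingCallback()
    {
        @Override
        protected long getIdleTimeout()
        {
            // Like WriteBlocker above: bound how long block() may wait.
            return 5000;
        }
    };

    void blockingWrite(AsyncSink sink, ByteBuffer buffer) throws IOException
    {
        try (Blocker blocker = blocking.acquire())
        {
            sink.write(buffer, blocker); // Blocker implements Callback
            blocker.block();             // waits for succeeded()/failed(), or times out
        }
    }
}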

View File

@ -45,6 +45,7 @@ import org.eclipse.jetty.util.SocketAddressResolver;
import org.eclipse.jetty.util.ssl.SslContextFactory;
import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.junit.After;
import org.junit.Assume;
import org.junit.Rule;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
@ -61,6 +62,7 @@ public abstract class AbstractTest
@Rule
public final TestTracker tracker = new TestTracker();
protected final HttpConfiguration httpConfig = new HttpConfiguration();
protected final Transport transport;
protected SslContextFactory sslContextFactory;
protected Server server;
@ -69,6 +71,7 @@ public abstract class AbstractTest
public AbstractTest(Transport transport)
{
Assume.assumeNotNull(transport);
this.transport = transport;
}
@ -118,14 +121,13 @@ public abstract class AbstractTest
{
case HTTP:
{
result.add(new HttpConnectionFactory(new HttpConfiguration()));
result.add(new HttpConnectionFactory(httpConfig));
break;
}
case HTTPS:
{
HttpConfiguration configuration = new HttpConfiguration();
configuration.addCustomizer(new SecureRequestCustomizer());
HttpConnectionFactory http = new HttpConnectionFactory(configuration);
httpConfig.addCustomizer(new SecureRequestCustomizer());
HttpConnectionFactory http = new HttpConnectionFactory(httpConfig);
SslConnectionFactory ssl = new SslConnectionFactory(sslContextFactory, http.getProtocol());
result.add(ssl);
result.add(http);
@ -133,14 +135,13 @@ public abstract class AbstractTest
}
case H2C:
{
result.add(new HTTP2CServerConnectionFactory(new HttpConfiguration()));
result.add(new HTTP2CServerConnectionFactory(httpConfig));
break;
}
case H2:
{
HttpConfiguration configuration = new HttpConfiguration();
configuration.addCustomizer(new SecureRequestCustomizer());
HTTP2ServerConnectionFactory h2 = new HTTP2ServerConnectionFactory(configuration);
httpConfig.addCustomizer(new SecureRequestCustomizer());
HTTP2ServerConnectionFactory h2 = new HTTP2ServerConnectionFactory(httpConfig);
ALPNServerConnectionFactory alpn = new ALPNServerConnectionFactory("h2");
SslConnectionFactory ssl = new SslConnectionFactory(sslContextFactory, alpn.getProtocol());
result.add(ssl);
@ -150,7 +151,7 @@ public abstract class AbstractTest
}
case FCGI:
{
result.add(new ServerFCGIConnectionFactory(new HttpConfiguration()));
result.add(new ServerFCGIConnectionFactory(httpConfig));
break;
}
default:

View File

@ -0,0 +1,732 @@
//
// ========================================================================
// Copyright (c) 1995-2016 Mort Bay Consulting Pty. Ltd.
// ------------------------------------------------------------------------
// All rights reserved. This program and the accompanying materials
// are made available under the terms of the Eclipse Public License v1.0
// and Apache License v2.0 which accompanies this distribution.
//
// The Eclipse Public License is available at
// http://www.eclipse.org/legal/epl-v10.html
//
// The Apache License v2.0 is available at
// http://www.opensource.org/licenses/apache2.0.php
//
// You may elect to redistribute this code under either of these licenses.
// ========================================================================
//
package org.eclipse.jetty.http.client;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletException;
import javax.servlet.ServletInputStream;
import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.client.util.DeferredContentProvider;
import org.eclipse.jetty.http.BadMessageException;
import org.eclipse.jetty.http.HttpStatus;
import org.eclipse.jetty.http2.server.AbstractHTTP2ServerConnectionFactory;
import org.eclipse.jetty.server.HttpChannel;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.AbstractHandler;
import org.eclipse.jetty.util.Callback;
import org.eclipse.jetty.util.log.StacklessLogging;
import org.junit.Assert;
import org.junit.Test;
public class ServerTimeoutsTest extends AbstractTest
{
public ServerTimeoutsTest(Transport transport)
{
// Skip FCGI for now, not much interested in its server-side behavior.
super(transport == Transport.FCGI ? null : transport);
}
private void setServerIdleTimeout(long idleTimeout)
{
AbstractHTTP2ServerConnectionFactory h2 = connector.getConnectionFactory(AbstractHTTP2ServerConnectionFactory.class);
if (h2 != null)
h2.setStreamIdleTimeout(idleTimeout);
else
connector.setIdleTimeout(idleTimeout);
}
@Test
public void testDelayedDispatchRequestWithDelayedFirstContentIdleTimeoutFires() throws Exception
{
httpConfig.setDelayDispatchUntilContent(true);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new AbstractHandler()
{
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
baseRequest.setHandled(true);
handlerLatch.countDown();
}
});
long idleTimeout = 1000;
setServerIdleTimeout(idleTimeout);
CountDownLatch resultLatch = new CountDownLatch(1);
client.POST(newURI())
.content(new DeferredContentProvider())
.send(result ->
{
if (result.isFailed())
resultLatch.countDown();
});
// We did not send the content, the request was not
// dispatched, the server should have idle timed out.
Assert.assertFalse(handlerLatch.await(2 * idleTimeout, TimeUnit.MILLISECONDS));
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
@Test
public void testNoBlockingTimeoutBlockingReadIdleTimeoutFires() throws Exception
{
httpConfig.setBlockingTimeout(-1);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new BlockingReadHandler(handlerLatch));
long idleTimeout = 1000;
setServerIdleTimeout(idleTimeout);
try (StacklessLogging stackless = new StacklessLogging(HttpChannel.class))
{
DeferredContentProvider contentProvider = new DeferredContentProvider(ByteBuffer.allocate(1));
CountDownLatch resultLatch = new CountDownLatch(1);
client.POST(newURI())
.content(contentProvider)
.send(result ->
{
if (result.getResponse().getStatus() == HttpStatus.INTERNAL_SERVER_ERROR_500)
resultLatch.countDown();
});
// Blocking read should timeout.
Assert.assertTrue(handlerLatch.await(2 * idleTimeout, TimeUnit.MILLISECONDS));
// Complete the request.
contentProvider.close();
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
}
@Test
public void testBlockingTimeoutSmallerThanIdleTimeoutBlockingReadBlockingTimeoutFires() throws Exception
{
long blockingTimeout = 1000;
httpConfig.setBlockingTimeout(blockingTimeout);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new BlockingReadHandler(handlerLatch));
long idleTimeout = 3 * blockingTimeout;
setServerIdleTimeout(idleTimeout);
try (StacklessLogging stackless = new StacklessLogging(HttpChannel.class))
{
DeferredContentProvider contentProvider = new DeferredContentProvider(ByteBuffer.allocate(1));
CountDownLatch resultLatch = new CountDownLatch(1);
client.POST(newURI())
.content(contentProvider)
.send(result ->
{
if (result.getResponse().getStatus() == HttpStatus.INTERNAL_SERVER_ERROR_500)
resultLatch.countDown();
});
// Blocking read should timeout.
Assert.assertTrue(handlerLatch.await(2 * blockingTimeout, TimeUnit.MILLISECONDS));
// Complete the request.
contentProvider.close();
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
}
@Test
public void testBlockingTimeoutLargerThanIdleTimeoutBlockingReadIdleTimeoutFires() throws Exception
{
long idleTimeout = 1000;
long blockingTimeout = 3 * idleTimeout;
httpConfig.setBlockingTimeout(blockingTimeout);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new BlockingReadHandler(handlerLatch));
setServerIdleTimeout(idleTimeout);
try (StacklessLogging stackless = new StacklessLogging(HttpChannel.class))
{
DeferredContentProvider contentProvider = new DeferredContentProvider(ByteBuffer.allocate(1));
CountDownLatch resultLatch = new CountDownLatch(1);
client.POST(newURI())
.content(contentProvider)
.send(result ->
{
if (result.getResponse().getStatus() == HttpStatus.INTERNAL_SERVER_ERROR_500)
resultLatch.countDown();
});
// Blocking read should timeout.
Assert.assertTrue(handlerLatch.await(2 * idleTimeout, TimeUnit.MILLISECONDS));
// Complete the request.
contentProvider.close();
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
}
@Test
public void testNoBlockingTimeoutBlockingWriteIdleTimeoutFires() throws Exception
{
httpConfig.setBlockingTimeout(-1);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new BlockingWriteHandler(handlerLatch));
long idleTimeout = 1000;
setServerIdleTimeout(idleTimeout);
try (StacklessLogging stackless = new StacklessLogging(HttpChannel.class))
{
BlockingQueue<Callback> callbacks = new LinkedBlockingQueue<>();
CountDownLatch resultLatch = new CountDownLatch(1);
client.newRequest(newURI())
.onResponseContentAsync((response, content, callback) ->
{
// Do not succeed the callback so the server will block writing.
callbacks.offer(callback);
})
.send(result ->
{
if (result.isFailed())
resultLatch.countDown();
});
// Blocking write should timeout.
Assert.assertTrue(handlerLatch.await(2 * idleTimeout, TimeUnit.MILLISECONDS));
// After the server stopped sending, consume on the client to read the early EOF.
while (true)
{
Callback callback = callbacks.poll(1, TimeUnit.SECONDS);
if (callback == null)
break;
callback.succeeded();
}
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
}
@Test
public void testBlockingTimeoutSmallerThanIdleTimeoutBlockingWriteBlockingTimeoutFires() throws Exception
{
long blockingTimeout = 1000;
httpConfig.setBlockingTimeout(blockingTimeout);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new BlockingWriteHandler(handlerLatch));
long idleTimeout = 3 * blockingTimeout;
setServerIdleTimeout(idleTimeout);
try (StacklessLogging stackless = new StacklessLogging(HttpChannel.class))
{
BlockingQueue<Callback> callbacks = new LinkedBlockingQueue<>();
CountDownLatch resultLatch = new CountDownLatch(1);
client.newRequest(newURI())
.onResponseContentAsync((response, content, callback) ->
{
// Do not succeed the callback so the server will block writing.
callbacks.offer(callback);
})
.send(result ->
{
if (result.isFailed())
resultLatch.countDown();
});
// Blocking write should timeout.
Assert.assertTrue(handlerLatch.await(2 * blockingTimeout, TimeUnit.MILLISECONDS));
// After the server stopped sending, consume on the client to read the early EOF.
while (true)
{
Callback callback = callbacks.poll(1, TimeUnit.SECONDS);
if (callback == null)
break;
callback.succeeded();
}
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
}
@Test
public void testBlockingTimeoutLargerThanIdleTimeoutBlockingWriteIdleTimeoutFires() throws Exception
{
long idleTimeout = 1000;
long blockingTimeout = 3 * idleTimeout;
httpConfig.setBlockingTimeout(blockingTimeout);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new BlockingWriteHandler(handlerLatch));
setServerIdleTimeout(idleTimeout);
try (StacklessLogging stackless = new StacklessLogging(HttpChannel.class))
{
BlockingQueue<Callback> callbacks = new LinkedBlockingQueue<>();
CountDownLatch resultLatch = new CountDownLatch(1);
client.newRequest(newURI())
.onResponseContentAsync((response, content, callback) ->
{
// Do not succeed the callback so the server will block writing.
callbacks.offer(callback);
})
.send(result ->
{
if (result.isFailed())
resultLatch.countDown();
});
// Blocking write should timeout.
Assert.assertTrue(handlerLatch.await(2 * idleTimeout, TimeUnit.MILLISECONDS));
// After the server stopped sending, consume on the client to read the early EOF.
while (true)
{
Callback callback = callbacks.poll(1, TimeUnit.SECONDS);
if (callback == null)
break;
callback.succeeded();
}
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
}
@Test
public void testBlockingTimeoutWithSlowRead() throws Exception
{
long idleTimeout = 1000;
long blockingTimeout = 2 * idleTimeout;
httpConfig.setBlockingTimeout(blockingTimeout);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new AbstractHandler()
{
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
try
{
baseRequest.setHandled(true);
ServletInputStream input = request.getInputStream();
while (true)
{
int read = input.read();
if (read < 0)
break;
}
}
catch (IOException x)
{
handlerLatch.countDown();
throw x;
}
}
});
setServerIdleTimeout(idleTimeout);
try (StacklessLogging stackless = new StacklessLogging(HttpChannel.class))
{
DeferredContentProvider contentProvider = new DeferredContentProvider();
CountDownLatch resultLatch = new CountDownLatch(1);
client.newRequest(newURI())
.content(contentProvider)
.send(result ->
{
// Result may fail to send the whole request body,
// but the response has arrived successfully.
if (result.getResponse().getStatus() == HttpStatus.INTERNAL_SERVER_ERROR_500)
resultLatch.countDown();
});
// The writes should be slow but not trigger the idle timeout.
long period = idleTimeout / 2;
long writes = 2 * (blockingTimeout / period);
for (long i = 0; i < writes; ++i)
{
contentProvider.offer(ByteBuffer.allocate(1));
Thread.sleep(period);
}
contentProvider.close();
// Blocking read should timeout.
Assert.assertTrue(handlerLatch.await(2 * idleTimeout, TimeUnit.MILLISECONDS));
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
}
@Test
public void testAsyncReadIdleTimeoutFires() throws Exception
{
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new AbstractHandler()
{
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
baseRequest.setHandled(true);
AsyncContext asyncContext = request.startAsync();
asyncContext.setTimeout(0);
ServletInputStream input = request.getInputStream();
input.setReadListener(new ReadListener()
{
@Override
public void onDataAvailable() throws IOException
{
Assert.assertEquals(0, input.read());
Assert.assertFalse(input.isReady());
}
@Override
public void onAllDataRead() throws IOException
{
}
@Override
public void onError(Throwable failure)
{
if (failure instanceof TimeoutException)
{
response.setStatus(HttpStatus.INTERNAL_SERVER_ERROR_500);
asyncContext.complete();
handlerLatch.countDown();
}
}
});
}
});
long idleTimeout = 1000;
setServerIdleTimeout(idleTimeout);
DeferredContentProvider contentProvider = new DeferredContentProvider(ByteBuffer.allocate(1));
CountDownLatch resultLatch = new CountDownLatch(1);
client.POST(newURI())
.content(contentProvider)
.send(result ->
{
if (result.getResponse().getStatus() == HttpStatus.INTERNAL_SERVER_ERROR_500)
resultLatch.countDown();
});
// Async read should timeout.
Assert.assertTrue(handlerLatch.await(2 * idleTimeout, TimeUnit.MILLISECONDS));
// Complete the request.
contentProvider.close();
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
@Test
public void testAsyncWriteIdleTimeoutFires() throws Exception
{
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new AbstractHandler()
{
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
baseRequest.setHandled(true);
AsyncContext asyncContext = request.startAsync();
asyncContext.setTimeout(0);
ServletOutputStream output = response.getOutputStream();
output.setWriteListener(new WriteListener()
{
@Override
public void onWritePossible() throws IOException
{
output.write(new byte[64 * 1024 * 1024]);
}
@Override
public void onError(Throwable failure)
{
if (failure instanceof TimeoutException)
{
asyncContext.complete();
handlerLatch.countDown();
}
}
});
}
});
long idleTimeout = 1000;
setServerIdleTimeout(idleTimeout);
BlockingQueue<Callback> callbacks = new LinkedBlockingQueue<>();
CountDownLatch resultLatch = new CountDownLatch(1);
client.newRequest(newURI())
.onResponseContentAsync((response, content, callback) ->
{
// Do not succeed the callback so the server will block writing.
callbacks.offer(callback);
})
.send(result ->
{
if (result.isFailed())
resultLatch.countDown();
});
// Async write should timeout.
Assert.assertTrue(handlerLatch.await(2 * idleTimeout, TimeUnit.MILLISECONDS));
// After the server stopped sending, consume on the client to read the early EOF.
while (true)
{
Callback callback = callbacks.poll(1, TimeUnit.SECONDS);
if (callback == null)
break;
callback.succeeded();
}
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
@Test
public void testBlockingReadWithMinimumDataRateBelowLimit() throws Exception
{
int bytesPerSecond = 20;
httpConfig.setMinRequestDataRate(bytesPerSecond);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new AbstractHandler()
{
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
try
{
baseRequest.setHandled(true);
ServletInputStream input = request.getInputStream();
while (true)
{
int read = input.read();
if (read < 0)
break;
}
}
catch (BadMessageException x)
{
handlerLatch.countDown();
throw x;
}
}
});
DeferredContentProvider contentProvider = new DeferredContentProvider();
CountDownLatch resultLatch = new CountDownLatch(1);
client.newRequest(newURI())
.content(contentProvider)
.send(result ->
{
if (result.getResponse().getStatus() == HttpStatus.REQUEST_TIMEOUT_408)
resultLatch.countDown();
});
for (int i = 0; i < 3; ++i)
{
contentProvider.offer(ByteBuffer.allocate(bytesPerSecond / 2));
Thread.sleep(1000);
}
contentProvider.close();
// Request should timeout.
Assert.assertTrue(handlerLatch.await(5, TimeUnit.SECONDS));
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
@Test
public void testBlockingReadWithMinimumDataRateAboveLimit() throws Exception
{
int bytesPerSecond = 20;
httpConfig.setMinRequestDataRate(bytesPerSecond);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new AbstractHandler()
{
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
baseRequest.setHandled(true);
ServletInputStream input = request.getInputStream();
while (true)
{
int read = input.read();
if (read < 0)
break;
}
handlerLatch.countDown();
}
});
DeferredContentProvider contentProvider = new DeferredContentProvider();
CountDownLatch resultLatch = new CountDownLatch(1);
client.newRequest(newURI())
.content(contentProvider)
.send(result ->
{
if (result.getResponse().getStatus() == HttpStatus.OK_200)
resultLatch.countDown();
});
for (int i = 0; i < 3; ++i)
{
contentProvider.offer(ByteBuffer.allocate(bytesPerSecond * 2));
Thread.sleep(1000);
}
contentProvider.close();
Assert.assertTrue(handlerLatch.await(5, TimeUnit.SECONDS));
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
@Test
public void testBlockingReadHttpIdleTimeoutOverridesIdleTimeout() throws Exception
{
long httpIdleTimeout = 1000;
long idleTimeout = 3 * httpIdleTimeout;
httpConfig.setIdleTimeout(httpIdleTimeout);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new BlockingReadHandler(handlerLatch));
setServerIdleTimeout(idleTimeout);
try (StacklessLogging stackless = new StacklessLogging(HttpChannel.class))
{
DeferredContentProvider contentProvider = new DeferredContentProvider(ByteBuffer.allocate(1));
CountDownLatch resultLatch = new CountDownLatch(1);
client.POST(newURI())
.content(contentProvider)
.send(result ->
{
if (result.getResponse().getStatus() == HttpStatus.INTERNAL_SERVER_ERROR_500)
resultLatch.countDown();
});
// Blocking read should timeout.
Assert.assertTrue(handlerLatch.await(2 * httpIdleTimeout, TimeUnit.MILLISECONDS));
// Complete the request.
contentProvider.close();
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
}
@Test
public void testAsyncReadHttpIdleTimeoutOverridesIdleTimeout() throws Exception
{
long httpIdleTimeout = 1000;
long idleTimeout = 3 * httpIdleTimeout;
httpConfig.setIdleTimeout(httpIdleTimeout);
CountDownLatch handlerLatch = new CountDownLatch(1);
start(new AbstractHandler()
{
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
baseRequest.setHandled(true);
AsyncContext asyncContext = request.startAsync();
asyncContext.setTimeout(0);
ServletInputStream input = request.getInputStream();
input.setReadListener(new ReadListener()
{
@Override
public void onDataAvailable() throws IOException
{
Assert.assertEquals(0, input.read());
Assert.assertFalse(input.isReady());
}
@Override
public void onAllDataRead() throws IOException
{
}
@Override
public void onError(Throwable failure)
{
if (failure instanceof TimeoutException)
{
response.setStatus(HttpStatus.INTERNAL_SERVER_ERROR_500);
asyncContext.complete();
handlerLatch.countDown();
}
}
});
}
});
setServerIdleTimeout(idleTimeout);
DeferredContentProvider contentProvider = new DeferredContentProvider(ByteBuffer.allocate(1));
CountDownLatch resultLatch = new CountDownLatch(1);
client.POST(newURI())
.content(contentProvider)
.send(result ->
{
if (result.getResponse().getStatus() == HttpStatus.INTERNAL_SERVER_ERROR_500)
resultLatch.countDown();
});
// Async read should timeout.
Assert.assertTrue(handlerLatch.await(2 * idleTimeout, TimeUnit.MILLISECONDS));
// Complete the request.
contentProvider.close();
Assert.assertTrue(resultLatch.await(5, TimeUnit.SECONDS));
}
private static class BlockingReadHandler extends AbstractHandler
{
private final CountDownLatch handlerLatch;
public BlockingReadHandler(CountDownLatch handlerLatch)
{
this.handlerLatch = handlerLatch;
}
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
baseRequest.setHandled(true);
ServletInputStream input = request.getInputStream();
Assert.assertEquals(0, input.read());
try
{
input.read();
}
catch (IOException x)
{
handlerLatch.countDown();
throw x;
}
}
}
private static class BlockingWriteHandler extends AbstractHandler
{
private final CountDownLatch handlerLatch;
private BlockingWriteHandler(CountDownLatch handlerLatch)
{
this.handlerLatch = handlerLatch;
}
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
baseRequest.setHandled(true);
ServletOutputStream output = response.getOutputStream();
try
{
output.write(new byte[64 * 1024 * 1024]);
}
catch (IOException x)
{
handlerLatch.countDown();
throw x;
}
}
}
}
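Besides the blocking timeout, these tests exercise two other HttpConfiguration settings: setMinRequestDataRate(), which aborts requests whose body arrives too slowly (seen above as a BadMessageException and a 408 response), and setIdleTimeout(), which, as the *HttpIdleTimeoutOverridesIdleTimeout tests show, takes precedence over the connector or stream idle timeout. A minimal configuration sketch follows; the values are arbitrary and not taken from this commit:

import org.eclipse.jetty.server.HttpConfiguration;

class RequestLimitsSketch
{
    static HttpConfiguration requestLimits()
    {
        HttpConfiguration httpConfig = new HttpConfiguration();
        // Reject request bodies arriving slower than 20 bytes/second
        // (surfaces as BadMessageException / 408, as in the tests above).
        httpConfig.setMinRequestDataRate(20);
        // HTTP-level idle timeout; the *HttpIdleTimeoutOverridesIdleTimeout tests
        // show it takes precedence over the connector/stream idle timeout.
        httpConfig.setIdleTimeout(10000);
        return httpConfig;
    }
}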