HTTP Caching
General Concepts

HttpClient Cache provides an HTTP/1.1-compliant caching layer to be used with HttpClient, the Java equivalent of a browser cache. The implementation follows the Decorator design pattern, where the CachingHttpClient class is a drop-in replacement for a DefaultHttpClient; requests that can be satisfied entirely from the cache will not result in actual origin requests. Stale cache entries are automatically validated with the origin where possible, using conditional GETs and the If-Modified-Since and/or If-None-Match request headers.

HTTP/1.1 caching in general is designed to be semantically transparent; that is, a cache should not change the meaning of the request-response exchange between client and server. As such, it should be safe to drop a CachingHttpClient into an existing compliant client-server relationship. Although the caching module is part of the client from an HTTP protocol point of view, the implementation aims to be compatible with the requirements placed on a transparent caching proxy.

Finally, CachingHttpClient includes support for the Cache-Control extensions specified by RFC 5861 (stale-if-error and stale-while-revalidate).

When CachingHttpClient executes a request, it goes through the following flow:

1. Check the request for basic compliance with the HTTP/1.1 protocol and attempt to correct the request.
2. Flush any cache entries which would be invalidated by this request.
3. Determine if the current request would be servable from cache. If not, pass the request directly through to the origin server and return the response, after caching it if appropriate.
4. If the request was cache-servable, attempt to read it from the cache. If it is not in the cache, call the origin server and cache the response, if appropriate.
5. If the cached response is suitable to be served as a response, construct a BasicHttpResponse containing a ByteArrayEntity and return it. Otherwise, attempt to revalidate the cache entry against the origin server.
6. If the cached response cannot be revalidated, call the origin server and cache the response, if appropriate.

When CachingHttpClient receives a response, it goes through the following flow:

1. Examine the response for protocol compliance.
2. Determine whether the response is cacheable.
3. If the response is cacheable, attempt to read up to the maximum size allowed by the configuration and store it in the cache.
4. If the response is too large for the cache, reconstruct the partially consumed response and return it directly without caching it.

It is important to note that CachingHttpClient is not, itself, an implementation of HttpClient, but that it decorates an instance of an HttpClient implementation. If you do not provide an implementation, it will use DefaultHttpClient internally by default.
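To make this flow concrete, the sketch below shows how a caller can observe whether a given response was served from cache, revalidated, or fetched from the origin. It assumes the 4.x caching module API, in which CachingHttpClient records a CacheResponseStatus value in the execution context under the CachingHttpClient.CACHE_RESPONSE_STATUS attribute; the request URL is a placeholder.

    import org.apache.http.HttpResponse;
    import org.apache.http.client.cache.CacheResponseStatus;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.cache.CachingHttpClient;
    import org.apache.http.protocol.BasicHttpContext;
    import org.apache.http.protocol.HttpContext;
    import org.apache.http.util.EntityUtils;

    public class CacheAwareRequest {
        public static void main(String[] args) throws Exception {
            // No backend supplied, so CachingHttpClient wraps a DefaultHttpClient internally.
            CachingHttpClient cachingClient = new CachingHttpClient();

            HttpContext localContext = new BasicHttpContext();
            HttpGet httpget = new HttpGet("http://www.example.com/content/");
            HttpResponse response = cachingClient.execute(httpget, localContext);

            // Consume the body so the underlying connection can be released.
            EntityUtils.consume(response.getEntity());

            // The caching module records how the request was satisfied in the context.
            CacheResponseStatus status = (CacheResponseStatus) localContext.getAttribute(
                    CachingHttpClient.CACHE_RESPONSE_STATUS);
            if (status != null) {
                switch (status) {
                case CACHE_HIT:
                    System.out.println("Served from cache; no origin request was made");
                    break;
                case VALIDATED:
                    System.out.println("Stale entry revalidated against the origin (conditional GET)");
                    break;
                case CACHE_MODULE_RESPONSE:
                    System.out.println("Response generated by the caching module itself");
                    break;
                case CACHE_MISS:
                    System.out.println("Response came directly from the origin server");
                    break;
                }
            }
        }
    }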
RFC-2616 Compliance

HttpClient Cache makes an effort to be at least conditionally compliant with RFC 2616. That is, wherever the specification indicates MUST or MUST NOT for HTTP caches, the caching layer attempts to behave in a way that satisfies those requirements. This means the caching module won't produce incorrect behavior when you drop it in. At the same time, the project is continuing to work on unconditional compliance, which would add compliance with all the SHOULDs and SHOULD NOTs, many of which we already comply with. We just can't claim fully unconditional compliance until we satisfy all of them.
Example Usage

This is a simple example of how to set up a basic CachingHttpClient. As configured, it will store a maximum of 1000 cached objects, each of which may have a maximum body size of 8192 bytes. The numbers selected here are for example only and not intended to be prescriptive or considered as recommendations.
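A minimal sketch of such a setup is shown below, assuming the 4.x API. Note that the maximum-object-size setter is named setMaxObjectSizeBytes in the 4.1 release and may be named setMaxObjectSize in later ones; the request URL is a placeholder.

    import org.apache.http.HttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.DefaultHttpClient;
    import org.apache.http.impl.client.cache.CacheConfig;
    import org.apache.http.impl.client.cache.CachingHttpClient;
    import org.apache.http.util.EntityUtils;

    public class BasicCachingClientExample {
        public static void main(String[] args) throws Exception {
            CacheConfig cacheConfig = new CacheConfig();
            cacheConfig.setMaxCacheEntries(1000);      // at most 1000 cached objects
            cacheConfig.setMaxObjectSizeBytes(8192);   // at most 8192 bytes per cached body

            // Decorate a stock DefaultHttpClient with the caching layer.
            CachingHttpClient cachingClient =
                    new CachingHttpClient(new DefaultHttpClient(), cacheConfig);

            HttpGet httpget = new HttpGet("http://www.example.com/content/");
            HttpResponse response = cachingClient.execute(httpget);

            // Consume the body so the underlying connection can be reused.
            EntityUtils.consume(response.getEntity());
        }
    }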
Configuration

As the CachingHttpClient is a decorator, much of the configuration you may want to do can be done on the HttpClient used as the "backend" by the CachingHttpClient (this includes setting options like timeouts and connection pool sizes). For caching-specific configuration, you can provide a CacheConfig instance to customize behavior across the following areas:

- Cache size. If the backend storage supports these limits, you can specify the maximum number of cache entries as well as the maximum cacheable response body size.

- Public/private caching. By default, the caching module considers itself to be a shared (public) cache, and will not, for example, cache responses to requests with Authorization headers or responses marked with "Cache-Control: private". If, however, the cache is only going to be used by one logical "user" (behaving similarly to a browser cache), then you will want to turn off the shared cache setting.

- Heuristic caching. Per RFC 2616, a cache MAY cache certain cache entries even if no explicit cache control headers are set by the origin. This behavior is off by default, but you may want to turn it on if you are working with an origin that doesn't set proper headers but where you still want to cache the responses. To do so, enable heuristic caching and then specify a default freshness lifetime and/or a fraction of the time since the resource was last modified. See Sections 13.2.2 and 13.2.4 of the HTTP/1.1 RFC for more details on heuristic caching.

- Background validation. The cache module supports the stale-while-revalidate directive of RFC 5861, which allows certain cache entry revalidations to happen in the background. You may want to tweak the settings for the minimum and maximum number of background worker threads, as well as the maximum time they can be idle before being reclaimed. You can also control the size of the queue used for revalidations when there aren't enough workers to keep up with demand.
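The sketch below touches each of these areas in a single CacheConfig. It assumes the setter names from the 4.1-era CacheConfig (for example setSharedCache, setHeuristicCachingEnabled, and the asynchronous-worker settings), which may differ slightly in other releases; the values are illustrative, not recommendations.

    import org.apache.http.impl.client.DefaultHttpClient;
    import org.apache.http.impl.client.cache.CacheConfig;
    import org.apache.http.impl.client.cache.CachingHttpClient;

    public class TunedCacheConfig {

        public static CachingHttpClient createCachingClient() {
            CacheConfig cacheConfig = new CacheConfig();

            // Cache size (honored if the storage backend supports these limits).
            cacheConfig.setMaxCacheEntries(5000);
            cacheConfig.setMaxObjectSizeBytes(16384);

            // Behave as a private, single-user (browser-style) cache rather than a shared one.
            cacheConfig.setSharedCache(false);

            // Heuristic caching for origins that send no explicit freshness information.
            cacheConfig.setHeuristicCachingEnabled(true);
            cacheConfig.setHeuristicDefaultLifetime(3600);   // default freshness lifetime, in seconds
            cacheConfig.setHeuristicCoefficient(0.1f);       // fraction of time since Last-Modified

            // Background (stale-while-revalidate) revalidation workers.
            cacheConfig.setAsynchronousWorkersCore(1);
            cacheConfig.setAsynchronousWorkersMax(4);
            cacheConfig.setAsynchronousWorkerIdleLifetimeSecs(60);
            cacheConfig.setRevalidationQueueSize(100);

            return new CachingHttpClient(new DefaultHttpClient(), cacheConfig);
        }
    }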
Storage Backends

The default implementation of CachingHttpClient stores cache entries and cached response bodies in memory in the JVM of your application. While this offers high performance, it may not be appropriate for your application due to the limitation on size, or because cache entries are ephemeral and don't survive an application restart. The current release includes support for storing cache entries using Ehcache and memcached implementations, which allow for spilling cache entries to disk or storing them in an external process.

If none of those options are suitable for your application, it is possible to provide your own storage backend by implementing the HttpCacheStorage interface and then supplying that to CachingHttpClient at construction time. In this case, the cache entries will be stored using your scheme, but you will get to reuse all of the logic surrounding HTTP/1.1 compliance and cache handling. Generally speaking, it should be possible to create an HttpCacheStorage implementation out of anything that supports a key/value store (similar to the Java Map interface) with the ability to apply atomic updates.

Finally, because the CachingHttpClient is a decorator for HttpClient, it's entirely possible to set up a multi-tier caching hierarchy; for example, wrapping an in-memory CachingHttpClient around one that stores cache entries on disk or remotely in memcached, following a pattern similar to virtual memory, L1/L2 processor caches, etc.
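As a rough illustration of how small the storage contract is, the sketch below implements HttpCacheStorage on top of a ConcurrentHashMap, assuming the interface as it appears in the 4.1 series (putEntry, getEntry, removeEntry, and updateEntry with an HttpCacheUpdateCallback). It is a toy for illustration only, not a substitute for the provided in-memory, Ehcache, or memcached backends.

    import java.io.IOException;
    import java.util.concurrent.ConcurrentHashMap;

    import org.apache.http.client.cache.HttpCacheEntry;
    import org.apache.http.client.cache.HttpCacheStorage;
    import org.apache.http.client.cache.HttpCacheUpdateCallback;

    public class MapCacheStorage implements HttpCacheStorage {

        private final ConcurrentHashMap<String, HttpCacheEntry> entries =
                new ConcurrentHashMap<String, HttpCacheEntry>();

        public void putEntry(String key, HttpCacheEntry entry) throws IOException {
            entries.put(key, entry);
        }

        public HttpCacheEntry getEntry(String key) throws IOException {
            return entries.get(key);
        }

        public void removeEntry(String key) throws IOException {
            entries.remove(key);
        }

        // Applies the caller-supplied update atomically with respect to this store.
        public void updateEntry(String key, HttpCacheUpdateCallback callback)
                throws IOException {
            synchronized (entries) {
                HttpCacheEntry existing = entries.get(key);
                HttpCacheEntry updated = callback.update(existing);
                if (updated == null) {
                    // Treat a null result as a removal to avoid putting null into the map.
                    entries.remove(key);
                } else {
                    entries.put(key, updated);
                }
            }
        }
    }

Such a backend would then be supplied at construction time, for instance via new CachingHttpClient(new DefaultHttpClient(), new MapCacheStorage(), cacheConfig); and because the "backend" client handed to one CachingHttpClient can itself be another CachingHttpClient backed by a different store, the same constructor form is what enables the multi-tier arrangement described above.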