Improving performance
Understanding Collection performance
We've already spent quite some time talking about collections.
In this section we will highlight a couple more issues about
how collections behave at runtime.
Taxonomy
Hibernate defines three basic kinds of collections:
collections of values
one to many associations
many to many associations
This classification distinguishes the various table and foreign key
relationships but does not tell us quite everything we need to know
about the relational model. To fully understand the relational structure
and performance characteristics, we must also consider the structure of
the primary key that is used by Hibernate to update or delete collection
rows. This suggests the following classification:
indexed collections
sets
bags
All indexed collections (maps, lists, arrays) have a primary key consisting
of the <key> and <index>
columns. In this case collection updates are usually extremely efficient -
the primary key may be efficiently indexed and a particular row may be efficiently
located when Hibernate tries to update or delete it.
Sets have a primary key consisting of <key> and element
columns. This may be less efficient for some types of collection element, particularly
composite elements or large text or binary fields; the database may not be able to index
a complex primary key as efficiently. On the other hand, for one to many or many to many
associations, particularly in the case of synthetic identifiers, it is likely to be just
as efficient. (Side-note: if you want SchemaExport to actually create
the primary key of a <set> for you, you must declare all columns
as not-null="true".)
<idbag> mappings define a surrogate key, so they are
always very efficient to update. In fact, they are the best case.
Bags are the worst case. Since a bag permits duplicate element values and has no
index column, no primary key may be defined. Hibernate has no way of distinguishing
between duplicate rows. Hibernate resolves this problem by completely removing
(in a single DELETE) and recreating the collection whenever it
changes. This might be very inefficient.
Note that for a one-to-many association, the "primary key" may not be the physical
primary key of the database table - but even in this case, the above classification
is still useful. (It still reflects how Hibernate "locates" individual rows of the
collection.)
Lists, maps, idbags and sets are the most efficient collections to update
From the discussion above, it should be clear that indexed collections
and (usually) sets allow the most efficient operation in terms of adding,
removing and updating elements.
There is, arguably, one more advantage that indexed collections have over sets for
many to many associations or collections of values. Because of the structure of a
Set, Hibernate doesn't ever UPDATE a row when
an element is "changed". Changes to a Set always work via
INSERT and DELETE (of individual rows). Once
again, this consideration does not apply to one to many associations.
After observing that arrays cannot be lazy, we would conclude that lists, maps and
idbags are the most performant (non-inverse) collection types, with sets not far
behind. Sets are expected to be the most common kind of collection in Hibernate
applications. This is because the "set" semantics are most natural in the relational
model.
However, in well-designed Hibernate domain models, we usually see that most collections
are in fact one-to-many associations with inverse="true". For these
associations, the update is handled by the many-to-one end of the association, and so
considerations of collection update performance simply do not apply.
Bags and lists are the most efficient inverse collections
Just before you ditch bags forever, there is a particular case in which bags (and also lists)
are much more performant than sets. For a collection with inverse="true"
(the standard bidirectional one-to-many relationship idiom, for example) we can add elements
to a bag or list without needing to initialize (fetch) the bag elements! This is because
Collection.add() or Collection.addAll() must always
return true for a bag or List (unlike a Set). This can
make the following common code much faster.
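The return-value semantics behind this claim can be seen in plain java.util collections; the Hibernate-specific consequence is that no SELECT is needed before the add. A minimal, self-contained sketch:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AddSemantics {
    public static void main(String[] args) {
        // A List (or bag) reports success without inspecting its contents,
        // so the existing elements never need to be loaded.
        List<String> children = new ArrayList<String>();
        System.out.println( children.add("kitten") );  // true
        System.out.println( children.add("kitten") );  // true again - duplicates allowed

        // A Set must check for duplicates, which is what forces Hibernate
        // to initialize (fetch) a mapped set before adding to it.
        Set<String> kittens = new HashSet<String>();
        System.out.println( kittens.add("kitten") );   // true
        System.out.println( kittens.add("kitten") );   // false - already present
    }
}
```

In Hibernate terms, calling parent.getChildren().add(child) on an inverse bag or list therefore does not trigger initialization of the collection.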
One shot delete
Occasionally, deleting collection elements one by one can be extremely inefficient. Hibernate
isn't completely stupid, so it knows not to do that in the case of a newly-empty collection
(if you called list.clear(), for example). In this case, Hibernate will
issue a single DELETE and we are done!
Suppose we add a single element to a collection of size twenty and then remove two elements.
Hibernate will issue one INSERT statement and two DELETE
statements (unless the collection is a bag). This is certainly desirable.
However, suppose that we remove eighteen elements, leaving two, and then add three new elements.
There are two possible ways to proceed:
delete eighteen rows one by one and then insert three rows
remove the whole collection (in one SQL DELETE) and insert
all five current elements (one by one)
Hibernate isn't smart enough to know that the second option is probably quicker in this case.
(And it would probably be undesirable for Hibernate to be that smart; such behaviour might
confuse database triggers, etc.)
Fortunately, you can force this behaviour (ie. the second strategy) at any time by discarding
(ie. dereferencing) the original collection and returning a newly instantiated collection with
all the current elements. This can be very useful and powerful from time to time.
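A sketch of this dereferencing idiom (the Parent/Child names and helper variables are illustrative, not part of any Hibernate API):

```java
// Replace the mapped collection wholesale instead of removing
// elements one by one; Hibernate then performs a one-shot delete.
Parent parent = (Parent) session.load(Parent.class, id);
List newChildren = new ArrayList();
newChildren.addAll(survivingChildren);   // the two elements we keep
newChildren.addAll(freshChildren);       // plus the three new elements
parent.setChildren(newChildren);         // dereference the original collection
session.flush();                         // one DELETE, then row-by-row INSERTs
```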
Of course, one-shot-delete does not apply to collections mapped inverse="true".
Fetching strategies
A fetching strategy describes the number of instances, the depth of the
subgraph of instances, and the SQL SELECTs that are used
to retrieve these instances. Hibernate supports several strategies and you
can configure them on a global level, per entity class, per association, or
even for a particular query in HQL and with Criteria.
Hibernate offers the following fetching strategies:
Lazy fetching - an associated instance (or a
collection) will only be loaded when needed, using an additional
deferred SELECT.
Batch fetching - an optimization strategy
for lazy fetching, Hibernate not only retrieves a single instance
(or collection), but several in the same SELECT.
Eager fetching - Hibernate retrieves the
associated instance (or collection) in the same SELECT,
using an OUTER JOIN.
Select fetching - a second SELECT
is used to retrieve the associated instance (or collection), but
it might be executed immediately and not deferred until first access
(as with lazy fetching).
By default, Hibernate3 will only load the given entity using a single
SELECT statement if you retrieve an object with
load() or get(). This means that
all single-ended associations and collections are set for lazy fetching
by default. You can change this global default by setting the
default-lazy attribute on the hibernate-mapping
element to false.
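For example (a sketch of the mapping-file setting just described):

```xml
<hibernate-mapping default-lazy="false">
    ...
</hibernate-mapping>
```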
We'll now have a closer look at the individual fetching strategies and how
to change them for single-ended associations and collections.
Collection fetching
Initialization of collections owned by persistent instances happens transparently
to the user, so the application would not normally need to worry about this (in
fact, transparent lazy initialization is the main reason why Hibernate needs its
own collection implementations). However, if the application tries something like
this:
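The problematic code looks something like this (a sketch; the User/Permission classes and the query are illustrative):

```java
Session s = sessions.openSession();
Transaction tx = s.beginTransaction();
User u = (User) s.createQuery("from User u where u.name = :name")
                 .setString("name", userName)
                 .uniqueResult();
Map permissions = u.getPermissions();   // a collection proxy, not yet loaded
tx.commit();
s.close();

Integer accessLevel = (Integer) permissions.get("accounts");  // Error!
```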
It could be in for a nasty surprise. Since the permissions collection was not
initialized when the Session was closed, the collection
will not be able to load its state. Hibernate does not support lazy
initialization for detached objects. The fix is to move the
line that reads from the collection to just before the commit. (There are
other more advanced ways to solve this problem, some are discussed later.)
It's possible to use a non-lazy collection. However, it is intended that lazy
initialization be used for almost all collections, especially for collections
of entity references (it's the default). If you define too many non-lazy associations
in your object model, Hibernate will end up needing to fetch the entire database
into memory in every transaction! Still, sometimes you want to use an additional
SELECT for a particular collection right away, not deferred
until the first access happens:
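This is done with fetch="select" in the collection mapping, sketched here for the User/Permission example:

```xml
<set name="permissions" fetch="select">
    <key column="userId"/>
    <one-to-many class="Permission"/>
</set>
```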
Hibernate will now execute an immediate second SELECT loading
the collection of Permission instances, when a particular
User is retrieved.
Any kind of lazy fetching (and also Select fetching) is extremely vulnerable to
N+1 selects problems. So usually, we choose lazy fetching only as a default
strategy, and override it for a particular transaction, using the HQL
LEFT JOIN FETCH clause. This tells Hibernate to fetch the
association eagerly in the first select, using an outer join. In the
Criteria API, you would use
setFetchMode(FetchMode.EAGER).
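For example (sketches; the entity and association names are illustrative):

```java
// HQL: fetch the permissions eagerly for this query only
List users = session.createQuery(
        "from User u left join fetch u.permissions where u.name = :name")
    .setString("name", userName)
    .list();

// Criteria equivalent
List users2 = session.createCriteria(User.class)
    .setFetchMode("permissions", FetchMode.EAGER)
    .list();
```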
You can always force outer join association fetching in the mapping file, by setting
fetch="join" (or use the old outer-join="true"
syntax). We don't recommend this setting, especially not for collections, since it is
incredibly rare to find an entity which is always used when
an associated entity is used, at least in a sufficiently large system.
Eager fetching for collections has another restriction: you may only set one
collection role per persistent class to be fetched per outer join. Hibernate forbids
Cartesian products where possible; SELECTing two collections in one
outer join would create one. This would almost always be slower than two (lazy or
non-deferred) SELECTs. The restriction to a single outer-joined
collection applies to both the mapping fetching strategies and to HQL/Criteria queries.
Single-ended association proxies
Lazy fetching for collections is implemented using Hibernate's own implementation
of persistent collections. However, a different mechanism is needed for lazy
behavior in single-ended associations. The target entity of the association must
be proxied. Hibernate implements lazy initializing proxies for persistent objects
using runtime bytecode enhancement (via the excellent CGLIB library).
By default, Hibernate3 generates proxies (at startup) for all persistent classes
and uses them to enable lazy fetching of many-to-one and
one-to-one associations.
The mapping file may declare an interface to use as the proxy interface for that
class, with the proxy attribute. By default, Hibernate uses a subclass
of the class. Note that the proxied class must implement a default constructor
with at least package visibility. We recommend this constructor for all persistent classes!
There are some gotchas to be aware of when extending this approach to polymorphic
classes, eg.
<class name="Cat" proxy="Cat">
    ......
    <subclass name="DomesticCat">
        .....
    </subclass>
</class>
Firstly, instances of Cat will never be castable to
DomesticCat, even if the underlying instance is an
instance of DomesticCat:
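For example (a sketch, following the Cat/DomesticCat mapping above; isDomesticCat() is an assumed helper on Cat):

```java
Cat cat = (Cat) session.load(Cat.class, id);   // instantiate a proxy (does not hit the database)
if ( cat.isDomesticCat() ) {                   // hit the database to initialize the proxy
    DomesticCat dc = (DomesticCat) cat;        // ClassCastException!
    ....
}
```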
Secondly, it is possible to break proxy ==.
However, the situation is not quite as bad as it looks. Even though we now have two references
to different proxy objects, the underlying instance will still be the same object:
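A sketch of both points:

```java
Cat cat = (Cat) session.load(Cat.class, id);             // instantiate a Cat proxy
DomesticCat dc =
    (DomesticCat) session.load(DomesticCat.class, id);   // a new DomesticCat proxy is required!
System.out.println(cat == dc);                           // false

// ...but both proxies delegate to the same underlying instance:
cat.setWeight(11.0);                    // hit the database to initialize the proxy
System.out.println( dc.getWeight() );   // 11.0
```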
Third, you may not use a CGLIB proxy for a final class or a class
with any final methods.
Finally, if your persistent object acquires any resources upon instantiation (eg. in
initializers or default constructor), then those resources will also be acquired by
the proxy. The proxy class is an actual subclass of the persistent class.
These problems are all due to fundamental limitations in Java's single inheritance model.
If you wish to avoid these problems your persistent classes must each implement an interface
that declares its business methods. You should specify these interfaces in the mapping file. eg.
<class name="CatImpl" proxy="ICat">
    ......
    <subclass name="DomesticCatImpl" proxy="IDomesticCat">
        .....
    </subclass>
</class>
where CatImpl implements the interface ICat and
DomesticCatImpl implements the interface IDomesticCat. Then
proxies for instances of Cat and DomesticCat may be returned
by load() or iterate(). (Note that find()
does not usually return proxies.)
Relationships are also lazily initialized. This means you must declare any properties to be of
type Cat, not CatImpl.
Certain operations do not require proxy initialization
equals(), if the persistent class does not override
equals()
hashCode(), if the persistent class does not override
hashCode()
The identifier getter method
Hibernate will detect persistent classes that override equals() or
hashCode().
You may of course also use Eager or Select fetching strategies for single-ended
associations:
<class name="Cat">
    ...
    <many-to-one name="mother" class="Cat" fetch="join"/>
    <many-to-one name="father" class="Cat" fetch="select"/>
</class>
The first mapping tells Hibernate to fetch the associated mother
entity in the same initial SELECT using an OUTER JOIN.
You can set this option on as many *-to-one associations as you like; there is no
danger of creating a Cartesian product (as opposed to collections). Note that you can
set the maximum depth of outer joined tables with the global configuration option
max_fetch_depth.
The second mapping enables an additional SELECT for the
retrieval of the father. Note that Hibernate does not guarantee
when this query will be executed. If it should be executed
immediately (right after the initial SELECT), disable proxying
on the target of the association by setting it to lazy="false":
<class name="Cat" lazy="false">
    ...
</class>
(Note that this example uses only a single persistent class Cat
and self-referencing associations. This doesn't change the fetching behavior, as expected.)
Initializing collections and proxies
An exception (LazyInitializationException) will be thrown by
Hibernate if an uninitialized collection or proxy is accessed outside of the scope
of the Session, ie. when the entity owning the collection or
having the reference to the proxy is in detached state.
Sometimes we need to ensure that a proxy or collection is initialized before closing the
Session. Of course, we can always force initialization by calling
cat.getSex() or cat.getKittens().size(), for example.
But that is confusing to readers of the code and is not convenient for generic code.
The static methods Hibernate.initialize() and Hibernate.isInitialized()
provide the application with a convenient way of working with lazily initialized collections or
proxies. Hibernate.initialize(cat) will force the initialization of a proxy,
cat, as long as its Session is still open.
Hibernate.initialize( cat.getKittens() ) has a similar effect for the collection
of kittens.
Another option is to keep the Session open until all needed
collections and proxies have been loaded. In some application architectures,
particularly where the code that accesses data using Hibernate and the code that
uses it are in different application layers, it can be a problem to ensure that the
Session is open when a collection is initialized. There are
two basic ways to deal with this issue:
In a web-based application, a servlet filter can be used to close the
Session only at the very end of a user request, once
the rendering of the view is complete (the Open Session in
View pattern). Of course, this places heavy
demands on the correctness of the exception handling of your application
infrastructure. It is vitally important that the Session
is closed and the transaction ended before returning to the user, even
when an exception occurs during rendering of the view. The servlet filter
has to be able to access the Session for this approach.
We recommend that a ThreadLocal variable be used to
hold the current Session (see chapter 1 for an example
implementation).
In an application with a separate business tier, the business logic must
"prepare" all collections that will be needed by the web tier before
returning. This means that the business tier should load all the data and
return all the data already initialized to the presentation/web tier that
is required for a particular use case. Usually, the application calls
Hibernate.initialize() for each collection that will
be needed in the web tier (this call must occur before the session is closed)
or retrieves the collection eagerly using a Hibernate query with a
FETCH clause or a FetchMode.JOIN in
Criteria. This is usually easier if you adopt the
Command pattern instead of a Session Facade.
You may also attach a previously loaded object to a new Session
with merge() or lock() before
accessing uninitialized collections (or other proxies). Hibernate cannot
do this automatically, as it would introduce ad hoc transaction semantics!
Sometimes you don't want to initialize a large collection, but still need some
information about it (like its size) or a subset of the data.
You can use a collection filter to get the size of a collection without initializing it:
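For example (a sketch using the Session API; collection is a lazy mapped collection owned by an entity still associated with the open session):

```java
// counts the collection's rows in the database, without initializing it
Integer count = (Integer) s.createFilter( collection, "select count(*)" )
    .list()
    .get(0);
```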
The createFilter() method is also used to efficiently retrieve subsets
of a collection without needing to initialize the whole collection:
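For example, retrieving just the first ten elements (a sketch; an empty filter query string returns the elements themselves):

```java
List subset = s.createFilter( lazyCollection, "" )
    .setFirstResult(0)
    .setMaxResults(10)
    .list();
```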
Using batch fetching
Hibernate can make efficient use of batch fetching: that is, Hibernate can load several uninitialized
proxies (or collections) if one proxy is accessed. Batch fetching is an optimization of the lazy
loading strategy. There are two ways you can tune batch fetching: at the class level and at the collection level.
Batch fetching for classes/entities is easier to understand. Imagine you have the following situation
at runtime: You have 25 Cat instances loaded in a Session, each
Cat has a reference to its owner, a Person.
The Person class is mapped with a proxy, lazy="true". If you now
iterate through all cats and call getOwner() on each, Hibernate will by default
execute 25 SELECT statements, to retrieve the proxied owners. You can tune this
behavior by specifying a batch-size in the mapping of Person:
<class name="Person" batch-size="10">
    ...
</class>
Hibernate will now execute only three queries, the pattern is 10, 10, 5. You can see that batch fetching
is a blind guess as far as performance optimization goes; the optimal value depends on the number of uninitialized
proxies in a particular Session.
You may also enable batch fetching of collections. For example, if each Person has
a lazy collection of Cats, and 10 persons are currently loaded in the
Session, iterating through all persons will generate 10 SELECTs,
one for every call to getCats(). If you enable batch fetching for the
cats collection in the mapping of Person, Hibernate can pre-fetch
collections:
<class name="Person">
    <set name="cats" batch-size="3">
        ...
    </set>
</class>
With a batch-size of 3, Hibernate will load 3, 3, 3, 1 collections in 4
SELECTs. Again, the value of the attribute depends on the expected number of
uninitialized collections in a particular Session.
Batch fetching of collections is particularly useful if you have a nested tree of items, ie.
the typical bill-of-materials pattern. (Although a nested set or a
materialized path might be a better option for read-mostly trees.)
Using lazy property fetching
Hibernate3 supports the lazy fetching of individual properties. This optimization technique
is also known as fetch groups. Please note that this is mostly a
marketing feature, as in practice, optimizing row reads is much more important than
optimization of column reads. However, only loading some properties of a class might
be useful in extreme cases, when legacy tables have hundreds of columns and the data model
can not be improved.
To enable lazy property loading, set the lazy attribute on your
particular property mappings:
<class name="Document">
    <id name="id">
        <generator class="native"/>
    </id>
    <property name="name" not-null="true" length="50"/>
    <property name="summary" not-null="true" length="200" lazy="true"/>
    <property name="text" not-null="true" length="2000" lazy="true"/>
</class>
Lazy property loading requires buildtime bytecode instrumentation! If your persistent
classes are not enhanced, Hibernate will silently ignore lazy property settings and
fall back to immediate fetching.
For bytecode instrumentation, use the following Ant task:
<target name="instrument" depends="compile">
    <taskdef name="instrument"
        classname="org.hibernate.tool.instrument.InstrumentTask">
        <classpath path="${jar.path}"/>
        <classpath path="${classes.dir}"/>
        <classpath refid="lib.class.path"/>
    </taskdef>

    <instrument verbose="true">
        <fileset dir="${testclasses.dir}/org/hibernate/auction/model">
            <include name="*.class"/>
        </fileset>
    </instrument>
</target>
A different (better?) way to avoid unnecessary column reads, at least for
read-only transactions is to use the projection features of HQL. This avoids
the need for buildtime bytecode processing.
TODO: Document issues with lazy property loading
A completely different way to avoid problems with N+1 selects is to use the second-level
cache.
The Second Level Cache
A Hibernate Session is a transaction-level cache of persistent data. It is
possible to configure a cluster or JVM-level (SessionFactory-level) cache on
a class-by-class and collection-by-collection basis. You may even plug in a clustered cache. Be
careful. Caches are never aware of changes made to the persistent store by another application
(though they may be configured to regularly expire cached data).
By default, Hibernate uses EHCache for JVM-level caching. (JCS support is now deprecated and will
be removed in a future version of Hibernate.) You may choose a different implementation by
specifying the name of a class that implements org.hibernate.cache.CacheProvider
using the property hibernate.cache.provider_class.
Cache Providers

Cache                                       | Provider class                             | Type                                    | Cluster Safe                 | Query Cache Supported
Hashtable (not intended for production use) | org.hibernate.cache.HashtableCacheProvider | memory                                  |                              | yes
EHCache                                     | org.hibernate.cache.EhCacheProvider        | memory, disk                            |                              | yes
OSCache                                     | org.hibernate.cache.OSCacheProvider        | memory, disk                            |                              | yes
SwarmCache                                  | org.hibernate.cache.SwarmCacheProvider     | clustered (ip multicast)                | yes (clustered invalidation) |
JBoss TreeCache                             | org.hibernate.cache.TreeCacheProvider      | clustered (ip multicast), transactional | yes (replication)            | yes (clock sync req.)
Cache mappings
The <cache> element of a class or collection mapping has the
following form:
<cache usage="transactional|read-write|nonstrict-read-write|read-only"/>
usage specifies the caching strategy:
transactional,
read-write,
nonstrict-read-write or
read-only
Alternatively (preferably?), you may specify <class-cache> and
<collection-cache> elements in hibernate.cfg.xml.
The usage attribute specifies a cache concurrency strategy.
Strategy: read only
If your application needs to read but never modify instances of a persistent class, a
read-only cache may be used. This is the simplest and best performing
strategy. It's even perfectly safe for use in a cluster.
<class name="eg.Immutable" mutable="false">
    <cache usage="read-only"/>
    ....
</class>
Strategy: read/write
If the application needs to update data, a read-write cache might be appropriate.
This cache strategy should never be used if serializable transaction isolation level is required.
If the cache is used in a JTA environment, you must specify the property
hibernate.transaction.manager_lookup_class, naming a strategy for obtaining the
JTA TransactionManager. In other environments, you should ensure that the transaction
is completed when Session.close() or Session.disconnect() is called.
If you wish to use this strategy in a cluster, you should ensure that the underlying cache implementation
supports locking. The built-in cache providers do not.
<class name="eg.Cat" .... >
    <cache usage="read-write"/>
    ....
    <set name="kittens" ... >
        <cache usage="read-write"/>
        ....
    </set>
</class>
Strategy: nonstrict read/write
If the application only occasionally needs to update data (ie. if it is extremely unlikely that two
transactions would try to update the same item simultaneously) and strict transaction isolation is
not required, a nonstrict-read-write cache might be appropriate. If the cache is
used in a JTA environment, you must specify hibernate.transaction.manager_lookup_class.
In other environments, you should ensure that the transaction is completed when
Session.close() or Session.disconnect() is called.
Strategy: transactional
The transactional cache strategy provides support for fully transactional cache
providers such as JBoss TreeCache. Such a cache may only be used in a JTA environment and you must
specify hibernate.transaction.manager_lookup_class.
None of the cache providers support all of the cache concurrency strategies. The following table shows
which providers are compatible with which concurrency strategies.
Cache Concurrency Strategy Support

Cache                                       | read-only | nonstrict-read-write | read-write | transactional
Hashtable (not intended for production use) | yes       | yes                  | yes        |
EHCache                                     | yes       | yes                  | yes        |
OSCache                                     | yes       | yes                  | yes        |
SwarmCache                                  | yes       | yes                  |            |
JBoss TreeCache                             | yes       |                      |            | yes
Managing the Session Cache
Whenever you pass an object to save(), update()
or saveOrUpdate() and whenever you retrieve an object using
load(), get(), list(),
iterate() or scroll(), that object is added
to the internal cache of the Session.
When flush() is subsequently called,
the state of that object will be synchronized with the database. If you do not want
this synchronization to occur or if you are processing a huge number of objects and
need to manage memory efficiently, the evict() method may be
used to remove the object and its collections from the cache.
Hibernate will evict associated entities automatically if the association is mapped
with cascade="all" or cascade="all-delete-orphan".
The Session also provides a contains() method
to determine if an instance belongs to the session cache.
To completely evict all objects from the session cache, call Session.clear().
For the second-level cache, there are methods defined on SessionFactory for
evicting the cached state of an instance, entire class, collection instance or entire collection
role.
The Query Cache
Query result sets may also be cached. This is only useful for queries that are run
frequently with the same parameters. To use the query cache you must first enable it
by setting the property hibernate.cache.use_query_cache=true. This
causes the creation of two cache regions - one holding cached query result sets
(org.hibernate.cache.QueryCache), the other holding timestamps
of most recent updates to queried tables
(org.hibernate.cache.UpdateTimestampsCache). Note that the query
cache does not cache the state of any entities in the result set; it caches only
identifier values and results of value type. So the query cache is usually used in
conjunction with the second-level cache.
Most queries do not benefit from caching, so by default queries are not cached. To
enable caching, call Query.setCacheable(true). This call allows
the query to look for existing cache results or add its results to the cache when
it is executed.
If you require fine-grained control over query cache expiration policies, you may
specify a named cache region for a particular query by calling
Query.setCacheRegion().
If the query should force a refresh of its query cache region, you may call
Query.setForceCacheRefresh(true).
This is particularly useful in cases where underlying data may have been updated
via a separate process (i.e., not modified through Hibernate) and allows the
application to selectively refresh the query cache regions based on its
knowledge of those events. This is an alternative to eviction of a query
cache region. If you need fine-grained refresh control for many queries, use
this function instead of a new region for each query.
TODO: document statistics