Batch processing
A naive approach to inserting 100 000 rows into the database using Hibernate might
look like this:
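A minimal sketch of such an approach, assuming a hypothetical Customer entity (with a String constructor) and an already-built SessionFactory:

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for ( int i = 0; i < 100000; i++ ) {
    // Customer and its constructor are assumed for illustration
    Customer customer = new Customer( "customer" + i );
    session.save( customer );
}
tx.commit();
session.close();
```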
This would fail with an OutOfMemoryError somewhere
around the 50 000th row. That's because Hibernate caches all the newly inserted
Customer instances in the session-level cache.
In this chapter we'll show you how to avoid this problem. First, however, if you
are doing batch processing, it is absolutely critical that you enable JDBC
batching if you intend to achieve reasonable performance. Set the JDBC batch
size to a reasonable number (say, 10-50):
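For example, in hibernate.properties (or via the equivalent property element in hibernate.cfg.xml):

```properties
hibernate.jdbc.batch_size 20
```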
You also might like to do this kind of work in a process where interaction with
the second-level cache is completely disabled:
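One way to do that is with the following configuration property; alternatively, the CacheMode of a particular Session can be set to achieve the same effect:

```properties
hibernate.cache.use_second_level_cache false
```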
Batch inserts
When making new objects persistent, you must flush() and
then clear() the session regularly, to control the size of
the first-level cache.
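A sketch of this pattern, again assuming a hypothetical Customer entity; the flush interval is chosen to match the JDBC batch size configured above:

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for ( int i = 0; i < 100000; i++ ) {
    Customer customer = new Customer( "customer" + i );
    session.save( customer );
    if ( i % 20 == 0 ) { // 20, same as the JDBC batch size
        // flush a batch of inserts and release first-level cache memory
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
```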
Batch updates
For retrieving and updating data, the same ideas apply. In addition, you need to
use scroll() to take advantage of server-side cursors for
queries that return many rows of data.
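A sketch combining scroll() with periodic flushing; "GetCustomers" is a hypothetical named query returning all customers, and the update applied to each entity is likewise illustrative:

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
ScrollableResults customers = session.getNamedQuery( "GetCustomers" )
    .setCacheMode( CacheMode.IGNORE ) // don't interact with the second-level cache
    .scroll( ScrollMode.FORWARD_ONLY );
int count = 0;
while ( customers.next() ) {
    Customer customer = (Customer) customers.get( 0 );
    customer.setName( customer.getName().toUpperCase() ); // hypothetical update
    if ( ++count % 20 == 0 ) {
        // flush a batch of updates and release first-level cache memory
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
```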
DML-style operations
As already discussed, automatic and transparent object/relational mapping is concerned
with the management of object state. This implies that the object state is available
in memory; hence manipulating data directly in the database (using the SQL Data
Manipulation Language (DML) statements INSERT, UPDATE and
DELETE) will not affect in-memory state. However, Hibernate provides methods
for bulk SQL-style DML statement execution, which are performed through the
Hibernate Query Language (HQL).
The pseudo-syntax for UPDATE and DELETE statements
is: ( UPDATE | DELETE ) FROM? EntityName (WHERE where_conditions)?. Some
points to note:
In the from-clause, the FROM keyword is optional
There can only be a single entity named in the from-clause; it can optionally be
aliased. If the entity name is aliased, then any property references must
be qualified using that alias; if the entity name is not aliased, then it is
illegal for any property references to be qualified.
No joins (either implicit or explicit) can be specified in a bulk HQL query. Sub-queries
may be used in the where-clause; the subqueries, themselves, can contain joins.
The where-clause is also optional.
As an example, to execute an HQL UPDATE, use the
Query.executeUpdate() method (the name will be familiar to
those who know JDBC's PreparedStatement.executeUpdate()):
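A sketch of such an execution; Customer and its name property are assumptions, and newName/oldName are presumed in-scope String variables:

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hqlUpdate = "update Customer c set c.name = :newName where c.name = :oldName";
int updatedEntities = session.createQuery( hqlUpdate )
        .setString( "newName", newName )
        .setString( "oldName", oldName )
        .executeUpdate();
tx.commit();
session.close();
```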
To execute an HQL DELETE, use the same Query.executeUpdate()
method:
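For example, under the same assumptions as the UPDATE sketch above:

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hqlDelete = "delete Customer c where c.name = :oldName";
int deletedEntities = session.createQuery( hqlDelete )
        .setString( "oldName", oldName )
        .executeUpdate();
tx.commit();
session.close();
```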
The int value returned by the Query.executeUpdate()
method indicates the number of entities affected by the operation. Note that this may or may not
correlate to the number of rows affected in the database: an HQL bulk operation might result in
multiple actual SQL statements being executed (for joined-subclass mappings, for example). The returned
number indicates the number of actual entities affected by the statement. Going back to the
joined-subclass example, a delete against one of the subclasses may actually result
in deletes against not just the table to which that subclass is mapped, but also the "root"
table and potentially the tables of joined subclasses further down the inheritance hierarchy.
The pseudo-syntax for INSERT statements is:
INSERT INTO EntityName (properties_list)? select_statement. Some
points to note:
Only the INSERT INTO ... SELECT ... form is supported; not the INSERT INTO ... VALUES ... form.
The properties_list is optional. It is analogous to the column specification
in the SQL INSERT statement. If omitted, all "eligible" (see next) properties are
automatically included.
For entities involved in mapped inheritance, only properties directly defined at that
given class-level can be used in the properties_list. Superclass properties are not
allowed, and subclass properties do not make sense. In other words, INSERT
statements are inherently non-polymorphic.
select_statement can be any valid HQL select query, with the caveat that the return types
must match the types expected by the insert. Currently, this is checked during query
compilation rather than being delegated to the database. Note, however,
that this might cause problems between Hibernate Types which are
equivalent as opposed to equal. For example, a mismatch
between a property defined as org.hibernate.type.DateType
and a property defined as org.hibernate.type.TimestampType would be rejected, even though the
database might not make a distinction or might be able to handle the conversion.
An example HQL INSERT statement execution:
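A sketch under stated assumptions: DelinquentAccount is a hypothetical entity whose id and name properties share the types of the corresponding Customer properties, and the balance property used in the where-clause is likewise illustrative:

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hqlInsert =
    "insert into DelinquentAccount (id, name) " +
    "select c.id, c.name from Customer c where c.balance < :minBalance";
int createdEntities = session.createQuery( hqlInsert )
        .setParameter( "minBalance", minBalance )
        .executeUpdate();
tx.commit();
session.close();
```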