diff --git a/documentation/src/main/docbook/integration/en-US/Author_Group.xml b/documentation/src/main/docbook/integration/en-US/Author_Group.xml
deleted file mode 100644
index 564da0dcb5..0000000000
--- a/documentation/src/main/docbook/integration/en-US/Author_Group.xml
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
-
-
-
-
- Gavin
- King
-
-
-
-
- Christian
- Bauer
-
-
-
-
- Steve
- Ebersole
-
-
-
-
- Max
- Rydahl
- Andersen
-
-
-
-
- Emmanuel
- Bernard
-
-
-
-
- Hardy
- Ferentschik
-
-
-
-
- Adam
- Warski
-
-
-
-
- Gail
- Badner
-
-
-
-
- Brett
- Meyer
-
-
-
-
-
- James
- Cobb
-
-
- Graphic Design
-
-
-
-
- Cheyenne
- Weaver
-
-
- Graphic Design
-
-
-
-
-
- Misty
- Stanley-Jones
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Batch_Processing.xml b/documentation/src/main/docbook/integration/en-US/Batch_Processing.xml
deleted file mode 100644
index 04b5e880d9..0000000000
--- a/documentation/src/main/docbook/integration/en-US/Batch_Processing.xml
+++ /dev/null
@@ -1,252 +0,0 @@
-
-
-
-
-
- Batch Processing
-
- The following example shows an antipattern for batch inserts.
-
-
- Naive way to insert 100000 lines with Hibernate
-
-
- This fails with an OutOfMemoryError after around 50000 rows on most
- systems. The reason is that Hibernate caches all the newly inserted Customer instances in the session-level
- cache. There are several ways to avoid this problem.
-
-
-
- Before batch processing, enable JDBC batching. To enable JDBC batching, set the property
- hibernate.jdbc.batch_size to an integer between 10 and 50.
-
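- As a sketch, assuming a properties-based configuration, enabling JDBC batching might look like the following
- (the value 20 and the ordering setting are illustrative choices, not requirements):

```properties
# Enable JDBC-level batching; an integer between 10 and 50 is recommended
hibernate.jdbc.batch_size=20
# Optional: ordering inserts by entity often improves batching effectiveness
hibernate.order_inserts=true
```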
-
-
- Hibernate disables insert batching at the JDBC level transparently if you use an identity identifier generator.
-
-
-
- If the above approach is not appropriate, you can disable the second-level cache by setting
- hibernate.cache.use_second_level_cache to false.
-
-
-
- Batch inserts
-
- When you make new objects persistent, call the flush() and
- clear() methods on the session regularly, to control the size of the first-level cache.
-
-
- Flushing and clearing the Session
-
-
-
-
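- The flush/clear pattern described above can be sketched as follows. This is an illustrative fragment, assuming
- a Customer entity and an application-supplied sessionFactory; the batch size of 20 is chosen to match the
- hibernate.jdbc.batch_size setting:

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer();   // hypothetical entity
    session.save(customer);
    if (i % 20 == 0) {                    // 20: same as hibernate.jdbc.batch_size
        // flush the batch of inserts and release first-level cache memory
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
```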
-
- Batch updates
-
- When you retrieve and update data, flush() and clear() the
- session regularly. In addition, use method scroll() to take advantage of server-side
- cursors for queries that return many rows of data.
-
-
- Using scroll()
-
-
-
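- A minimal sketch of the scroll() pattern, under the same assumptions as above (Customer is a hypothetical
- entity, and setActive is an illustrative property):

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
// Use a server-side cursor to avoid loading all rows into memory at once
ScrollableResults customers = session.createQuery("from Customer")
        .setCacheMode(CacheMode.IGNORE)
        .scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
while (customers.next()) {
    Customer customer = (Customer) customers.get(0);
    customer.setActive(true);             // hypothetical update
    if (++count % 20 == 0) {
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
```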
-
-
- StatelessSession
-
- StatelessSession is a command-oriented API provided by Hibernate. Use it to stream
- data to and from the database in the form of detached objects. A StatelessSession
- has no persistence context associated with it and does not provide many of the higher-level life cycle
- semantics. Some of the things not provided by a StatelessSession include:
-
-
- Features and behaviors not provided by StatelessSession
-
-
- a first-level cache
-
-
-
-
- interaction with any second-level or query cache
-
-
-
-
- transactional write-behind or automatic dirty checking
-
-
-
-
- Limitations of StatelessSession
-
-
- Operations performed using a stateless session never cascade to associated instances.
-
-
-
-
- Collections are ignored by a stateless session.
-
-
-
-
- Operations performed via a stateless session bypass Hibernate's event model and interceptors.
-
-
-
-
- Due to the lack of a first-level cache, stateless sessions are vulnerable to data aliasing effects.
-
-
-
-
- A stateless session is a lower-level abstraction that is much closer to the underlying JDBC.
-
-
-
-
- Using a StatelessSession
-
-
- The Customer instances returned by the query are immediately detached. They are never
- associated with any persistence context.
-
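- A sketch of streaming updates through a StatelessSession, again assuming a hypothetical Customer entity and an
- application-supplied sessionFactory:

```java
StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();
ScrollableResults customers = session.createQuery("from Customer")
        .scroll(ScrollMode.FORWARD_ONLY);
while (customers.next()) {
    Customer customer = (Customer) customers.get(0);
    customer.setActive(true);    // hypothetical property; instances are detached
    session.update(customer);    // executes the UPDATE immediately, no write-behind
}
tx.commit();
session.close();
```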
-
-
- The insert(), update(), and delete()
- operations defined by the StatelessSession interface operate directly on database
- rows. They cause the corresponding SQL operations to be executed immediately. They have different semantics from
- the save(), saveOrUpdate(), and
- delete() operations defined by the Session interface.
-
-
-
-
- Hibernate Query Language for DML
-
- DML, or Data Manipulation Language, refers to SQL statements such as INSERT,
- UPDATE, and DELETE. Hibernate provides methods for bulk SQL-style DML
- statement execution, in the form of Hibernate Query Language (HQL).
-
-
- HQL for UPDATE and DELETE
-
- Pseudo-syntax for UPDATE and DELETE statements using HQL
-
- ( UPDATE | DELETE ) FROM? EntityName (WHERE where_conditions)?
-
-
- The ? suffix indicates an optional element. The FROM and
- WHERE clauses are each optional.
-
-
-
- The FROM clause can only refer to a single entity, which can be aliased. If the entity name
- is aliased, any property references must be qualified using that alias. If the entity name is not aliased, then
- it is illegal for any property references to be qualified.
-
-
- Joins, either implicit or explicit, are prohibited in a bulk HQL query. You can use sub-queries in the
- WHERE clause, and the sub-queries themselves can contain joins.
-
-
- Executing an HQL UPDATE, using the Query.executeUpdate() method
-
-
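- An illustrative sketch of a bulk HQL UPDATE; Customer and its name property are assumed, and the parameter
- values are placeholders:

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hql = "update Customer c set c.name = :newName where c.name = :oldName";
// executeUpdate() returns the number of entities affected
int updated = session.createQuery(hql)
        .setString("newName", "New Name")
        .setString("oldName", "Old Name")
        .executeUpdate();
tx.commit();
session.close();
```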
-
- In keeping with the EJB3 specification, HQL UPDATE statements, by default, do not affect the version or the
- timestamp property values for the affected entities. You can use a versioned update to force Hibernate to reset
- the version or timestamp property values, by adding the VERSIONED keyword after the
- UPDATE keyword.
-
-
- Updating the version or timestamp
-
-
-
-
- If you use the VERSIONED statement, you cannot use custom version types, which use class
- org.hibernate.usertype.UserVersionType.
-
-
-
- A HQL DELETE statement
-
-
-
- Method Query.executeUpdate() returns an int value, which indicates the
- number of entities affected by the operation. This may or may not correlate to the number of rows affected in
- the database. An HQL bulk operation might result in multiple SQL statements being executed, such as for
- joined-subclass. In the example of joined-subclass, a DELETE against one of the subclasses
- may actually result in deletes in the tables underlying the join, or further down the inheritance hierarchy.
-
-
-
-
- HQL syntax for INSERT
-
- Pseudo-syntax for INSERT statements
-
- INSERT INTO EntityName properties_list select_statement
-
-
-
- Only the INSERT INTO ... SELECT ... form is supported. You cannot specify explicit values to
- insert.
-
-
- The properties_list is analogous to the column specification in the SQL
- INSERT statement. For entities involved in mapped inheritance, you can only use properties directly
- defined on that given class-level in the properties_list. Superclass properties are
- not allowed and subclass properties are irrelevant. In other words, INSERT statements are
- inherently non-polymorphic.
-
-
- The select_statement can be any valid HQL select query, but the return types must
- match the types expected by the INSERT. Hibernate verifies the return types during query compilation, instead of
- expecting the database to check it. Problems might result from Hibernate types which are equivalent, rather than
- equal. One such example is a mismatch between a property defined as an org.hibernate.type.DateType
- and a property defined as an org.hibernate.type.TimestampType, even though the database may not
- make a distinction, or may be capable of handling the conversion.
-
-
- If the id property is not specified in the properties_list,
- Hibernate generates a value automatically. Automatic generation is only available if you use ID generators which
- operate on the database. Otherwise, Hibernate throws an exception during parsing. Available in-database
- generators are org.hibernate.id.SequenceGenerator and its subclasses, and objects which
- implement org.hibernate.id.PostInsertIdentifierGenerator. The most notable
- exception is org.hibernate.id.TableHiLoGenerator, which does not expose a selectable way
- to get its values.
-
-
- For properties mapped as either version or timestamp, the insert statement gives you two options. You can either
- specify the property in the properties_list, in which case its value is taken from the corresponding select
- expressions, or omit it from the properties_list, in which case the seed value defined by the
- org.hibernate.type.VersionType is used.
-
-
- HQL INSERT statement
-
-
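- A sketch of the INSERT INTO ... SELECT ... form; DelinquentAccount and Customer are hypothetical entities, and
- the overdue condition is illustrative:

```java
// Only the INSERT ... SELECT form is supported; explicit values cannot be given
int created = session.createQuery(
        "insert into DelinquentAccount (id, name) " +
        "select c.id, c.name from Customer c where c.paymentsOverdue > 3")
        .executeUpdate();
```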
-
-
- More information on HQL
-
- This section is only a brief overview of HQL. For more information, see .
-
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Caching.xml b/documentation/src/main/docbook/integration/en-US/Caching.xml
deleted file mode 100644
index 99aec1183e..0000000000
--- a/documentation/src/main/docbook/integration/en-US/Caching.xml
+++ /dev/null
@@ -1,557 +0,0 @@
-
-
-
-
-
-
- Caching
-
-
-
- The query cache
-
- If you have queries that run over and over, with the same parameters, query caching provides performance gains.
-
-
- Caching introduces overhead in the area of transactional processing. For example, if you cache results of a query
- against an object, Hibernate needs to keep track of whether any changes have been committed against the object,
- and invalidate the cache accordingly. In addition, the benefit from caching query results is limited, and highly
- dependent on the usage patterns of your application. For these reasons, Hibernate disables the query cache by
- default.
-
-
- Enabling the query cache
-
- Set the hibernate.cache.use_query_cache property to true.
-
- This setting creates two new cache regions:
-
-
-
-
- org.hibernate.cache.internal.StandardQueryCache holds the cached query results.
-
-
-
-
- org.hibernate.cache.spi.UpdateTimestampsCache holds timestamps of the most recent updates to
- queryable tables. These timestamps validate results served from the query cache.
-
-
-
-
-
- Adjust the cache timeout of the underlying cache region
-
- If you configure your underlying cache implementation to use expiry or timeouts, set the cache timeout of the
- underlying cache region for the UpdateTimestampsCache to a higher value than the timeouts of any
- of the query caches. It is possible, and recommended, to set the UpdateTimestampsCache region never to
- expire. To be specific, an LRU (Least Recently Used) cache expiry policy is never appropriate.
-
-
-
- Enable results caching for specific queries
-
- Since most queries do not benefit from caching of their results, you need to enable caching for individual
- queries, even after enabling query caching overall. To enable results caching for a particular query, call
- org.hibernate.Query.setCacheable(true). This call allows the query to look for
- existing cache results or add its results to the cache when it is executed.
-
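- As a sketch, enabling result caching for a single query might look like this; Customer and its rating property
- are hypothetical:

```java
List results = session.createQuery(
        "from Customer c where c.rating > :minRating")
        .setInteger("minRating", 5)
        .setCacheable(true)   // look up cached results, or cache them on execution
        .list();
```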
-
-
-
- The query cache does not cache the state of the actual entities in the cache. It caches identifier values and
- results of value type. Therefore, always use the query cache in conjunction with the second-level
- cache for those entities which should be cached as part of a query result cache.
-
-
-
- Query cache regions
-
- For fine-grained control over query cache expiration policies, specify a named cache region for a particular
- query by calling Query.setCacheRegion().
-
-
-
- Method setCacheRegion
-
-
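- A sketch of a query bound to a named cache region; Blog, its blogger property, and the region name
- "frontpages" are all illustrative:

```java
List blogs = session.createQuery(
        "from Blog b where b.blogger = :blogger")
        .setEntity("blogger", blogger)
        .setCacheable(true)
        .setCacheRegion("frontpages")   // expiry policy can be tuned per region
        .list();
```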
-
-
- To force the query cache to refresh one of its regions and disregard any cached results in the region, call
- org.hibernate.Query.setCacheMode(CacheMode.REFRESH). In conjunction with the region defined for the
- given query, Hibernate selectively refreshes the results cached in that particular region. This is much more
- efficient than bulk eviction of the region via org.hibernate.SessionFactory.evictQueries().
-
-
-
-
-
-
-
- Second-level cache providers
-
- Hibernate is compatible with several second-level cache providers. None of the providers support all of
- Hibernate's possible caching strategies. lists the providers, along with
- their interfaces and supported caching strategies. For definitions of caching strategies, see .
-
-
-
- Configuring your cache providers
-
- You can configure your cache providers using either annotations or mapping files.
-
-
- Entities
-
- By default, entities are not part of the second-level cache, and we recommend you keep this default. If you
- absolutely must cache entities, set the shared-cache-mode element in
- persistence.xml, or use property javax.persistence.sharedCache.mode
- in your configuration. Use one of the values in .
-
-
-
- Possible values for Shared Cache Mode
-
-
-
- Value
- Description
-
-
-
-
- ENABLE_SELECTIVE
-
-
- Entities are not cached unless you explicitly mark them as cacheable. This is the default and
- recommended value.
-
-
-
-
- DISABLE_SELECTIVE
-
-
- Entities are cached unless you explicitly mark them as not cacheable.
-
-
-
-
- ALL
-
-
- All entities are always cached even if you mark them as not cacheable.
-
-
-
-
- NONE
-
-
- No entities are cached even if you mark them as cacheable. This option basically disables second-level
- caching.
-
-
-
-
-
-
-
- Set the global default cache concurrency strategy with the
- hibernate.cache.default_cache_concurrency_strategy configuration property. See for possible values.
-
-
-
- When possible, define the cache concurrency strategy per entity rather than globally. Use the
- @org.hibernate.annotations.Cache annotation.
-
-
-
- Configuring cache providers using annotations
-
-
- You can cache the content of a collection or the identifiers, if the collection contains other entities. Use
- the @Cache annotation on the Collection property.
-
-
- @Cache can take several attributes.
-
-
- Attributes of @Cache annotation
-
- usage
-
-
- The given cache concurrency strategy, which may be:
-
-
-
-
- NONE
-
-
-
-
- READ_ONLY
-
-
-
-
- NONSTRICT_READ_WRITE
-
-
-
-
- READ_WRITE
-
-
-
-
- TRANSACTIONAL
-
-
-
-
-
-
- region
-
-
- The cache region. This attribute is optional, and defaults to the fully-qualified class name of the
- class, or the fully-qualified role name of the collection.
-
-
-
-
- include
-
-
- Whether or not to include all properties. Optional, and can take one of two possible values.
-
-
-
-
- A value of all includes all properties. This is the default.
-
-
-
-
- A value of non-lazy only includes non-lazy properties.
-
-
-
-
-
-
-
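- Putting the attributes above together, an annotated entity might look like the following sketch; Forest is a
- hypothetical entity, and the strategy and region values are illustrative:

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE,
       region = "forests")   // region defaults to the fully-qualified class name
public class Forest {
    @Id
    private Long id;
    private String name;
}
```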
-
-
- Configuring cache providers using mapping files
-
-
- Just as in the , you can provide attributes in the
- mapping file. There are some specific differences in the syntax for the attributes in a mapping file.
-
-
-
- usage
-
-
- The caching strategy. This attribute is required, and can be any of the following values.
-
-
- transactional
- read-write
- nonstrict-read-write
- read-only
-
-
-
-
- region
-
-
- The name of the second-level cache region. This optional attribute defaults to the class or collection
- role name.
-
-
-
-
- include
-
-
- Whether properties of the entity mapped with lazy=true can be cached when
- attribute-level lazy fetching is enabled. Defaults to all and can also be
- non-lazy.
-
-
-
-
-
- Instead of <cache>, you can use <class-cache> and
- <collection-cache> elements in hibernate.cfg.xml.
-
-
-
-
- Caching strategies
-
-
- read-only
-
-
- A read-only cache is good for data that needs to be read often but not modified. It is simple, performs
- well, and is safe to use in a clustered environment.
-
-
-
-
- nonstrict-read-write
-
-
- Some applications only rarely need to modify data. This is the case if two transactions are unlikely to
- try to update the same item simultaneously. In this case, you do not need strict transaction isolation,
- and a nonstrict-read-write cache might be appropriate. If the cache is used in a JTA environment, you must
- specify hibernate.transaction.manager_lookup_class. In other environments, ensure
- that the transaction is complete before you call Session.close() or
- Session.disconnect().
-
-
-
-
- read-write
-
-
- A read-write cache is appropriate for an application which needs to update data regularly. Do not use a
- read-write strategy if you need serializable transaction isolation. In a JTA environment, specify a
- strategy for obtaining the JTA TransactionManager by setting the property
- hibernate.transaction.manager_lookup_class. In non-JTA environments, be sure the
- transaction is complete before you call Session.close() or
- Session.disconnect().
-
-
-
- To use the read-write strategy in a clustered environment, the underlying cache implementation must
- support locking. The built-in cache providers do not support locking.
-
-
-
-
-
- transactional
-
-
- The transactional cache strategy provides support for transactional cache providers such as JBoss
- TreeCache. You can only use such a cache in a JTA environment, and you must first specify
- hibernate.transaction.manager_lookup_class.
-
-
-
-
-
-
- Second-level cache providers for Hibernate
-
-
-
-
- Cache
- Interface
- Supported strategies
-
-
-
-
- HashTable (testing only)
-
-
-
- read-only
- nonstrict-read-write
- read-write
-
-
-
-
- EHCache
-
-
-
- read-only
- nonstrict-read-write
- read-write
- transactional
-
-
-
-
- Infinispan
-
-
-
- read-only
- transactional
-
-
-
-
-
-
-
-
-
-
- Managing the cache
-
-
- Moving items into and out of the cache
-
- Actions that add an item to the internal cache of the Session
-
- Saving or updating an item
-
-
-
-
- save()
-
-
-
-
- update()
-
-
-
-
- saveOrUpdate()
-
-
-
-
-
-
- Retrieving an item
-
-
-
-
- load()
-
-
-
-
- get()
-
-
-
-
- list()
-
-
-
-
- iterate()
-
-
-
-
- scroll()
-
-
-
-
-
-
-
- Syncing or removing a cached item
-
- The state of an object is synchronized with the database when you call method
- flush(). To avoid this synchronization, you can remove the object and all collections
- from the first-level cache with the evict() method. To remove all items from the
- Session cache, use method Session.clear().
-
-
-
- Evicting an item from the first-level cache
-
-
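- A sketch of evicting instances while scrolling through a large result set; Cat and the processing method are
- hypothetical:

```java
ScrollableResults cats = session.createQuery("from Cat").scroll();
while (cats.next()) {
    Cat cat = (Cat) cats.get(0);
    doSomethingWithACat(cat);   // hypothetical processing
    session.evict(cat);         // the instance is no longer tracked by the session
}
```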
-
- Determining whether an item belongs to the Session cache
-
- The Session provides a contains() method to determine if an instance belongs to the
- session cache.
-
-
-
-
- Second-level cache eviction
-
- You can evict the cached state of an instance, entire class, collection instance or entire collection role,
- using methods of SessionFactory.
-
-
-
-
- Interactions between a Session and the second-level cache
-
- The CacheMode controls how a particular session interacts with the second-level cache.
-
-
-
-
-
- CacheMode.NORMAL
- reads items from and writes them to the second-level cache.
-
-
- CacheMode.GET
- reads items from the second-level cache, but does not write to the second-level cache except to
- update data.
-
-
- CacheMode.PUT
- writes items to the second-level cache. It does not read from the second-level cache. It bypasses
- the effect of hibernate.cache.use_minimal_puts and forces a refresh of the
- second-level cache for all items read from the database.
-
-
-
-
-
-
-
- Browsing the contents of a second-level or query cache region
-
- After enabling statistics, you can browse the contents of a second-level cache or query cache region.
-
-
- Enabling Statistics
-
-
- Set hibernate.generate_statistics to true.
-
-
-
-
- Optionally, set hibernate.cache.use_structured_entries to true, to cause
- Hibernate to store the cache entries in a human-readable format.
-
-
-
-
- Browsing the second-level cache entries via the Statistics API
-
-
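- As a sketch, assuming statistics are enabled and a region named after a cached entity class, the entries can
- be inspected like this:

```java
import java.util.Map;
import org.hibernate.stat.SecondLevelCacheStatistics;
import org.hibernate.stat.Statistics;

Statistics statistics = sessionFactory.getStatistics();
// region name assumed; defaults to the cached class's fully-qualified name
SecondLevelCacheStatistics cacheStats =
        statistics.getSecondLevelCacheStatistics("org.hibernate.auction.Item");
Map cacheEntries = cacheStats.getEntries();
System.out.println(cacheEntries);
```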
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Data_Categorizations.xml b/documentation/src/main/docbook/integration/en-US/Data_Categorizations.xml
deleted file mode 100644
index 198b6c7706..0000000000
--- a/documentation/src/main/docbook/integration/en-US/Data_Categorizations.xml
+++ /dev/null
@@ -1,458 +0,0 @@
-
-
-
-
-
- Data categorizations
-
- Hibernate understands both the Java and JDBC representations of application data. The ability to read and write
- object data to a database is called marshalling, and is the function of a Hibernate
- type. A type is an implementation of the
- org.hibernate.type.Type interface. A Hibernate type describes
- various aspects of behavior of the Java type such as how to check for equality and how to clone values.
-
-
- Usage of the word type
-
- A Hibernate type is neither a Java type nor a SQL datatype. It provides information about
- both of these.
-
-
- When you encounter the term type in regards to Hibernate, it may refer to the Java type,
- the JDBC type, or the Hibernate type, depending on context.
-
-
-
- Hibernate categorizes types into two high-level groups:
- value types and entity types.
-
-
-
- Value types
-
- A value type does not define its own lifecycle. It is, in effect, owned by an entity, which defines its
- lifecycle. Value types are further classified into three sub-categories.
-
-
-
-
-
-
-
-
-
- Basic types
-
- Basic value types usually map a single database value, or column, to a single, non-aggregated Java
- type. Hibernate provides a number of built-in basic types, which follow the natural mappings recommended in the
- JDBC specifications. You can override these mappings and provide and use alternative mappings. These topics are
- discussed further on.
-
-
- Basic Type Mappings
-
-
-
- Hibernate type
- Database type
- JDBC type
- Type registry
-
-
-
-
- org.hibernate.type.StringType
- string
- VARCHAR
- string, java.lang.String
-
-
- org.hibernate.type.MaterializedClobType
- string
- CLOB
- materialized_clob
-
-
- org.hibernate.type.TextType
- string
- LONGVARCHAR
- text
-
-
- org.hibernate.type.CharacterType
- char, java.lang.Character
- CHAR
- char, java.lang.Character
-
-
- org.hibernate.type.BooleanType
- boolean
- BIT
- boolean, java.lang.Boolean
-
-
- org.hibernate.type.NumericBooleanType
- boolean
- INTEGER, 0 is false, 1 is true
- numeric_boolean
-
-
- org.hibernate.type.YesNoType
- boolean
- CHAR, 'N'/'n' is false, 'Y'/'y' is true. The uppercase value is written to the database.
- yes_no
-
-
- org.hibernate.type.TrueFalseType
- boolean
- CHAR, 'F'/'f' is false, 'T'/'t' is true. The uppercase value is written to the database.
- true_false
-
-
- org.hibernate.type.ByteType
- byte, java.lang.Byte
- TINYINT
- byte, java.lang.Byte
-
-
- org.hibernate.type.ShortType
- short, java.lang.Short
- SMALLINT
- short, java.lang.Short
-
-
- org.hibernate.type.IntegerType
- int, java.lang.Integer
- INTEGER
- int, java.lang.Integer
-
-
- org.hibernate.type.LongType
- long, java.lang.Long
- BIGINT
- long, java.lang.Long
-
-
- org.hibernate.type.FloatType
- float, java.lang.Float
- FLOAT
- float, java.lang.Float
-
-
- org.hibernate.type.DoubleType
- double, java.lang.Double
- DOUBLE
- double, java.lang.Double
-
-
- org.hibernate.type.BigIntegerType
- java.math.BigInteger
- NUMERIC
- big_integer
-
-
- org.hibernate.type.BigDecimalType
- java.math.BigDecimal
- NUMERIC
- big_decimal, java.math.BigDecimal
-
-
- org.hibernate.type.TimestampType
- java.sql.Timestamp
- TIMESTAMP
- timestamp, java.sql.Timestamp
-
-
- org.hibernate.type.TimeType
- java.sql.Time
- TIME
- time, java.sql.Time
-
-
- org.hibernate.type.DateType
- java.sql.Date
- DATE
- date, java.sql.Date
-
-
- org.hibernate.type.CalendarType
- java.util.Calendar
- TIMESTAMP
- calendar, java.util.Calendar
-
-
- org.hibernate.type.CalendarDateType
- java.util.Calendar
- DATE
- calendar_date
-
-
- org.hibernate.type.CurrencyType
- java.util.Currency
- VARCHAR
- currency, java.util.Currency
-
-
- org.hibernate.type.LocaleType
- java.util.Locale
- VARCHAR
- locale, java.util.Locale
-
-
- org.hibernate.type.TimeZoneType
- java.util.TimeZone
- VARCHAR, using the TimeZone ID
- timezone, java.util.TimeZone
-
-
- org.hibernate.type.UrlType
- java.net.URL
- VARCHAR
- url, java.net.URL
-
-
- org.hibernate.type.ClassType
- java.lang.Class
- VARCHAR, using the class name
- class, java.lang.Class
-
-
- org.hibernate.type.BlobType
- java.sql.Blob
- BLOB
- blob, java.sql.Blob
-
-
- org.hibernate.type.ClobType
- java.sql.Clob
- CLOB
- clob, java.sql.Clob
-
-
- org.hibernate.type.BinaryType
- primitive byte[]
- VARBINARY
- binary, byte[]
-
-
- org.hibernate.type.MaterializedBlobType
- primitive byte[]
- BLOB
- materialized_blob
-
-
- org.hibernate.type.ImageType
- primitive byte[]
- LONGVARBINARY
- image
-
-
- org.hibernate.type.WrapperBinaryType
- java.lang.Byte[]
- VARBINARY
- wrapper-binary
-
-
- org.hibernate.type.CharArrayType
- char[]
- VARCHAR
- characters, char[]
-
-
- org.hibernate.type.CharacterArrayType
- java.lang.Character[]
- VARCHAR
- wrapper-characters, Character[], java.lang.Character[]
-
-
- org.hibernate.type.UUIDBinaryType
- java.util.UUID
- BINARY
- uuid-binary, java.util.UUID
-
-
- org.hibernate.type.UUIDCharType
- java.util.UUID
- CHAR, can also read VARCHAR
- uuid-char
-
-
- org.hibernate.type.PostgresUUIDType
- java.util.UUID
- PostgreSQL UUID, through Types#OTHER, which complies with the PostgreSQL JDBC driver
- definition
- pg-uuid
-
-
- org.hibernate.type.SerializableType
- implementors of java.lang.Serializable
- VARBINARY
- Unlike the other value types, multiple instances of this type are registered. It is registered
- once under java.io.Serializable, and registered under the specific java.io.Serializable implementation
- class names.
-
-
-
-
-
-
- National Character Types
-
- National character types, introduced in the JDBC 4.0 API, are now available in the Hibernate type system.
- National Language Support enables you to retrieve data from, or insert data into, a database in any character
- set that the underlying database supports.
-
-
-
- Depending on your environment, you might want to set the configuration option hibernate.use_nationalized_character_data
- to true, so that all string- or clob-based attributes automatically get this national character support.
- Nothing else needs to change, and no Hibernate-specific mapping is required, so this approach is portable
- (although the national character support feature is not required by JPA and may not work with other JPA provider implementations).
-
-
-
- The other way to use this feature is to apply the @Nationalized annotation to the attribute
- that should be nationalized. This works only on string-based attributes, including string, char, char array, and clob.
-
-
- @Entity( name="NationalizedEntity")
- public static class NationalizedEntity {
- @Id
- private Integer id;
-
- @Nationalized
- private String nvarcharAtt;
-
- @Lob
- @Nationalized
- private String materializedNclobAtt;
-
- @Lob
- @Nationalized
- private NClob nclobAtt;
-
- @Nationalized
- private Character ncharacterAtt;
-
- @Nationalized
- private Character[] ncharArrAtt;
-
- @Type(type = "ntext")
- private String nlongvarcharcharAtt;
- }
-
-
-
-
-
- Composite types
-
- Composite types, or embedded types, as they are called by the Java
- Persistence API, have traditionally been called components in Hibernate. All of these
- terms mean the same thing.
-
-
- Components represent aggregations of values into a single Java type. An example is an
- Address class, which aggregates street, city, state, and postal code. A composite type
- behaves in a similar way to an entity. They are each classes written specifically for an application. They may
- both include references to other application-specific classes, as well as to collections and simple JDK
- types. The only distinguishing factors are that a component does not have its own lifecycle or define an
- identifier.
-
-
-
-
-
- Collection types
-
- A collection type refers to the data type itself, not its contents.
-
-
- A Collection denotes a one-to-many or many-to-many relationship between tables of a database.
-
-
- Refer to the chapter on Collections for more information on collections.
-
-
-
-
- Entity Types
-
- Entities are application-specific classes which correlate to rows in a table, using a unique identifier. Because
- of the requirement for a unique identifier, entities exist independently and define their own lifecycle. As an
- example, deleting a Membership should not delete the User or the Group. For more information, see the chapter on
- Persistent Classes.
-
-
-
-
- Implications of different data categorizations
-
- NEEDS TO BE WRITTEN
-
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Database_Access.xml b/documentation/src/main/docbook/integration/en-US/Database_Access.xml
deleted file mode 100644
index 976f6b26d3..0000000000
--- a/documentation/src/main/docbook/integration/en-US/Database_Access.xml
+++ /dev/null
@@ -1,916 +0,0 @@
-
-
-
-
-
- Database access
-
-
- Connecting
-
- Hibernate connects to databases on behalf of your application. It can connect through a variety of mechanisms,
- including:
-
-
- Stand-alone built-in connection pool
- javax.sql.DataSource
- Connection pools, including support for two different third-party opensource JDBC connection pools:
-
- c3p0
- proxool
-
-
-
- Application-supplied JDBC connections. This is not a recommended approach and exists for legacy reasons
-
-
-
-
- The built-in connection pool is not intended for production environments.
-
-
-
- Hibernate obtains JDBC connections as needed through the
- ConnectionProvider interface
- which is a service contract. Applications may also supply their own
- ConnectionProvider implementation
- to define a custom approach for supplying connections to Hibernate (from a different connection pool
- implementation, for example).
-
-
-
-
-
- Configuration
-
- You can configure database connections using a properties file, an XML deployment descriptor or
- programmatically.
-
-
- hibernate.properties for a c3p0 connection pool
-
-
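- A sketch of a c3p0-backed hibernate.properties; the driver, URL, credentials, and pool sizes are all
- illustrative values to be replaced with your own:

```properties
hibernate.connection.driver_class = org.postgresql.Driver
hibernate.connection.url = jdbc:postgresql://localhost/mydatabase
hibernate.connection.username = myuser
hibernate.connection.password = secret
# c3p0 pool settings; presence of hibernate.c3p0.* enables the c3p0 provider
hibernate.c3p0.min_size = 5
hibernate.c3p0.max_size = 20
hibernate.c3p0.timeout = 1800
hibernate.c3p0.max_statements = 50
```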
-
- hibernate.cfg.xml for a connection to the bundled HSQL database
-
-
-
-
- Programmatic configuration
-
- An instance of object org.hibernate.cfg.Configuration represents an entire set of
- mappings of an application's Java types to an SQL database. The
- org.hibernate.cfg.Configuration builds an immutable
- org.hibernate.SessionFactory, and compiles the mappings from various XML mapping
- files. You can specify the mapping files directly, or Hibernate can find them for you.
-
-
- Specifying the mapping files directly
-
- You can obtain a org.hibernate.cfg.Configuration instance by instantiating it
- directly and specifying XML mapping documents. If the mapping files are in the classpath, use method
- addResource().
-
-
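- As a sketch, assuming the mapping documents are on the classpath:

```java
import org.hibernate.cfg.Configuration;

Configuration cfg = new Configuration()
        .addResource("Item.hbm.xml")   // mapping file names are illustrative
        .addResource("Bid.hbm.xml");
```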
-
-
- Letting Hibernate find the mapping files for you
-
- The addClass() method directs Hibernate to search the CLASSPATH for the mapping
- files, eliminating hard-coded file names. In the following example, it searches for
- org/hibernate/auction/Item.hbm.xml and
- org/hibernate/auction/Bid.hbm.xml.
-
-
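- The same configuration, letting Hibernate derive the mapping file locations from the mapped classes:

```java
import org.hibernate.cfg.Configuration;

// Searches the classpath for org/hibernate/auction/Item.hbm.xml
// and org/hibernate/auction/Bid.hbm.xml
Configuration cfg = new Configuration()
        .addClass(org.hibernate.auction.Item.class)
        .addClass(org.hibernate.auction.Bid.class);
```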
-
-
- Specifying configuration properties
-
-
-
- Other ways to configure Hibernate programmatically
-
-
- Pass an instance of java.util.Properties to
- Configuration.setProperties().
-
-
-
-
- Set System properties using java
- -Dproperty=value
-
-
-
-
-
-
-
- Obtaining a JDBC connection
-
- After you configure the , you can use method
- openSession of class org.hibernate.SessionFactory to open
- sessions. Sessions will obtain JDBC connections as needed based on the provided configuration.
-
-
- Specifying configuration properties
-
-
-
- Most important Hibernate JDBC properties
- hibernate.connection.driver_class
- hibernate.connection.url
- hibernate.connection.username
- hibernate.connection.password
- hibernate.connection.pool_size
-
-
- All available Hibernate settings are defined as constants and discussed on the
- org.hibernate.cfg.AvailableSettings interface. See its source code or
- JavaDoc for details.
-
-
-
-
-
- Connection pooling
-
- Hibernate's internal connection pooling algorithm is rudimentary, and is provided for development and testing
- purposes. Use a third-party pool for best performance and stability. To use a third-party pool, replace the
- hibernate.connection.pool_size property with settings specific to your connection pool of
- choice. This disables Hibernate's internal connection pool.
-
-
-
- c3p0 connection pool
-
- C3P0 is an open source JDBC connection pool distributed along with Hibernate in the lib/
- directory. Hibernate uses its org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider for
- connection pooling if you set the hibernate.c3p0.* properties.
-
-
- Important configuration properties for the c3p0 connection pool
- hibernate.c3p0.min_size
- hibernate.c3p0.max_size
- hibernate.c3p0.timeout
- hibernate.c3p0.max_statements
-
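A typical fragment looks as follows; the values are illustrative, and setting any hibernate.c3p0.* property is what switches Hibernate to the C3P0ConnectionProvider:

```properties
# Illustrative c3p0 settings - tune for your workload.
hibernate.c3p0.min_size = 5
hibernate.c3p0.max_size = 20
hibernate.c3p0.timeout = 1800
hibernate.c3p0.max_statements = 50
```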
-
-
-
- Proxool connection pool
-
- Proxool is another open source JDBC connection pool distributed along with Hibernate in the
- lib/ directory. Hibernate uses its
- org.hibernate.service.jdbc.connections.internal.ProxoolConnectionProvider for connection pooling if you set the
- hibernate.proxool.* properties. Unlike c3p0, Proxool requires some additional configuration
- parameters, as described by the Proxool documentation available at .
-
-
- Important configuration properties for the Proxool connection pool
-
-
-
-
-
- Property
- Description
-
-
-
-
- hibernate.proxool.xml
- Configure Proxool provider using an XML file (.xml is appended automatically)
-
-
- hibernate.proxool.properties
- Configure the Proxool provider using a properties file (.properties is appended
- automatically)
-
-
- hibernate.proxool.existing_pool
- Whether to configure the Proxool provider from an existing pool
-
-
- hibernate.proxool.pool_alias
- Proxool pool alias to use. Required.
-
-
-
-
-
-
-
-
- Obtaining connections from an application server, using JNDI
-
- To use Hibernate inside an application server, configure Hibernate to obtain connections from an application
- server javax.sql.Datasource registered in JNDI, by setting at least one of the following
- properties:
-
-
- Important Hibernate properties for JNDI datasources
- hibernate.connection.datasource (required)
- hibernate.jndi.url
- hibernate.jndi.class
- hibernate.connection.username
- hibernate.connection.password
-
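A minimal fragment might look as follows; the JNDI name is an illustrative placeholder, and the username/password entries are only needed if the datasource does not supply credentials itself:

```properties
# Illustrative JNDI datasource configuration.
hibernate.connection.datasource = java:comp/env/jdbc/MyDB
# hibernate.connection.username = myuser
# hibernate.connection.password = secret
```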
-
- JDBC connections obtained from a JNDI datasource automatically participate in the container-managed transactions
- of the application server.
-
-
-
-
- Other connection-specific configuration
-
- You can pass arbitrary connection properties by prepending hibernate.connection to the
- connection property name. For example, specify a charSet connection property as
- hibernate.connection.charSet.
-
-
- You can define your own plugin strategy for obtaining JDBC connections by implementing the interface
- ConnectionProvider and specifying your custom
- implementation with the hibernate.connection.provider_class property.
-
-
-
-
- Optional configuration properties
-
- In addition to the properties mentioned in the previous sections, Hibernate includes many other optional
- properties. See for a more complete list.
-
-
-
-
-
- Dialects
-
- Although SQL is relatively standardized, each database vendor supports a different subset of the standard,
- along with vendor-specific extensions. This variation is referred to as a dialect. Hibernate handles
- variations across these dialects through its
-
-
-
-
-
- Specifying the Dialect to use
-
- The developer may manually specify the Dialect to use by setting the
- hibernate.dialect configuration property to the name of a specific
- org.hibernate.dialect.Dialect class to use.
-
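For example, to select the PostgreSQL dialect explicitly:

```properties
hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
```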
-
-
-
- Dialect resolution
-
- Assuming a ConnectionProvider has been
- set up, Hibernate will attempt to automatically determine the Dialect to use based on the
- java.sql.DatabaseMetaData reported by a
- java.sql.Connection obtained from that
- ConnectionProvider.
-
-
- This functionality is provided by a series of
- org.hibernate.engine.jdbc.dialect.spi.DialectResolver instances registered
- with Hibernate internally. Hibernate ships with a standard set of resolvers. If your application
- requires extra Dialect resolution capabilities, it can register a custom implementation
- of org.hibernate.engine.jdbc.dialect.spi.DialectResolver as follows:
-
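A sketch of such a resolver is shown below. The exact DialectResolver method signature varies between Hibernate versions (in 4.3 it receives a DialectResolutionInfo wrapping the java.sql.DatabaseMetaData), and the database name check is hypothetical:

```java
import org.hibernate.dialect.Dialect;
import org.hibernate.dialect.PostgreSQL82Dialect;
import org.hibernate.engine.jdbc.dialect.spi.DialectResolutionInfo;
import org.hibernate.engine.jdbc.dialect.spi.DialectResolver;

// Hypothetical resolver that chooses a dialect based on the reported product name.
public class MyDialectResolver implements DialectResolver {
    @Override
    public Dialect resolveDialect(DialectResolutionInfo info) {
        if ("MyCustomDB".equals(info.getDatabaseName())) {
            return new PostgreSQL82Dialect(); // or your own Dialect subclass
        }
        return null; // returning null lets the remaining resolvers try
    }
}
```

The implementation can then be registered, for example, through the hibernate.dialect_resolvers configuration property.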
-
-
- Registered org.hibernate.engine.jdbc.dialect.spi.DialectResolver implementations are
- prepended to an internal list of resolvers, so they take precedence
- over any previously registered resolvers, including the standard one.
-
-
-
-
-
- Automatic schema generation with SchemaExport
-
- SchemaExport is a Hibernate utility that generates DDL from your mapping files. The generated schema includes
- referential integrity constraints and primary and foreign keys for entity and collection tables. It also creates
- tables and sequences for mapped identifier generators.
-
-
-
- You must specify a SQL Dialect via the hibernate.dialect property when using this tool,
- because DDL is highly vendor-specific. See for information.
-
-
-
- Before Hibernate can generate your schema, you must customize your mapping files.
-
-
-
- Customizing the mapping files
-
- Hibernate provides several elements and attributes to customize your mapping files. They are listed in , and a logical order of customization is presented in .
-
-
- Elements and attributes provided for customizing mapping files
-
-
-
-
-
-
- Name
- Type of value
- Description
-
-
-
-
- length
- number
- Column length
-
-
- precision
- number
- Decimal precision of column
-
-
- scale
- number
- Decimal scale of column
-
-
- not-null
- true or false
- Whether a column is allowed to hold null values
-
-
- unique
- true or false
- Whether values in the column must be unique
-
-
- index
- string
- The name of a multi-column index
-
-
- unique-key
- string
- The name of a multi-column unique constraint
-
-
- foreign-key
- string
- The name of the foreign key constraint generated for an association. This applies to
- <one-to-one>, <many-to-one>, <key>, and <many-to-many> mapping
- elements. inverse="true" sides are skipped by SchemaExport.
-
-
- sql-type
- string
- Overrides the default column type. This applies to the <column> element only.
-
-
- default
- string
- Default value for the column
-
-
- check
- string
- An SQL check constraint on either a column or a table
-
-
-
-
-
- Customizing the schema
-
- Set the length, precision, and scale of mapping elements.
-
- Many Hibernate mapping elements define optional attributes named length,
- precision, and scale.
-
-
-
-
- Set the not-null, unique, and unique-key attributes.
-
- The not-null and unique attributes generate constraints on table columns.
-
-
- The unique-key attribute groups columns into a single unique-key constraint, and overrides
- the name of any generated unique key constraint.
-
-
-
-
- Set the index and foreign-key attributes.
-
- The index attribute specifies the name of an index for Hibernate to create using the mapped
- column or columns. You can group multiple columns into the same index by assigning them the same index name.
-
-
- A foreign-key attribute overrides the name of any generated foreign key constraint.
-
-
-
-
- Set child elements.
-
- Many mapping elements accept one or more child <column> elements. This is particularly useful for
- mapping types involving multiple columns.
-
-
-
-
- Set the default attribute.
-
- The default attribute specifies a default value for a column. Assign the same value to the
- mapped property before saving a new instance of the mapped class.
-
-
-
-
- Set the sql-type attribute.
-
- Use the sql-type attribute to override the default mapping of a Hibernate type to an SQL
- datatype.
-
-
-
-
- Set the check attribute.
-
- Use the check attribute to specify an SQL check constraint.
-
-
-
-
- Add <comment> elements to your schema.
-
- Use the <comment> element to specify comments for the generated schema.
-
-
-
-
-
-
-
- Running the SchemaExport tool
-
- The SchemaExport tool writes a DDL script to standard output, executes the DDL statements, or both.
-
-
- SchemaExport syntax
-
- java -cp hibernate_classpaths org.hibernate.tool.hbm2ddl.SchemaExport options mapping_files
-
-
-
- SchemaExport Options
-
-
-
-
-
- Option
- Description
-
-
-
-
- --quiet
- do not output the script to standard output
-
-
- --drop
- only drop the tables
-
-
- --create
- only create the tables
-
-
- --text
- do not export to the database
-
-
- --output=my_schema.ddl
- output the ddl script to a file
-
-
- --naming=eg.MyNamingStrategy
- select a NamingStrategy
-
-
- --config=hibernate.cfg.xml
- read Hibernate configuration from an XML file
-
-
- --properties=hibernate.properties
- read database properties from a file
-
-
- --format
- format the generated SQL nicely in the script
-
-
- --delimiter=;
- set an end-of-line delimiter for the script
-
-
-
-
-
- Embedding SchemaExport into your application
-
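A minimal sketch of embedding SchemaExport, assuming the Hibernate 4.x org.hibernate.tool.hbm2ddl API:

```java
import org.hibernate.cfg.Configuration;
import org.hibernate.tool.hbm2ddl.SchemaExport;

// Reads hibernate.cfg.xml, then exports the schema directly to the database
// without echoing the script to standard output.
Configuration cfg = new Configuration().configure();
SchemaExport export = new SchemaExport(cfg);
export.create(false, true); // script=false, export=true
```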
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Envers.xml b/documentation/src/main/docbook/integration/en-US/Envers.xml
deleted file mode 100644
index d55e045ae0..0000000000
--- a/documentation/src/main/docbook/integration/en-US/Envers.xml
+++ /dev/null
@@ -1,1759 +0,0 @@
-
-
-
-
-
- Envers
-
-
- The aim of Hibernate Envers is to provide historical versioning of your application's entity data. Much
- like source control management tools such as Subversion or Git, Hibernate Envers manages a notion of revisions
- of your application data through the use of audit tables. Each transaction relates to one global revision number,
- which can be used to identify groups of changes (much like a change set in source control). Because revisions
- are global, given a revision number you can query for various entities at that revision, retrieving a
- (partial) view of the database at that revision. You can find the revision number corresponding to a date, and,
- conversely, the date at which a revision was committed.
-
-
-
-
-
- Basics
-
-
- To audit changes that are performed on an entity, you only need two things: the
- hibernate-envers jar on the classpath and an @Audited annotation
- on the entity.
-
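A minimally audited entity can be sketched as follows; the class and property names are illustrative:

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.envers.Audited;

// Every property of this entity is versioned into a Customer_AUD audit table.
@Entity
@Audited
public class Customer {
    @Id
    private Long id;

    private String name;

    // getters and setters omitted for brevity
}
```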
-
-
-
- Unlike in previous versions, you no longer need to specify listeners in the Hibernate configuration
- file. Just putting the Envers jar on the classpath is enough - listeners will be registered
- automatically.
-
-
-
-
- And that's all - you can create, modify and delete the entities as always. If you look at the generated
- schema for your entities, or at the data persisted by Hibernate, you will notice that there are no changes.
- However, for each audited entity, a new table is introduced - entity_table_AUD,
- which stores the historical data, whenever you commit a transaction. Envers automatically creates audit
- tables if hibernate.hbm2ddl.auto option is set to create,
- create-drop or update. Otherwise, to export the complete database schema
- programmatically, use org.hibernate.envers.tools.hbm2ddl.EnversSchemaGenerator. Appropriate DDL
- statements can also be generated with the Ant task described later in this manual.
-
-
-
- Instead of annotating the whole class and auditing all properties, you can annotate
- only some persistent properties with @Audited. This will cause only
- these properties to be audited.
-
-
-
- The audit (history) of an entity can be accessed using the AuditReader interface, which
- can be obtained having an open EntityManager or Session via
- the AuditReaderFactory. See the javadocs for these classes for details on the
- functionality offered.
-
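A sketch of reading audit history, assuming an open EntityManager; the entity name and variables are illustrative:

```java
import org.hibernate.envers.AuditReader;
import org.hibernate.envers.AuditReaderFactory;

// Obtain a reader bound to the open persistence context,
// then load the state of an entity as of a given revision.
AuditReader reader = AuditReaderFactory.get(entityManager);
Customer oldCustomer = reader.find(Customer.class, customerId, revisionNumber);
```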
-
-
-
- Configuration
-
- It is possible to configure various aspects of Hibernate Envers behavior, such as table names, etc.
-
-
-
- Envers Configuration Properties
-
-
-
-
-
-
-
- Property name
- Default value
- Description
-
-
-
-
-
-
- org.hibernate.envers.audit_table_prefix
-
-
-
-
- String that will be prepended to the name of an audited entity to create the name of the
- entity that will hold audit information.
-
-
-
-
- org.hibernate.envers.audit_table_suffix
-
-
- _AUD
-
-
- String that will be appended to the name of an audited entity to create the name of the
- entity that will hold audit information. If you audit an entity with a table name Person,
- in the default setting Envers will generate a Person_AUD table to store
- historical data.
-
-
-
-
- org.hibernate.envers.revision_field_name
-
-
- REV
-
-
- Name of a field in the audit entity that will hold the revision number.
-
-
-
-
- org.hibernate.envers.revision_type_field_name
-
-
- REVTYPE
-
-
- Name of a field in the audit entity that will hold the type of the revision (currently,
- this can be: add, mod, del).
-
-
-
-
- org.hibernate.envers.revision_on_collection_change
-
-
- true
-
-
- Should a revision be generated when a not-owned relation field changes (this can be either
- a collection in a one-to-many relation, or the field using "mappedBy" attribute in a
- one-to-one relation).
-
-
-
-
- org.hibernate.envers.do_not_audit_optimistic_locking_field
-
-
- true
-
-
- When true, properties used for optimistic locking, annotated with
- @Version, will automatically not be audited (their history won't be
- stored; it normally doesn't make sense to store it).
-
-
-
-
- org.hibernate.envers.store_data_at_delete
-
-
- false
-
-
- Should the entity data be stored in the revision when the entity is deleted (instead of only
- storing the id and all other properties as null). This is not normally needed, as the data is
- present in the last-but-one revision. Sometimes, however, it is easier and more efficient to
- access it in the last revision (then the data that the entity contained before deletion is
- stored twice).
-
-
-
-
- org.hibernate.envers.default_schema
-
-
- null (same schema as table being audited)
-
-
- The default schema name that should be used for audit tables. Can be overridden using the
- @AuditTable(schema="...") annotation. If not present, the schema will
- be the same as the schema of the table being audited.
-
-
-
-
- org.hibernate.envers.default_catalog
-
-
- null (same catalog as table being audited)
-
-
- The default catalog name that should be used for audit tables. Can be overridden using the
- @AuditTable(catalog="...") annotation. If not present, the catalog will
- be the same as the catalog of the normal tables.
-
-
-
-
- org.hibernate.envers.audit_strategy
-
-
- org.hibernate.envers.strategy.DefaultAuditStrategy
-
-
- The audit strategy that should be used when persisting audit data. The default stores only
- the revision, at which an entity was modified. An alternative, the
- org.hibernate.envers.strategy.ValidityAuditStrategy stores both the
- start revision and the end revision. Together these define when an audit row was valid,
- hence the name ValidityAuditStrategy.
-
-
-
-
- org.hibernate.envers.audit_strategy_validity_end_rev_field_name
-
-
- REVEND
-
-
- The column name that will hold the end revision number in audit entities. This property is
- only valid if the validity audit strategy is used.
-
-
-
-
- org.hibernate.envers.audit_strategy_validity_store_revend_timestamp
-
-
- false
-
-
- Should the timestamp of the end revision, until which the data was valid, be stored in
- addition to the end revision itself. This is useful for purging old audit records
- out of a relational database by using table partitioning. Partitioning requires a column
- that exists within the table. This property is only evaluated if the ValidityAuditStrategy
- is used.
-
-
-
-
- org.hibernate.envers.audit_strategy_validity_revend_timestamp_field_name
-
-
- REVEND_TSTMP
-
-
- Column name of the timestamp of the end revision until which the data was valid. Only used
- if the ValidityAuditStrategy is used, and
- org.hibernate.envers.audit_strategy_validity_store_revend_timestamp
- evaluates to true
-
-
-
-
- org.hibernate.envers.use_revision_entity_with_native_id
-
-
- true
-
-
- Boolean flag that determines the strategy of revision number generation. The default
- implementation of the revision entity uses a native identifier generator. If the current database
- engine does not support identity columns, users are advised to set this property to false.
- In this case revision numbers are created by a preconfigured
- org.hibernate.id.enhanced.SequenceStyleGenerator. See:
-
- org.hibernate.envers.DefaultRevisionEntity
- org.hibernate.envers.enhanced.SequenceIdRevisionEntity
-
-
-
-
-
- org.hibernate.envers.track_entities_changed_in_revision
-
-
- false
-
-
- Should entity types that have been modified during each revision be tracked. The default
- implementation creates REVCHANGES table that stores entity names
- of modified persistent objects. Single record encapsulates the revision identifier
- (foreign key to REVINFO table) and a string value. For more
- information refer to
- and .
-
-
-
-
- org.hibernate.envers.global_with_modified_flag
-
-
- false, can be individually overridden with @Audited(withModifiedFlag=true)
-
-
- Should property modification flags be stored for all audited entities and all properties.
- When set to true, an additional boolean column is created in the audit tables for each
- property, indicating whether the given property changed in the given revision.
- When set to false, such columns can be added to selected entities or properties using the
- @Audited annotation.
- For more information refer to
- and .
-
-
-
-
- org.hibernate.envers.modified_flag_suffix
-
-
- _MOD
-
-
- The suffix for columns storing "Modified Flags".
- For example, a property called "age" will by default get a modified flag with the column name "age_MOD".
-
-
-
-
- org.hibernate.envers.embeddable_set_ordinal_field_name
-
-
- SETORDINAL
-
-
- Name of column used for storing ordinal of the change in sets of embeddable elements.
-
-
-
-
- org.hibernate.envers.cascade_delete_revision
-
-
- false
-
-
- While deleting revision entry, remove data of associated audited entities.
- Requires database support for cascade row removal.
-
-
-
-
- org.hibernate.envers.allow_identifier_reuse
-
-
- false
-
-
- Guarantees proper validity audit strategy behavior when application reuses identifiers
- of deleted entities. Exactly one row with null end date exists
- for each identifier.
-
-
-
-
-
-
-
-
- The following configuration options have been added recently and should be regarded as experimental:
-
-
- org.hibernate.envers.track_entities_changed_in_revision
-
-
- org.hibernate.envers.using_modified_flag
-
-
- org.hibernate.envers.modified_flag_suffix
-
-
-
-
-
-
-
- Additional mapping annotations
-
-
- The name of the audit table can be set on a per-entity basis, using the
- @AuditTable annotation. It may be tedious to add this
- annotation to every audited entity, so if possible, it's better to use a prefix/suffix.
-
-
-
- If you have a mapping with secondary tables, audit tables for them will be generated in
- the same way (by adding the prefix and suffix). If you wish to overwrite this behaviour,
- you can use the @SecondaryAuditTable and
- @SecondaryAuditTables annotations.
-
-
-
- If you'd like to override auditing behaviour of some fields/properties inherited from
- @Mappedsuperclass or in an embedded component, you can
- apply the @AuditOverride(s) annotation on the subtype or usage site
- of the component.
-
-
-
- If you want to audit a relation mapped with @OneToMany+@JoinColumn,
- please see for a description of the additional
- @AuditJoinTable annotation that you'll probably want to use.
-
-
-
- If you want to audit a relation, where the target entity is not audited (that is the case for example with
- dictionary-like entities, which don't change and don't have to be audited), just annotate it with
- @Audited(targetAuditMode = RelationTargetAuditMode.NOT_AUDITED). Then, while reading historic
- versions of your entity, the relation will always point to the "current" related entity. By default Envers
- throws javax.persistence.EntityNotFoundException when "current" entity does not
- exist in the database. Apply the @NotFound(action = NotFoundAction.IGNORE) annotation
- to silence the exception and assign a null value instead. Note that this solution causes implicit eager loading
- of to-one relations.
-
-
-
- If you'd like to audit properties of a superclass of an entity, which are not explicitly audited (which
- don't have the @Audited annotation on any properties or on the class), you can list the
- superclasses in the auditParents attribute of the @Audited
- annotation. Please note that auditParents feature has been deprecated. Use
- @AuditOverride(forClass = SomeEntity.class, isAudited = true/false) instead.
-
-
-
-
- Choosing an audit strategy
-
- After the basic configuration it is important to choose the audit strategy that will be used to persist
- and retrieve audit information. There is a trade-off between the performance of persisting and the
- performance of querying the audit information. Currently there are two audit strategies.
-
-
-
-
- The default audit strategy persists the audit data together with a start revision. For each row
- inserted, updated or deleted in an audited table, one or more rows are inserted in the audit
- tables, together with the start revision of its validity. Rows in the audit tables are never
- updated after insertion. Queries of audit information use subqueries to select the applicable
- rows in the audit tables. These subqueries are notoriously slow and difficult to index.
-
-
-
-
- The alternative is a validity audit strategy. This strategy stores the start-revision and the
- end-revision of audit information. For each row inserted, updated or deleted in an audited table,
- one or more rows are inserted in the audit tables, together with the start revision of its
- validity. But at the same time the end-revision field of the previous audit rows (if available)
- are set to this revision. Queries on the audit information can then use 'between start and end
- revision' instead of subqueries as used by the default audit strategy.
-
-
- The consequence of this strategy is that persisting audit information will be a bit slower,
- because of the extra updates involved, but retrieving audit information will be a lot faster.
- This can be improved by adding extra indexes.
-
-
-
-
-
-
- Revision Log
- Logging data for revisions
-
-
- When Envers starts a new revision, it creates a new revision entity which stores
- information about the revision. By default, that includes just
-
-
-
-
- revision number - An integral value (int/Integer or
- long/Long). Essentially the primary key of the revision
-
-
-
-
- revision timestamp - either a long/Long or
- java.util.Date value representing the instant at which the revision was made.
- When using a java.util.Date, instead of a long/Long for
- the revision timestamp, take care not to store it to a column data type which will lose precision.
-
-
-
-
-
- Envers handles this information as an entity. By default it uses its own internal class to act as the
- entity, mapped to the REVINFO table.
- You can, however, supply your own approach to collecting this information, which might be useful to
- capture additional details such as who made a change or the IP address from which the request came. There
- are two things you need to do to make this work.
-
-
-
-
- First, you will need to tell Envers about the entity you wish to use. Your entity must use the
- @org.hibernate.envers.RevisionEntity annotation. It must
- define the 2 attributes described above annotated with
- @org.hibernate.envers.RevisionNumber and
- @org.hibernate.envers.RevisionTimestamp, respectively. You can extend
- from org.hibernate.envers.DefaultRevisionEntity, if you wish, to inherit all
- these required behaviors.
-
-
- Simply add the custom revision entity as you do your normal entities. Envers will "find it". Note
- that it is an error for there to be multiple entities marked as
- @org.hibernate.envers.RevisionEntity
-
-
-
-
- Second, you need to tell Envers how to create instances of your revision entity, which is handled
- by the newRevision method of the
- org.hibernate.envers.RevisionListener interface.
-
-
- You tell Envers your custom org.hibernate.envers.RevisionListener
- implementation to use by specifying it on the
- @org.hibernate.envers.RevisionEntity annotation, using the
- value attribute. If your RevisionListener
- class is inaccessible from @RevisionEntity (e.g. exists in a different
- module), set the org.hibernate.envers.revision_listener property to its fully
- qualified name. The class name defined by the configuration parameter overrides the revision entity's
- value attribute.
-
-
-
-
-
-
- An alternative method to using the org.hibernate.envers.RevisionListener
- is to instead call the getCurrentRevision method of the
- org.hibernate.envers.AuditReader interface to obtain the current revision,
- and fill it with desired information. The method accepts a persist parameter indicating
- whether the revision entity should be persisted prior to returning from this method. true
- ensures that the returned entity has access to its identifier value (revision number), but the revision
- entity will be persisted regardless of whether there are any audited entities changed. false
- means that the revision number will be null, but the revision entity will be persisted
- only if some audited entities have changed.
-
-
-
-
- Example of storing username with revision
-
-
- ExampleRevEntity.java
-
-
- ExampleListener.java
-
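The two classes can be sketched as follows; this matches the Envers annotations described above, while the way the current username is obtained (SecurityContext.getCurrentUserName) is a hypothetical placeholder for your application's own mechanism:

```java
import javax.persistence.Entity;

import org.hibernate.envers.DefaultRevisionEntity;
import org.hibernate.envers.RevisionEntity;
import org.hibernate.envers.RevisionListener;

// Custom revision entity: inherits the revision number and timestamp
// from DefaultRevisionEntity and adds a username column.
@Entity
@RevisionEntity(ExampleListener.class)
public class ExampleRevEntity extends DefaultRevisionEntity {
    private String username;

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}

// Listener invoked by Envers whenever a new revision entity is created.
public class ExampleListener implements RevisionListener {
    public void newRevision(Object revisionEntity) {
        ExampleRevEntity revEntity = (ExampleRevEntity) revisionEntity;
        // Hypothetical helper - replace with your security framework's API.
        revEntity.setUsername(SecurityContext.getCurrentUserName());
    }
}
```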
-
-
-
- Tracking entity names modified during revisions
-
- By default, entity types that have been changed in each revision are not tracked. This makes it
- necessary to query all tables storing audited data in order to retrieve changes made during a
- specified revision. Envers provides a simple mechanism that creates a REVCHANGES
- table which stores the entity names of modified persistent objects. A single record encapsulates the revision
- identifier (foreign key to the REVINFO table) and a string value.
-
-
- Tracking of modified entity names can be enabled in three different ways:
-
-
-
-
- Set org.hibernate.envers.track_entities_changed_in_revision parameter to
- true. In this case
- org.hibernate.envers.DefaultTrackingModifiedEntitiesRevisionEntity will
- be implicitly used as the revision log entity.
-
-
-
-
- Create a custom revision entity that extends
- org.hibernate.envers.DefaultTrackingModifiedEntitiesRevisionEntity class.
-
-
-
-
-
-
- Mark an appropriate field of a custom revision entity with
- @org.hibernate.envers.ModifiedEntityNames annotation. The property is
- required to be of Set<String> type.
-
-
- private Set<String> modifiedEntityNames;
-
- ...
- }
-
-
-
- Users that have chosen one of the approaches listed above can retrieve all entities modified in a
- specified revision by utilizing the API described in .
-
-
- Users can also implement a custom mechanism of tracking modified entity types. In this case, they
- shall pass their own implementation of the
- org.hibernate.envers.EntityTrackingRevisionListener interface as the value
- of the @org.hibernate.envers.RevisionEntity annotation.
- The EntityTrackingRevisionListener interface exposes one method that is notified
- whenever an audited entity instance has been added, modified or removed within the current revision boundaries.
-
-
-
- Custom implementation of tracking entity classes modified during revisions
-
- CustomEntityTrackingRevisionListener.java
-
-
- CustomTrackingRevisionEntity.java
- private Set<ModifiedEntityTypeEntity> modifiedEntityTypes =
- new HashSet<ModifiedEntityTypeEntity>();
-
- public void addModifiedEntityType(String entityClassName) {
- modifiedEntityTypes.add(new ModifiedEntityTypeEntity(this, entityClassName));
- }
-
- ...
-}
-
- ModifiedEntityTypeEntity.java
-
- Set<ModifiedEntityTypeEntity> modifiedEntityTypes = revEntity.getModifiedEntityTypes();
-
-
-
-
-
-
- Tracking entity changes at property level
-
- By default the only information stored by Envers are revisions of modified entities.
- This approach lets user create audit queries based on historical values of entity's properties.
-
- Sometimes it is useful to store additional metadata for each revision, when you are interested not only in
- the resulting values but also in the type of changes. The feature described in
-
- makes it possible to tell which entities were modified in a given revision.
-
- The feature described here takes it one step further. "Modification Flags" enable Envers to track which
- properties of audited entities were modified in a given revision.
-
-
- Tracking entity changes at property level can be enabled by:
-
-
-
-
- setting org.hibernate.envers.global_with_modified_flag configuration
- property to true. This global switch will cause adding modification flags
- for all audited properties in all audited entities.
-
-
-
-
- using @Audited(withModifiedFlag=true) on a property or on an entity.
-
-
-
-
- The trade-off coming with this functionality is an increased size of
- audit tables and a very little, almost negligible, performance drop
- during audit writes. This is due to the fact that every tracked
- property has to have an accompanying boolean column in the
- schema that stores information about the property's modifications. Of
- course it is Envers' job to fill these columns accordingly - no additional work by the
- developer is required. Because of these costs, it is recommended
- to enable the feature selectively, where needed, using the
- granular configuration options described above.
-
-
- To see how "Modified Flags" can be utilized, check out the very
- simple query API that uses them: .
-
-
-
-
-
- Queries
-
-
- You can think of historic data as having two dimensions. The first - horizontal -
- is the state of the database at a given revision. Thus, you can
- query for entities as they were at revision N. The second - vertical - are the
- revisions, at which entities changed. Hence, you can query for revisions,
- in which a given entity changed.
-
-
-
- The queries in Envers are similar to Hibernate Criteria queries, so if you are familiar with them,
- using Envers queries will be much easier.
-
-
-
- The main limitation of the current queries implementation is that you cannot
- traverse relations. You can only specify constraints on the ids of the
- related entities, and only on the "owning" side of the relation. This however
- will be changed in future releases.
-
-
-
- Please note that queries on the audited data will in many cases be much slower
- than corresponding queries on "live" data, as they involve correlated subselects.
-
-
-
- In the future, queries will be improved both in terms of speed and possibilities, when using the valid-time
- audit strategy, that is when storing both start and end revisions for entities. See
- .
-
-
-
-
- Querying for entities of a class at a given revision
-
-
- The entry point for this type of query is:
-
-
-
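A sketch of the entry point, assuming an AuditReader is available (the entity name and revision variable are illustrative):

```java
import org.hibernate.envers.query.AuditQuery;

// Start a query for the state of MyEntity instances as of the given revision.
AuditQuery query = getAuditReader().createQuery()
    .forEntitiesAtRevision(MyEntity.class, revisionNumber);
```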
-
-
- You can then specify constraints, which should be met by the entities returned, by
- adding restrictions, which can be obtained using the AuditEntity
- factory class. For example, to select only entities, where the "name" property
- is equal to "John":
-
-
-
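For example, using the AuditEntity factory (property name is illustrative):

```java
import org.hibernate.envers.query.AuditEntity;

// Restrict the results to entities whose "name" property equals "John".
query.add(AuditEntity.property("name").eq("John"));
```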
-
-
- And to select only entities that are related to a given entity:
-
-
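A sketch using a relation constraint; the relation name and id variable are illustrative:

```java
import org.hibernate.envers.query.AuditEntity;

// Restrict to entities whose "address" relation points at the given entity id.
query.add(AuditEntity.relatedId("address").eq(relatedEntityId));
```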
-
-
-
- You can limit the number of results, order them, and set aggregations and projections
- (except grouping) in the usual way.
- When your query is complete, you can obtain the results by calling the
- getSingleResult() or getResultList() methods.
-
-
-
- A full query can, for example, look like this:
-
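A sketch of a complete query; the entity, property, and relation names are illustrative:

```java
import java.util.List;

import org.hibernate.envers.query.AuditEntity;

// Entities of class Person as of revision 12, related to the given address,
// ordered by surname, returning the second page of two results.
List personsAtAddress = getAuditReader().createQuery()
    .forEntitiesAtRevision(Person.class, 12)
    .addOrder(AuditEntity.property("surname").asc())
    .add(AuditEntity.relatedId("address").eq(addressId))
    .setFirstResult(4)
    .setMaxResults(2)
    .getResultList();
```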
-
-
-
-
-
-
-
- Querying for revisions, at which entities of a given class changed
-
-
- The entry point for this type of query is:
-
-
-
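A sketch of the entry point; the two boolean parameters (selectEntitiesOnly and selectDeletedEntities) are explained below, and the entity name is illustrative:

```java
import org.hibernate.envers.query.AuditQuery;

// Query the revisions at which MyEntity instances changed.
AuditQuery query = getAuditReader().createQuery()
    .forRevisionsOfEntity(MyEntity.class, false, true);
```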
-
-
- You can add constraints to this query in the same way as to the previous one.
- There are some additional possibilities:
-
-
-
-
-
- using AuditEntity.revisionNumber() you can specify constraints, projections
- and order on the revision number, in which the audited entity was modified
-
-
-
-
- similarly, using AuditEntity.revisionProperty(propertyName) you can specify constraints,
- projections and order on a property of the revision entity, corresponding to the revision
- in which the audited entity was modified
-
-
-
-
- AuditEntity.revisionType() gives you access as above to the type of
- the revision (ADD, MOD, DEL).
-
-
-
-
-
- Using these methods,
- you can order the query results by revision number, set a projection, or constrain
- the revision number to be greater or less than a specified value, and so on. For example, the
- following query will select the smallest revision number at which an entity of class
- MyEntity with id entityId has changed, after revision
- number 42:
-
-
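A sketch of that query; the entity name and id variable are illustrative:

```java
import org.hibernate.envers.query.AuditEntity;

// Smallest revision number greater than 42 at which the entity changed.
Number revision = (Number) getAuditReader().createQuery()
    .forRevisionsOfEntity(MyEntity.class, false, true)
    .addProjection(AuditEntity.revisionNumber().min())
    .add(AuditEntity.id().eq(entityId))
    .add(AuditEntity.revisionNumber().gt(42))
    .getSingleResult();
```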
-
-
-
- The second additional feature you can use in queries for revisions is the ability
- to maximize/minimize a property. For example, if you want to select the
- revision at which the value of the actualDate for a given entity
- was larger than a given value, but as small as possible:
-
-
-
-
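A sketch of such a query (givenDate and givenEntityId are placeholders):

```java
// Revision with the smallest actualDate that is still >= givenDate,
// for the entity with the given id.
AuditQuery query = getAuditReader().createQuery()
    .forRevisionsOfEntity(MyEntity.class, false, true)
    .add(AuditEntity.property("actualDate").minimize()
        .add(AuditEntity.property("actualDate").ge(givenDate))
        .add(AuditEntity.id().eq(givenEntityId)));
```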
-
- The minimize() and maximize() methods return a criterion,
- to which you can add constraints that must be met by the entities with the
- maximized/minimized properties. AggregatedAuditExpression#computeAggregationInInstanceContext()
- makes it possible to compute the aggregated expression in the context of each entity instance
- separately. This turns out to be useful when querying for the latest revisions of all entities of a particular type.
-
-
-
- You probably also noticed that there are two boolean parameters, passed when
- creating the query. The first one, selectEntitiesOnly, is only valid when
- you don't set an explicit projection. If true, the result of the query will be
- a list of entities (which changed at revisions satisfying the specified
- constraints).
-
-
-
- If false, the result will be a list of three element arrays. The
- first element will be the changed entity instance. The second will be an entity
- containing revision data (if no custom entity is used, this will be an instance
- of DefaultRevisionEntity). The third will be the type of the
- revision (one of the values of the RevisionType enumeration:
- ADD, MOD, DEL).
-
-
-
- The second parameter, selectDeletedEntities, specifies whether revisions
- in which the entity was deleted should be included in the results. If so, such entities
- will have the revision type DEL and all fields, except the id,
- set to null.
-
-
-
-
-
-
- Querying for revisions of an entity that modified a given property
-
-
- For the two types of queries described above it's possible to use
- special audit criteria called
- hasChanged()
- and
- hasNotChanged()
- that make use of the functionality
- described in .
- They're best suited for vertical queries,
- although the existing API doesn't prevent their use in horizontal
- ones.
-
- Let's have a look at the following examples:
-
-
-
-
-
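The missing listing presumably resembled the following (id is a placeholder for the entity's identifier):

```java
// All revisions of the entity with the given id in which actualDate changed.
AuditQuery query = getAuditReader().createQuery()
    .forRevisionsOfEntity(MyEntity.class, false, true)
    .add(AuditEntity.id().eq(id))
    .add(AuditEntity.property("actualDate").hasChanged());
```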
-
- This query will return all revisions of MyEntity with the given id,
- where the
- actualDate
- property has been changed.
- Using this query we won't get any other revisions, in which
- actualDate
- wasn't touched. Of course nothing prevents the user from combining the
- hasChanged condition with additional criteria; the add method
- can be used here in the normal way.
-
-
-
-
-
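The second example was presumably a horizontal query along these lines (prop1 and prop2 stand for properties of MyEntity):

```java
// Horizontal slice at revisionNumber, limited to revisions that
// modified prop1 but left prop2 untouched.
AuditQuery query = getAuditReader().createQuery()
    .forEntitiesAtRevision(MyEntity.class, revisionNumber)
    .add(AuditEntity.property("prop1").hasChanged())
    .add(AuditEntity.property("prop2").hasNotChanged());
```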
-
- This query will return a horizontal slice for MyEntity at the time
- revisionNumber was generated. It will be limited to revisions
- that modified
- prop1
- but not prop2.
- Note that the result set will usually also contain revisions
- with numbers lower than the revisionNumber, so we cannot read
- this query as "Give me all MyEntities changed in revisionNumber
- with
- prop1
- modified and
- prop2
- untouched". To get such a result we have to use the
- forEntitiesModifiedAtRevision query:
-
-
-
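Such a query is a sketch like the following (same placeholder names as above):

```java
// Only entities actually changed at exactly revisionNumber,
// with prop1 modified and prop2 untouched.
AuditQuery query = getAuditReader().createQuery()
    .forEntitiesModifiedAtRevision(MyEntity.class, revisionNumber)
    .add(AuditEntity.property("prop1").hasChanged())
    .add(AuditEntity.property("prop2").hasNotChanged());
```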
-
-
-
-
-
-
- Querying for entities modified in a given revision
-
- The basic query allows retrieving entity names and corresponding Java classes changed in a specified revision:
-
- > modifiedEntityTypes = getAuditReader()
- .getCrossTypeRevisionChangesReader().findEntityTypes(revisionNumber);]]>
-
- Other queries (also accessible from org.hibernate.envers.CrossTypeRevisionChangesReader):
-
-
-
-
- List]]> findEntities(Number)
- - Returns snapshots of all audited entities changed (added, updated and removed) in a given revision.
- Executes n+1 SQL queries, where n is the number of different entity
- classes modified within the specified revision.
-
-
-
-
- List]]> findEntities(Number, RevisionType)
- - Returns snapshots of all audited entities changed (added, updated or removed) in a given revision
- filtered by modification type. Executes n+1 SQL queries, where n
- is the number of different entity classes modified within the specified revision.
-
-
-
-
- >]]> findEntitiesGroupByRevisionType(Number)
- - Returns a map containing lists of entity snapshots grouped by modification operation (e.g.
- addition, update and removal). Executes 3n+1 SQL queries, where n
- is the number of different entity classes modified within the specified revision.
-
-
-
-
- Note that the methods described above can be legally used only when the default mechanism of
- tracking changed entity names is enabled (see ).
-
-
-
-
-
-
- Conditional auditing
-
- Envers persists audit data in reaction to various Hibernate events (e.g. post update, post insert, and
- so on), using a series of event listeners from the org.hibernate.envers.event.spi
- package. By default, if the Envers jar is on the classpath, the event listeners are auto-registered with
- Hibernate.
-
-
- Conditional auditing can be implemented by overriding some of the Envers event listeners.
- To use customized Envers event listeners, the following steps are needed:
-
-
-
- Turn off automatic Envers event listeners registration by setting the
- hibernate.listeners.envers.autoRegister Hibernate property to
- false.
-
-
-
-
- Create subclasses for appropriate event listeners. For example, if you want to
- conditionally audit entity insertions, extend the
- org.hibernate.envers.event.spi.EnversPostInsertEventListenerImpl
- class. Place the conditional-auditing logic in the subclasses, and call the super method only if
- auditing should be performed.
-
-
-
-
- Create your own implementation of org.hibernate.integrator.spi.Integrator,
- similar to org.hibernate.envers.boot.internal.EnversIntegrator. Use your event
- listener classes instead of the default ones.
-
-
-
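The steps above can be sketched as follows. This is a hypothetical subclass; the exact constructor signature of EnversPostInsertEventListenerImpl varies between Envers versions, and shouldAudit() stands for your application-specific condition:

```java
// Audits an insert only when the application-specific condition holds.
public class ConditionalPostInsertListener extends EnversPostInsertEventListenerImpl {

    public ConditionalPostInsertListener(EnversService enversService) {
        super(enversService); // constructor argument depends on the Envers version
    }

    @Override
    public void onPostInsert(PostInsertEvent event) {
        if (shouldAudit(event.getEntity())) {
            super.onPostInsert(event); // perform the actual auditing
        }
    }

    private boolean shouldAudit(Object entity) {
        // hypothetical application-specific condition
        return true;
    }
}
```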
-
- For the integrator to be automatically used when Hibernate starts up, you will need to add a
- META-INF/services/org.hibernate.integrator.spi.Integrator file to your jar.
- The file should contain the fully qualified name of the class implementing the interface.
-
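Following the standard Java ServiceLoader convention, the file contains a single line with the fully qualified class name; the name below is a hypothetical example:

```
org.example.auditing.ConditionalEnversIntegrator
```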
-
-
-
-
-
-
- Understanding the Envers Schema
-
-
- For each audited entity (that is, for each entity containing at least one audited field), an audit table is
- created. By default, the audit table's name is created by adding a "_AUD" suffix to the original table name,
- but this can be overridden by specifying a different suffix/prefix in the configuration or per-entity using
- the @org.hibernate.envers.AuditTable annotation.
-
-
-
- Audit table columns
-
-
- id of the original entity (this can be more than one column in the case of composite primary keys)
-
-
-
-
- revision number - an integer. Matches the revision number in the revision entity table.
-
-
-
-
- revision type - a small integer
-
-
-
-
- audited fields from the original entity
-
-
-
-
-
- The primary key of the audit table is the combination of the original id of the entity and the revision
- number - there can be at most one historic entry for a given entity instance at a given revision.
-
-
-
- The current entity data is stored both in the original table and in the audit table. This duplicates
- data; however, it makes the query system much more powerful and, as storage is cheap, it
- shouldn't be a major drawback for users. A row in the audit table with entity id ID, revision N and
- data D means: the entity with id ID has data D from revision N onwards. Hence, to find an entity at
- revision M, we have to search for the row in the audit table with a revision number smaller than or
- equal to M, but as large as possible. If no such row is found, or a row with a "deleted" marker is
- found, the entity didn't exist at that revision.
-
-
-
- The "revision type" field can currently have three values: 0, 1 and 2, which mean ADD, MOD and DEL,
- respectively. A row with a revision of type DEL will contain only the id of the entity and no data (all
- fields NULL), as it serves solely as a marker saying "this entity was deleted at that revision".
-
-
-
- Additionally, there is a revision entity table which contains information about the
- global revision. By default the generated table is named REVINFO and
- contains just two columns: ID and TIMESTAMP.
- A row is inserted into this table on each new revision, that is, on each commit of a transaction which
- changes audited data. The name of this table and of its columns can be configured, and additional
- columns can be added, as discussed in .
-
-
-
- While global revisions are a good way to provide correct auditing of relations, some people have pointed out
- that this may be a bottleneck in systems where data is modified very often. One viable solution is to
- introduce an option to have an entity "locally revisioned", that is, revisions would be created for it
- independently. This wouldn't enable correct versioning of relations, but it also wouldn't require the
- REVINFO table. Another possibility is to introduce a notion of
- "revisioning groups": groups of entities which share revision numbering. Each such group would have to
- consist of one or more strongly connected components of the graph induced by relations between entities.
- Your opinions on the subject are very welcome on the forum! :)
-
-
-
-
-
- Generating schema with Ant
-
-
- If you'd like to generate the database schema file with the Hibernate Tools Ant task,
- you'll probably notice that the generated file doesn't contain definitions of the audit
- tables. To also generate the audit tables, you simply need to use
- org.hibernate.tool.ant.EnversHibernateToolTask instead of the usual
- org.hibernate.tool.ant.HibernateToolTask. The former class extends
- the latter and only adds generation of the version entities, so you can use the task
- just as you used to.
-
-
-
- For example:
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-]]>
-
-
- This will generate the following schema:
-
-
-
-
-
-
-
- Mapping exceptions
-
-
-
- What isn't and will not be supported
-
-
- Bags, as they can contain non-unique elements.
- The reason is that persisting, for example, a bag of Strings violates a principle
- of relational databases: that each table is a set of tuples. In the case of bags,
- however (which require a join table), if there is a duplicate element, the two
- tuples corresponding to the elements will be the same. Hibernate allows this,
- but Envers (or more precisely: the database connector) will throw an exception
- when trying to persist two identical elements, because of a unique constraint violation.
-
-
-
- There are at least two ways out if you need bag semantics:
-
-
-
-
-
- use an indexed collection, with the @IndexColumn annotation, or
-
-
-
-
- provide a unique id for your elements with the @CollectionId annotation.
-
-
-
-
-
-
-
-
- What isn't and will be supported
-
-
-
-
- Bag-style collections whose identifier column has been defined using the
- @CollectionId annotation (JIRA ticket HHH-3950).
-
-
-
-
-
-
-
-
- @OneToMany+@JoinColumn
-
-
- When a collection is mapped using these two annotations, Hibernate doesn't
- generate a join table. Envers, however, has to do this, so that when you read the
- revisions in which the related entity has changed, you don't get false results.
-
-
- To be able to name the additional join table, there is a special annotation:
- @AuditJoinTable, which has similar semantics to JPA's
- @JoinTable.
-
-
-
- One special case are relations mapped with @OneToMany+@JoinColumn on
- the one side, and @ManyToOne+@JoinColumn(insertable=false, updatable=false)
- on the many side. Such relations are in fact bidirectional, but the owning side is the collection.
-
-
- To properly audit such relations with Envers, you can use the @AuditMappedBy annotation.
- It enables you to specify the reverse property (using the mappedBy element). In case
- of indexed collections, the index column must also be mapped in the referenced entity (using
- @Column(insertable=false, updatable=false)) and specified using
- positionMappedBy. This annotation affects only the way
- Envers works. Please note that the annotation is experimental and may change in the future.
-
-
-
-
-
-
- Advanced: Audit table partitioning
-
-
-
- Benefits of audit table partitioning
-
-
- Because audit tables tend to grow indefinitely, they can quickly become very large. When the audit tables have grown
- to a certain limit (which varies per RDBMS and/or operating system), it makes sense to start using table partitioning.
- SQL table partitioning offers a lot of advantages, including, but certainly not limited to:
-
-
-
- Improved query performance by selectively moving rows to various partitions (or even purging old rows)
-
-
-
-
- Faster data loads, index creation, etc.
-
-
-
-
-
-
-
-
-
- Suitable columns for audit table partitioning
-
- Generally, SQL tables must be partitioned on a column that exists within the table. As a rule it makes sense to use
- either the end revision or the end revision timestamp column for
- partitioning of audit tables.
-
-
- End revision information is not available for the default AuditStrategy.
-
-
-
- Therefore the following Envers configuration options are required:
-
-
- org.hibernate.envers.audit_strategy =
- org.hibernate.envers.strategy.ValidityAuditStrategy
-
-
- org.hibernate.envers.audit_strategy_validity_store_revend_timestamp =
- true
-
-
-
- Optionally, you can also override the default values using following properties:
-
-
- org.hibernate.envers.audit_strategy_validity_end_rev_field_name
-
-
- org.hibernate.envers.audit_strategy_validity_revend_timestamp_field_name
-
-
-
- For more information, see .
-
-
-
-
-
- The reason why the end revision information should be used for audit table partitioning is based on the assumption that
- audit tables should be partitioned on an 'increasing level of interestingness', like so:
-
-
-
-
-
-
- A couple of partitions with audit data that is not very (or no longer) interesting.
- This can be stored on slow media, and perhaps even be purged eventually.
-
-
-
-
- Some partitions for audit data that is potentially interesting.
-
-
-
-
- One partition for audit data that is most likely to be interesting.
- This should be stored on the fastest media, both for reading and writing.
-
-
-
-
-
-
-
-
-
-
- Audit table partitioning example
-
- In order to determine a suitable column for the 'increasing level of interestingness',
- consider a simplified example of a salary registration for an unnamed agency.
-
-
-
- Currently, the salary table contains the following rows for a certain person X:
-
-
-
-
-
- The salary for the current fiscal year (2010) is unknown. The agency requires that all changes in registered
- salaries for a fiscal year are recorded (i.e. an audit trail). The rationale behind this is that decisions
- made at a certain date are based on the registered salary at that time. And at any time it must be possible
- to reproduce the reason why a certain decision was made at a certain date.
-
-
-
- The following audit information is available, sorted in order of occurrence:
-
-
-
-
-
-
- Determining a suitable partitioning column
-
- To partition this data, the 'level of interestingness' must be defined.
- Consider the following:
-
-
-
- For fiscal year 2006 there is only one revision. It has the oldest revision timestamp
- of all audit rows, but should still be regarded as interesting because it is the latest modification
- for this fiscal year in the salary table; its end revision timestamp is null.
-
-
- Also note that it would be very unfortunate if in 2011 there were an update of the salary for fiscal
- year 2006 (which is possible until at least 10 years after the fiscal year) and the audit
- information had been moved to a slow disk (based on the age of the
- revision timestamp). Remember that in this case Envers will have to update
- the end revision timestamp of the most recent audit row.
-
-
-
-
- There are two revisions in the salary of fiscal year 2007 which both have nearly the same
- revision timestamp and a different end revision timestamp.
- At first sight it is evident that the first revision was a mistake and probably uninteresting.
- The only interesting revision for 2007 is the one with end revision timestamp null.
-
-
-
-
- Based on the above, it is evident that only the end revision timestamp is suitable for
- audit table partitioning. The revision timestamp is not suitable.
-
-
-
-
-
-
- Determining a suitable partitioning scheme
-
- A possible partitioning scheme for the salary table would be as follows:
-
-
-
- end revision timestamp year = 2008
-
-
- This partition contains audit data that is not very (or no longer) interesting.
-
-
-
-
- end revision timestamp year = 2009
-
-
- This partition contains audit data that is potentially interesting.
-
-
-
-
- end revision timestamp year >= 2010 or null
-
-
- This partition contains the most interesting audit data.
-
-
-
-
-
-
- This partitioning scheme also covers the potential problem of the update of the
- end revision timestamp, which occurs if a row in the audited table is modified.
- Even though Envers will update the end revision timestamp of the audit row to
- the system date at the instant of modification, the audit row will remain in the same partition
- (the 'extension bucket').
-
-
-
- And sometime in 2011, the last partition (or 'extension bucket') is split into two new partitions:
-
-
-
- end revision timestamp year = 2010
-
-
- This partition contains audit data that is potentially interesting (in 2011).
-
-
-
-
- end revision timestamp year >= 2011 or null
-
-
- This partition contains the most interesting audit data and is the new 'extension bucket'.
-
-
-
-
-
-
-
-
-
-
-
- Envers links
-
-
-
-
- Hibernate main page
-
-
-
-
- Forum
-
-
-
-
- JIRA issue tracker
- (when adding issues concerning Envers, be sure to select the "envers" component!)
-
-
-
-
- IRC channel
-
-
-
-
- Envers Blog
-
-
-
-
- FAQ
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/JMX.xml b/documentation/src/main/docbook/integration/en-US/JMX.xml
deleted file mode 100644
index 4f4f7284a1..0000000000
--- a/documentation/src/main/docbook/integration/en-US/JMX.xml
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
-
-
- JMX
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Locking.xml b/documentation/src/main/docbook/integration/en-US/Locking.xml
deleted file mode 100644
index b8bcfa7e66..0000000000
--- a/documentation/src/main/docbook/integration/en-US/Locking.xml
+++ /dev/null
@@ -1,332 +0,0 @@
-
-
-
-
- Locking
-
- Locking refers to actions taken to prevent data in a relational database from changing between the time it is read
- and the time that it is used.
-
-
- Your locking strategy can be either optimistic or pessimistic.
-
-
- Locking strategies
-
- Optimistic
-
-
- Optimistic locking assumes that multiple transactions can complete without affecting each other, and that
- therefore transactions can proceed without locking the data resources that they affect. Before committing,
- each transaction verifies that no other transaction has modified its data. If the check reveals conflicting
- modifications, the committing transaction rolls back.
-
-
-
-
- Pessimistic
-
-
- Pessimistic locking assumes that concurrent transactions will conflict with each other, and requires resources
- to be locked after they are read and only unlocked after the application has finished using the data.
-
-
-
-
-
- Hibernate provides mechanisms for implementing both types of locking in your applications.
-
-
- Optimistic
-
- When your application uses long transactions or conversations that span several database transactions, you can
- store versioning data, so that if the same entity is updated by two conversations, the last to commit changes is
- informed of the conflict, and does not override the other conversation's work. This approach guarantees some
- isolation, but scales well and works particularly well in Read-Often Write-Sometimes
- situations.
-
-
- Hibernate provides two different mechanisms for storing versioning information, a dedicated version number or a
- timestamp.
-
-
-
- Version number
-
-
-
-
-
-
-
- Timestamp
-
-
-
-
-
-
-
-
-
- A version or timestamp property can never be null for a detached instance. Hibernate detects any instance with a
- null version or timestamp as transient, regardless of other unsaved-value strategies that you specify. Declaring
- a nullable version or timestamp property is an easy way to avoid problems with transitive reattachment in
- Hibernate, especially useful if you use assigned identifiers or composite keys.
-
-
-
-
- Dedicated version number
-
- The version number mechanism for optimistic locking is provided through a @Version
- annotation.
-
-
- The @Version annotation
-
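The stripped listing presumably resembled the following sketch (Flight is an illustrative entity name; the OPTLOCK column name matches the explanation below):

```java
@Entity
public class Flight implements Serializable {

    @Id
    private Long id;

    // Hibernate increments this value on every update and uses it
    // to detect conflicting concurrent modifications.
    @Version
    @Column(name = "OPTLOCK")
    private Integer version;

    // getters and setters omitted
}
```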
-
- Here, the version property is mapped to the OPTLOCK column, and the entity manager uses it
- to detect conflicting updates, and prevent the loss of updates that would be overwritten by a
- last-commit-wins strategy.
-
-
-
- The version column can be of any type, as long as you define and implement the appropriate
- UserVersionType.
-
-
- Your application is forbidden from altering the version number set by Hibernate. To force the
- version number to be incremented, use
- LockModeType.OPTIMISTIC_FORCE_INCREMENT or
- LockModeType.PESSIMISTIC_FORCE_INCREMENT, as described in the Hibernate Entity Manager reference
- documentation.
-
-
- Database-generated version numbers
-
- If the version number is generated by the database, such as a trigger, use the annotation
- @org.hibernate.annotations.Generated(GenerationTime.ALWAYS).
-
-
-
- Declaring a version property in hbm.xml
-
-
-
-
-
- column
- The name of the column holding the version number. Optional, defaults to the property
- name.
-
-
- name
- The name of a property of the persistent class.
-
-
- type
- The type of the version number. Optional, defaults to
- integer.
-
-
- access
- Hibernate's strategy for accessing the property value. Optional, defaults to
- property.
-
-
- unsaved-value
- Indicates that an instance is newly instantiated and thus unsaved. This distinguishes it
- from detached instances that were saved or loaded in a previous session. The default value,
- undefined, indicates that the identifier property value should be
- used. Optional.
-
-
- generated
- Indicates that the version property value is generated by the database. Optional, defaults
- to never.
-
-
- insert
- Whether or not to include the version column in SQL insert
- statements. Defaults to true, but you can set it to false if the
- database column is defined with a default value of 0.
-
-
-
-
-
-
-
-
- Timestamp
-
- Timestamps are a less reliable way of implementing optimistic locking than version numbers, but can be used by applications
- for other purposes as well. Timestamping is automatically used if you use the @Version annotation on a
- Date or Calendar property.
-
-
- Using timestamps for optimistic locking
-
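A sketch of what the stripped listing likely showed (Flight is an illustrative entity name):

```java
@Entity
public class Flight implements Serializable {

    @Id
    private Long id;

    // The timestamp is refreshed on every modification and serves the
    // same conflict-detection purpose as a numeric version.
    @Version
    private Date lastUpdate;

    // getters and setters omitted
}
```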
-
-
- Hibernate can retrieve the timestamp value from the database or the JVM, by reading the value you specify for
- the @org.hibernate.annotations.Source annotation. The value can be either
- org.hibernate.annotations.SourceType.DB or
- org.hibernate.annotations.SourceType.VM. The default behavior is to use the database, and is
- also used if you don't specify the annotation at all.
-
-
- The timestamp can also be generated by the database instead of Hibernate, if you use the
- @org.hibernate.annotations.Generated(GenerationTime.ALWAYS) annotation.
-
-
- The timestamp element in hbm.xml
-
-
-
-
-
- column
- The name of the column which holds the timestamp. Optional, defaults to the property
- name.
-
-
- name
- The name of a JavaBeans style property of Java type Date or Timestamp of the persistent
- class.
-
-
- access
- The strategy Hibernate uses to access the property value. Optional, defaults to
- property.
-
-
- unsaved-valueA version property which indicates that an instance is newly
- instantiated and unsaved. This distinguishes it from detached instances that were saved or loaded in a
- previous session. The default value of undefined indicates that Hibernate uses the
- identifier property value.
-
-
- source
- Whether Hibernate retrieves the timestamp from the database or the current
- JVM. Database-based timestamps incur an overhead because Hibernate needs to query the database each time
- to determine the incremental next value. However, database-derived timestamps are safer to use in a
- clustered environment. Not all database dialects are known to support the retrieval of the database's
- current timestamp. Others may also be unsafe for locking, because of lack of precision.
-
-
- generated
- Whether the timestamp property value is generated by the database. Optional, defaults to
- never.
-
-
-
-
-
-
-
-
-
-
- Pessimistic
-
- Typically, you only need to specify an isolation level for the JDBC connections and let the database handle
- locking issues. If you do need to obtain exclusive pessimistic locks or re-obtain locks at the start of a new
- transaction, Hibernate gives you the tools you need.
-
-
-
- Hibernate always uses the locking mechanism of the database, and never locks objects in memory.
-
-
-
- The LockMode class
-
- The LockMode class defines the different lock levels that Hibernate can acquire.
-
-
-
-
-
- LockMode.WRITE
- acquired automatically when Hibernate updates or inserts a row.
-
-
- LockMode.UPGRADE
- acquired upon explicit user request using SELECT ... FOR UPDATE on databases
- which support that syntax.
-
-
- LockMode.UPGRADE_NOWAIT
- acquired upon explicit user request using a SELECT ... FOR UPDATE NOWAIT in
- Oracle.
-
-
- LockMode.UPGRADE_SKIPLOCKED
- acquired upon explicit user request using a SELECT ... FOR UPDATE SKIP LOCKED in
- Oracle, or SELECT ... with (rowlock,updlock,readpast) in SQL Server.
-
-
- LockMode.READ
- acquired automatically when Hibernate reads data under Repeatable Read or
- Serializable isolation level. It can be re-acquired by explicit user
- request.
-
-
- LockMode.NONE
- The absence of a lock. All objects switch to this lock mode at the end of a
- Transaction. Objects associated with the session via a call to update() or
- saveOrUpdate() also start out in this lock mode.
-
-
-
-
-
- The explicit user request mentioned above occurs as a consequence of any of the following actions:
-
-
-
-
- A call to Session.load(), specifying a LockMode.
-
-
-
-
- A call to Session.lock().
-
-
-
-
- A call to Query.setLockMode().
-
-
-
-
- If you call Session.load() with option ,
- or , and the requested object is not already
- loaded by the session, the object is loaded using SELECT ... FOR UPDATE. If you call
- load() for an object that is already loaded with a less restrictive lock than the one
- you request, Hibernate calls lock() for that object.
-
-
- Session.lock() performs a version number check if the specified lock mode is
- READ, UPGRADE, UPGRADE_NOWAIT or
- UPGRADE_SKIPLOCKED. In the case of UPGRADE,
- UPGRADE_NOWAIT or UPGRADE_SKIPLOCKED, SELECT ... FOR UPDATE
- syntax is used.
-
-
- If the requested lock mode is not supported by the database, Hibernate uses an appropriate alternate mode
- instead of throwing an exception. This ensures that applications are portable.
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Mapping_Association.xml b/documentation/src/main/docbook/integration/en-US/Mapping_Association.xml
deleted file mode 100644
index 089843d1e5..0000000000
--- a/documentation/src/main/docbook/integration/en-US/Mapping_Association.xml
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
-
-
- Mapping associations
-
- The most basic form of mapping in Hibernate is mapping a persistent entity class to a database table.
- You can expand on this concept by mapping associated classes together.
- shows a Person class with a
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Mapping_Entities.xml b/documentation/src/main/docbook/integration/en-US/Mapping_Entities.xml
deleted file mode 100644
index 66f7435eb6..0000000000
--- a/documentation/src/main/docbook/integration/en-US/Mapping_Entities.xml
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
-
-
- Mapping entities
-
- Hierarchies
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/appendices/legacy_criteria/Legacy_Criteria.xml b/documentation/src/main/docbook/integration/en-US/appendices/legacy_criteria/Legacy_Criteria.xml
deleted file mode 100644
index 2001abea70..0000000000
--- a/documentation/src/main/docbook/integration/en-US/appendices/legacy_criteria/Legacy_Criteria.xml
+++ /dev/null
@@ -1,549 +0,0 @@
-
-
-
-
-
- Legacy Hibernate Criteria Queries
-
-
-
-
- This appendix covers the legacy Hibernate org.hibernate.Criteria API, which
- should be considered deprecated. New development should focus on the JPA
- javax.persistence.criteria.CriteriaQuery API. Eventually,
- Hibernate-specific criteria features will be ported as extensions to the JPA
- javax.persistence.criteria.CriteriaQuery. For details on the JPA APIs, see
- .
-
-
- This information is copied as-is from the older Hibernate documentation.
-
-
-
-
- Hibernate features an intuitive, extensible criteria query API.
-
-
-
- Creating a Criteria instance
-
-
- The interface org.hibernate.Criteria represents a query against
- a particular persistent class. The Session is a factory for
- Criteria instances.
-
-
-
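The stripped listing here was presumably the classic introductory example (Cat is the example entity used throughout this appendix):

```java
Criteria crit = sess.createCriteria(Cat.class);
crit.setMaxResults(50);
List cats = crit.list();
```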
-
-
-
-
- Narrowing the result set
-
-
- An individual query criterion is an instance of the interface
- org.hibernate.criterion.Criterion. The class
- org.hibernate.criterion.Restrictions defines
- factory methods for obtaining certain built-in
- Criterion types.
-
-
-
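A sketch of what the stripped listing likely showed (minWeight and maxWeight are placeholder values):

```java
List cats = sess.createCriteria(Cat.class)
    .add(Restrictions.like("name", "Fritz%"))
    .add(Restrictions.between("weight", minWeight, maxWeight))
    .list();
```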
-
-
- Restrictions can be grouped logically.
-
-
-
-
-
-
-
- There is a range of built-in criterion types (Restrictions
- subclasses). One of the most useful allows you to specify SQL directly.
-
-
-
-
-
- The {alias} placeholder will be replaced by the row alias
- of the queried entity.
-
-
-
- You can also obtain a criterion from a
- Property instance. You can create a Property
- by calling Property.forName():
-
-
-
-
-
-
-
- Ordering the results
-
-
- You can order the results using org.hibernate.criterion.Order.
-
-
-
-
-
-
-
-
-
- Associations
-
-
- By navigating
- associations using createCriteria() you can specify constraints upon related entities:
-
-
-
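The stripped listing was presumably along these lines:

```java
List cats = sess.createCriteria(Cat.class)
    .add(Restrictions.like("name", "F%"))
    .createCriteria("kittens")              // navigates to the kittens collection
        .add(Restrictions.like("name", "F%"))
    .list();
```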
-
-
- The second createCriteria() returns a new
- instance of Criteria that refers to the elements of
- the kittens collection.
-
-
-
- There is also an alternate form that is useful in certain circumstances:
-
-
-
-
-
- (createAlias() does not create a new instance of
- Criteria.)
-
-
-
- The kittens collections held by the Cat instances
- returned by the previous two queries are not pre-filtered
- by the criteria. If you want to retrieve just the kittens that match the
- criteria, you must use a ResultTransformer.
-
-
-
-
-
- Additionally, you may manipulate the result set using a left outer join:
-
-
-
-
- This will return all of the Cats with a mate whose name starts with "good"
- ordered by their mate's age, and all cats who do not have a mate.
- This is useful when there is a need to order or limit in the database
- prior to returning complex/large result sets, and removes many instances where
- multiple queries would have to be performed and the results unioned
- by Java in memory.
-
-
- Without this feature, first all of the cats without a mate would need to be loaded in one query.
-
-
- A second query would need to retrieve the cats with mates whose names start with "good", sorted by the mates' age.
-
-
- Third, the lists would need to be joined manually in memory.
-
-
-
-
- Dynamic association fetching
-
-
- You can specify association fetching semantics at runtime using
- setFetchMode().
-
-
-
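The stripped listing presumably resembled:

```java
List cats = sess.createCriteria(Cat.class)
    .add(Restrictions.like("name", "Fritz%"))
    .setFetchMode("mate", FetchMode.JOIN)     // fetch mate by outer join
    .setFetchMode("kittens", FetchMode.JOIN)  // fetch kittens by outer join
    .list();
```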
-
-
- This query will fetch both mate and kittens
- by outer join. See for more information.
-
-
-
-
-
- Components
-
-
- To add a restriction against a property of an embedded component, the component property
- name should be prepended to the property name when creating the Restriction.
- The criteria object should be created on the owning entity, and cannot be created on the component
- itself. For example, suppose the Cat has a component property fullName
- with sub-properties firstName and lastName:
-
-
-
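A sketch of the component restriction described above (the fullName component and its sub-properties follow the example in the text):

```java
// The component property name is prepended: "fullName.lastName".
List cats = session.createCriteria(Cat.class)
    .add(Restrictions.eq("fullName.lastName", "Smith"))
    .list();
```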
-
-
-
- Note: this does not apply when querying collections of components; for that, see below.
-
-
-
-
-
-
- Collections
-
- When using criteria against collections, there are two distinct cases. One is if
- the collection contains entities (eg. <one-to-many/>
- or <many-to-many/>) or components
- (<composite-element/> ),
- and the second is if the collection contains scalar values
- (<element/>).
- In the first case, the syntax is as given above in the section
- where we restrict the kittens
- collection. Essentially we create a Criteria object against the collection
- property and restrict the entity or component properties using that instance.
-
-
- For querying a collection of basic values, we still create the Criteria
- object against the collection, but to reference the value, we use the special property
- "elements". For an indexed collection, we can also reference the index property using
- the special property "indices".
-
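A sketch of a restriction on a collection of scalar values, assuming a hypothetical nickNames collection of strings on Cat:

```java
// "elements" refers to the scalar values held by the collection.
List cats = session.createCriteria(Cat.class)
    .createCriteria("nickNames")
        .add( Restrictions.eq("elements", "BadBoy") )
    .list();
```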
-
-
-
-
-
- Example queries
-
-
- The class org.hibernate.criterion.Example allows
- you to construct a query criterion from a given instance.
-
-
-
-
-
- Version properties, identifiers and associations are ignored. By default,
- null valued properties are excluded.
-
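A sketch of a query by example (property values are illustrative):

```java
Cat cat = new Cat();
cat.setSex('F');
cat.setColor(Color.BLACK);

// Build a criterion from the populated instance.
List results = session.createCriteria(Cat.class)
    .add( Example.create(cat) )
    .list();
```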
-
-
- You can adjust how the Example is applied.
-
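For instance, the Example can be tuned as follows:

```java
Example example = Example.create(cat)
    .excludeZeroes()           // exclude zero-valued properties
    .excludeProperty("color")  // exclude the property named "color"
    .ignoreCase()              // perform case-insensitive string comparisons
    .enableLike();             // use like for string comparisons
List results = session.createCriteria(Cat.class)
    .add(example)
    .list();
```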
-
-
-
-
- You can even use examples to place criteria upon associated objects.
-
-
-
-
-
-
-
- Projections, aggregation and grouping
-
- The class org.hibernate.criterion.Projections is a
- factory for Projection instances. You can apply a
- projection to a query by calling setProjection().
-
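A sketch of applying a list of projections:

```java
List results = session.createCriteria(Cat.class)
    .setProjection( Projections.projectionList()
        .add( Projections.rowCount() )
        .add( Projections.avg("weight") )
        .add( Projections.max("weight") )
        .add( Projections.groupProperty("color") )  // a grouping projection
    )
    .list();
```

The groupProperty projection above is a grouping projection, so the generated SQL carries a group by clause without any explicit request for one.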
-
-
-
-
-
-
- There is no explicit "group by" necessary in a criteria query. Certain
- projection types are defined to be grouping projections,
- which also appear in the SQL group by clause.
-
-
-
- An alias can be assigned to a projection so that the projected value
- can be referred to in restrictions or orderings. Here are two different ways to
- do this:
-
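A sketch of both forms:

```java
// Wrap the projection in an explicit alias...
List results = session.createCriteria(Cat.class)
    .setProjection( Projections.alias( Projections.groupProperty("color"), "colr" ) )
    .addOrder( Order.asc("colr") )
    .list();

// ...or call as() on the projection instance.
List results2 = session.createCriteria(Cat.class)
    .setProjection( Projections.groupProperty("color").as("colr") )
    .addOrder( Order.asc("colr") )
    .list();
```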
-
-
-
-
-
-
- The alias() and as() methods simply wrap a
- projection instance in another, aliased, instance of Projection.
- As a shortcut, you can assign an alias when you add the projection to a
- projection list:
-
-
-
-
-
-
-
- You can also use Property.forName() to express projections:
-
-
-
-
-
-
-
-
-
- Detached queries and subqueries
-
- The DetachedCriteria class allows you to create a query outside the scope
- of a session and then execute it using an arbitrary Session.
-
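A sketch of defining the query detached and executing it later:

```java
DetachedCriteria query = DetachedCriteria.forClass(Cat.class)
    .add( Property.forName("sex").eq('F') );

// Later, attach the detached query to an open Session and execute it.
Session session = sessionFactory.openSession();
Transaction txn = session.beginTransaction();
List results = query.getExecutableCriteria(session).setMaxResults(100).list();
txn.commit();
session.close();
```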
-
-
-
-
- A DetachedCriteria can also be used to express a subquery. Criterion
- instances involving subqueries can be obtained via Subqueries or
- Property.
-
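For example, a subquery computing an average can restrict the outer query:

```java
DetachedCriteria avgWeight = DetachedCriteria.forClass(Cat.class)
    .setProjection( Property.forName("weight").avg() );

// Cats heavier than the average weight.
session.createCriteria(Cat.class)
    .add( Property.forName("weight").gt(avgWeight) )
    .list();
```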
-
-
-
-
-
-
- Correlated subqueries are also possible:
-
-
-
-
-
- Example of multi-column restriction based on a subquery:
-
-
-
-
-
-
-
-
-
- Queries by natural identifier
-
-
- For most queries, including criteria queries, the query cache is not efficient
- because query cache invalidation occurs too frequently. However, there is a special
- kind of query where you can optimize the cache invalidation algorithm: lookups by a
- constant natural key. In some applications, this kind of query occurs frequently.
- The criteria API provides special provision for this use case.
-
-
-
- First, map the natural key of your entity using
- <natural-id> and enable use of the second-level cache.
-
-
-
-
-
-
-
-
-
-
-
-
-]]>
-
-
- This functionality is not intended for use with entities with
- mutable natural keys.
-
-
-
- Once you have enabled the Hibernate query cache,
- Restrictions.naturalId() allows you to make use of
- the more efficient cache algorithm.
-
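A sketch of such a lookup, assuming a User entity whose natural-id comprises name and org:

```java
session.createCriteria(User.class)
    .add( Restrictions.naturalId()
        .set("name", "gavin")
        .set("org", "hb")
    )
    .setCacheable(true)
    .uniqueResult();
```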
-
-
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/appendix-Configuration_Properties.xml b/documentation/src/main/docbook/integration/en-US/appendix-Configuration_Properties.xml
deleted file mode 100644
index d340b5e38b..0000000000
--- a/documentation/src/main/docbook/integration/en-US/appendix-Configuration_Properties.xml
+++ /dev/null
@@ -1,441 +0,0 @@
-
-
-
-
-
- Configuration properties
-
-
- Strategy configurations
-
- Many configuration settings define pluggable strategies that Hibernate uses for various purposes.
- The configuration of many of these strategy-type settings accepts definition in various forms. The
- documentation of such configuration settings refers back to this section. The forms available
- include:
-
-
-
- short name (if defined)
-
-
- Certain built-in strategy implementations have a corresponding short name.
-
-
-
-
- strategy instance
-
-
- An instance of the strategy implementation to use can be specified
-
-
-
-
- strategy Class reference
-
-
- A java.lang.Class reference of the strategy implementation to use can
- be specified
-
-
-
-
- strategy Class name
-
-
- The class name (java.lang.String) of the strategy implementation to
- use can be specified
-
-
-
-
-
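For instance, a single strategy setting might be supplied in any of these forms when building a settings map (the property and values here are illustrative):

```java
Map<String, Object> settings = new HashMap<String, Object>();

// short name
settings.put( "hibernate.transaction.factory_class", "jdbc" );

// strategy Class name
settings.put( "hibernate.transaction.factory_class",
        "org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory" );

// strategy Class reference
settings.put( "hibernate.transaction.factory_class", JdbcTransactionFactory.class );

// strategy instance
settings.put( "hibernate.transaction.factory_class", new JdbcTransactionFactory() );
```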
-
-
-
- General Configuration
-
-
-
-
-
-
-
- hibernate.dialect
- A fully-qualified classname
-
-
- The classname of a Hibernate org.hibernate.dialect.Dialect from which Hibernate
- can generate SQL optimized for a particular relational database.
-
-
- In most cases Hibernate can choose the correct org.hibernate.dialect.Dialect
- implementation based on the JDBC metadata returned by the JDBC driver.
-
-
-
-
- hibernate.show_sql
- true or false
- Write all SQL statements to the console. This is an alternative to setting the log category
- org.hibernate.SQL to debug.
-
-
- hibernate.format_sql
- true or false
- Pretty-print the SQL in the log and console.
-
-
- hibernate.default_schema
- A schema name
- Qualify unqualified table names with the given schema or tablespace in generated SQL.
-
-
- hibernate.default_catalog
- A catalog name
- Qualifies unqualified table names with the given catalog in generated SQL.
-
-
- hibernate.session_factory_name
- A JNDI name
- The org.hibernate.SessionFactory is automatically bound to this name in JNDI
- after it is created.
-
-
- hibernate.max_fetch_depth
- A value between 0 and 3
- Sets a maximum depth for the outer join fetch tree for single-ended associations. A single-ended
- association is a one-to-one or many-to-one association. A value of 0 disables default outer
- join fetching.
-
-
- hibernate.default_batch_fetch_size
- 4, 8, or 16
- Default size for Hibernate batch fetching of associations.
-
-
- hibernate.default_entity_mode
- dynamic-map or pojo
- Default mode for entity representation for all sessions opened from this
- SessionFactory, defaults to pojo.
-
-
- hibernate.order_updates
- true or false
- Forces Hibernate to order SQL updates by the primary key value of the items being updated. This
- reduces the likelihood of transaction deadlocks in highly-concurrent systems.
-
-
- hibernate.order_by.default_null_ordering
- none, first or last
- Defines precedence of null values in the ORDER BY clause. Defaults to
- none, which varies between RDBMS implementations.
-
-
- hibernate.generate_statistics
- true or false
- Causes Hibernate to collect statistics for performance tuning.
-
-
- hibernate.use_identifier_rollback
- true or false
- If true, generated identifier properties are reset to default values when objects are
- deleted.
-
-
- hibernate.use_sql_comments
- true or false
- If true, Hibernate generates comments inside the SQL, for easier debugging.
-
-
-
-
-
-
- Database configuration
-
- JDBC properties
-
-
-
- Property
- Example
- Purpose
-
-
-
-
- hibernate.jdbc.fetch_size
- 0 or an integer
- A non-zero value determines the JDBC fetch size, by calling
- Statement.setFetchSize().
-
-
- hibernate.jdbc.batch_size
- A value between 5 and 30
- A non-zero value causes Hibernate to use JDBC2 batch updates.
-
-
- hibernate.jdbc.batch_versioned_data
- true or false
- Set this property to true if your JDBC driver returns correct row counts
- from executeBatch(). This option is usually safe, but is disabled by default. If
- enabled, Hibernate uses batched DML for automatically versioned data.
-
-
- hibernate.jdbc.factory_class
- The fully-qualified class name of the factory
- Select a custom org.hibernate.jdbc.Batcher. Irrelevant for most
- applications.
-
-
- hibernate.jdbc.use_scrollable_resultset
- true or false
- Enables Hibernate to use JDBC2 scrollable resultsets. This property is only relevant for
- user-supplied JDBC connections. Otherwise, Hibernate uses connection metadata.
-
-
- hibernate.jdbc.use_streams_for_binary
- true or false
- Use streams when writing or reading binary or serializable types to
- or from JDBC. This is a system-level property.
-
-
- hibernate.jdbc.use_get_generated_keys
- true or false
- Allows Hibernate to use JDBC3 PreparedStatement.getGeneratedKeys() to
- retrieve natively-generated keys after insert. You need the JDBC3+ driver and JRE1.4+. Disable this property
- if your driver has problems with the Hibernate identifier generators. By default, it tries to detect the
- driver capabilities from connection metadata.
-
-
-
-
-
- Cache Properties
-
-
-
-
-
-
- Property
- Example
- Purpose
-
-
-
-
- hibernate.cache.provider_class
- Fully-qualified classname
- The classname of a custom CacheProvider.
-
-
- hibernate.cache.use_minimal_puts
- true or false
- Optimizes second-level cache operation to minimize writes, at the cost of more frequent reads. This
- is most useful for clustered caches and is enabled by default for clustered cache implementations.
-
-
- hibernate.cache.use_query_cache
- true or false
- Enables the query cache. You still need to set individual queries to be cacheable.
-
-
- hibernate.cache.use_second_level_cache
- true or false
- Completely disable the second level cache, which is enabled by default
- for classes which specify a <cache> mapping.
-
-
- hibernate.cache.query_cache_factory
- Fully-qualified classname
- A custom QueryCache interface. The default is the built-in
- StandardQueryCache.
-
-
- hibernate.cache.region_prefix
- A string
- A prefix for second-level cache region names.
-
-
- hibernate.cache.use_structured_entries
- true or false
- Forces Hibernate to store data in the second-level cache in a more human-readable format.
-
-
- hibernate.cache.auto_evict_collection_cache
- true or false (default: false)
- Enables the automatic eviction of a bi-directional association's collection cache when an element
- in the ManyToOne collection is added/updated/removed without properly managing the change on the OneToMany
- side.
-
-
- hibernate.cache.use_reference_entries
- true or false
- Optimizes second-level cache operation to store immutable entities (aka "references") that
- have no associations directly into the cache; in this case, many disassembly and deep-copy
- operations can be avoided.
- The default value of this property is false.
-
-
-
-
-
-
- Transactions properties
-
-
-
-
-
-
-
- Property
- Example
- Purpose
-
-
-
-
- hibernate.transaction.factory_class
-
- jdbc or
-
-
-
- Names the org.hibernate.engine.transaction.spi.TransactionFactory
- strategy implementation to use. See and
-
-
-
-
-
- jta.UserTransaction
- A JNDI name
- The JTATransactionFactory needs a JNDI name to obtain the JTA
- UserTransaction from the application server.
-
-
- hibernate.transaction.manager_lookup_class
- A fully-qualified classname
- The classname of a TransactionManagerLookup, which is used in
- conjunction with JVM-level or the hilo generator in a JTA environment.
-
-
- hibernate.transaction.flush_before_completion
- true or false
- Causes the session to be flushed during the before completion phase of the
- transaction. If possible, use built-in and automatic session context management instead.
-
-
- hibernate.transaction.auto_close_session
- true or false
- Causes the session to be closed during the after completion phase of the
- transaction. If possible, use built-in and automatic session context management instead.
-
-
-
-
-
-
- Each of the properties in the following table is prefixed by hibernate.. The prefix has
- been omitted in the table to conserve space.
-
-
-
- Miscellaneous properties
-
-
-
- Property
- Example
- Purpose
-
-
-
-
- current_session_context_class
- One of jta, thread, managed, or
- custom.Class
- Supply a custom strategy for the scoping of the Current
- Session.
-
-
- factory_class
- org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory or
- org.hibernate.hql.internal.classic.ClassicQueryTranslatorFactory
- Chooses the HQL parser implementation.
-
-
- query.substitutions
- hqlLiteral=SQL_LITERAL or hqlFunction=SQLFUNC
-
- Map from tokens in Hibernate queries to SQL tokens, such as function or literal names.
-
-
- hbm2ddl.auto
- validate, update, create,
- create-drop
- Validates or exports schema DDL to the database when the SessionFactory is
- created. With create-drop, the database schema is dropped when the
- SessionFactory is closed explicitly.
-
-
-
-
- Proxool connection pool properties
-
-
-
-
-
- Property
- Description
-
-
-
-
- hibernate.proxool.xml
- Configure Proxool provider using an XML file (.xml is appended automatically)
-
-
- hibernate.proxool.properties
- Configure the Proxool provider using a properties file (.properties is appended
- automatically)
-
-
- hibernate.proxool.existing_pool
- Whether to configure the Proxool provider from an existing pool
-
-
- hibernate.proxool.pool_alias
- Proxool pool alias to use. Required.
-
-
-
-
-
-
- For information on specific configuration of Proxool, refer to the Proxool documentation.
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/appendix-Troubleshooting.xml b/documentation/src/main/docbook/integration/en-US/appendix-Troubleshooting.xml
deleted file mode 100644
index 829c705f76..0000000000
--- a/documentation/src/main/docbook/integration/en-US/appendix-Troubleshooting.xml
+++ /dev/null
@@ -1,63 +0,0 @@
-
-
-
-
-
- Troubleshooting
-
-
- Log messages
-
- This section discusses certain log messages you might see from Hibernate and the "meaning" of those
- messages. Specifically, it will discuss certain messages having a "message id", which for Hibernate
- is always the code HHH followed by a numeric code. The table below is ordered
- by this code.
-
-
- Explanation of identified log messages
-
-
-
- Key
- Explanation
-
-
-
-
- HHH000002
-
-
- Indicates that a session was left associated with the
- org.hibernate.context.internal.ThreadLocalSessionContext that is used
- to implement thread-based current session management. Internally that class uses a
- ThreadLocal, and in environments where Threads are pooled this could represent a
- potential "bleed through" situation. Consider using a different
- org.hibernate.context.spi.CurrentSessionContext
- implementation. Otherwise, make sure the sessions always get unbound properly.
-
-
-
-
- HHH000408
-
-
- Using workaround for JVM bug in java.sql.Timestamp. Certain
- JVMs are known to have a bug in the implementation of
- java.sql.Timestamp that causes the expression
- new Timestamp(x).getTime() == x to evaluate to false.
- The main concern here is to make sure you are not using temporal-based optimistic
- locking on such JVMs.
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/events/Events.xml b/documentation/src/main/docbook/integration/en-US/chapters/events/Events.xml
deleted file mode 100644
index faefdf73c8..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/events/Events.xml
+++ /dev/null
@@ -1,300 +0,0 @@
-
-
-
-
-
- Interceptors and events
-
-
- It is useful for the application to react to certain events that occur inside Hibernate. This allows for the
- implementation of generic functionality and the extension of Hibernate functionality.
-
-
-
- Interceptors
-
-
- The org.hibernate.Interceptor interface provides callbacks from the session
- to the application, allowing the application to inspect and/or manipulate properties of a persistent object
- before it is saved, updated, deleted or loaded. One possible use for this is to track auditing information.
- For example, the following example shows an Interceptor implementation
- that automatically sets the createTimestamp property when an
- Auditable entity is created and updates the
- lastUpdateTimestamp property when an Auditable entity is
- updated.
-
-
-
-
- You can either implement Interceptor directly or extend
- org.hibernate.EmptyInterceptor.
-
-
-
-
- An Interceptor can be either Session-scoped or SessionFactory-scoped.
-
-
-
- A Session-scoped interceptor is specified when a session is opened.
-
-
-
-
-
- A SessionFactory-scoped interceptor is registered with the Configuration object
- prior to building the SessionFactory. Unless a session is opened explicitly specifying the interceptor to
- use, the SessionFactory-scoped interceptor will be applied to all sessions opened from that SessionFactory.
- SessionFactory-scoped interceptors must be thread safe. Ensure that you do not store session-specific
- states, since multiple sessions will use this interceptor potentially concurrently.
-
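A sketch of both registrations, using the AuditInterceptor shown later in this chapter (Hibernate 4 API):

```java
// Session-scoped: supply the interceptor when opening the session.
Session session = sessionFactory.withOptions()
    .interceptor( new AuditInterceptor() )
    .openSession();

// SessionFactory-scoped: register on the Configuration before building the factory.
Configuration cfg = new Configuration();
cfg.setInterceptor( new AuditInterceptor() );
```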
-
-
-
-
-
-
- Native Event system
-
-
- If you have to react to particular events in the persistence layer, you can also use the Hibernate
- event architecture. The event system can be used in place of or in addition to
- interceptors.
-
-
-
- Many methods of the Session interface correlate to an event type. The
- full range of defined event types is declared as enum values on
- org.hibernate.event.spi.EventType. When a request is made of one of these methods,
- the Session generates an appropriate event and passes it to the configured event listener(s) for that type.
- Applications are free to implement a customization of one of the listener interfaces
- (i.e., the LoadEvent is processed by the registered implementation
- of the LoadEventListener interface), in which case their
- implementation would be responsible for processing any load() requests
- made of the Session.
-
-
-
-
- See for information on registering custom event
- listeners.
-
-
-
-
- The listeners should be considered stateless; they are shared between requests, and should not save any
- state as instance variables.
-
-
-
- A custom listener implements the appropriate interface for the event it wants to process and/or extends one
- of the convenience base classes (or even the default event listeners used by Hibernate out-of-the-box as
- these are declared non-final for this purpose). Here is an example of a custom load event listener:
-
-
-
-
- Custom LoadListener example
-
-
-
-
-
- Hibernate declarative security
-
- Usually, declarative security in Hibernate applications is managed in a session facade
- layer. Hibernate allows certain actions to be permissioned via JACC, and authorized
- via JAAS. This is an optional functionality that is built on top of the event architecture.
-
-
-
- First, you must configure the appropriate event listeners, to enable the use of JACC authorization.
- Again, see for the details. Below is an example of an
- appropriate org.hibernate.integrator.spi.Integrator implementation for
- this purpose.
-
-
-
-
- JACC listener registration example
-
-
-
-
-
- You must also decide how to configure your JACC provider. Consult your JACC provider documentation.
-
-
-
-
-
- JPA Callbacks
-
- JPA also defines a more limited set of callbacks through annotations.
-
-
-
- Callback annotations
-
-
-
-
-
- Type
- Description
-
-
-
-
-
- @PrePersist
-
-
- Executed before the entity manager persist operation is actually executed or cascaded.
- This call is synchronous with the persist operation.
-
-
-
-
- @PreRemove
-
-
- Executed before the entity manager remove operation is actually executed or cascaded.
- This call is synchronous with the remove operation.
-
-
-
-
- @PostPersist
-
-
- Executed after the entity manager persist operation is actually executed or cascaded.
- This call is invoked after the database INSERT is executed.
-
-
-
-
- @PostRemove
-
-
- Executed after the entity manager remove operation is actually executed or cascaded.
- This call is synchronous with the remove operation.
-
-
-
-
- @PreUpdate
-
-
- Executed before the database UPDATE operation.
-
-
-
-
- @PostUpdate
-
-
- Executed after the database UPDATE operation.
-
-
-
-
- @PostLoad
-
-
- Executed after an entity has been loaded into the current persistence context or an entity
- has been refreshed.
-
-
-
-
-
-
-
- There are two approaches to specifying callback handling:
-
-
-
-
- The first approach is to annotate methods on the entity itself to receive notification of
- particular entity life cycle event(s).
-
-
-
-
- The second is to use a separate entity listener class. An entity listener is a stateless class
- with a no-arg constructor. The callback annotations are placed on a method of this class instead
- of the entity class. The entity listener class is then associated with the entity using the
- javax.persistence.EntityListeners annotation
-
-
-
-
-
- Example of specifying JPA callbacks
-
-
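A sketch of both approaches together; the Cat entity and LastUpdateListener class are illustrative:

```java
@Entity
@EntityListeners( LastUpdateListener.class )
public class Cat {
    @Id
    private Integer id;
    private Date lastUpdate;

    // Callback on the entity itself: any name, void return, no arguments.
    @PostPersist
    protected void afterInsert() {
        // react to the INSERT having been executed
    }

    public void setLastUpdate(Date lastUpdate) { this.lastUpdate = lastUpdate; }
}

public class LastUpdateListener {
    // Callback on a listener class: single argument, the entity being processed.
    @PreUpdate
    @PrePersist
    public void setLastUpdate(Cat o) {
        o.setLastUpdate( new Date() );
    }
}
```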
-
-
- These approaches can be mixed, meaning you can use both together.
-
-
- Regardless of whether the callback method is defined on the entity or on an entity listener, it must have
- a void-return signature. The name of the method is irrelevant as it is the placement of the callback
- annotations that makes the method a callback. In the case of callback methods defined on the
- entity class, the method must additionally have a no-argument signature. For callback methods defined on
- an entity listener class, the method must have a single argument signature; the type of that argument can
- be either java.lang.Object (to facilitate attachment to multiple entities) or the
- specific entity type.
-
-
- A callback method can throw a RuntimeException. If the callback method does
- throw a RuntimeException, then the current transaction, if any, must be rolled back.
-
-
- A callback method must not invoke EntityManager or
- Query methods!
-
-
- It is possible for multiple callback methods to be defined for a particular lifecycle event. When that
- is the case, the order of execution is well defined by the JPA spec (specifically section 3.5.4):
-
-
-
-
- Any default listeners associated with the entity are invoked first, in the order they were
- specified in the XML. See the javax.persistence.ExcludeDefaultListeners
- annotation.
-
-
-
-
- Next, entity listener class callbacks associated with the entity hierarchy are invoked, in the order
- they are defined in the EntityListeners. If multiple classes in the
- entity hierarchy define entity listeners, the listeners defined for a superclass are invoked before
- the listeners defined for its subclasses. See the
- javax.persistence.ExcludeSuperclassListeners annotation.
-
-
-
-
- Lastly, callback methods defined on the entity hierarchy are invoked. If a callback type is
- annotated on both an entity and one or more of its superclasses without method overriding, both
- would be called, the most general superclass first. An entity class is also allowed to override
- a callback method defined in a superclass in which case the super callback would not get invoked;
- the overriding method would get invoked provided it is annotated.
-
-
-
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/events/extras/AuditInterceptor.java b/documentation/src/main/docbook/integration/en-US/chapters/events/extras/AuditInterceptor.java
deleted file mode 100644
index 76e8b5bde2..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/events/extras/AuditInterceptor.java
+++ /dev/null
@@ -1,79 +0,0 @@
-import java.io.Serializable;
-import java.util.Date;
-
-import org.hibernate.EmptyInterceptor;
-import org.hibernate.Transaction;
-import org.hibernate.type.Type;
-
-public class AuditInterceptor extends EmptyInterceptor {
-
- private int updates;
- private int creates;
- private int loads;
-
- public void onDelete(Object entity,
- Serializable id,
- Object[] state,
- String[] propertyNames,
- Type[] types) {
- // do nothing
- }
-
- public boolean onFlushDirty(Object entity,
- Serializable id,
- Object[] currentState,
- Object[] previousState,
- String[] propertyNames,
- Type[] types) {
-
- if ( entity instanceof Auditable ) {
- updates++;
- for ( int i=0; i < propertyNames.length; i++ ) {
- if ( "lastUpdateTimestamp".equals( propertyNames[i] ) ) {
- currentState[i] = new Date();
- return true;
- }
- }
- }
- return false;
- }
-
- public boolean onLoad(Object entity,
- Serializable id,
- Object[] state,
- String[] propertyNames,
- Type[] types) {
- if ( entity instanceof Auditable ) {
- loads++;
- }
- return false;
- }
-
- public boolean onSave(Object entity,
- Serializable id,
- Object[] state,
- String[] propertyNames,
- Type[] types) {
-
- if ( entity instanceof Auditable ) {
- creates++;
- for ( int i=0; i < propertyNames.length; i++ ) {
- if ( "createTimestamp".equals( propertyNames[i] ) ) {
- state[i] = new Date();
- return true;
- }
- }
- }
- return false;
- }
-}
-
-
-
-
- Fetching
-
-
- Fetching, essentially, is the process of grabbing data from the database and making it available to the
- application. Tuning how an application does fetching is one of the biggest factors in determining how an
- application will perform. Fetching too much data, in terms of width (values/columns) and/or
- depth (results/rows), adds unnecessary overhead in terms of both JDBC communication and ResultSet processing.
- Fetching too little data causes additional fetches to be needed. Tuning how an application
- fetches data presents a great opportunity to influence the application's overall performance.
-
-
-
- The basics
-
-
- The concept of fetching breaks down into two different questions.
-
-
-
- When should the data be fetched? Now? Later?
-
-
-
-
- How should the data be fetched?
-
-
-
-
-
-
-
- "now" is generally termed eager or immediate. "later" is
- generally termed lazy or delayed.
-
-
-
-
- There are a number of scopes for defining fetching:
-
-
-
- static - Static definition of fetching strategies is done in the
- mappings. The statically-defined fetch strategies are used in the absence of any dynamically
- defined strategies, except in the case of HQL/JPQL; see xyz.
-
-
-
-
- dynamic (sometimes referred to as runtime) - Dynamic definition is
- really use-case-centric. There are two main ways to define dynamic fetching:
-
-
-
-
- fetch profiles - defined in mappings, but can be
- enabled/disabled on the Session.
-
-
-
-
- HQL/JPQL and both Hibernate and JPA Criteria queries have the ability to specify
- fetching, specific to said query.
-
-
-
-
-
-
-
-
- The strategies
-
- SELECT
-
-
- Performs a separate SQL select to load the data. This can either be EAGER (the second select
- is issued immediately) or LAZY (the second select is delayed until the data is needed). This
- is the strategy generally termed N+1.
-
-
-
-
- JOIN
-
-
- Inherently an EAGER style of fetching. The data to be fetched is obtained through the use of
- an SQL join.
-
-
-
-
- BATCH
-
-
- Performs a separate SQL select to load a number of related data items using an
- IN-restriction as part of the SQL WHERE-clause based on a batch size. Again, this can either
- be EAGER (the second select is issued immediately) or LAZY (the second select is delayed until
- the data is needed).
-
-
-
-
- SUBSELECT
-
-
- Performs a separate SQL select to load associated data based on the SQL restriction used to
- load the owner. Again, this can either be EAGER (the second select is issued immediately)
- or LAZY (the second select is delayed until the data is needed).
-
-
-
-
-
-
-
- Applying fetch strategies
-
-
- Let's consider these topics as they relate to a simple domain model and a few use cases.
-
-
-
- Sample domain model
-
-
-
-
-
-
-
- The Hibernate recommendation is to statically mark all associations lazy and to use dynamic fetching
- strategies for eagerness. This is unfortunately at odds with the JPA specification which defines that
- all one-to-one and many-to-one associations should be eagerly fetched by default. Hibernate, as a JPA
- provider, honors that default.
-
-
-
-
- No fetching
- The login use-case
-
- For the first use case, consider the application's login process for an Employee. Let's assume that
- login only requires access to the Employee information, not Project or Department information.
-
-
-
- No fetching example
-
-
-
-
- In this example, the application gets the Employee data. However, because all associations from
- Employee are declared as LAZY (JPA defines the default for collections as LAZY) no other data is
- fetched.
-
-
-
- If the login process does not need access to the Employee information specifically, another
- fetching optimization here would be to limit the width of the query results.
-
-
-
- No fetching (scalar) example
-
-
-
-
-
- Dynamic fetching via queries
- The projects for an employee use-case
-
-
- For the second use case, consider a screen displaying the Projects for an Employee. Certainly access
- to the Employee is needed, as is the collection of Projects for that Employee. Information
- about Departments, other Employees or other Projects is not needed.
-
-
-
- Dynamic query fetching example
-
-
-
-
-
- In this example we have an Employee and their Projects loaded in a single query shown both as an HQL
- query and a JPA Criteria query. In both cases, this resolves to exactly one database query to get
- all that information.
-
-
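A sketch of the HQL form under the sample model above (the setString call assumes a userid variable is in scope; the JPA Criteria form uses root.fetch and is analogous):

```java
String hql = "select e from Employee e join fetch e.projects where e.userid = :userid";
Employee employee = (Employee) session.createQuery( hql )
        .setString( "userid", userid )
        .uniqueResult();
```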
-
-
- Dynamic fetching via profiles
- The projects for an employee use-case using natural-id
-
-
- Suppose we wanted to leverage loading by natural-id to obtain the Employee information in the
- "projects for an employee" use-case. Loading by natural-id uses the statically defined fetching
- strategies, but does not expose a means to define load-specific fetching. So we would leverage a
- fetch profile.
-
-
-
- Fetch profile example
-
-
-
-
-
- Here the Employee is obtained by natural-id lookup and the Employee's Project data is fetched eagerly.
- If the Employee data is resolved from cache, the Project data is resolved on its own. However,
- if the Employee data is not resolved in cache, the Employee and Project data is resolved in one
- SQL query via join as we saw above.
-
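A sketch of enabling the profile and performing the natural-id lookup (the profile name follows the FetchOverrides example; userid is assumed in scope):

```java
// Enable the fetch profile for this session, then load by natural-id.
session.enableFetchProfile( "employee.projects" );
Employee employee = (Employee) session.byNaturalId( Employee.class )
        .using( "userid", userid )
        .load();
```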
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Department.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Department.java
deleted file mode 100644
index 06eab06a51..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Department.java
+++ /dev/null
@@ -1,10 +0,0 @@
-@Entity
-public class Department {
- @Id
- private Long id;
-
- @OneToMany(mappedBy="department")
- private List<Employee> employees;
-
- ...
-}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Employee.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Employee.java
deleted file mode 100644
index 62ed3e54e0..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Employee.java
+++ /dev/null
@@ -1,24 +0,0 @@
-@Entity
-public class Employee {
- @Id
- private Long id;
-
- @NaturalId
- private String userid;
-
- @Column( name="pswd" )
- @ColumnTransformer( read="decrypt(pswd)", write="encrypt(?)" )
- private String password;
-
- private int accessLevel;
-
- @ManyToOne( fetch=LAZY )
- @JoinColumn
- private Department department;
-
- @ManyToMany(mappedBy="employees")
- private Set<Project> projects;
-
- ...
-}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/FetchOverrides.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/FetchOverrides.java
deleted file mode 100644
index 4144ea27dc..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/FetchOverrides.java
+++ /dev/null
@@ -1,10 +0,0 @@
-@FetchProfile(
- name="employee.projects",
- fetchOverrides={
- @FetchOverride(
- entity=Employee.class,
- association="projects",
- mode=JOIN
- )
- }
-)
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Login.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Login.java
deleted file mode 100644
index f916bb847a..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Login.java
+++ /dev/null
@@ -1,4 +0,0 @@
-String loginHql = "select e from Employee e where e.userid = :userid and e.password = :password";
-Employee employee = (Employee) session.createQuery( loginHql )
- ...
- .uniqueResult();
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/LoginScalar.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/LoginScalar.java
deleted file mode 100644
index 8905b0ce4a..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/LoginScalar.java
+++ /dev/null
@@ -1,4 +0,0 @@
-String loginHql = "select e.accessLevel from Employee e where e.userid = :userid and e.password = :password";
-Employee employee = (Employee) session.createQuery( loginHql )
- ...
- .uniqueResult();
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Project.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Project.java
deleted file mode 100644
index 94fe42c0d5..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Project.java
+++ /dev/null
@@ -1,10 +0,0 @@
-@Entity
-public class Project {
- @Id
- private Long id;
-
- @ManyToMany
- private Set<Employee> employees;
-
- ...
-}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeCriteria.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeCriteria.java
deleted file mode 100644
index 384d964e07..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeCriteria.java
+++ /dev/null
@@ -1,10 +0,0 @@
-String userid = ...;
-CriteriaBuilder cb = entityManager.getCriteriaBuilder();
-CriteriaQuery<Employee> criteria = cb.createQuery( Employee.class );
-Root<Employee> root = criteria.from( Employee.class );
-root.fetch( Employee_.projects );
-criteria.select( root );
-criteria.where(
- cb.equal( root.get( Employee_.userid ), cb.literal( userid ) )
-);
-Employee e = entityManager.createQuery( criteria ).getSingleResult();
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeFetchProfile.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeFetchProfile.java
deleted file mode 100644
index 297cb8cfc6..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeFetchProfile.java
+++ /dev/null
@@ -1,4 +0,0 @@
-String userid = ...;
-session.enableFetchProfile( "employee.projects" );
-Employee e = (Employee) session.bySimpleNaturalId( Employee.class )
- .load( userid );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeHql.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeHql.java
deleted file mode 100644
index 11235281d0..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeHql.java
+++ /dev/null
@@ -1,5 +0,0 @@
-String userid = ...;
-String hql = "select e from Employee e join fetch e.projects where e.userid = :userid";
-Employee e = (Employee) session.createQuery( hql )
- .setParameter( "userid", userid )
- .uniqueResult();
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/Multi_Tenancy.xml b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/Multi_Tenancy.xml
deleted file mode 100644
index 94ee9ae1b9..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/Multi_Tenancy.xml
+++ /dev/null
@@ -1,313 +0,0 @@
-
-
-
-
-
- Multi-tenancy
-
-
- What is multi-tenancy?
-
- The term multi-tenancy refers to an architecture in which a single running instance
- of an application simultaneously serves multiple clients (tenants). This is common in SaaS
- solutions. Isolating information (data, customizations, etc.) pertaining to the various tenants
- is a particular challenge in these systems, including the data owned by each tenant stored
- in the database. It is this last piece, sometimes called multi-tenant data, on which we will focus.
-
-
-
-
- Multi-tenant data approaches
-
- There are 3 main approaches to isolating information in these multi-tenant systems, which go hand-in-hand
- with different database schema definitions and JDBC setups.
-
-
-
-
- Each approach has pros and cons as well as specific techniques and considerations. Such
- topics are beyond the scope of this documentation. Many resources exist which delve into these
- other topics. One example is
- which does a great job of covering these topics.
-
-
-
-
- Separate database
-
-
-
-
-
-
-
-
-
-
-
- Each tenant's data is kept in a physically separate database instance. JDBC Connections would point
- specifically to each database, so any pooling would be per-tenant. A general application approach
- here would be to define a JDBC Connection pool per-tenant and to select the pool to use based on the
- tenant identifier associated with the currently logged in user.
-
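The per-tenant pool selection described above can be sketched in plain Java. This is an illustrative, hypothetical helper (not a Hibernate API); the tenant names and JDBC URLs are invented, and a real application would hold actual pool objects rather than URL strings.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each tenant maps to its own connection pool (represented
// here by a JDBC URL); the tenant identifier of the logged-in user picks the pool.
class TenantPoolRouter {
    private final Map<String, String> poolByTenant = new HashMap<>();

    TenantPoolRouter() {
        // invented tenant identifiers and URLs, for illustration only
        poolByTenant.put("acme", "jdbc:h2:mem:acme");
        poolByTenant.put("jboss", "jdbc:h2:mem:jboss");
    }

    String selectPool(String tenantIdentifier) {
        String pool = poolByTenant.get(tenantIdentifier);
        if (pool == null) {
            throw new IllegalArgumentException("Unknown tenant: " + tenantIdentifier);
        }
        return pool;
    }
}
```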
-
-
-
- Separate schema
-
-
-
-
-
-
-
-
-
-
-
- Each tenant's data is kept in a distinct database schema on a single database instance. There are 2
- different ways to define JDBC Connections here:
-
-
-
- Connections could point specifically to each schema, as we saw with the
- Separate database approach. This is an option provided that
- the driver supports naming the default schema in the connection URL or if the
- pooling mechanism supports naming a schema to use for its Connections. Using this
- approach, we would have a distinct JDBC Connection pool per-tenant where the pool to use
- would be selected based on the tenant identifier associated with the
- currently logged in user.
-
-
-
-
- Connections could point to the database itself (using some default schema) but
- the Connections would be altered using the SQL SET SCHEMA (or similar)
- command. Using this approach, we would have a single JDBC Connection pool for use to
- service all tenants, but before using the Connection it would be altered to reference
- the schema named by the tenant identifier associated with the currently
- logged in user.
-
-
-
-
-
-
-
- Partitioned (discriminator) data
-
-
-
-
-
-
-
-
-
-
-
- All data is kept in a single database schema. The data for each tenant is partitioned by the use of
- a partition value or discriminator. The complexity of this discriminator might range from a simple
- column value to a complex SQL formula. Again, this approach would use a single Connection pool
- to service all tenants. However, in this approach the application needs to alter each and every
- SQL statement sent to the database to reference the tenant identifier discriminator.
-
-
-
-
-
- Multi-tenancy in Hibernate
-
- Using Hibernate with multi-tenant data comes down to both an API and integration pieces. As
- usual, Hibernate strives to keep the API simple and isolated from any underlying integration complexities.
- The API is really just defined by passing the tenant identifier as part of opening any session.
-
-
- Specifying tenant identifier from SessionFactory
-
-
-
- Additionally, when specifying configuration, a org.hibernate.MultiTenancyStrategy
- should be named using the hibernate.multiTenancy setting. Hibernate will perform
- validations based on the type of strategy you specify. The strategy here correlates to the isolation
- approach discussed above.
-
-
-
- NONE
-
-
- (the default) No multi-tenancy is expected. In fact, it is considered an error if a tenant
- identifier is specified when opening a session using this strategy.
-
-
-
-
- SCHEMA
-
-
- Correlates to the separate schema approach. It is an error to attempt to open a session without
- a tenant identifier using this strategy. Additionally, a
- MultiTenantConnectionProvider
- must be specified.
-
-
-
-
- DATABASE
-
-
- Correlates to the separate database approach. It is an error to attempt to open a session without
- a tenant identifier using this strategy. Additionally, a
- MultiTenantConnectionProvider
- must be specified.
-
-
-
-
- DISCRIMINATOR
-
-
- Correlates to the partitioned (discriminator) approach. It is an error to attempt to open a
- session without a tenant identifier using this strategy. This strategy is not yet implemented
- in Hibernate as of 4.0 and 4.1. Its support is planned for 5.0.
-
-
-
-
-
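Putting the settings discussed in this section together, a configuration for the SCHEMA strategy might look like the following sketch. The setting keys come from this chapter; the implementation class names are placeholders for your own classes.

```properties
hibernate.multiTenancy=SCHEMA
# the two class names below are hypothetical; supply your own implementations
hibernate.multi_tenant_connection_provider=com.example.MultiTenantConnectionProviderImpl
hibernate.tenant_identifier_resolver=com.example.CurrentTenantIdentifierResolverImpl
```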
-
- MultiTenantConnectionProvider
-
- When using either the DATABASE or SCHEMA approach, Hibernate needs to be able to obtain Connections
- in a tenant specific manner. That is the role of the
- MultiTenantConnectionProvider
- contract. Application developers will need to provide an implementation of this
- contract. Most of its methods are extremely self-explanatory. The only ones which might not be are
- getAnyConnection and releaseAnyConnection. It is
- important to note also that these methods do not accept the tenant identifier. Hibernate uses these
- methods during startup to perform various configuration, mainly via the
- java.sql.DatabaseMetaData object.
-
-
- The MultiTenantConnectionProvider to use can be specified in a number of
- ways:
-
-
-
-
- Use the hibernate.multi_tenant_connection_provider setting. It could
- name a MultiTenantConnectionProvider instance, a
- MultiTenantConnectionProvider implementation class reference or
- a MultiTenantConnectionProvider implementation class name.
-
-
-
-
- Passed directly to the org.hibernate.boot.registry.StandardServiceRegistryBuilder.
-
-
-
-
- If none of the above options match, but the settings do specify a
- hibernate.connection.datasource value, Hibernate will assume it should
- use the specific
- DataSourceBasedMultiTenantConnectionProviderImpl
- implementation, which makes a number of reasonable assumptions when running inside of
- an app server and using one javax.sql.DataSource per tenant.
- See its javadocs for more details.
-
-
-
-
-
-
- CurrentTenantIdentifierResolver
-
- org.hibernate.context.spi.CurrentTenantIdentifierResolver is a contract
- for Hibernate to be able to resolve what the application considers the current tenant identifier.
- The implementation to use can either be passed directly to Configuration via its
- setCurrentTenantIdentifierResolver method, or specified via
- the hibernate.tenant_identifier_resolver setting.
-
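A common pattern for resolving the "current" tenant is a ThreadLocal holder, which a CurrentTenantIdentifierResolver implementation can consult. The sketch below shows only that pattern in plain Java; the class and the "default" fallback tenant are invented for illustration, and a real implementation would implement the org.hibernate.context.spi.CurrentTenantIdentifierResolver contract.

```java
// Hypothetical sketch of the ThreadLocal pattern behind a tenant resolver.
// Not a Hibernate API; names are invented for illustration.
class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    // typically set by a web filter or interceptor once the user is known
    static void setTenant(String tenantId) { CURRENT.set(tenantId); }

    // what a CurrentTenantIdentifierResolver implementation would return
    static String resolveCurrentTenantIdentifier() {
        String tenant = CURRENT.get();
        // fall back to an assumed default tenant when none was set for this thread
        return tenant != null ? tenant : "default";
    }

    static void clear() { CURRENT.remove(); }
}
```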
-
- There are 2 situations where CurrentTenantIdentifierResolver is used:
-
-
-
-
- The first situation is when the application is using the
- org.hibernate.context.spi.CurrentSessionContext feature in
- conjunction with multi-tenancy. In the case of the current-session feature, Hibernate will
- need to open a session if it cannot find an existing one in scope. However, when a session
- is opened in a multi-tenant environment the tenant identifier has to be specified. This is
- where the CurrentTenantIdentifierResolver comes into play;
- Hibernate will consult the implementation you provide to determine the tenant identifier to use
- when opening the session. In this case, it is required that a
- CurrentTenantIdentifierResolver be supplied.
-
-
-
-
- The other situation is when you do not want to have to explicitly specify the tenant
- identifier all the time as we saw in . If a
- CurrentTenantIdentifierResolver has been specified, Hibernate
- will use it to determine the default tenant identifier to use when opening the session.
-
-
-
-
- Additionally, if the CurrentTenantIdentifierResolver implementation
- returns true for its validateExistingCurrentSessions
- method, Hibernate will make sure any existing sessions that are found in scope have a matching
- tenant identifier. This capability is only pertinent when the
- CurrentTenantIdentifierResolver is used in current-session settings.
-
-
-
-
- Caching
-
- Multi-tenancy support in Hibernate works seamlessly with the Hibernate second level cache. The key
- used to cache data encodes the tenant identifier.
-
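Conceptually, encoding the tenant identifier into the cache key behaves like the following sketch. This is not Hibernate's actual cache-key class, just an illustration of why identical entity ids from different tenants never collide in the second-level cache.

```java
import java.util.Objects;

// Illustrative sketch only: a cache key that pairs the tenant identifier
// with the entity identifier, so equality requires both to match.
class TenantCacheKey {
    final String tenantId;
    final Object entityId;

    TenantCacheKey(String tenantId, Object entityId) {
        this.tenantId = tenantId;
        this.entityId = entityId;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof TenantCacheKey)) return false;
        TenantCacheKey other = (TenantCacheKey) o;
        return Objects.equals(tenantId, other.tenantId)
                && Objects.equals(entityId, other.entityId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(tenantId, entityId);
    }
}
```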
-
-
-
- Odds and ends
-
- Currently schema export will not really work with multi-tenancy. That may not change.
-
-
- The JPA expert group is in the process of defining multi-tenancy support for the upcoming 2.1
- version of the specification.
-
-
-
-
-
- Strategies for MultiTenantConnectionProvider implementors
-
- Implementing MultiTenantConnectionProvider using different connection pools
-
-
-
- The approach above is valid for the DATABASE approach. It is also valid for the SCHEMA approach
- provided the underlying database allows naming the schema to which to connect in the connection URL.
-
-
- Implementing MultiTenantConnectionProvider using single connection pool
-
-
-
- This approach is only relevant to the SCHEMA approach.
-
-
-
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-multi-cp.java b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-multi-cp.java
deleted file mode 100644
index 56dcb7a9cd..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-multi-cp.java
+++ /dev/null
@@ -1,24 +0,0 @@
-/**
- * Simplistic implementation for illustration purposes supporting 2 hard-coded providers (pools) and leveraging
- * the support class {@link org.hibernate.service.jdbc.connections.spi.AbstractMultiTenantConnectionProvider}
- */
-public class MultiTenantConnectionProviderImpl extends AbstractMultiTenantConnectionProvider {
- private final ConnectionProvider acmeProvider = ConnectionProviderUtils.buildConnectionProvider( "acme" );
- private final ConnectionProvider jbossProvider = ConnectionProviderUtils.buildConnectionProvider( "jboss" );
-
- @Override
- protected ConnectionProvider getAnyConnectionProvider() {
- return acmeProvider;
- }
-
- @Override
- protected ConnectionProvider selectConnectionProvider(String tenantIdentifier) {
- if ( "acme".equals( tenantIdentifier ) ) {
- return acmeProvider;
- }
- else if ( "jboss".equals( tenantIdentifier ) ) {
- return jbossProvider;
- }
- throw new HibernateException( "Unknown tenant identifier" );
- }
-}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-single-cp.java b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-single-cp.java
deleted file mode 100644
index 6d41cfd2fd..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-single-cp.java
+++ /dev/null
@@ -1,54 +0,0 @@
-/**
- * Simplistic implementation for illustration purposes showing a single connection pool used to serve
- * multiple schemas using "connection altering". Here we use the T-SQL specific USE command; Oracle
- * users might use the ALTER SESSION SET SCHEMA command; etc.
- */
-public class MultiTenantConnectionProviderImpl
- implements MultiTenantConnectionProvider, Stoppable {
- private final ConnectionProvider connectionProvider = ConnectionProviderUtils.buildConnectionProvider( "master" );
-
- @Override
- public Connection getAnyConnection() throws SQLException {
- return connectionProvider.getConnection();
- }
-
- @Override
- public void releaseAnyConnection(Connection connection) throws SQLException {
- connectionProvider.closeConnection( connection );
- }
-
- @Override
- public Connection getConnection(String tenantIdentifier) throws SQLException {
- final Connection connection = getAnyConnection();
- try {
- connection.createStatement().execute( "USE " + tenantIdentifier );
- }
- catch ( SQLException e ) {
- throw new HibernateException(
- "Could not alter JDBC connection to specified schema [" +
- tenantIdentifier + "]",
- e
- );
- }
- return connection;
- }
-
- @Override
- public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
- try {
- connection.createStatement().execute( "USE master" );
- }
- catch ( SQLException e ) {
- // on error, throw an exception to make sure the connection is not returned to the pool.
- // your requirements may differ
- throw new HibernateException(
- "Could not alter JDBC connection to specified schema [" +
- tenantIdentifier + "]",
- e
- );
- }
- connectionProvider.closeConnection( connection );
- }
-
- ...
-}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/tenant-identifier-from-SessionFactory.java b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/tenant-identifier-from-SessionFactory.java
deleted file mode 100644
index bbc859be3f..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/tenant-identifier-from-SessionFactory.java
+++ /dev/null
@@ -1,4 +0,0 @@
-Session session = sessionFactory.withOptions()
- .tenantIdentifier( yourTenantIdentifier )
- ...
- .openSession();
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.png b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.png
deleted file mode 100644
index 84822ec4e5..0000000000
Binary files a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.png and /dev/null differ
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.svg b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.svg
deleted file mode 100644
index f0d9e7baa8..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.svg
+++ /dev/null
@@ -1,130 +0,0 @@
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.png b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.png
deleted file mode 100644
index dcc3ad6ed9..0000000000
Binary files a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.png and /dev/null differ
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.svg b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.svg
deleted file mode 100644
index afb2913326..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.svg
+++ /dev/null
@@ -1,70 +0,0 @@
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.png b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.png
deleted file mode 100644
index 2756ba2ed6..0000000000
Binary files a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.png and /dev/null differ
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.svg b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.svg
deleted file mode 100644
index 6fbb7a40e8..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.svg
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/osgi/OSGi.xml b/documentation/src/main/docbook/integration/en-US/chapters/osgi/OSGi.xml
deleted file mode 100644
index 90f5eb3f9c..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/osgi/OSGi.xml
+++ /dev/null
@@ -1,420 +0,0 @@
-
-
-
-
-
- OSGi
-
-
- The Open Services Gateway initiative (OSGi) specification describes a dynamic, modularized system. "Bundles"
- (components) can be installed, activated, deactivated, and uninstalled during runtime, without requiring
- a system restart. OSGi frameworks manage bundles' dependencies, packages, and classes. The framework
- is also in charge of ClassLoading, managing visibility of packages between bundles. Further, service
- registry and discovery is provided through a "whiteboard" pattern.
-
-
-
- OSGi environments present numerous, unique challenges. Most notably, the dynamic nature of available
- bundles during runtime can require significant architectural considerations. Also,
- architectures must allow the OSGi-specific ClassLoading and service registration/discovery.
-
-
-
-
-
- OSGi Specification and Environment
-
-
- Hibernate targets the OSGi 4.3 spec or later. It was necessary to start with 4.3, over 4.2, due to our
- dependency on OSGi's BundleWiring for entity/mapping scanning.
-
-
-
- Hibernate supports three types of configurations within OSGi.
-
-
-
- Container-Managed JPA:
-
-
- Unmanaged JPA:
-
-
- Unmanaged Native:
-
-
-
-
-
-
- hibernate-osgi
-
-
- Rather than embed OSGi capabilities into hibernate-core, hibernate-entitymanager, and sub-modules,
- hibernate-osgi was created. It's purposefully separated, isolating all OSGi dependencies. It provides an
- OSGi-specific ClassLoader (aggregates the container's CL with core and entitymanager CLs), JPA persistence
- provider, SF/EMF bootstrapping, entities/mappings scanner, and service management.
-
-
-
-
- Container-Managed JPA
-
-
- The Enterprise OSGi specification includes container-managed JPA. The container is responsible for
- discovering persistence units and creating the EntityManagerFactory (one EMF per PU).
- It uses the JPA provider (hibernate-osgi) that has registered itself with the OSGi
- PersistenceProvider service.
-
-
-
- Quickstart tutorial project, demonstrating a container-managed JPA client bundle:
- managed-jpa
-
-
-
- Client bundle imports
-
- Your client bundle's manifest will need to import, at a minimum,
-
-
- javax.persistence
-
-
-
- org.hibernate.proxy and javassist.util.proxy, due to
- Hibernate's ability to return proxies for lazy initialization (Javassist enhancement
- occurs on the entity's ClassLoader during runtime).
-
-
-
-
-
-
-
- JPA 2.1
-
-
- No Enterprise OSGi JPA container currently supports JPA 2.1 (the spec is not yet released). For
- testing, the managed-jpa example makes use of
- Brett's fork of Aries. To work
- with Hibernate 4.3, clone the fork and build Aries JPA.
-
-
-
-
- DataSource
-
- Typical Enterprise OSGi JPA usage includes a DataSource installed in the container. The client
- bundle's persistence.xml uses the DataSource through JNDI. For an example,
- see the QuickStart's DataSource:
- datasource-h2.xml
- The DataSource is then called out in
-
- persistence.xml's jta-data-source.
-
-
-
-
- Bundle Ordering
-
- Hibernate currently requires fairly specific bundle activation ordering. See the managed-jpa
- QuickStart's
- features.xml
- for the best supported sequence.
-
-
-
-
- Obtaining an EntityManager
-
- The easiest, and most supported, method of obtaining an EntityManager utilizes OSGi's
- blueprint.xml. The container takes the name of your persistence unit, then injects
- an EntityManager instance into your given bean attribute. See the
- dpService bean in the managed-jpa QuickStart's
- blueprint.xml
- for an example.
-
-
-
-
-
- Unmanaged JPA
-
-
- Hibernate also supports the use of JPA through hibernate-entitymanager, unmanaged by the OSGi
- container. The client bundle is responsible for managing the EntityManagerFactory and EntityManagers.
-
-
-
- Quickstart tutorial project, demonstrating an unmanaged JPA client bundle:
- unmanaged-jpa
-
-
-
- Client bundle imports
-
- Your client bundle's manifest will need to import, at a minimum,
-
-
- javax.persistence
-
-
-
- org.hibernate.proxy and javassist.util.proxy, due to
- Hibernate's ability to return proxies for lazy initialization (Javassist enhancement
- occurs on the entity's ClassLoader during runtime)
-
-
-
-
- JDBC driver package (example: org.h2)
-
-
-
-
- org.osgi.framework, necessary to discover the EMF (described below)
-
-
-
-
-
-
-
- Bundle Ordering
-
- Hibernate currently requires fairly specific bundle activation ordering. See the unmanaged-jpa
- QuickStart's
- features.xml
- for the best supported sequence.
-
-
-
-
- Obtaining an EntityManagerFactory
-
- hibernate-osgi registers an OSGi service, using the JPA PersistenceProvider interface
- name, that bootstraps and creates an EntityManagerFactory specific for OSGi
- environments. It is VITAL that your EMF be obtained through the service, rather than creating it
- manually. The service handles the OSGi ClassLoader, discovered extension points, scanning, etc. Manually
- creating an EntityManagerFactory is guaranteed to NOT work during runtime!
-
-
- For an example on how to discover and use the service, see the unmanaged-jpa
- QuickStart's
- HibernateUtil.java.
-
-
-
-
-
- Unmanaged Native
-
-
- Native Hibernate use is also supported. The client bundle is responsible for managing the
- SessionFactory and Sessions.
-
-
-
- Quickstart tutorial project, demonstrating an unmanaged native client bundle:
- unmanaged-native
-
-
-
- Client bundle imports
-
- Your client bundle's manifest will need to import, at a minimum,
-
-
- javax.persistence
-
-
-
- org.hibernate.proxy and javassist.util.proxy, due to
- Hibernate's ability to return proxies for lazy initialization (Javassist enhancement
- occurs on the entity's ClassLoader during runtime)
-
-
-
-
- JDBC driver package (example: org.h2)
-
-
-
-
- org.osgi.framework, necessary to discover the SF (described below)
-
-
-
-
- org.hibernate.* packages, as necessary (ex: cfg, criterion, service, etc.)
-
-
-
-
-
-
-
- Bundle Ordering
-
- Hibernate currently requires fairly specific bundle activation ordering. See the unmanaged-native
- QuickStart's
- features.xml
- for the best supported sequence.
-
-
-
-
- Obtaining a SessionFactory
-
- hibernate-osgi registers an OSGi service, using the SessionFactory interface
- name, that bootstraps and creates a SessionFactory specific for OSGi
- environments. It is VITAL that your SF be obtained through the service, rather than creating it
- manually. The service handles the OSGi ClassLoader, discovered extension points, scanning, etc. Manually
- creating a SessionFactory is guaranteed to NOT work during runtime!
-
-
- For an example on how to discover and use the service, see the unmanaged-native
- QuickStart's
- HibernateUtil.java.
-
-
-
-
-
- Optional Modules
-
-
- The unmanaged-native
- QuickStart project demonstrates the use of optional Hibernate modules. Each module adds additional
- dependency bundles that must first be activated
- (see features.xml).
- As of ORM 4.2, Envers is fully supported. Support for C3P0, Proxool, EhCache, and Infinispan was added in
- 4.3; however, none of their 3rd-party libraries currently work in OSGi (lots of ClassLoader problems, etc.).
- We're tracking the issues in JIRA.
-
-
-
-
- Extension Points
-
-
- Multiple contracts exist to allow applications to integrate with and extend Hibernate capabilities. Most
- apps utilize JDK services to provide their implementations. hibernate-osgi supports the same
- extensions through OSGi services. Implement and register them in any of the three configurations.
- hibernate-osgi will discover and integrate them during EMF/SF bootstrapping. Supported extension points
- are as follows. The specified interface should be used during service registration.
-
-
-
- org.hibernate.integrator.spi.Integrator (as of 4.2)
-
-
- org.hibernate.boot.registry.selector.StrategyRegistrationProvider (as of 4.3)
-
-
- org.hibernate.boot.model.TypeContributor (as of 4.3)
-
-
- JTA's javax.transaction.TransactionManager and
- javax.transaction.UserTransaction (as of 4.2), however these are typically
- provided by the OSGi container.
-
-
-
-
-
- The easiest way to register extension point implementations is through a blueprint.xml
- file. Add OSGI-INF/blueprint/blueprint.xml to your classpath. Envers' blueprint
- is a great example:
-
-
-
- Example extension point registrations in blueprint.xml
-
-
-
-
- Extension points can also be registered programmatically with
- BundleContext#registerService, typically within your
- BundleActivator#start.
-
-
-
-
- Caveats
-
-
-
-
- Technically, multiple persistence units are supported by Enterprise OSGi JPA and unmanaged
- Hibernate JPA use. However, we cannot currently support this in OSGi. In Hibernate 4, only one
- instance of the OSGi-specific ClassLoader is used per Hibernate bundle, mainly due to heavy use of
- static TCCL utilities. We hope to support one OSGi ClassLoader per persistence unit in
- Hibernate 5.
-
-
-
-
- Scanning is supported to find non-explicitly listed entities and mappings. However, they MUST be
- in the same bundle as your persistence unit (fairly typical anyway). Our OSGi ClassLoader only
- considers the "requesting bundle" (hence the requirement on using services to create EMF/SF),
- rather than attempting to scan all available bundles. This is primarily for versioning
- considerations, collision protections, etc.
-
-
-
-
- Some containers (ex: Aries) always return true for
- PersistenceUnitInfo#excludeUnlistedClasses,
- even if your persistence.xml explicitly has exclude-unlisted-classes set
- to false. They claim it's to protect JPA providers from having to implement
- scanning ("we handle it for you"), even though we still want to support it in many cases. The
- workaround is to set hibernate.archive.autodetection to, for example,
- hbm,class. This tells Hibernate to ignore the excludeUnlistedClasses value and
- scan for *.hbm.xml and entities regardless.
-
-
-
-
- Scanning does not currently support annotated packages on package-info.java.
-
-
-
-
- Currently, Hibernate OSGi is primarily tested using Apache Karaf and Apache Aries JPA. Additional
- testing is needed with Equinox, Gemini, and other container providers.
-
-
-
-
- Hibernate ORM has many dependencies that do not currently provide OSGi manifests.
- The QuickStart tutorials make heavy use of 3rd party bundles (SpringSource, ServiceMix) or the
- wrap:... operator.
-
-
-
-
- As previously mentioned, bundle activation is currently order specific. See the QuickStart
- tutorials' features.xml for example sequences.
-
-
-
-
- No Enterprise OSGi JPA container currently supports JPA 2.1 (the spec is not yet released). For
- testing, the managed-jpa example makes use of
- Brett's fork of Aries. To work
- with Hibernate 4.3, clone the fork and build Aries JPA.
-
-
-
-
-
-
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/osgi/extras/extension_point_blueprint.xml b/documentation/src/main/docbook/integration/en-US/chapters/osgi/extras/extension_point_blueprint.xml
deleted file mode 100644
index e9a4452914..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/osgi/extras/extension_point_blueprint.xml
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/Persistence_Context.xml b/documentation/src/main/docbook/integration/en-US/chapters/pc/Persistence_Context.xml
deleted file mode 100644
index ef4f98fafe..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/Persistence_Context.xml
+++ /dev/null
@@ -1,356 +0,0 @@
-
-
-
-
-
-
- Persistence Contexts
-
-
-
- Both the org.hibernate.Session API and
- javax.persistence.EntityManager API represent a context for dealing with
- persistent data. This concept is called a persistence context. Persistent data has a
- state in relation to both a persistence context and the underlying database.
-
-
-
- Entity states
-
-
- new, or transient - the entity has just been instantiated and is
- not associated with a persistence context. It has no persistent representation in the database and no
- identifier value has been assigned.
-
-
-
-
- managed, or persistent - the entity has an associated identifier
- and is associated with a persistence context.
-
-
-
-
- detached - the entity has an associated identifier, but is no longer associated with
- a persistence context (usually because the persistence context was closed or the instance was evicted
- from the context)
-
-
-
-
- removed - the entity has an associated identifier and is associated with a persistence
- context, however it is scheduled for removal from the database.
-
-
-
-
-
-
-
- In Hibernate native APIs, the persistence context is defined as the
- org.hibernate.Session. In JPA, the persistence context is defined by
- javax.persistence.EntityManager. Much of the
- org.hibernate.Session and
- javax.persistence.EntityManager methods deal with moving entities between these
- states.
-
-
-
- Making entities persistent
-
-
- Once you've created a new entity instance (using the standard new operator) it is in
- new state. You can make it persistent by associating it to either a
- org.hibernate.Session or
- javax.persistence.EntityManager
-
-
-
- Example of making an entity persistent
-
-
-
-
-
- org.hibernate.Session also has a method named persist
- which follows the exact semantic defined in the JPA specification for the persist
- method. It is this method on org.hibernate.Session to which the
- Hibernate javax.persistence.EntityManager implementation delegates.
-
-
-
- If the DomesticCat entity type has a generated identifier, the value is associated
- to the instance when the save or persist is called. If the
- identifier is not automatically generated, the application-assigned (usually natural) key value has to be
- set on the instance before save or persist is called.
-
-
-
-
- Deleting entities
-
- Entities can also be deleted.
-
-
- Example of deleting an entity
-
-
-
-
- It is important to note that Hibernate itself can handle deleting detached state. JPA, however, disallows
- it. The implication here is that the entity instance passed to the
- org.hibernate.Session delete method can be either
- in managed or detached state, while the entity instance passed to remove on
- javax.persistence.EntityManager must be in managed state.
-
-
-
-
- Obtain an entity reference without initializing its data
-
- Sometimes referred to as lazy loading, the ability to obtain a reference to an entity without having to
- load its data is hugely important. The most common case is the need to create an association between
- an entity and another, existing entity.
-
-
- Example of obtaining an entity reference without initializing its data
-
-
-
-
- The above works on the assumption that the entity is defined to allow lazy loading, generally through
- use of runtime proxies. For more information see . In both
- cases an exception will be thrown later if the given entity does not refer to actual database state
- when the application attempts to use the returned proxy in any way that requires access to its data.
-
-
-
-
- Obtain an entity with its data initialized
-
-
- It is also quite common to want to obtain an entity along with its data, for display for example.
-
-
- Example of obtaining an entity reference with its data initialized
-
-
-
-
- In both cases null is returned if no matching database row was found.
-
-
-
-
- Obtain an entity by natural-id
-
-
- In addition to loading by identifier, Hibernate allows applications to load by a declared
- natural identifier.
-
-
- Example of simple natural-id access
-
-
-
- Example of natural-id access
-
-
-
- Just as we saw above, accessing entity data by natural-id allows both the load
- and getReference forms, with the same semantics.
-
-
-
- Accessing persistent data by identifier and by natural-id is consistent in the Hibernate API. Each defines
- the same 2 data access methods:
-
-
-
- getReference
-
-
- Should be used in cases where the identifier is assumed to exist, where non-existence would be
- an actual error. Should never be used to test existence. That is because this method will
- prefer to create and return a proxy if the data is not already associated with the Session
- rather than hit the database. The quintessential use-case for using this method is to create
- foreign-key based associations.
-
-
-
-
- load
-
-
- Will return the persistent data associated with the given identifier value or null if that
- identifier does not exist.
-
-
-
-
-
- In addition to those 2 methods, each also defines a form accepting
- a org.hibernate.LockOptions argument. Locking is discussed in a separate
- chapter.
-
-
-
-
- Refresh entity state
-
-
- You can reload an entity instance and its collections at any time.
-
-
-
- Example of refreshing entity state
-
-
-
-
-
- One case where this is useful is when it is known that the database state has changed since the data was
- read. Refreshing allows the current database state to be pulled into the entity instance and the
- persistence context.
-
-
-
- Another case where this might be useful is when database triggers are used to initialize some of the
- properties of the entity. Note that only the entity instance and its collections are refreshed unless you
- specify REFRESH as a cascade style of any associations. However, please note that
- Hibernate has the capability to handle this automatically through its notion of generated properties.
- See for information.
-
-
-
-
- Modifying managed/persistent state
-
-
- Entities in managed/persistent state may be manipulated by the application and any changes will be
- automatically detected and persisted when the persistence context is flushed. There is no need to call a
- particular method to make your modifications persistent.
-
-
-
- Example of modifying managed state
-
-
-
-
-
-
- Working with detached data
-
-
- Detachment is the process of working with data outside the scope of any persistence context. Data becomes
- detached in a number of ways. Once the persistence context is closed, all data that was associated with it
- becomes detached. Clearing the persistence context has the same effect. Evicting a particular entity
- from the persistence context makes it detached. And finally, serialization will make the deserialized form
- be detached (the original instance is still managed).
-
-
-
- Detached data can still be manipulated; however, the persistence context will no longer automatically know
- about these modifications, and the application will need to intervene to make the changes persistent.
-
-
-
- Reattaching detached data
-
- Reattachment is the process of taking an incoming entity instance that is in detached state
- and re-associating it with the current persistence context.
-
-
-
- JPA does not provide for this model. This is only available through Hibernate
- org.hibernate.Session.
-
-
-
- Example of reattaching a detached entity
-
-
-
-
- The method name update is a bit misleading here. It does not mean that an
- SQL UPDATE is immediately performed. It does, however, mean that
- an SQL UPDATE will be performed when the persistence context is
- flushed, since Hibernate does not know the entity's previous state against which to compare for changes.
- The exception is an entity mapped with select-before-update, in which case Hibernate will
- pull the current state from the database and determine whether an update is needed.
-
-
- Provided the entity is detached, update and saveOrUpdate
- operate exactly the same.
-
-
-
-
- Merging detached data
-
- Merging is the process of taking an incoming entity instance that is in detached state and copying its
- data over onto a new instance that is in managed state.
-
-
- Visualizing merge
-
-
-
- That is not exactly what happens, but it's a good visualization.
-
-
- Example of merging a detached entity
-
-
-
-
-
-
-
-
-
- Checking persistent state
-
-
- An application can verify the state of entities and collections in relation to the persistence context.
-
-
- Examples of verifying managed state
-
-
-
-
- Examples of verifying laziness
-
-
-
-
- In JPA there is an alternative means to check laziness using the following
- javax.persistence.PersistenceUtil pattern. However,
- javax.persistence.PersistenceUnitUtil is recommended wherever possible.
-
-
- Alternative JPA means to verify laziness
-
-
-
-
-
-
- Accessing Hibernate APIs from JPA
-
- JPA defines an incredibly useful method to allow applications access to the APIs of the underlying provider.
-
-
- Usage of EntityManager.unwrap
-
-
-
-
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithHibernate.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithHibernate.java
deleted file mode 100644
index 1c166cdca0..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithHibernate.java
+++ /dev/null
@@ -1,9 +0,0 @@
-if ( Hibernate.isInitialized( customer.getAddress() ) ) {
- //display address if loaded
-}
-if ( Hibernate.isInitialized( customer.getOrders() ) ) {
- //display orders if loaded
-}
-if ( Hibernate.isPropertyInitialized( customer, "detailedBio" ) ) {
- //display property detailedBio if loaded
-}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA.java
deleted file mode 100644
index 50751d25d3..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA.java
+++ /dev/null
@@ -1,10 +0,0 @@
-javax.persistence.PersistenceUnitUtil jpaUtil = entityManager.getEntityManagerFactory().getPersistenceUnitUtil();
-if ( jpaUtil.isLoaded( customer.getAddress() ) ) {
- //display address if loaded
-}
-if ( jpaUtil.isLoaded( customer.getOrders() ) ) {
- //display orders if loaded
-}
-if ( jpaUtil.isLoaded( customer, "detailedBio" ) ) {
- //display property detailedBio if loaded
-}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA2.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA2.java
deleted file mode 100644
index 9581b5d0e2..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA2.java
+++ /dev/null
@@ -1,10 +0,0 @@
-javax.persistence.PersistenceUtil jpaUtil = javax.persistence.Persistence.getPersistenceUtil();
-if ( jpaUtil.isLoaded( customer.getAddress() ) ) {
- //display address if loaded
-}
-if ( jpaUtil.isLoaded( customer.getOrders() ) ) {
- //display orders if loaded
-}
-if ( jpaUtil.isLoaded( customer, "detailedBio" ) ) {
- //display property detailedBio if loaded
-}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithEM.java
deleted file mode 100644
index 12aaaff765..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithEM.java
+++ /dev/null
@@ -1 +0,0 @@
-assert entityManager.contains( cat );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithSession.java
deleted file mode 100644
index acb40fed3e..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithSession.java
+++ /dev/null
@@ -1 +0,0 @@
-assert session.contains( cat );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithEM.java
deleted file mode 100644
index 0938f5a3bf..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithEM.java
+++ /dev/null
@@ -1 +0,0 @@
-entityManager.remove( fritz );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithSession.java
deleted file mode 100644
index 25831ee346..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithSession.java
+++ /dev/null
@@ -1 +0,0 @@
-session.delete( fritz );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithEM.java
deleted file mode 100644
index e45609a42f..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithEM.java
+++ /dev/null
@@ -1,2 +0,0 @@
-Book book = new Book();
-book.setAuthor( entityManager.getReference( Author.class, authorId ) );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithSession.java
deleted file mode 100644
index d789191dc9..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithSession.java
+++ /dev/null
@@ -1,2 +0,0 @@
-Book book = new Book();
-book.setAuthor( session.byId( Author.class ).getReference( authorId ) );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithEM.java
deleted file mode 100644
index 3c3e56f9ff..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithEM.java
+++ /dev/null
@@ -1 +0,0 @@
-entityManager.find( Author.class, authorId );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithSession.java
deleted file mode 100644
index d9801ef3e6..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithSession.java
+++ /dev/null
@@ -1 +0,0 @@
-session.byId( Author.class ).load( authorId );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithEM.java
deleted file mode 100644
index fc6d6f92bb..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithEM.java
+++ /dev/null
@@ -1,5 +0,0 @@
-DomesticCat fritz = new DomesticCat();
-fritz.setColor( Color.GINGER );
-fritz.setSex( 'M' );
-fritz.setName( "Fritz" );
-entityManager.persist( fritz );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithSession.java
deleted file mode 100644
index 05b85c02ff..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithSession.java
+++ /dev/null
@@ -1,5 +0,0 @@
-DomesticCat fritz = new DomesticCat();
-fritz.setColor( Color.GINGER );
-fritz.setSex( 'M' );
-fritz.setName( "Fritz" );
-session.save( fritz );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithEM.java
deleted file mode 100644
index 49544b6502..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithEM.java
+++ /dev/null
@@ -1,3 +0,0 @@
-Cat cat = entityManager.find( Cat.class, catId );
-cat.setName( "Garfield" );
-entityManager.flush(); // generally this is not explicitly needed
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithSession.java
deleted file mode 100644
index 5e707244c2..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithSession.java
+++ /dev/null
@@ -1,3 +0,0 @@
-Cat cat = session.get( Cat.class, catId );
-cat.setName( "Garfield" );
-session.flush(); // generally this is not explicitly needed
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithEM.java
deleted file mode 100644
index 340af5cddf..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithEM.java
+++ /dev/null
@@ -1 +0,0 @@
-Cat theManagedInstance = entityManager.merge( someDetachedCat );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithSession.java
deleted file mode 100644
index adfad9d69e..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithSession.java
+++ /dev/null
@@ -1 +0,0 @@
-Cat theManagedInstance = session.merge( someDetachedCat );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/NaturalIdLoading.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/NaturalIdLoading.java
deleted file mode 100644
index cd5cd43125..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/NaturalIdLoading.java
+++ /dev/null
@@ -1,31 +0,0 @@
-import java.lang.String;
-
-@Entity
-public class User {
- @Id
- @GeneratedValue
- Long id;
-
- @NaturalId
- String system;
-
- @NaturalId
- String userName;
-
- ...
-}
-
-// use getReference() to create associations...
-Resource aResource = (Resource) session.byId( Resource.class ).getReference( 123 );
-User aUser = (User) session.byNaturalId( User.class )
- .using( "system", "prod" )
- .using( "userName", "steve" )
- .getReference();
-aResource.assignTo( aUser );
-
-
-// use load() to pull initialized data
-return session.byNaturalId( User.class )
- .using( "system", "prod" )
- .using( "userName", "steve" )
- .load();
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ReattachingWithSession1.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ReattachingWithSession1.java
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ReattachingWithSession2.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ReattachingWithSession2.java
deleted file mode 100644
index a26c2c98b0..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ReattachingWithSession2.java
+++ /dev/null
@@ -1 +0,0 @@
-session.saveOrUpdate( someDetachedCat );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithEM.java
deleted file mode 100644
index 8258af31e6..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithEM.java
+++ /dev/null
@@ -1,3 +0,0 @@
-Cat cat = entityManager.find( Cat.class, catId );
-...
-entityManager.refresh( cat );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithSession.java
deleted file mode 100644
index 3436fe3f88..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithSession.java
+++ /dev/null
@@ -1,3 +0,0 @@
-Cat cat = session.get( Cat.class, catId );
-...
-session.refresh( cat );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/SimpleNaturalIdLoading.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/SimpleNaturalIdLoading.java
deleted file mode 100644
index a07ecf789b..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/SimpleNaturalIdLoading.java
+++ /dev/null
@@ -1,20 +0,0 @@
-@Entity
-public class User {
- @Id
- @GeneratedValue
- Long id;
-
- @NaturalId
- String userName;
-
- ...
-}
-
-// use getReference() to create associations...
-Resource aResource = (Resource) session.byId( Resource.class ).getReference( 123 );
-User aUser = (User) session.bySimpleNaturalId( User.class ).getReference( "steve" );
-aResource.assignTo( aUser );
-
-
-// use load() to pull initialized data
-return session.bySimpleNaturalId( User.class ).load( "steve" );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/UnwrapWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/UnwrapWithEM.java
deleted file mode 100644
index d0c4fac2aa..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/UnwrapWithEM.java
+++ /dev/null
@@ -1,2 +0,0 @@
-Session session = entityManager.unwrap( Session.class );
-SessionImplementor sessionImplementor = entityManager.unwrap( SessionImplementor.class );
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/VisualizingMerge.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/VisualizingMerge.java
deleted file mode 100644
index 3f8697a4ae..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/VisualizingMerge.java
+++ /dev/null
@@ -1,5 +0,0 @@
-Object detached = ...;
-Object managed = entityManager.find( detached.getClass(), detached.getId() );
-managed.setXyz( detached.getXyz() );
-...
-return managed;
\ No newline at end of file
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/Criteria.xml b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/Criteria.xml
deleted file mode 100644
index dfce912537..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/Criteria.xml
+++ /dev/null
@@ -1,413 +0,0 @@
-
-
-
-
-
- Criteria
-
-
- Criteria queries offer a type-safe alternative to HQL, JPQL and native-sql queries.
-
-
-
-
- Hibernate offers an older, legacy org.hibernate.Criteria API which should be
- considered deprecated. No feature development will target those APIs. Eventually, Hibernate-specific
- criteria features will be ported as extensions to the JPA
- javax.persistence.criteria.CriteriaQuery. For details on the
- org.hibernate.Criteria API, see .
-
-
- This chapter will focus on the JPA APIs for declaring type-safe criteria queries.
-
-
-
-
-
- Criteria queries are a programmatic, type-safe way to express a query. They are type-safe in terms of
- using interfaces and classes to represent various structural parts of a query such as the query itself,
- or the select clause, or an order-by, etc. They can also be type-safe in terms of referencing attributes
- as we will see in a bit. Users of the older Hibernate org.hibernate.Criteria
- query API will recognize the general approach, though we believe the JPA API to be superior
- as it represents a clean look at the lessons learned from that API.
-
-
-
- Criteria queries are essentially an object graph, where each part of the graph represents an increasingly
- more atomic part of the query (as we navigate down this graph). The first step in performing a criteria query
- is building this graph. The javax.persistence.criteria.CriteriaBuilder
- interface is the first thing with which you need to become acquainted to begin using criteria queries. Its
- role is that of a factory for all the individual pieces of the criteria. You obtain a
- javax.persistence.criteria.CriteriaBuilder instance by calling the
- getCriteriaBuilder method of either
- javax.persistence.EntityManagerFactory or
- javax.persistence.EntityManager.
-
-
-
- The next step is to obtain a javax.persistence.criteria.CriteriaQuery. This
- is accomplished using one of the 3 methods on
- javax.persistence.criteria.CriteriaBuilder for this purpose:
-
-
-
-
-
- Each serves a different purpose depending on the expected type of the query results.
-
-
-
-
- Chapter 6 Criteria API of the JPA Specification
- already contains a decent amount of reference material pertaining to the various parts of a
- criteria query. So rather than duplicate all that content here, let's instead look at some of
- the more widely anticipated usages of the API.
-
-
-
-
- Typed criteria queries
-
-
- The type of the criteria query (aka the <T>) indicates the expected types in the query
- result. This might be an entity, an Integer, or any other object.
-
-
-
- Selecting an entity
-
-
- This is probably the most common form of query. The application wants to select entity instances.
-
-
-
- Selecting the root entity
-
-
-
-
- The example uses createQuery passing in the Person
- class reference as the results of the query will be Person objects.
-
-
-
-
- The call to the CriteriaQuery.select method in this example is
- unnecessary because personRoot will be the implied selection, since we
- have only a single query root. It is shown here only for completeness of the example.
-
-
- The Person_.eyeColor reference is an example of the static form of JPA
- metamodel reference. We will use that form exclusively in this chapter. See
- the documentation for the Hibernate JPA Metamodel Generator for additional details on
- the JPA static metamodel.
-
-
-
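The root-entity listing referenced above was an external file not shown here; a minimal sketch of such a query, assuming a mapped Person entity and its generated Person_ metamodel (as used elsewhere in this chapter), with imports omitted as in the chapter's other listings. The "brown" restriction is purely illustrative.

```java
// Sketch: selecting the root entity (assumes a mapped Person entity)
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<Person> criteria = builder.createQuery( Person.class );
Root<Person> personRoot = criteria.from( Person.class );
// select() is implied anyway, since there is only a single root
criteria.select( personRoot );
criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), "brown" ) );
List<Person> people = entityManager.createQuery( criteria ).getResultList();
```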
-
-
- Selecting an expression
-
-
- The simplest form of selecting an expression is selecting a particular attribute from an entity.
- But this expression might also represent an aggregation, a mathematical operation, etc.
-
-
-
- Selecting an attribute
-
-
-
-
- In this example, the query is typed as java.lang.Integer because that
- is the anticipated type of the results (the type of the Person#age attribute
- is java.lang.Integer). Because a query might contain multiple references to
- the Person entity, attribute references always need to be qualified. This is accomplished by the
- Root#get method call.
-
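A sketch of the attribute-selection case described above, again assuming the Person entity and Person_ metamodel; the query is typed as Integer because Person#age is an Integer.

```java
// Sketch: selecting a single attribute (Person#age), qualified via Root#get
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<Integer> criteria = builder.createQuery( Integer.class );
Root<Person> personRoot = criteria.from( Person.class );
criteria.select( personRoot.get( Person_.age ) );
List<Integer> ages = entityManager.createQuery( criteria ).getResultList();
```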
-
-
-
-
- Selecting multiple values
-
-
- There are actually a few different ways to select multiple values using criteria queries. We
- will explore 2 options here, but an alternative recommended approach is to use tuples as described in
- . Or consider a wrapper query; see
- for details.
-
-
-
- Selecting an array
-
-
-
-
- Technically this is classified as a typed query but, as the handling of the results shows, that is
- somewhat misleading. The expected result type here is an array.
-
-
-
- The example then uses the array method of
- javax.persistence.criteria.CriteriaBuilder which explicitly
- combines individual selections into a
- javax.persistence.criteria.CompoundSelection.
-
-
-
- Selecting an array (2)
-
-
-
-
- Just as we saw in we have a typed criteria
- query returning an Object array. Both queries are functionally equivalent. This second example
- uses the multiselect method which behaves slightly differently based on
- the type given when the criteria query was first built, but in this case it says to select and
- return an Object[].
-
-
-
-
- Selecting a wrapper
-
- Another alternative to is to instead
- select an object that will wrap the multiple values. Going back to the example
- query there, rather than returning an array of [Person#id, Person#age],
- declare a class that holds these values and return that instead.
-
-
-
- Selecting a wrapper
-
-
-
-
- First we see the simple definition of the wrapper object we will be using to wrap our result
- values. Specifically notice the constructor and its argument types. Since we will be returning
- PersonWrapper objects, we use PersonWrapper as the
- type of our criteria query.
-
-
-
- This example illustrates the use of the
- javax.persistence.criteria.CriteriaBuilder method
- construct which is used to build a wrapper expression. For every row in the
- result we are saying we would like a PersonWrapper instantiated with
- the remaining arguments by the matching constructor. This wrapper expression is then passed as
- the select.
-
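A sketch of the wrapper approach just described, assuming a hypothetical PersonWrapper class declaring a constructor matching the selected values (Person#id, Person#age):

```java
// Sketch: selecting into a wrapper via a constructor expression
// (PersonWrapper is assumed to declare a matching (Long, Integer) constructor)
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<PersonWrapper> criteria = builder.createQuery( PersonWrapper.class );
Root<Person> personRoot = criteria.from( Person.class );
criteria.select(
		builder.construct(
				PersonWrapper.class,
				personRoot.get( Person_.id ),
				personRoot.get( Person_.age )
		)
);
List<PersonWrapper> people = entityManager.createQuery( criteria ).getResultList();
```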
-
-
-
-
- Tuple criteria queries
-
-
- A better approach to is to use either a
- wrapper (which we just saw in ) or the
- javax.persistence.Tuple contract.
-
-
-
- Selecting a tuple
-
-
-
-
- This example illustrates accessing the query results through the
- javax.persistence.Tuple interface. The example uses the explicit
- createTupleQuery of
- javax.persistence.criteria.CriteriaBuilder. An alternate approach
- is to use createQuery passing Tuple.class.
-
-
-
- Again we see the use of the multiselect method, just like in
- . The difference here is that the type of the
- javax.persistence.criteria.CriteriaQuery was defined as
- javax.persistence.Tuple so the compound selections in this case are
- interpreted to be the tuple elements.
-
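A sketch of the tuple query discussed above, showing the createTupleQuery form and the typed tuple.get( idPath ) / tuple.get( agePath ) access mentioned in the text (Person entity and metamodel assumed):

```java
// Sketch: a Tuple-typed criteria query with typed element access
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<Tuple> criteria = builder.createTupleQuery();
Root<Person> personRoot = criteria.from( Person.class );
Path<Long> idPath = personRoot.get( Person_.id );
Path<Integer> agePath = personRoot.get( Person_.age );
// multiselect() entries become the tuple elements
criteria.multiselect( idPath, agePath );
for ( Tuple tuple : entityManager.createQuery( criteria ).getResultList() ) {
	Long id = tuple.get( idPath );      // typed access via the TupleElement
	Integer age = tuple.get( agePath );
}
```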
-
-
- The javax.persistence.Tuple contract provides 3 forms of access to
- the underlying elements:
-
-
-
-
- typed
-
-
- The example illustrates this form of access
- in the tuple.get( idPath ) and tuple.get( agePath ) calls.
- This allows typed access to the underlying tuple values based on the
- javax.persistence.TupleElement expressions used to build
- the criteria.
-
-
-
-
- positional
-
-
- Allows access to the underlying tuple values based on position. The simple
- Object get(int position) form is very similar to the access
- illustrated in and
- . The
- <X> X get(int position, Class<X> type) form
- allows typed positional access, based on the explicitly supplied type to which the tuple
- value must be type-assignable.
-
-
-
-
- aliased
-
-
- Allows access to the underlying tuple values based on an (optionally) assigned alias. The
- example query did not apply an alias. An alias would be applied via the
- alias method on
- javax.persistence.criteria.Selection. Just like
- positional access, there is both an untyped form
- (Object get(String alias)) and a typed form
- (<X> X get(String alias, Class<X> type)).
-
-
-
-
-
-
-
- FROM clause
-
-
- JPA Specification, section 6.5.2 Query Roots, pg 262
-
-
- A CriteriaQuery object defines a query over one or more entity, embeddable, or basic abstract
- schema types. The root objects of the query are entities, from which the other types are reached
- by navigation.
-
-
-
-
-
- All the individual parts of the FROM clause (roots, joins, paths) implement the
- javax.persistence.criteria.From interface.
-
-
-
-
- Roots
-
-
- Roots define the basis from which all joins, paths and attributes are available in the query.
- A root is always an entity type. Roots are defined and added to the criteria by the overloaded
- from methods on
- javax.persistence.criteria.CriteriaQuery:
-
-
-
-
-
- Adding a root
-
-
-
-
- Criteria queries may define multiple roots, the effect of which is to create a Cartesian
- product between the newly added root and the others. Here is an example matching all single
- men and all single women:
-
-
-
- Adding multiple roots
-
-
-
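A sketch of the multiple-roots example described above; the attribute names (Person#gender, Person#relationshipStatus) and their enum types are assumptions for illustration.

```java
// Sketch: two roots over the same entity form a Cartesian product,
// here matching all single men against all single women
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<Tuple> criteria = builder.createTupleQuery();
Root<Person> men = criteria.from( Person.class );
Root<Person> women = criteria.from( Person.class );
criteria.multiselect( men, women );
criteria.where(
		builder.and(
				builder.equal( men.get( Person_.gender ), Gender.MALE ),
				builder.equal( men.get( Person_.relationshipStatus ), RelationshipStatus.SINGLE ),
				builder.equal( women.get( Person_.gender ), Gender.FEMALE ),
				builder.equal( women.get( Person_.relationshipStatus ), RelationshipStatus.SINGLE )
		)
);
```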
-
-
- Joins
-
-
- Joins allow navigation from other javax.persistence.criteria.From
- expressions to either association or embedded attributes. Joins are created by the numerous overloaded
- join methods of the
- javax.persistence.criteria.From interface.
-
-
-
- Example with Embedded and ManyToOne
-
-
-
-
- Example with Collections
-
-
-
-
-
- Fetches
-
-
- Just like in HQL and JPQL, criteria queries can specify that associated data be fetched along
- with the owner. Fetches are created by the numerous overloaded fetch
- methods of the javax.persistence.criteria.From interface.
-
-
-
- Example with Embedded and ManyToOne
-
-
-
-
-
- Technically speaking, embedded attributes are always fetched with their owner. However, in
- order to define the fetching of Address#country we needed a
- javax.persistence.criteria.Fetch for its parent path.
-
-
-
-
- Example with Collections
-
-
-
-
-
-
- Path expressions
-
-
- Roots, joins and fetches are themselves paths as well.
-
-
-
-
-
- Using parameters
-
-
- Using parameters
-
-
-
-
- Use the parameter method of
- javax.persistence.criteria.CriteriaBuilder to obtain a parameter
- reference. Then use the parameter reference to bind the parameter value to the
- javax.persistence.Query.
-
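A sketch of declaring a parameter and binding its value through the parameter reference, as described above (Person entity and metamodel assumed; the eyeColor name and bound value are illustrative):

```java
// Sketch: obtain a parameter reference, then bind the value through it
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<Person> criteria = builder.createQuery( Person.class );
Root<Person> personRoot = criteria.from( Person.class );
ParameterExpression<String> eyeColorParam =
		builder.parameter( String.class, "eyeColor" );
criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), eyeColorParam ) );
TypedQuery<Person> query = entityManager.createQuery( criteria );
query.setParameter( eyeColorParam, "brown" ); // bind via the reference
List<Person> people = query.getResultList();
```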
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/CriteriaBuilder_query_creation_snippet.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/CriteriaBuilder_query_creation_snippet.java
deleted file mode 100644
index a4dc3ef01b..0000000000
--- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/CriteriaBuilder_query_creation_snippet.java
+++ /dev/null
@@ -1,3 +0,0 @@
 -<T> CriteriaQuery<T> createQuery(Class<T> resultClass);
-CriteriaQuery<Tuple> createTupleQuery();
-CriteriaQuery