diff --git a/documentation/src/main/docbook/integration/en-US/Author_Group.xml b/documentation/src/main/docbook/integration/en-US/Author_Group.xml deleted file mode 100644 index 564da0dcb5..0000000000 --- a/documentation/src/main/docbook/integration/en-US/Author_Group.xml +++ /dev/null @@ -1,92 +0,0 @@ - - - - - - - Gavin - King - - - - - Christian - Bauer - - - - - Steve - Ebersole - - - - - Max - Rydahl - Andersen - - - - - Emmanuel - Bernard - - - - - Hardy - Ferentschik - - - - - Adam - Warski - - - - - Gail - Badner - - - - - Brett - Meyer - - - - - - James - Cobb - - - Graphic Design - - - - - Cheyenne - Weaver - - - Graphic Design - - - - - - Misty - Stanley-Jones - - - - diff --git a/documentation/src/main/docbook/integration/en-US/Batch_Processing.xml b/documentation/src/main/docbook/integration/en-US/Batch_Processing.xml deleted file mode 100644 index 04b5e880d9..0000000000 --- a/documentation/src/main/docbook/integration/en-US/Batch_Processing.xml +++ /dev/null @@ -1,252 +0,0 @@ - - - - - - Batch Processing - - The following example shows an antipattern for batch inserts. - - - Naive way to insert 100000 lines with Hibernate - - - This fails with exception OutOfMemoryException after around 50000 rows on most - systems. The reason is that Hibernate caches all the newly inserted Customer instances in the session-level - cache. There are several ways to avoid this problem. - - - - Before batch processing, enable JDBC batching. To enable JDBC batching, set the property - hibernate.jdbc.batch_size to an integer between 10 and 50. - - - - Hibernate disables insert batching at the JDBC level transparently if you use an identity identifier generator. - - - - If the above approach is not appropriate, you can disable the second-level cache, by setting - hibernate.cache.use_second_level_cache to false. - - -
- Batch inserts - - When you make new objects persistent, employ methods flush() and - clear() to the session regularly, to control the size of the first-level cache. - - - Flushing and clearing the <classname>Session</classname> - - -
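A minimal sketch of the batched insert loop described above (the Customer entity, its constructor arguments and the sessionFactory reference are illustrative; the flush/clear interval should match the hibernate.jdbc.batch_size setting):

    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    for ( int i = 0; i < 100000; i++ ) {
        Customer customer = new Customer( ... );   // hypothetical entity
        session.save( customer );
        if ( i % 20 == 0 ) {                       // 20: same as the JDBC batch size
            // flush the batch of inserts and free first-level cache memory
            session.flush();
            session.clear();
        }
    }
    tx.commit();
    session.close();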
- -
- Batch updates - When you retrieve and update data, flush() and clear() the session regularly. In addition, use the scroll() method to take advantage of server-side cursors for queries that return many rows of data. - Using <methodname>scroll()</methodname> -
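A sketch of the same pattern applied to updates, using a server-side cursor (the Customer entity and the updateStuff() call are illustrative):

    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    ScrollableResults customers = session.createQuery( "from Customer" )
            .scroll( ScrollMode.FORWARD_ONLY );
    int count = 0;
    while ( customers.next() ) {
        Customer customer = (Customer) customers.get( 0 );
        customer.updateStuff( ... );               // hypothetical modification
        if ( ++count % 20 == 0 ) {
            // flush the batch of updates and free first-level cache memory
            session.flush();
            session.clear();
        }
    }
    tx.commit();
    session.close();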
- -
- StatelessSession - - StatelessSession is a command-oriented API provided by Hibernate. Use it to stream - data to and from the database in the form of detached objects. A StatelessSession - has no persistence context associated with it and does not provide many of the higher-level life cycle - semantics. Some of the things not provided by a StatelessSession include: - - - Features and behaviors not provided by <interfacename>StatelessSession</interfacename> - - - a first-level cache - - - - - interaction with any second-level or query cache - - - - - transactional write-behind or automatic dirty checking - - - - - Limitations of <interfacename>StatelessSession</interfacename> - - - Operations performed using a stateless session never cascade to associated instances. - - - - - Collections are ignored by a stateless session. - - - - - Operations performed via a stateless session bypass Hibernate's event model and interceptors. - - - - - Due to the lack of a first-level cache, Stateless sessions are vulnerable to data aliasing effects. - - - - - A stateless session is a lower-level abstraction that is much closer to the underlying JDBC. - - - - - Using a <interfacename>StatelessSession</interfacename> - - - The Customer instances returned by the query are immediately detached. They are never - associated with any persistence context. - - - - The insert(), update(), and delete() - operations defined by the StatelessSession interface operate directly on database - rows. They cause the corresponding SQL operations to be executed immediately. They have different semantics from - the save(), saveOrUpdate(), and - delete() operations defined by the Session interface. - -
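A sketch of streaming updates through a StatelessSession (the Customer entity and updateStuff() call are illustrative). Note the explicit update() call: there is no persistence context, so there is no automatic dirty checking.

    StatelessSession session = sessionFactory.openStatelessSession();
    Transaction tx = session.beginTransaction();
    ScrollableResults customers = session.createQuery( "from Customer" )
            .scroll( ScrollMode.FORWARD_ONLY );
    while ( customers.next() ) {
        Customer customer = (Customer) customers.get( 0 );
        customer.updateStuff( ... );               // hypothetical modification
        session.update( customer );                // the UPDATE is executed immediately
    }
    tx.commit();
    session.close();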
- -
- Hibernate Query Language for DML - DML, or Data Manipulation Language, refers to SQL statements such as INSERT, UPDATE, and DELETE. Hibernate provides methods for bulk SQL-style DML statement execution, in the form of Hibernate Query Language (HQL). -
- HQL for UPDATE and DELETE - Pseudo-syntax for UPDATE and DELETE statements using HQL - ( UPDATE | DELETE ) FROM? EntityName (WHERE where_conditions)? - The ? suffix indicates an optional parameter. The FROM and WHERE clauses are each optional. - The FROM clause can only refer to a single entity, which can be aliased. If the entity name is aliased, any property references must be qualified using that alias. If the entity name is not aliased, then it is illegal for any property references to be qualified. - Joins, either implicit or explicit, are prohibited in a bulk HQL query. You can use sub-queries in the WHERE clause, and the sub-queries themselves can contain joins. - Executing an HQL UPDATE, using the <methodname>Query.executeUpdate()</methodname> method - In keeping with the EJB3 specification, HQL UPDATE statements, by default, do not affect the version or the timestamp property values for the affected entities. You can use a versioned update to force Hibernate to reset the version or timestamp property values, by adding the VERSIONED keyword after the UPDATE keyword. - Updating the version or timestamp - If you use the VERSIONED statement, you cannot use custom version types, which use class org.hibernate.usertype.UserVersionType. - An HQL <literal>DELETE</literal> statement - Method Query.executeUpdate() returns an int value, which indicates the number of entities affected by the operation. This may or may not correlate to the number of rows affected in the database. An HQL bulk operation might result in multiple SQL statements being executed, such as for joined-subclass. In the example of joined-subclass, a DELETE against one of the subclasses may actually result in deletes in the tables underlying the join, or further down the inheritance hierarchy. -
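Hedged examples of both statement forms (the Customer entity and its properties are illustrative):

    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();

    // UPDATE: executeUpdate() returns the number of entities affected
    int updated = session.createQuery(
            "update Customer c set c.name = :newName where c.name = :oldName" )
            .setString( "newName", newName )
            .setString( "oldName", oldName )
            .executeUpdate();

    // DELETE
    int deleted = session.createQuery(
            "delete Customer c where c.name = :oldName" )
            .setString( "oldName", oldName )
            .executeUpdate();

    tx.commit();
    session.close();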
- -
- HQL syntax for INSERT - - Pseudo-syntax for INSERT statements - - INSERT INTO EntityName properties_list select_statement - - - - Only the INSERT INTO ... SELECT ... form is supported. You cannot specify explicit values to - insert. - - - The properties_list is analogous to the column specification in the SQL - INSERT statement. For entities involved in mapped inheritance, you can only use properties directly - defined on that given class-level in the properties_list. Superclass properties are - not allowed and subclass properties are irrelevant. In other words, INSERT statements are - inherently non-polymorphic. - - - The select_statement can be any valid HQL select query, but the return types must - match the types expected by the INSERT. Hibernate verifies the return types during query compilation, instead of - expecting the database to check it. Problems might result from Hibernate types which are equivalent, rather than - equal. One such example is a mismatch between a property defined as an org.hibernate.type.DateType - and a property defined as an org.hibernate.type.TimestampType, even though the database may not - make a distinction, or may be capable of handling the conversion. - - - If id property is not specified in the properties_list, - Hibernate generates a value automatically. Automatic generation is only available if you use ID generators which - operate on the database. Otherwise, Hibernate throws an exception during parsing. Available in-database - generators are org.hibernate.id.SequenceGenerator and its subclasses, and objects which - implement org.hibernate.id.PostInsertIdentifierGenerator. The most notable - exception is org.hibernate.id.TableHiLoGenerator, which does not expose a selectable way - to get its values. - - - For properties mapped as either version or timestamp, the insert statement gives you two options. You can either - specify the property in the properties_list, in which case its value is taken from the corresponding select - expressions, or omit it from the properties_list, in which case the seed value defined by the - org.hibernate.type.VersionType is used. - - - HQL INSERT statement - - -
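A sketch of the INSERT INTO ... SELECT ... form (both entities and the overdueBalance property are illustrative):

    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    int created = session.createQuery(
            "insert into DelinquentAccount (id, name) " +
            "select c.id, c.name from Customer c where c.overdueBalance > :limit" )
            .setParameter( "limit", limit )
            .executeUpdate();
    tx.commit();
    session.close();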
-
- More information on HQL - - This section is only a brief overview of HQL. For more information, see . - -
-
-
- diff --git a/documentation/src/main/docbook/integration/en-US/Caching.xml b/documentation/src/main/docbook/integration/en-US/Caching.xml deleted file mode 100644 index 99aec1183e..0000000000 --- a/documentation/src/main/docbook/integration/en-US/Caching.xml +++ /dev/null @@ -1,557 +0,0 @@ - - - - - - - Caching - - -
- The query cache - If you have queries that run over and over, with the same parameters, query caching provides performance gains. - Caching introduces overhead in the area of transactional processing. For example, if you cache results of a query against an object, Hibernate needs to keep track of whether any changes have been committed against the object, and invalidate the cache accordingly. In addition, the benefit from caching query results is limited, and highly dependent on the usage patterns of your application. For these reasons, Hibernate disables the query cache by default. - Enabling the query cache - Set the <property>hibernate.cache.use_query_cache</property> property to <literal>true</literal>. - This setting creates two new cache regions: - org.hibernate.cache.internal.StandardQueryCache holds the cached query results. - org.hibernate.cache.spi.UpdateTimestampsCache holds timestamps of the most recent updates to queryable tables. These timestamps validate results served from the query cache. - Adjust the cache timeout of the underlying cache region - If you configure your underlying cache implementation to use expiry or timeouts, set the cache timeout of the underlying cache region for the UpdateTimestampsCache to a higher value than the timeouts of any of the query caches. It is possible, and recommended, to set the UpdateTimestampsCache region never to expire. To be specific, an LRU (Least Recently Used) cache expiry policy is never appropriate. - Enable results caching for specific queries - Since most queries do not benefit from caching of their results, you need to enable caching for individual queries, even after enabling query caching overall. To enable results caching for a particular query, call org.hibernate.Query.setCacheable(true). This call allows the query to look for existing cache results or add its results to the cache when it is executed. - The query cache does not cache the state of the actual entities in the cache. It caches identifier values and results of value type. Therefore, always use the query cache in conjunction with the second-level cache for those entities which should be cached as part of a query result cache. -
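A sketch of enabling result caching for one query (the Blog entity and blogger parameter are illustrative):

    List blogs = session.createQuery( "from Blog b where b.blogger = :blogger" )
            .setParameter( "blogger", blogger )
            .setCacheable( true )     // look up cached results, or cache the results on execution
            .list();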
- Query cache regions - - For fine-grained control over query cache expiration policies, specify a named cache region for a particular - query by calling Query.setCacheRegion(). - - - - Method <methodname>setCacheRegion</methodname> - - - - - To force the query cache to refresh one of its regions and disregard any cached results in the region, call - org.hibernate.Query.setCacheMode(CacheMode.REFRESH). In conjunction with the region defined for the - given query, Hibernate selectively refreshes the results cached in that particular region. This is much more - efficient than bulk eviction of the region via org.hibernate.SessionFactory.evictQueries(). - - -
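A sketch combining a named region with a forced refresh (the region name and query are illustrative):

    List blogs = session.createQuery( "from Blog b where b.blogger = :blogger" )
            .setParameter( "blogger", blogger )
            .setCacheable( true )
            .setCacheRegion( "frontpages" )        // named region with its own expiration policy
            .setCacheMode( CacheMode.REFRESH )     // disregard any cached results in that region
            .list();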
- -
- -
- Second-level cache providers - - Hibernate is compatible with several second-level cache providers. None of the providers support all of - Hibernate's possible caching strategies. lists the providers, along with - their interfaces and supported caching strategies. For definitions of caching strategies, see . - - -
- Configuring your cache providers - - You can configure your cache providers using either annotations or mapping files. - - - Entities - - By default, entities are not part of the second-level cache, and their use is not recommended. If you - absolutely must use entities, set the shared-cache-mode element in - persistence.xml, or use property javax.persistence.sharedCache.mode - in your configuration. Use one of the values in . - - - - Possible values for Shared Cache Mode - - - - Value - Description - - - - - ENABLE_SELECTIVE - - - Entities are not cached unless you explicitly mark them as cachable. This is the default and - recommended value. - - - - - DISABLE_SELECTIVE - - - Entities are cached unless you explicitly mark them as not cacheable. - - - - - ALL - - - All entities are always cached even if you mark them as not cacheable. - - - - - NONE - - - No entities are cached even if you mark them as cacheable. This option basically disables second-level - caching. - - - - - -
- - Set the global default cache concurrency strategy The cache concurrency strategy with the - hibernate.cache.default_cache_concurrency_strategy configuration property. See for possible values. - - - - When possible, define the cache concurrency strategy per entity rather than globally. Use the - @org.hibernate.annotations.Cache annotation. - - - - Configuring cache providers using annotations - - - You can cache the content of a collection or the identifiers, if the collection contains other entities. Use - the @Cache annotation on the Collection property. - - - @Cache can take several attributes. - - - Attributes of <code>@Cache</code> annotation - - usage - - - The given cache concurrency strategy, which may be: - - - - - NONE - - - - - READ_ONLY - - - - - NONSTRICT_READ_WRITE - - - - - READ_WRITE - - - - - TRANSACTIONAL - - - - - - - region - - - The cache region. This attribute is optional, and defaults to the fully-qualified class name of the - class, or the qually-qualified role name of the collection. - - - - - include - - - Whether or not to include all properties.. Optional, and can take one of two possible values. - - - - - A value of all includes all properties. This is the default. - - - - - A value of non-lazy only includes non-lazy properties. - - - - - - - - - - Configuring cache providers using mapping files - - - Just as in the , you can provide attributes in the - mapping file. There are some specific differences in the syntax for the attributes in a mapping file. - - - - usage - - - The caching strategy. This attribute is required, and can be any of the following values. - - - transactional - read-write - nonstrict-read-write - read-only - - - - - region - - - The name of the second-level cache region. This optional attribute defaults to the class or collection - role name. - - - - - include - - - Whether properties of the entity mapped with lazy=true can be cached when - attribute-level lazy fetching is enabled. Defaults to all and can also be - non-lazy. - - - - - - Instead of <cache>, you can use <class-cache> and - <collection-cache> elements in hibernate.cfg.xml. - - -
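A sketch of the annotation-based configuration described above (the Forum and Post classes are illustrative; @Cache and CacheConcurrencyStrategy come from org.hibernate.annotations):

    @Entity
    @Cacheable
    @Cache( usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE, region = "forums" )
    public class Forum {
        @Id
        private Long id;

        // caches the collection of identifiers, not the Post entities themselves
        @Cache( usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE )
        @OneToMany( mappedBy = "forum" )
        private Set<Post> posts = new HashSet<Post>();

        // ...
    }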
-
- Caching strategies - read-only - A read-only cache is good for data that needs to be read often but not modified. It is simple, performs well, and is safe to use in a clustered environment. - nonstrict-read-write - Some applications only rarely need to modify data. This is the case if two transactions are unlikely to try to update the same item simultaneously. In this case, you do not need strict transaction isolation, and a nonstrict-read-write cache might be appropriate. If the cache is used in a JTA environment, you must specify hibernate.transaction.manager_lookup_class. In other environments, ensure that the transaction is complete before you call Session.close() or Session.disconnect(). - read-write - A read-write cache is appropriate for an application which needs to update data regularly. Do not use a read-write strategy if you need serializable transaction isolation. In a JTA environment, specify a strategy for obtaining the JTA TransactionManager by setting the property hibernate.transaction.manager_lookup_class. In non-JTA environments, be sure the transaction is complete before you call Session.close() or Session.disconnect(). - To use the read-write strategy in a clustered environment, the underlying cache implementation must support locking. The built-in cache providers do not support locking. - transactional - The transactional cache strategy provides support for transactional cache providers such as JBoss TreeCache. You can only use such a cache in a JTA environment, and you must first specify hibernate.transaction.manager_lookup_class. -
-
- Second-level cache providers for Hibernate - - - - - Cache - Interface - Supported strategies - - - - - HashTable (testing only) - - - - read-only - nonstrict-read-write - read-write - - - - - EHCache - - - - read-only - nonstrict-read-write - read-write - transactional - - - - - Infinispan - - - - read-only - transactional - - - - - - -
-
- -
- Managing the cache - -
- Moving items into and out of the cache - - Actions that add an item to internal cache of the Session - - Saving or updating an item - - - - - save() - - - - - update() - - - - - saveOrUpdate() - - - - - - - Retrieving an item - - - - - load() - - - - - get() - - - - - list() - - - - - iterate() - - - - - scroll() - - - - - - - - Syncing or removing a cached item - - The state of an object is synchronized with the database when you call method - flush(). To avoid this synchronization, you can remove the object and all collections - from the first-level cache with the evict() method. To remove all items from the - Session cache, use method Session.clear(). - - - - Evicting an item from the first-level cache - - - - Determining whether an item belongs to the Session cache - - The Session provides a contains() method to determine if an instance belongs to the - session cache. - - - - - Second-level cache eviction - - You can evict the cached state of an instance, entire class, collection instance or entire collection role, - using methods of SessionFactory. - - - -
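Hedged examples of both levels of eviction (the Cat entity, the kittens collection role, the catId identifier and the doSomethingWithACat() call are illustrative):

    // first-level cache: detach processed instances while iterating a large result set
    ScrollableResults cats = session.createQuery( "from Cat as cat" ).scroll();
    while ( cats.next() ) {
        Cat cat = (Cat) cats.get( 0 );
        doSomethingWithACat( cat );       // hypothetical processing
        session.evict( cat );
    }

    // second-level cache: evict via the SessionFactory's Cache API
    sessionFactory.getCache().evictEntity( Cat.class, catId );        // one instance
    sessionFactory.getCache().evictEntityRegion( Cat.class );         // all Cats
    sessionFactory.getCache().evictCollection( "Cat.kittens", catId );
    sessionFactory.getCache().evictCollectionRegion( "Cat.kittens" );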
- Interactions between a Session and the second-level cache - - The CacheMode controls how a particular session interacts with the second-level cache. - - - - - - CacheMode.NORMAL - reads items from and writes them to the second-level cache. - - - CacheMode.GET - reads items from the second-level cache, but does not write to the second-level cache except to - update data. - - - CacheMode.PUT - writes items to the second-level cache. It does not read from the second-level cache. It bypasses - the effect of hibernate.cache.use_minimal_puts and forces a refresh of the - second-level cache for all items read from the database. - - - - -
- -
- Browsing the contents of a second-level or query cache region - - After enabling statistics, you can browse the contents of a second-level cache or query cache region. - - - Enabling Statistics - - - Set hibernate.generate_statistics to true. - - - - - Optionally, set hibernate.cache.use_structured_entries to true, to cause - Hibernate to store the cache entries in a human-readable format. - - - - - Browsing the second-level cache entries via the Statistics API - - -
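A sketch of reading a region's entries through the Statistics API (the region name is illustrative; Statistics and SecondLevelCacheStatistics live in org.hibernate.stat):

    Statistics statistics = sessionFactory.getStatistics();
    SecondLevelCacheStatistics cacheStatistics =
            statistics.getSecondLevelCacheStatistics( "org.example.Cat" );   // region name
    Map cacheEntries = cacheStatistics.getEntries();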
-
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Data_Categorizations.xml b/documentation/src/main/docbook/integration/en-US/Data_Categorizations.xml deleted file mode 100644 index 198b6c7706..0000000000 --- a/documentation/src/main/docbook/integration/en-US/Data_Categorizations.xml +++ /dev/null @@ -1,458 +0,0 @@ - - - - - - Data categorizations - - Hibernate understands both the Java and JDBC representations of application data. The ability to read and write - object data to a database is called marshalling, and is the function of a Hibernate - type. A type is an implementation of the - org.hibernate.type.Type interface. A Hibernate type describes - various aspects of behavior of the Java type such as how to check for equality and how to clone values. - - - Usage of the word <wordasword>type</wordasword> - - A Hibernate type is neither a Java type nor a SQL datatype. It provides information about - both of these. - - - When you encounter the term type in regards to Hibernate, it may refer to the Java type, - the JDBC type, or the Hibernate type, depending on context. - - - - Hibernate categorizes types into two high-level groups: - and . - - -
- Value types - - A value type does not define its own lifecycle. It is, in effect, owned by an , which defines its - lifecycle. Value types are further classified into three sub-categories. - - - - - - - - -
- Basic types - - Basic value types usually map a single database value, or column, to a single, non-aggregated Java - type. Hibernate provides a number of built-in basic types, which follow the natural mappings recommended in the - JDBC specifications. You can override these mappings and provide and use alternative mappings. These topics are - discussed further on. - - - Basic Type Mappings - - - - Hibernate type - Database type - JDBC type - Type registry - - - - - org.hibernate.type.StringType - string - VARCHAR - string, java.lang.String - - - org.hibernate.type.MaterializedClob - string - CLOB - materialized_clob - - - org.hibernate.type.TextType - string - LONGVARCHAR - text - - - org.hibernate.type.CharacterType - char, java.lang.Character - CHAR - char, java.lang.Character - - - org.hibernate.type.BooleanType - boolean - BIT - boolean, java.lang.Boolean - - - org.hibernate.type.NumericBooleanType - boolean - INTEGER, 0 is false, 1 is true - numeric_boolean - - - org.hibernate.type.YesNoType - boolean - CHAR, 'N'/'n' is false, 'Y'/'y' is true. The uppercase value is written to the database. - yes_no - - - org.hibernate.type.TrueFalseType - boolean - CHAR, 'F'/'f' is false, 'T'/'t' is true. The uppercase value is written to the database. - true_false - - - org.hibernate.type.ByteType - byte, java.lang.Byte - TINYINT - byte, java.lang.Byte - - - org.hibernate.type.ShortType - short, java.lang.Short - SMALLINT - short, java.lang.Short - - - org.hibernate.type.IntegerTypes - int, java.lang.Integer - INTEGER - int, java.lang.Integer - - - org.hibernate.type.LongType - long, java.lang.Long - BIGINT - long, java.lang.Long - - - org.hibernate.type.FloatType - float, java.lang.Float - FLOAT - float, java.lang.Float - - - org.hibernate.type.DoubleType - double, java.lang.Double - DOUBLE - double, java.lang.Double - - - org.hibernate.type.BigIntegerType - java.math.BigInteger - NUMERIC - big_integer - - - org.hibernate.type.BigDecimalType - java.math.BigDecimal - NUMERIC - big_decimal, java.math.bigDecimal - - - org.hibernate.type.TimestampType - java.sql.Timestamp - TIMESTAMP - timestamp, java.sql.Timestamp - - - org.hibernate.type.TimeType - java.sql.Time - TIME - time, java.sql.Time - - - org.hibernate.type.DateType - java.sql.Date - DATE - date, java.sql.Date - - - org.hibernate.type.CalendarType - java.util.Calendar - TIMESTAMP - calendar, java.util.Calendar - - - org.hibernate.type.CalendarDateType - java.util.Calendar - DATE - calendar_date - - - org.hibernate.type.CurrencyType - java.util.Currency - VARCHAR - currency, java.util.Currency - - - org.hibernate.type.LocaleType - java.util.Locale - VARCHAR - locale, java.utility.locale - - - org.hibernate.type.TimeZoneType - java.util.TimeZone - VARCHAR, using the TimeZone ID - timezone, java.util.TimeZone - - - org.hibernate.type.UrlType - java.net.URL - VARCHAR - url, java.net.URL - - - org.hibernate.type.ClassType - java.lang.Class - VARCHAR, using the class name - class, java.lang.Class - - - org.hibernate.type.BlobType - java.sql.Blob - BLOB - blog, java.sql.Blob - - - org.hibernate.type.ClobType - java.sql.Clob - CLOB - clob, java.sql.Clob - - - org.hibernate.type.BinaryType - primitive byte[] - VARBINARY - binary, byte[] - - - org.hibernate.type.MaterializedBlobType - primitive byte[] - BLOB - materized_blob - - - org.hibernate.type.ImageType - primitive byte[] - LONGVARBINARY - image - - - org.hibernate.type.BinaryType - java.lang.Byte[] - VARBINARY - wrapper-binary - - - org.hibernate.type.CharArrayType - char[] - VARCHAR - 
characters, char[] - - - org.hibernate.type.CharacterArrayType - java.lang.Character[] - VARCHAR - wrapper-characters, Character[], java.lang.Character[] - - - org.hibernate.type.UUIDBinaryType - java.util.UUID - BINARY - uuid-binary, java.util.UUID - - - org.hibernate.type.UUIDCharType - java.util.UUID - CHAR, can also read VARCHAR - uuid-char - - - org.hibernate.type.PostgresUUIDType - java.util.UUID - PostgreSQL UUID, through Types#OTHER, which complies to the PostgreSQL JDBC driver - definition - pg-uuid - - - org.hibernate.type.SerializableType - implementors of java.lang.Serializable - VARBINARY - Unlike the other value types, multiple instances of this type are registered. It is registered - once under java.io.Serializable, and registered under the specific java.io.Serializable implementation - class names. - - - -
-
-
- National Character Types - National character types, introduced in the JDBC 4.0 API, are now available in the Hibernate type system. National Language Support enables you to retrieve data from, or insert data into, a database in any character set that the underlying database supports. - Depending on your environment, you might want to set the configuration option hibernate.use_nationalized_character_data to true, which gives all string- and clob-based attributes national character support automatically. Nothing else needs to change, and no Hibernate-specific mapping is required, so the mapping stays portable (although national character support is not a required JPA feature and may not work with other JPA providers). - The other way of using this feature is to place the @Nationalized annotation on each attribute that should be nationalized. This only works on string-based attributes, including string, char, char array and clob. -

    @Entity( name="NationalizedEntity")
    public static class NationalizedEntity {
        @Id
        private Integer id;

        @Nationalized
        private String nvarcharAtt;

        @Lob
        @Nationalized
        private String materializedNclobAtt;

        @Lob
        @Nationalized
        private NClob nclobAtt;

        @Nationalized
        private Character ncharacterAtt;

        @Nationalized
        private Character[] ncharArrAtt;

        @Type(type = "ntext")
        private String nlongvarcharcharAtt;
    }

- National Character Type Mappings - Hibernate type - Database type - JDBC type - Type registry - org.hibernate.type.StringNVarcharType - string - NVARCHAR - nstring - org.hibernate.type.NTextType - string - LONGNVARCHAR - materialized_clob - org.hibernate.type.NClobType - java.sql.NClob - NCLOB - nclob - org.hibernate.type.MaterializedNClobType - string - NCLOB - materialized_nclob - org.hibernate.type.PrimitiveCharacterArrayNClobType - char[] - NCHAR - char[] - org.hibernate.type.CharacterNCharType - java.lang.Character - NCHAR - ncharacter - org.hibernate.type.CharacterArrayNClobType - java.lang.Character[] - NCLOB - Character[], java.lang.Character[] -
-
-
- Composite types - - Composite types, or embedded types, as they are called by the Java - Persistence API, have traditionally been called components in Hibernate. All of these - terms mean the same thing. - - - Components represent aggregations of values into a single Java type. An example is an - Address class, which aggregates street, city, state, and postal code. A composite type - behaves in a similar way to an entity. They are each classes written specifically for an application. They may - both include references to other application-specific classes, as well as to collections and simple JDK - types. The only distinguishing factors are that a component does not have its own lifecycle or define an - identifier. - - -
- -
- Collection types - - A collection type refers to the data type itself, not its contents. - - - A Collection denotes a one-to-one or one-to-many relationship between tables of a database. - - - Refer to the chapter on Collections for more information on collections. - -
-
-
- Entity Types - Entities are application-specific classes which correlate to rows in a table, using a unique identifier. Because of the requirement for a unique identifier, entities exist independently and define their own lifecycle. As an example, deleting a Membership should not delete the User or the Group. For more information, see the chapter on Persistent Classes. -
- -
- Implications of different data categorizations - - NEEDS TO BE WRITTEN - - -
- -
diff --git a/documentation/src/main/docbook/integration/en-US/Database_Access.xml b/documentation/src/main/docbook/integration/en-US/Database_Access.xml deleted file mode 100644 index 976f6b26d3..0000000000 --- a/documentation/src/main/docbook/integration/en-US/Database_Access.xml +++ /dev/null @@ -1,916 +0,0 @@ - - - - - - Database access - -
- Connecting - - Hibernate connects to databases on behalf of your application. It can connect through a variety of mechanisms, - including: - - - Stand-alone built-in connection pool - javax.sql.DataSource - Connection pools, including support for two different third-party opensource JDBC connection pools: - - c3p0 - proxool - - - - Application-supplied JDBC connections. This is not a recommended approach and exists for legacy reasons - - - - - The built-in connection pool is not intended for production environments. - - - - Hibernate obtains JDBC connections as needed though the - ConnectionProvider interface - which is a service contract. Applications may also supply their own - ConnectionProvider implementation - to define a custom approach for supplying connections to Hibernate (from a different connection pool - implementation, for example). - - - - -
- Configuration - - You can configure database connections using a properties file, an XML deployment descriptor or - programmatically. - - - <filename>hibernate.properties</filename> for a c3p0 connection pool - - - - <filename>hibernate.cfg.xml</filename> for a connection to the bundled HSQL database - - - -
- Programmatic configuration - An instance of org.hibernate.cfg.Configuration represents an entire set of mappings of an application's Java types to an SQL database. The org.hibernate.cfg.Configuration builds an immutable org.hibernate.SessionFactory, and compiles the mappings from various XML mapping files. You can specify the mapping files directly, or Hibernate can find them for you. - Specifying the mapping files directly - You can obtain an org.hibernate.cfg.Configuration instance by instantiating it directly and specifying XML mapping documents. If the mapping files are in the classpath, use the addResource() method. - Letting Hibernate find the mapping files for you - The addClass() method directs Hibernate to search the CLASSPATH for the mapping files, eliminating hard-coded file names. In the following example, it searches for org/hibernate/auction/Item.hbm.xml and org/hibernate/auction/Bid.hbm.xml. - Specifying configuration properties - Other ways to configure Hibernate programmatically - Pass an instance of java.util.Properties to Configuration.setProperties(). - Set System properties using java -Dproperty=value -
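A minimal sketch of building a Configuration programmatically (the mapping resources and property values are illustrative):

    Configuration cfg = new Configuration()
            .addResource( "Item.hbm.xml" )                   // mapping file on the classpath
            .addClass( org.hibernate.auction.Bid.class )     // looks for Bid.hbm.xml next to the class
            .setProperty( "hibernate.dialect", "org.hibernate.dialect.H2Dialect" )
            .setProperty( "hibernate.connection.datasource", "java:comp/env/jdbc/test" )
            .setProperty( "hibernate.order_updates", "true" );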
-
- -
- Obtaining a JDBC connection - - After you configure the , you can use method - openSession of class org.hibernate.SessionFactory to open - sessions. Sessions will obtain JDBC connections as needed based on the provided configuration. - - - Specifying configuration properties - - - - Most important Hibernate JDBC properties - hibernate.connection.driver_class - hibernate.connection.url - hibernate.connection.username - hibernate.connection.password - hibernate.connection.pool_size - - - All available Hibernate settings are defined as constants and discussed on the - org.hibernate.cfg.AvailableSettings interface. See its source code or - JavaDoc for details. - -
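A sketch of opening a Session once the SessionFactory has been built (the no-argument buildSessionFactory() is shown for brevity; recent versions prefer a ServiceRegistry-based build):

    SessionFactory sessionFactory = cfg.buildSessionFactory();
    Session session = sessionFactory.openSession();   // a JDBC connection is obtained lazily, as needed
    try {
        // work with the session
    }
    finally {
        session.close();
    }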
-
- -
- Connection pooling - - Hibernate's internal connection pooling algorithm is rudimentary, and is provided for development and testing - purposes. Use a third-party pool for best performance and stability. To use a third-party pool, replace the - hibernate.connection.pool_size property with settings specific to your connection pool of - choice. This disables Hibernate's internal connection pool. - - -
- c3p0 connection pool - C3P0 is an open source JDBC connection pool distributed along with Hibernate in the lib/ directory. Hibernate uses its org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider for connection pooling if you set the hibernate.c3p0.* properties. - Important configuration properties for the c3p0 connection pool - hibernate.c3p0.min_size - hibernate.c3p0.max_size - hibernate.c3p0.timeout - hibernate.c3p0.max_statements -
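A sketch of switching to c3p0 by setting its properties programmatically (the values are illustrative); remember to remove hibernate.connection.pool_size so the built-in pool is not used:

    Configuration cfg = new Configuration()
            .setProperty( "hibernate.c3p0.min_size", "5" )
            .setProperty( "hibernate.c3p0.max_size", "20" )
            .setProperty( "hibernate.c3p0.timeout", "300" )        // seconds
            .setProperty( "hibernate.c3p0.max_statements", "50" );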
- -
- Proxool connection pool - - Proxool is another open source JDBC connection pool distributed along with Hibernate in the - lib/ directory. Hibernate uses its - org.hibernate.service.jdbc.connections.internal.ProxoolConnectionProvider for connection pooling if you set the - hibernate.proxool.* properties. Unlike c3p0, proxool requires some additional configuration - parameters, as described by the Proxool documentation available at . - - - Important configuration properties for the Proxool connection pool - - - - - - Property - Description - - - - - hibernate.proxool.xml - Configure Proxool provider using an XML file (.xml is appended automatically) - - - hibernate.proxool.properties - Configure the Proxool provider using a properties file (.properties is appended - automatically) - - - hibernate.proxool.existing_pool - Whether to configure the Proxool provider from an existing pool - - - hibernate.proxool.pool_alias - Proxool pool alias to use. Required. - - - -
-
- - -
- Obtaining connections from an application server, using JNDI - - To use Hibernate inside an application server, configure Hibernate to obtain connections from an application - server javax.sql.Datasource registered in JNDI, by setting at least one of the following - properties: - - - Important Hibernate properties for JNDI datasources - hibernate.connection.datasource (required) - hibernate.jndi.url - hibernate.jndi.class - hibernate.connection.username - hibernate.connection.password - - - JDBC connections obtained from a JNDI datasource automatically participate in the container-managed transactions - of the application server. - -
- -
- Other connection-specific configuration - - You can pass arbitrary connection properties by prepending hibernate.connection to the - connection property name. For example, specify a charSet connection property as - hibernate.connection.charSet. - - - You can define your own plugin strategy for obtaining JDBC connections by implementing the interface - ConnectionProvider and specifying your custom - implementation with the hibernate.connection.provider_class property. - -
- -
- Optional configuration properties - - In addition to the properties mentioned in the previous sections, Hibernate includes many other optional - properties. See for a more complete list. - -
-
- -
- Dialects - - Although SQL is relatively standardized, each database vendor uses a subset of supported syntax. This is referred - to as a dialect. Hibernate handles variations across these dialects through its - org.hibernate.dialect.Dialect class and the various subclasses for each vendor dialect. - - - - Supported database dialects - - - - - - Database - Dialect - - - - - CUBRID 8.3 and later - - - org.hibernate.dialect.CUBRIDDialect - - - - - DB2 - - - org.hibernate.dialect.DB2Dialect - - - - - DB2 AS/400 - - - org.hibernate.dialect.DB2400Dialect - - - - - DB2 OS390 - - - org.hibernate.dialect.DB2390Dialect - - - - - Firebird - - - org.hibernate.dialect.FirebirdDialect - - - - - FrontBase - - - org.hibernate.dialect.FrontbaseDialect - - - - - H2 - - - org.hibernate.dialect.H2Dialect - - - - - HyperSQL (HSQL) - - - org.hibernate.dialect.HSQLDialect - - - - - Informix - - - org.hibernate.dialect.InformixDialect - - - - - Ingres - - - org.hibernate.dialect.IngresDialect - - - - - Ingres 9 - - - org.hibernate.dialect.Ingres9Dialect - - - - - Ingres 10 - - - org.hibernate.dialect.Ingres10Dialect - - - - - Interbase - - - org.hibernate.dialect.InterbaseDialect - - - - - InterSystems Cache 2007.1 - - - org.hibernate.dialect.Cache71Dialect - - - - - JDataStore - - - org.hibernate.dialect.JDataStoreDialect - - - - - Mckoi SQL - - - org.hibernate.dialect.MckoiDialect - - - - - Microsoft SQL Server 2000 - - - org.hibernate.dialect.SQLServerDialect - - - - - Microsoft SQL Server 2005 - - - org.hibernate.dialect.SQLServer2005Dialect - - - - - Microsoft SQL Server 2008 - - - org.hibernate.dialect.SQLServer2008Dialect - - - - - Microsoft SQL Server 2012 - - - org.hibernate.dialect.SQLServer2012Dialect - - - - - Mimer SQL - - - org.hibernate.dialect.MimerSQLDialect - - - - - MySQL - - - org.hibernate.dialect.MySQLDialect - - - - - MySQL with InnoDB - - - org.hibernate.dialect.MySQLInnoDBDialect - - - - - MySQL with MyISAM - - - org.hibernate.dialect.MySQLMyISAMDialect - - - - - MySQL5 - - - org.hibernate.dialect.MySQL5Dialect - - - - - MySQL5 with InnoDB - - - org.hibernate.dialect.MySQL5InnoDBDialect - - - - - Oracle 8i - - - org.hibernate.dialect.Oracle8iDialect - - - - - Oracle 9i - - - org.hibernate.dialect.Oracle9iDialect - - - - - Oracle 10g and later - - - org.hibernate.dialect.Oracle10gDialect - - - - - Oracle TimesTen - - - org.hibernate.dialect.TimesTenDialect - - - - - Pointbase - - - org.hibernate.dialect.PointbaseDialect - - - - - PostgreSQL 8.1 - - - org.hibernate.dialect.PostgreSQL81Dialect - - - - - PostgreSQL 8.2 - - - org.hibernate.dialect.PostgreSQL82Dialect - - - - - PostgreSQL 9 and later - - - org.hibernate.dialect.PostgreSQL9Dialect - - - - - Progress - - - org.hibernate.dialect.ProgressDialect - - - - - SAP DB - - - org.hibernate.dialect.SAPDBDialect - - - - - SAP HANA (column store) - - - org.hibernate.dialect.HANAColumnStoreDialect - - - - - SAP HANA (row store) - - - org.hibernate.dialect.HANARowStoreDialect - - - - - Sybase - - - org.hibernate.dialect.SybaseDialect - - - - - Sybase 11 - - - org.hibernate.dialect.Sybase11Dialect - - - - - Sybase ASE 15.5 - - - org.hibernate.dialect.SybaseASE15Dialect - - - - - Sybase ASE 15.7 - - - org.hibernate.dialect.SybaseASE157Dialect - - - - - Sybase Anywhere - - - org.hibernate.dialect.SybaseAnywhereDialect - - - - - Teradata - - - org.hibernate.dialect.TeradataDialect - - - - - Unisys OS 2200 RDMS - - - org.hibernate.dialect.RDMSOS2200Dialect - - - - -
- -
- Specifying the Dialect to use - - The developer may manually specify the Dialect to use by setting the - hibernate.dialect configuration property to the name of a specific - org.hibernate.dialect.Dialect class to use. - -
- -
- Dialect resolution - - Assuming a ConnectionProvider has been - set up, Hibernate will attempt to automatically determine the Dialect to use based on the - java.sql.DatabaseMetaData reported by a - java.sql.Connection obtained from that - ConnectionProvider. - - - This functionality is provided by a series of - org.hibernate.engine.jdbc.dialect.spi.DialectResolver instances registered - with Hibernate internally. Hibernate comes with a standard set of recognitions. If your application - requires extra Dialect resolution capabilities, it would simply register a custom implementation - of org.hibernate.engine.jdbc.dialect.spi.DialectResolver as follows: - - - - Registered org.hibernate.engine.jdbc.dialect.spi.DialectResolver are - prepended to an internal list of resolvers, so they take precedence - before any already registered resolvers including the standard one. - -
-
- -
- Automatic schema generation with SchemaExport - - SchemaExport is a Hibernate utility which generates DDL from your mapping files. The generated schema includes - referential integrity constraints, primary and foreign keys, for entity and collection tables. It also creates - tables and sequences for mapped identifier generators. - - - - You must specify a SQL Dialect via the hibernate.dialect property when using this tool, - because DDL is highly vendor-specific. See for information. - - - - Before Hibernate can generate your schema, you must customize your mapping files. - - -
- Customizing the mapping files - - Hibernate provides several elements and attributes to customize your mapping files. They are listed in , and a logical order of customization is presented in . - - - Elements and attributes provided for customizing mapping files - - - - - - - Name - Type of value - Description - - - - - length - number - Column length - - - precision - number - Decimal precision of column - - - scale - number - Decimal scale of column - - - not-null - true or false - Whether a column is allowed to hold null values - - - unique - true or false - Whether values in the column must be unique - - - index - string - The name of a multi-column index - - - unique-key - string - The name of a multi-column unique constraint - - - foreign-key - string - The name of the foreign key constraint generated for an association. This applies to - <one-to-one>, <many-to-one>, <key>, and <many-to-many> mapping - elements. inverse="true" sides are skipped by SchemaExport. - - - sql-type - string - Overrides the default column type. This applies to the <column> element only. - - - default - string - Default value for the column - - - check - string - An SQL check constraint on either a column or atable - - - -
- Customizing the schema - Set the length, precision, and scale of mapping elements. - Many Hibernate mapping elements define optional attributes named length, precision, and scale. - Set the <option>not-null</option>, <option>UNIQUE</option>, and <option>unique-key</option> attributes. - The not-null and unique attributes generate constraints on table columns. - The unique-key attribute groups columns in a single, unique key constraint. The attribute overrides the name of any generated unique key constraint. - Set the <option>index</option> and <option>foreign-key</option> attributes. - The index attribute specifies the name of an index for Hibernate to create using the mapped column or columns. You can group multiple columns into the same index by assigning them the same index name. - A foreign-key attribute overrides the name of any generated foreign key constraint. - Set child <option><column></option> elements. - Many mapping elements accept one or more child <column> elements. This is particularly useful for mapping types involving multiple columns. - Set the <option>default</option> attribute. - The default attribute represents a default value for a column. Assign the same value to the mapped property before saving a new instance of the mapped class. - Set the <option>sql-type</option> attribute. - Use the sql-type attribute to override the default mapping of a Hibernate type to an SQL datatype. - Set the <option>check</option> attribute. - Use the check attribute to specify a check constraint. - Add <comment> elements to your schema. - Use the <comment> element to specify comments for the generated schema. -
- -
- Running the SchemaExport tool - - The SchemaExport tool writes a DDL script to standard output, executes the DDL statements, or both. - - - SchemaExport syntax - - java -cp hibernate_classpaths org.hibernate.tool.hbm2ddl.SchemaExport options mapping_files - - - - SchemaExport Options - - - - - - Option - Description - - - - - --quiet - do not output the script to standard output - - - --drop - only drop the tables - - - --create - only create the tables - - - --text - do not export to the database - - - --output=my_schema.ddl - output the ddl script to a file - - - --naming=eg.MyNamingStrategy - select a NamingStrategy - - - --config=hibernate.cfg.xml - read Hibernate configuration from an XML file - - - --properties=hibernate.properties - read database properties from a file - - - --format - format the generated SQL nicely in the script - - - --delimiter=; - set an end-of-line delimiter for the script - - - -
- - Embedding SchemaExport into your application - - -
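A hedged sketch of embedding SchemaExport in application code (the output file name is illustrative, and the exact constructors available vary between Hibernate versions; this assumes the Configuration-based constructor):

    Configuration cfg = ...;                    // configured as shown earlier
    SchemaExport schemaExport = new SchemaExport( cfg );
    schemaExport.setOutputFile( "schema.ddl" );
    schemaExport.setDelimiter( ";" );
    // create(script, export): write the DDL script, but do not execute it against the database
    schemaExport.create( true, false );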
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Envers.xml b/documentation/src/main/docbook/integration/en-US/Envers.xml deleted file mode 100644 index d55e045ae0..0000000000 --- a/documentation/src/main/docbook/integration/en-US/Envers.xml +++ /dev/null @@ -1,1759 +0,0 @@ - Envers - The aim of Hibernate Envers is to provide historical versioning of your application's entity data. Much like source control management tools such as Subversion or Git, Hibernate Envers manages a notion of revisions of your application data through the use of audit tables. Each transaction relates to one global revision number which can be used to identify groups of changes (much like a change set in source control). Because the revisions are global, given a revision number you can query for various entities at that revision, retrieving a (partial) view of the database at that revision. You can find the revision number corresponding to a date and, the other way round, the date at which a revision was committed. -
- Basics - To audit changes that are performed on an entity, you only need two things: the hibernate-envers jar on the classpath and an @Audited annotation on the entity. - Unlike in previous versions, you no longer need to specify listeners in the Hibernate configuration file. Just putting the Envers jar on the classpath is enough; listeners are registered automatically. - And that is all: you can create, modify and delete the entities as always. If you look at the generated schema for your entities, or at the data persisted by Hibernate, you will notice that there are no changes. However, for each audited entity, a new table is introduced - entity_table_AUD - which stores the historical data whenever you commit a transaction. Envers automatically creates audit tables if the hibernate.hbm2ddl.auto option is set to create, create-drop or update. Otherwise, to export the complete database schema programmatically, use org.hibernate.envers.tools.hbm2ddl.EnversSchemaGenerator. Appropriate DDL statements can also be generated with the Ant task described later in this manual. - Instead of annotating the whole class and auditing all properties, you can annotate only some persistent properties with @Audited. This causes only those properties to be audited. - The audit (history) of an entity can be accessed using the AuditReader interface, which can be obtained from an open EntityManager or Session via the AuditReaderFactory. See the javadocs for these classes for details on the functionality offered. -
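A sketch of the two sides of the feature, auditing an entity and reading its history (the Person entity, the personId identifier and the revision handling are illustrative):

    @Entity
    @Audited                                   // every property of Person is now audited
    public class Person {
        @Id
        @GeneratedValue
        private Long id;

        private String name;

        // getters and setters ...
    }

    // reading the audit history for one instance
    AuditReader reader = AuditReaderFactory.get( session );
    List<Number> revisions = reader.getRevisions( Person.class, personId );
    Person personAtFirstRevision = reader.find( Person.class, personId, revisions.get( 0 ) );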
- -
- Configuration - - It is possible to configure various aspects of Hibernate Envers behavior, such as table names, etc. - - - - Envers Configuration Properties - - - - - - - - Property name - Default value - Description - - - - - - - org.hibernate.envers.audit_table_prefix - - - - - String that will be prepended to the name of an audited entity to create the name of the - entity, that will hold audit information. - - - - - org.hibernate.envers.audit_table_suffix - - - _AUD - - - String that will be appended to the name of an audited entity to create the name of the - entity, that will hold audit information. If you audit an entity with a table name Person, - in the default setting Envers will generate a Person_AUD table to store - historical data. - - - - - org.hibernate.envers.revision_field_name - - - REV - - - Name of a field in the audit entity that will hold the revision number. - - - - - org.hibernate.envers.revision_type_field_name - - - REVTYPE - - - Name of a field in the audit entity that will hold the type of the revision (currently, - this can be: add, mod, del). - - - - - org.hibernate.envers.revision_on_collection_change - - - true - - - Should a revision be generated when a not-owned relation field changes (this can be either - a collection in a one-to-many relation, or the field using "mappedBy" attribute in a - one-to-one relation). - - - - - org.hibernate.envers.do_not_audit_optimistic_locking_field - - - true - - - When true, properties to be used for optimistic locking, annotated with - @Version, will be automatically not audited (their history won't be - stored; it normally doesn't make sense to store it). - - - - - org.hibernate.envers.store_data_at_delete - - - false - - - Should the entity data be stored in the revision when the entity is deleted (instead of only - storing the id and all other properties as null). This is not normally needed, as the data is - present in the last-but-one revision. Sometimes, however, it is easier and more efficient to - access it in the last revision (then the data that the entity contained before deletion is - stored twice). - - - - - org.hibernate.envers.default_schema - - - null (same schema as table being audited) - - - The default schema name that should be used for audit tables. Can be overridden using the - @AuditTable(schema="...") annotation. If not present, the schema will - be the same as the schema of the table being audited. - - - - - org.hibernate.envers.default_catalog - - - null (same catalog as table being audited) - - - The default catalog name that should be used for audit tables. Can be overridden using the - @AuditTable(catalog="...") annotation. If not present, the catalog will - be the same as the catalog of the normal tables. - - - - - org.hibernate.envers.audit_strategy - - - org.hibernate.envers.strategy.DefaultAuditStrategy - - - The audit strategy that should be used when persisting audit data. The default stores only - the revision, at which an entity was modified. An alternative, the - org.hibernate.envers.strategy.ValidityAuditStrategy stores both the - start revision and the end revision. Together these define when an audit row was valid, - hence the name ValidityAuditStrategy. - - - - - org.hibernate.envers.audit_strategy_validity_end_rev_field_name - - - REVEND - - - The column name that will hold the end revision number in audit entities. This property is - only valid if the validity audit strategy is used. 
- - - - - org.hibernate.envers.audit_strategy_validity_store_revend_timestamp - - - false - - - Should the timestamp of the end revision be stored, until which the data was valid, in - addition to the end revision itself. This is useful to be able to purge old Audit records - out of a relational database by using table partitioning. Partitioning requires a column - that exists within the table. This property is only evaluated if the ValidityAuditStrategy - is used. - - - - - org.hibernate.envers.audit_strategy_validity_revend_timestamp_field_name - - - REVEND_TSTMP - - - Column name of the timestamp of the end revision until which the data was valid. Only used - if the ValidityAuditStrategy is used, and - org.hibernate.envers.audit_strategy_validity_store_revend_timestamp - evaluates to true - - - - - org.hibernate.envers.use_revision_entity_with_native_id - - - true - - - Boolean flag that determines the strategy of revision number generation. Default - implementation of revision entity uses native identifier generator. If current database - engine does not support identity columns, users are advised to set this property to false. - In this case revision numbers are created by preconfigured - org.hibernate.id.enhanced.SequenceStyleGenerator. See: - - org.hibernate.envers.DefaultRevisionEntity - org.hibernate.envers.enhanced.SequenceIdRevisionEntity - - - - - - org.hibernate.envers.track_entities_changed_in_revision - - - false - - - Should entity types, that have been modified during each revision, be tracked. The default - implementation creates REVCHANGES table that stores entity names - of modified persistent objects. Single record encapsulates the revision identifier - (foreign key to REVINFO table) and a string value. For more - information refer to - and . - - - - - org.hibernate.envers.global_with_modified_flag - - - false, can be individually overriden with @Audited(withModifiedFlag=true) - - - Should property modification flags be stored for all audited entities and all properties. - When set to true, for all properties an additional boolean column in the audit tables will - be created, filled with information if the given property changed in the given revision. - When set to false, such column can be added to selected entities or properties using the - @Audited annotation. - For more information refer to - and . - - - - - org.hibernate.envers.modified_flag_suffix - - - _MOD - - - The suffix for columns storing "Modified Flags". - For example: a property called "age", will by default get modified flag with column name "age_MOD". - - - - - org.hibernate.envers.embeddable_set_ordinal_field_name - - - SETORDINAL - - - Name of column used for storing ordinal of the change in sets of embeddable elements. - - - - - org.hibernate.envers.cascade_delete_revision - - - false - - - While deleting revision entry, remove data of associated audited entities. - Requires database support for cascade row removal. - - - - - org.hibernate.envers.allow_identifier_reuse - - - false - - - Guarantees proper validity audit strategy behavior when application reuses identifiers - of deleted entities. Exactly one row with null end date exists - for each identifier. - - - - -
- - - - The following configuration options have been added recently and should be regarded as experimental: - - - org.hibernate.envers.track_entities_changed_in_revision - - - org.hibernate.envers.using_modified_flag - - - org.hibernate.envers.modified_flag_suffix - - - - -
- -
- Additional mapping annotations - The name of the audit table can be set on a per-entity basis, using the @AuditTable annotation. It may be tedious to add this annotation to every audited entity, so if possible, it is better to use a prefix/suffix. - If you have a mapping with secondary tables, audit tables for them will be generated in the same way (by adding the prefix and suffix). If you wish to override this behaviour, you can use the @SecondaryAuditTable and @SecondaryAuditTables annotations. - If you would like to override the auditing behaviour of some fields/properties inherited from @MappedSuperclass or in an embedded component, you can apply the @AuditOverride(s) annotation on the subtype or usage site of the component. - If you want to audit a relation mapped with @OneToMany+@JoinColumn, please see for a description of the additional @AuditJoinTable annotation that you will probably want to use. - If you want to audit a relation where the target entity is not audited (as is the case, for example, with dictionary-like entities, which do not change and do not have to be audited), just annotate it with @Audited(targetAuditMode = RelationTargetAuditMode.NOT_AUDITED). Then, while reading historic versions of your entity, the relation will always point to the "current" related entity. By default Envers throws javax.persistence.EntityNotFoundException when the "current" entity does not exist in the database. Apply the @NotFound(action = NotFoundAction.IGNORE) annotation to silence the exception and assign a null value instead. Note that this solution causes implicit eager loading of to-one relations. - If you would like to audit properties of a superclass of an entity which are not explicitly audited (which do not have the @Audited annotation on any properties or on the class), you can list the superclasses in the auditParents attribute of the @Audited annotation. Please note that the auditParents feature has been deprecated. Use @AuditOverride(forClass = SomeEntity.class, isAudited = true/false) instead. -
- -
- Choosing an audit strategy - After the basic configuration it is important to choose the audit strategy that will be used to persist and retrieve audit information. There is a trade-off between the performance of persisting and the performance of querying the audit information. Currently there are two audit strategies. - The default audit strategy persists the audit data together with a start revision. For each row inserted, updated or deleted in an audited table, one or more rows are inserted in the audit tables, together with the start revision of its validity. Rows in the audit tables are never updated after insertion. Queries of audit information use subqueries to select the applicable rows in the audit tables. These subqueries are notoriously slow and difficult to index. - The alternative is the validity audit strategy. This strategy stores both the start revision and the end revision of audit information. For each row inserted, updated or deleted in an audited table, one or more rows are inserted in the audit tables, together with the start revision of its validity. At the same time, the end-revision field of the previous audit rows (if available) is set to this revision. Queries on the audit information can then use 'between start and end revision' instead of the subqueries used by the default audit strategy. - The consequence of this strategy is that persisting audit information will be a bit slower, because of the extra updates involved, but retrieving audit information will be a lot faster. This can be improved by adding extra indexes. -
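A sketch of selecting the validity strategy through configuration (shown programmatically for brevity; the same keys can be placed in persistence.xml or hibernate.cfg.xml):

    Configuration cfg = new Configuration();
    cfg.setProperty( "org.hibernate.envers.audit_strategy",
            "org.hibernate.envers.strategy.ValidityAuditStrategy" );
    // optionally also store the end-revision timestamp, e.g. to enable table partitioning
    cfg.setProperty( "org.hibernate.envers.audit_strategy_validity_store_revend_timestamp", "true" );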
- -
- Revision Log
- Logging data for revisions
- 
- When Envers starts a new revision, it creates a new revision entity which stores
- information about the revision. By default, that includes just:
- 
- revision number - an integral value (int/Integer or
- long/Long). Essentially the primary key of the revision.
- 
- revision timestamp - either a long/Long or
- java.util.Date value representing the instant at which the revision was made.
- When using a java.util.Date, instead of a long/Long, for
- the revision timestamp, take care not to store it to a column data type which will lose precision.
- 
- Envers handles this information as an entity. By default it uses its own internal class to act as the
- entity, mapped to the REVINFO table.
- You can, however, supply your own approach to collecting this information, which might be useful to
- capture additional details such as who made a change or the IP address from which the request came. There
- are two things you need to do to make this work.
- 
- First, you will need to tell Envers about the entity you wish to use. Your entity must use the
- @org.hibernate.envers.RevisionEntity annotation. It must
- define the two attributes described above, annotated with
- @org.hibernate.envers.RevisionNumber and
- @org.hibernate.envers.RevisionTimestamp, respectively. You can extend
- from org.hibernate.envers.DefaultRevisionEntity, if you wish, to inherit all
- these required behaviors.
- 
- Simply add the custom revision entity as you do your normal entities. Envers will "find" it. Note
- that it is an error for there to be multiple entities marked as
- @org.hibernate.envers.RevisionEntity.
- 
- Second, you need to tell Envers how to create instances of your revision entity, which is handled
- by the newRevision method of the
- org.hibernate.envers.RevisionListener interface.
- 
- You tell Envers which custom org.hibernate.envers.RevisionListener
- implementation to use by specifying it on the
- @org.hibernate.envers.RevisionEntity annotation, using the
- value attribute. If your RevisionListener
- class is inaccessible from @RevisionEntity (e.g. it exists in a different
- module), set the org.hibernate.envers.revision_listener property to its fully
- qualified name. The class name defined by the configuration parameter overrides the revision entity's
- value attribute.
- 
- An alternative to using an org.hibernate.envers.RevisionListener
- is to instead call the getCurrentRevision method of the
- org.hibernate.envers.AuditReader interface to obtain the current revision,
- and fill it with the desired information. The method accepts a persist parameter indicating
- whether the revision entity should be persisted prior to returning from this method. true
- ensures that the returned entity has access to its identifier value (revision number), but the revision
- entity will be persisted regardless of whether any audited entities have changed. false
- means that the revision number will be null, but the revision entity will be persisted
- only if some audited entities have changed.
- 
- Example of storing username with revision
- 
- ExampleRevEntity.java
- 
- ExampleListener.java
- 
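- The listings for ExampleRevEntity.java and ExampleListener.java are not reproduced above; a minimal
- sketch of what those two files might look like follows. How the current username is obtained is
- application specific, so the SecurityContext helper used here is purely a placeholder:
- 
- import javax.persistence.Entity;
- import org.hibernate.envers.DefaultRevisionEntity;
- import org.hibernate.envers.RevisionEntity;
- 
- @Entity
- @RevisionEntity(ExampleListener.class)
- public class ExampleRevEntity extends DefaultRevisionEntity {
-     private String username;
- 
-     public String getUsername() { return username; }
-     public void setUsername(String username) { this.username = username; }
- }
- 
- import org.hibernate.envers.RevisionListener;
- 
- public class ExampleListener implements RevisionListener {
-     @Override
-     public void newRevision(Object revisionEntity) {
-         ExampleRevEntity revEntity = (ExampleRevEntity) revisionEntity;
-         // Placeholder: look the user up in whatever security context the application uses.
-         revEntity.setUsername(SecurityContext.getCurrentUserName());
-     }
- }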
- Tracking entity names modified during revisions - - By default entity types that have been changed in each revision are not being tracked. This implies the - necessity to query all tables storing audited data in order to retrieve changes made during - specified revision. Envers provides a simple mechanism that creates REVCHANGES - table which stores entity names of modified persistent objects. Single record encapsulates the revision - identifier (foreign key to REVINFO table) and a string value. - - - Tracking of modified entity names can be enabled in three different ways: - - - - - Set org.hibernate.envers.track_entities_changed_in_revision parameter to - true. In this case - org.hibernate.envers.DefaultTrackingModifiedEntitiesRevisionEntity will - be implicitly used as the revision log entity. - - - - - Create a custom revision entity that extends - org.hibernate.envers.DefaultTrackingModifiedEntitiesRevisionEntity class. - - - - - - - Mark an appropriate field of a custom revision entity with - @org.hibernate.envers.ModifiedEntityNames annotation. The property is - required to be of ]]> type. - - - modifiedEntityNames; - - ... -}]]> - - - - Users, that have chosen one of the approaches listed above, can retrieve all entities modified in a - specified revision by utilizing API described in . - - - Users are also allowed to implement custom mechanism of tracking modified entity types. In this case, they - shall pass their own implementation of - org.hibernate.envers.EntityTrackingRevisionListener interface as the value - of @org.hibernate.envers.RevisionEntity annotation. - EntityTrackingRevisionListener interface exposes one method that notifies - whenever audited entity instance has been added, modified or removed within current revision boundaries. - - - - Custom implementation of tracking entity classes modified during revisions - - CustomEntityTrackingRevisionListener.java - - - CustomTrackingRevisionEntity.java - modifiedEntityTypes = - new HashSet(); - - public void addModifiedEntityType(String entityClassName) { - modifiedEntityTypes.add(new ModifiedEntityTypeEntity(this, entityClassName)); - } - - ... -} -]]> - - ModifiedEntityTypeEntity.java - - modifiedEntityTypes = revEntity.getModifiedEntityTypes()]]> - -
- -
- -
- Tracking entity changes at property level
- 
- By default, the only information stored by Envers is the revisions of modified entities.
- This approach lets the user create audit queries based on the historical values of an entity's properties.
- 
- Sometimes it is useful to store additional metadata for each revision, when you are also interested in
- the type of change, not only in the resulting values. The feature described in
- 
- makes it possible to tell which entities were modified in a given revision.
- 
- The feature described here takes it one step further. "Modification Flags" enable Envers to track which
- properties of audited entities were modified in a given revision.
- 
- Tracking entity changes at property level can be enabled by:
- 
- setting the org.hibernate.envers.global_with_modified_flag configuration
- property to true. This global switch causes modification flags to be added
- for all audited properties in all audited entities.
- 
- using @Audited(withModifiedFlag=true) on a property or on an entity.
- 
- The trade-off coming with this functionality is an increased size of
- the audit tables and a small, almost negligible, performance drop
- during audit writes. This is due to the fact that every tracked
- property has to have an accompanying boolean column in the
- schema that stores information about the property's modifications. Of
- course it is Envers' job to fill these columns accordingly - no additional work by the
- developer is required. Because of the costs mentioned, it is recommended
- to enable the feature selectively, when needed, using the granular
- configuration means described above.
- 
- To see how "Modified Flags" can be utilized, check out the very
- simple query API that uses them: .
- 
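- As an illustration (not part of the original listings), a hypothetical entity enabling the modification
- flag for a single property only:
- 
- import java.math.BigDecimal;
- import javax.persistence.Entity;
- import javax.persistence.GeneratedValue;
- import javax.persistence.Id;
- import org.hibernate.envers.Audited;
- 
- @Entity
- @Audited
- public class Account {
- 
-     @Id @GeneratedValue
-     private Long id;
- 
-     // Only this property gets an accompanying boolean "modified" column in its audit table.
-     @Audited(withModifiedFlag = true)
-     private BigDecimal balance;
- 
-     private String owner;
- 
-     // getters and setters omitted
- }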
- -
- 
- Queries
- 
- You can think of historic data as having two dimensions. The first, horizontal,
- is the state of the database at a given revision. Thus, you can
- query for entities as they were at revision N. The second, vertical, is the
- revisions at which entities changed. Hence, you can query for the revisions
- in which a given entity changed.
- 
- The queries in Envers are similar to Hibernate Criteria queries, so if you are familiar with them,
- using Envers queries will be much easier.
- 
- The main limitation of the current query implementation is that you cannot
- traverse relations. You can only specify constraints on the ids of the
- related entities, and only on the "owning" side of the relation. This, however,
- will be changed in future releases.
- 
- Please note that queries on audited data will in many cases be much slower
- than corresponding queries on "live" data, as they involve correlated subselects.
- 
- In the future, queries will be improved both in terms of speed and possibilities when using the valid-time
- audit strategy, that is, when storing both start and end revisions for entities. See
- .
- 
- 
- Querying for entities of a class at a given revision
- 
- The entry point for this type of query is:
- 
- 
- You can then specify constraints which should be met by the entities returned, by
- adding restrictions, which can be obtained using the AuditEntity
- factory class. For example, to select only entities where the "name" property
- is equal to "John":
- 
- 
- And to select only entities that are related to a given entity:
- 
- 
- You can limit the number of results, order them, and set aggregations and projections
- (except grouping) in the usual way.
- When your query is complete, you can obtain the results by calling the
- getSingleResult() or getResultList() methods.
- 
- A full query can, for example, look like this:
- 
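- The original code listings for this section are not reproduced above; a minimal sketch of the API they
- describe follows, assuming org.hibernate.envers.* imports, an open EntityManager, and a hypothetical
- MyEntity class with placeholder variables (revisionNumber, relatedAddressId):
- 
- AuditReader reader = AuditReaderFactory.get(entityManager);
- 
- // Entry point: entities of a class as they were at a given revision.
- AuditQuery query = reader.createQuery()
-     .forEntitiesAtRevision(MyEntity.class, revisionNumber);
- 
- // Only entities whose "name" property equals "John".
- query.add(AuditEntity.property("name").eq("John"));
- 
- // Only entities related to a given entity (constraint on the id of the related entity).
- query.add(AuditEntity.relatedId("address").eq(relatedAddressId));
- 
- // Paging, ordering and execution work in the usual way.
- List result = query
-     .setFirstResult(0)
-     .setMaxResults(10)
-     .addOrder(AuditEntity.property("name").asc())
-     .getResultList();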
- -
- 
- Querying for revisions, at which entities of a given class changed
- 
- The entry point for this type of query is:
- 
- 
- You can add constraints to this query in the same way as to the previous one.
- There are some additional possibilities:
- 
- using AuditEntity.revisionNumber() you can specify constraints, projections
- and ordering on the revision number in which the audited entity was modified
- 
- similarly, using AuditEntity.revisionProperty(propertyName) you can specify constraints,
- projections and ordering on a property of the revision entity, corresponding to the revision
- in which the audited entity was modified
- 
- AuditEntity.revisionType() gives you access, as above, to the type of
- the revision (ADD, MOD, DEL).
- 
- Using these methods,
- you can order the query results by revision number, set a projection, or constrain
- the revision number to be greater or less than a specified value, etc. For example, the
- following query will select the smallest revision number at which an entity of class
- MyEntity with id entityId changed, after revision
- number 42:
- 
- 
- The second additional feature you can use in queries for revisions is the ability
- to maximize/minimize a property. For example, if you want to select the
- revision at which the value of the actualDate for a given entity
- was larger than a given value, but as small as possible:
- 
- 
- The minimize() and maximize() methods return a criterion,
- to which you can add constraints that must be met by the entities with the
- maximized/minimized properties. AggregatedAuditExpression#computeAggregationInInstanceContext()
- makes it possible to compute the aggregated expression in the context of each entity instance
- separately. This is useful when querying for the latest revisions of all entities of a particular type.
- 
- You probably also noticed that there are two boolean parameters passed when
- creating the query. The first one, selectEntitiesOnly, is only valid when
- you don't set an explicit projection. If true, the result of the query will be
- a list of entities (which changed at revisions satisfying the specified
- constraints).
- 
- If false, the result will be a list of three-element arrays. The
- first element will be the changed entity instance. The second will be an entity
- containing revision data (if no custom entity is used, this will be an instance
- of DefaultRevisionEntity). The third will be the type of the
- revision (one of the values of the RevisionType enumeration:
- ADD, MOD, DEL).
- 
- The second parameter, selectDeletedEntities, specifies whether revisions
- in which the entity was deleted should be included in the results. If so, such entities
- will have the revision type DEL and all fields, except the id,
- will be null.
- 
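- The original listings are not reproduced above; a minimal sketch of the two queries just described,
- assuming org.hibernate.envers.* imports, an AuditReader named reader, and placeholder variables
- (entityId, givenDate) and a hypothetical MyEntity class:
- 
- // Smallest revision number, greater than 42, at which the entity with the given id changed.
- Number revision = (Number) reader.createQuery()
-     .forRevisionsOfEntity(MyEntity.class, false, true)
-     .setProjection(AuditEntity.revisionNumber().min())
-     .add(AuditEntity.id().eq(entityId))
-     .add(AuditEntity.revisionNumber().gt(42))
-     .getSingleResult();
- 
- // Revision at which actualDate was greater than the given value, but as small as possible.
- Number revision2 = (Number) reader.createQuery()
-     .forRevisionsOfEntity(MyEntity.class, false, true)
-     .setProjection(AuditEntity.revisionNumber())
-     .add(AuditEntity.property("actualDate").minimize()
-         .add(AuditEntity.property("actualDate").ge(givenDate))
-         .add(AuditEntity.id().eq(entityId)))
-     .getSingleResult();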
- -
- 
- Querying for revisions of an entity that modified a given property
- 
- For the two types of queries described above it's possible to use
- special audit criteria called
- hasChanged()
- and
- hasNotChanged()
- that make use of the functionality
- described in .
- They're best suited for vertical queries,
- however the existing API doesn't restrict their usage to those; they can also
- be used in horizontal ones.
- 
- Let's have a look at the following examples:
- 
- 
- This query will return all revisions of MyEntity with a given id,
- where the
- actualDate
- property has been changed.
- Using this query we won't get all the other revisions in which
- actualDate
- wasn't touched. Of course nothing prevents the user from combining
- the hasChanged condition with some additional criteria - the add method
- can be used here in the normal way.
- 
- 
- This query will return a horizontal slice for MyEntity at the time
- revisionNumber was generated. It will be limited to revisions
- that modified
- prop1
- but not prop2.
- Note that the result set will usually also contain revisions
- with numbers lower than the revisionNumber, so we cannot read
- this query as "Give me all MyEntities changed in revisionNumber
- with
- prop1
- modified and
- prop2
- untouched". To get such a result we have to use the
- forEntitiesModifiedAtRevision query:
- 
- 
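- The original listings are not reproduced above; a minimal sketch of the three queries just described,
- assuming an AuditReader named reader, a hypothetical MyEntity class and placeholder variables (id,
- revisionNumber):
- 
- // All revisions of the entity with the given id in which actualDate was modified.
- List revisions = reader.createQuery()
-     .forRevisionsOfEntity(MyEntity.class, false, true)
-     .add(AuditEntity.id().eq(id))
-     .add(AuditEntity.property("actualDate").hasChanged())
-     .getResultList();
- 
- // Horizontal slice at revisionNumber, limited to revisions that modified prop1 but not prop2.
- List entities = reader.createQuery()
-     .forEntitiesAtRevision(MyEntity.class, revisionNumber)
-     .add(AuditEntity.property("prop1").hasChanged())
-     .add(AuditEntity.property("prop2").hasNotChanged())
-     .getResultList();
- 
- // Entities actually modified at exactly revisionNumber, with prop1 changed and prop2 untouched.
- List modified = reader.createQuery()
-     .forEntitiesModifiedAtRevision(MyEntity.class, revisionNumber)
-     .add(AuditEntity.property("prop1").hasChanged())
-     .add(AuditEntity.property("prop2").hasNotChanged())
-     .getResultList();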
- - -
- Querying for entities modified in a given revision
- 
- The basic query allows retrieving entity names and corresponding Java classes changed in a specified revision:
- 
- Set<Pair<String, Class>> modifiedEntityTypes = getAuditReader()
-     .getCrossTypeRevisionChangesReader().findEntityTypes(revisionNumber);
- 
- Other queries (also accessible from org.hibernate.envers.CrossTypeRevisionChangesReader):
- 
- List<Object> findEntities(Number)
- 
- Returns snapshots of all audited entities changed (added, updated and removed) in a given revision.
- Executes n+1 SQL queries, where n is the number of different entity
- classes modified within the specified revision.
- 
- List<Object> findEntities(Number, RevisionType)
- 
- Returns snapshots of all audited entities changed (added, updated or removed) in a given revision,
- filtered by modification type. Executes n+1 SQL queries, where n
- is the number of different entity classes modified within the specified revision.
- 
- Map<RevisionType, List<Object>> findEntitiesGroupByRevisionType(Number)
- 
- Returns a map containing lists of entity snapshots grouped by modification operation (e.g.
- addition, update and removal). Executes 3n+1 SQL queries, where n
- is the number of different entity classes modified within the specified revision.
- 
- Note that the methods described above can only be used when the default mechanism of
- tracking changed entity names is enabled (see ).
- 
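- As an illustration (not part of the original text), a short usage sketch of the reader described
- above, with revisionNumber as a placeholder variable:
- 
- CrossTypeRevisionChangesReader changesReader =
-         AuditReaderFactory.get(entityManager).getCrossTypeRevisionChangesReader();
- 
- // Snapshots of every audited entity touched in the revision.
- List<Object> changed = changesReader.findEntities(revisionNumber);
- 
- // Only the entities that were removed in the revision.
- List<Object> removed = changesReader.findEntities(revisionNumber, RevisionType.DEL);
- 
- // Snapshots grouped by operation (ADD / MOD / DEL).
- Map<RevisionType, List<Object>> grouped =
-         changesReader.findEntitiesGroupByRevisionType(revisionNumber);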
- -
- -
- Conditional auditing
- 
- Envers persists audit data in reaction to various Hibernate events (e.g. post update, post insert, and
- so on), using a series of event listeners from the org.hibernate.envers.event.spi
- package. By default, if the Envers jar is in the classpath, the event listeners are auto-registered with
- Hibernate.
- 
- Conditional auditing can be implemented by overriding some of the Envers event listeners.
- To use customized Envers event listeners, the following steps are needed:
- 
- Turn off automatic Envers event listener registration by setting the
- hibernate.listeners.envers.autoRegister Hibernate property to
- false.
- 
- Create subclasses of the appropriate event listeners. For example, if you want to
- conditionally audit entity insertions, extend the
- org.hibernate.envers.event.spi.EnversPostInsertEventListenerImpl
- class. Place the conditional-auditing logic in the subclasses, calling the super method
- only if auditing should be performed.
- 
- Create your own implementation of org.hibernate.integrator.spi.Integrator,
- similar to org.hibernate.envers.boot.internal.EnversIntegrator. Use your event
- listener classes instead of the default ones.
- 
- For the integrator to be automatically used when Hibernate starts up, you will need to add a
- META-INF/services/org.hibernate.integrator.spi.Integrator file to your jar.
- The file should contain the fully qualified name of the class implementing the interface.
- 
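- As an illustration (not part of the original listings), a sketch of a conditional post-insert listener.
- It assumes a Hibernate 5-style constructor taking an EnversService, and an application-defined Auditable
- marker interface; both are assumptions, not something prescribed by the original text:
- 
- import org.hibernate.envers.boot.internal.EnversService;
- import org.hibernate.envers.event.spi.EnversPostInsertEventListenerImpl;
- import org.hibernate.event.spi.PostInsertEvent;
- 
- public class ConditionalPostInsertListener extends EnversPostInsertEventListenerImpl {
- 
-     public ConditionalPostInsertListener(EnversService enversService) {
-         super(enversService);
-     }
- 
-     @Override
-     public void onPostInsert(PostInsertEvent event) {
-         // Only audit inserts of entities implementing the hypothetical Auditable marker.
-         if (event.getEntity() instanceof Auditable) {
-             super.onPostInsert(event);
-         }
-     }
- }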
- -
- Understanding the Envers Schema
- 
- For each audited entity (that is, for each entity containing at least one audited field), an audit table is
- created. By default, the audit table's name is created by adding a "_AUD" suffix to the original table name,
- but this can be overridden by specifying a different suffix/prefix in the configuration or per-entity using
- the @org.hibernate.envers.AuditTable annotation.
- 
- Audit table columns
- 
- id of the original entity (this can be more than one column in the case of composite primary keys)
- 
- revision number - an integer. Matches the revision number in the revision entity table.
- 
- revision type - a small integer
- 
- audited fields from the original entity
- 
- The primary key of the audit table is the combination of the original id of the entity and the revision
- number - there can be at most one historic entry for a given entity instance at a given revision.
- 
- The current entity data is stored in the original table and in the audit table. This is a duplication of
- data, however as this solution makes the query system much more powerful, and as memory is cheap, hopefully
- this won't be a major drawback for the users. A row in the audit table with entity id ID, revision N and
- data D means: entity with id ID has data D from revision N upwards. Hence, if we want to find an entity at
- revision M, we have to search for a row in the audit table which has a revision number smaller than or equal
- to M, but as large as possible. If no such row is found, or a row with a "deleted" marker is found, it means
- that the entity didn't exist at that revision.
- 
- The "revision type" field can currently have three values: 0, 1, 2, which mean ADD, MOD and DEL,
- respectively. A row with a revision of type DEL will only contain the id of the entity and no data (all
- fields NULL), as it only serves as a marker saying "this entity was deleted at that revision".
- 
- Additionally, there is a revision entity table which contains the information about the
- global revision. By default the generated table is named REVINFO and
- contains just two columns: ID and TIMESTAMP.
- A row is inserted into this table on each new revision, that is, on each commit of a transaction that
- changes audited data. The name of this table can be configured, and the names of its columns, as well as
- additional columns, can be customized as discussed in .
- 
- While global revisions are a good way to provide correct auditing of relations, some people have pointed out
- that this may be a bottleneck in systems where data is modified very often. One viable solution is to
- introduce an option to have an entity "locally revisioned", that is, revisions would be created for it
- independently. This wouldn't enable correct versioning of relations, but also wouldn't require the
- REVINFO table. Another possibility is to introduce a notion of
- "revisioning groups": groups of entities which share revision numbering. Each such group would have to
- consist of one or more strongly connected components of the graph induced by relations between entities.
- Your opinions on the subject are very welcome on the forum! :)
- 
- -
- Generating schema with Ant - - - If you'd like to generate the database schema file with the Hibernate Tools Ant task, - you'll probably notice that the generated file doesn't contain definitions of audit - tables. To generate also the audit tables, you simply need to use - org.hibernate.tool.ant.EnversHibernateToolTask instead of the usual - org.hibernate.tool.ant.HibernateToolTask. The former class extends - the latter, and only adds generation of the version entities. So you can use the task - just as you used to. - - - - For example: - - - - - - - - - - - - - - -]]> - - - Will generate the following schema: - - - -
- - -
- Mapping exceptions - -
- - What isn't and will not be supported - - - Bags, as they can contain non-unique elements. - The reason is that persisting, for example a bag of String-s, violates a principle - of relational databases: that each table is a set of tuples. In case of bags, - however (which require a join table), if there is a duplicate element, the two - tuples corresponding to the elements will be the same. Hibernate allows this, - however Envers (or more precisely: the database connector) will throw an exception - when trying to persist two identical elements, because of a unique constraint violation. - - - - There are at least two ways out if you need bag semantics: - - - - - - use an indexed collection, with the @IndexColumn annotation, or - - - - - provide a unique id for your elements with the @CollectionId annotation. - - - - -
- -
- 
- What isn't and <emphasis>will</emphasis> be supported
- 
- Bag-style collections whose identifier column has been defined using the
- @CollectionId annotation (JIRA ticket HHH-3950).
- 
- -
- 
- <literal>@OneToMany</literal>+<literal>@JoinColumn</literal>
- 
- When a collection is mapped using these two annotations, Hibernate doesn't
- generate a join table. Envers, however, has to do this, so that when you read the
- revisions in which the related entity has changed, you don't get false results.
- 
- To be able to name the additional join table, there is a special annotation:
- @AuditJoinTable, which has similar semantics to JPA's
- @JoinTable.
- 
- One special case is relations mapped with @OneToMany+@JoinColumn on
- the one side, and @ManyToOne+@JoinColumn(insertable=false, updatable=false)
- on the many side. Such relations are in fact bidirectional, but the owning side is the collection.
- 
- To properly audit such relations with Envers, you can use the @AuditMappedBy annotation.
- It enables you to specify the reverse property (using the mappedBy element). In case
- of indexed collections, the index column must also be mapped in the referenced entity (using
- @Column(insertable=false, updatable=false)) and specified using
- positionMappedBy. This annotation affects only the way
- Envers works. Please note that the annotation is experimental and may change in the future.
- 
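- As an illustration (not part of the original listings), a minimal sketch of such a bidirectional
- mapping audited with @AuditMappedBy, using hypothetical Parent and Child entities:
- 
- import java.util.HashSet;
- import java.util.Set;
- import javax.persistence.Entity;
- import javax.persistence.GeneratedValue;
- import javax.persistence.Id;
- import javax.persistence.JoinColumn;
- import javax.persistence.ManyToOne;
- import javax.persistence.OneToMany;
- import org.hibernate.envers.AuditMappedBy;
- import org.hibernate.envers.Audited;
- 
- @Entity
- @Audited
- public class Parent {
- 
-     @Id @GeneratedValue
-     private Long id;
- 
-     // The collection is the owning side; Hibernate generates no join table here.
-     @OneToMany
-     @JoinColumn(name = "parent_id")
-     @AuditMappedBy(mappedBy = "parent")
-     private Set<Child> children = new HashSet<>();
- }
- 
- @Entity
- @Audited
- public class Child {
- 
-     @Id @GeneratedValue
-     private Long id;
- 
-     @ManyToOne
-     @JoinColumn(name = "parent_id", insertable = false, updatable = false)
-     private Parent parent;
- }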
-
- -
- Advanced: Audit table partitioning - -
- - Benefits of audit table partitioning - - - Because audit tables tend to grow indefinitely they can quickly become really large. When the audit tables have grown - to a certain limit (varying per RDBMS and/or operating system) it makes sense to start using table partitioning. - SQL table partitioning offers a lot of advantages including, but certainly not limited to: - - - - Improved query performance by selectively moving rows to various partitions (or even purging old rows) - - - - - Faster data loads, index creation, etc. - - - - - -
- -
- 
- Suitable columns for audit table partitioning
- 
- Generally SQL tables must be partitioned on a column that exists within the table. As a rule it makes sense to use
- either the end revision or the end revision timestamp column for
- partitioning of audit tables.
- 
- End revision information is not available for the default AuditStrategy.
- 
- Therefore the following Envers configuration options are required:
- 
- org.hibernate.envers.audit_strategy =
- org.hibernate.envers.strategy.ValidityAuditStrategy
- 
- org.hibernate.envers.audit_strategy_validity_store_revend_timestamp =
- true
- 
- Optionally, you can also override the default values using the following properties:
- 
- org.hibernate.envers.audit_strategy_validity_end_rev_field_name
- 
- org.hibernate.envers.audit_strategy_validity_revend_timestamp_field_name
- 
- For more information, see .
- 
- The reason why the end revision information should be used for audit table partitioning is based on the assumption that
- audit tables should be partitioned on an 'increasing level of interestingness', like so:
- 
- A couple of partitions with audit data that is not very (or no longer) interesting.
- This can be stored on slow media, and perhaps even be purged eventually.
- 
- Some partitions for audit data that is potentially interesting.
- 
- One partition for audit data that is most likely to be interesting.
- This should be stored on the fastest media, both for reading and writing.
- 
- -
- - Audit table partitioning example - - In order to determine a suitable column for the 'increasing level of interestingness', - consider a simplified example of a salary registration for an unnamed agency. - - - - Currently, the salary table contains the following rows for a certain person X: - - - Salaries table - - - - - - Year - Salary (USD) - - - - - 2006 - 3300 - - - 2007 - 3500 - - - 2008 - 4000 - - - 2009 - 4500 - - - -
-
- - - The salary for the current fiscal year (2010) is unknown. The agency requires that all changes in registered - salaries for a fiscal year are recorded (i.e. an audit trail). The rationale behind this is that decisions - made at a certain date are based on the registered salary at that time. And at any time it must be possible - reproduce the reason why a certain decision was made at a certain date. - - - - The following audit information is available, sorted on in order of occurrence: - - - Salaries - audit table - - - - - - - - - Year - Revision type - Revision timestamp - Salary (USD) - End revision timestamp - - - - - 2006 - ADD - 2007-04-01 - 3300 - null - - - 2007 - ADD - 2008-04-01 - 35 - 2008-04-02 - - - 2007 - MOD - 2008-04-02 - 3500 - null - - - 2008 - ADD - 2009-04-01 - 3700 - 2009-07-01 - - - 2008 - MOD - 2009-07-01 - 4100 - 2010-02-01 - - - 2008 - MOD - 2010-02-01 - 4000 - null - - - 2009 - ADD - 2010-04-01 - 4500 - null - - - -
-
- -
- 
- Determining a suitable partitioning column
- 
- To partition this data, the 'level of interestingness' must be defined.
- Consider the following:
- 
- For fiscal year 2006 there is only one revision. It has the oldest revision timestamp
- of all audit rows, but should still be regarded as interesting because it is the latest modification
- for this fiscal year in the salary table; its end revision timestamp is null.
- 
- Also note that it would be very unfortunate if in 2011 there were an update of the salary for fiscal
- year 2006 (which is possible until at least 10 years after the fiscal year) and the audit
- information had been moved to a slow disk (based on the age of the
- revision timestamp). Remember that in this case Envers will have to update
- the end revision timestamp of the most recent audit row.
- 
- There are two revisions in the salary of fiscal year 2007 which both have nearly the same
- revision timestamp and a different end revision timestamp.
- At first sight it is evident that the first revision was a mistake and probably uninteresting.
- The only interesting revision for 2007 is the one with end revision timestamp null.
- 
- Based on the above, it is evident that only the end revision timestamp is suitable for
- audit table partitioning. The revision timestamp is not suitable.
- 
- -
- - Determining a suitable partitioning scheme - - A possible partitioning scheme for the salary table would be as follows: - - - - end revision timestamp year = 2008 - - - This partition contains audit data that is not very (or no longer) interesting. - - - - - end revision timestamp year = 2009 - - - This partition contains audit data that is potentially interesting. - - - - - end revision timestamp year >= 2010 or null - - - This partition contains the most interesting audit data. - - - - - - - This partitioning scheme also covers the potential problem of the update of the - end revision timestamp, which occurs if a row in the audited table is modified. - Even though Envers will update the end revision timestamp of the audit row to - the system date at the instant of modification, the audit row will remain in the same partition - (the 'extension bucket'). - - - - And sometime in 2011, the last partition (or 'extension bucket') is split into two new partitions: - - - - end revision timestamp year = 2010 - - - This partition contains audit data that is potentially interesting (in 2011). - - - - - end revision timestamp year >= 2011 or null - - - This partition contains the most interesting audit data and is the new 'extension bucket'. - - - - - -
- -
-
- -
- Envers links - - - - - Hibernate main page - - - - - Forum - - - - - JIRA issue tracker - (when adding issues concerning Envers, be sure to select the "envers" component!) - - - - - IRC channel - - - - - Envers Blog - - - - - FAQ - - - - -
- -
\ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/JMX.xml b/documentation/src/main/docbook/integration/en-US/JMX.xml deleted file mode 100644 index 4f4f7284a1..0000000000 --- a/documentation/src/main/docbook/integration/en-US/JMX.xml +++ /dev/null @@ -1,12 +0,0 @@ - - - - - JMX - - diff --git a/documentation/src/main/docbook/integration/en-US/Locking.xml b/documentation/src/main/docbook/integration/en-US/Locking.xml deleted file mode 100644 index b8bcfa7e66..0000000000 --- a/documentation/src/main/docbook/integration/en-US/Locking.xml +++ /dev/null @@ -1,332 +0,0 @@ - - - - - Locking - - Locking refers to actions taken to prevent data in a relational database from changing between the time it is read - and the time that it is used. - - - Your locking strategy can be either optimistic or pessimistic. - - - Locking strategies - - Optimistic - - - Optimistic locking assumes that multiple transactions can complete without affecting each other, and that - therefore transactions can proceed without locking the data resources that they affect. Before committing, - each transaction verifies that no other transaction has modified its data. If the check reveals conflicting - modifications, the committing transaction rolls back. - - - - - Pessimistic - - - Pessimistic locking assumes that concurrent transactions will conflict with each other, and requires resources - to be locked after they are read and only unlocked after the application has finished using the data. - - - - - - Hibernate provides mechanisms for implementing both types of locking in your applications. - -
- Optimistic - - When your application uses long transactions or conversations that span several database transactions, you can - store versioning data, so that if the same entity is updated by two conversations, the last to commit changes is - informed of the conflict, and does not override the other conversation's work. This approach guarantees some - isolation, but scales well and works particularly well in Read-Often Write-Sometimes - situations. - - - Hibernate provides two different mechanisms for storing versioning information, a dedicated version number or a - timestamp. - - - - Version number - - - - - - - - Timestamp - - - - - - - - - - A version or timestamp property can never be null for a detached instance. Hibernate detects any instance with a - null version or timestamp as transient, regardless of other unsaved-value strategies that you specify. Declaring - a nullable version or timestamp property is an easy way to avoid problems with transitive reattachment in - Hibernate, especially useful if you use assigned identifiers or composite keys. - - - -
- Dedicated version number
- 
- The version number mechanism for optimistic locking is provided through a @Version
- annotation.
- 
- The @Version annotation
- 
- Here, the version property is mapped to the OPTLOCK column, and the entity manager uses it
- to detect conflicting updates, and prevent the loss of updates that would be overwritten by a
- last-commit-wins strategy.
- 
- The version column can be any kind of type, as long as you define and implement the appropriate
- UserVersionType.
- 
- Your application is forbidden from altering the version number set by Hibernate. To artificially increase the
- version number, see the documentation for the properties
- LockModeType.OPTIMISTIC_FORCE_INCREMENT and
- LockModeType.PESSIMISTIC_FORCE_INCREMENT in the Hibernate Entity Manager reference
- documentation.
- 
- Database-generated version numbers
- 
- If the version number is generated by the database, such as by a trigger, use the annotation
- @org.hibernate.annotations.Generated(GenerationTime.ALWAYS).
- 
- Declaring a version property in <filename>hbm.xml</filename>
- 
- column - The name of the column holding the version number. Optional, defaults to the property
- name.
- 
- name - The name of a property of the persistent class.
- 
- type - The type of the version number. Optional, defaults to
- integer.
- 
- access - Hibernate's strategy for accessing the property value. Optional, defaults to
- property.
- 
- unsaved-value - Indicates that an instance is newly instantiated and thus unsaved. This distinguishes it
- from detached instances that were saved or loaded in a previous session. The default value,
- undefined, indicates that the identifier property value should be
- used. Optional.
- 
- generated - Indicates that the version property value is generated by the database. Optional, defaults
- to never.
- 
- insert - Whether or not to include the version column in SQL insert
- statements. Defaults to true, but you can set it to false if the
- database column is defined with a default value of 0.
- 
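- The original @Version listing is not reproduced above; a minimal sketch of what it might look like,
- using a hypothetical Flight entity and the OPTLOCK column mentioned in the text:
- 
- import javax.persistence.Column;
- import javax.persistence.Entity;
- import javax.persistence.Id;
- import javax.persistence.Version;
- 
- @Entity
- public class Flight {
- 
-     @Id
-     private Long id;
- 
-     // Mapped to the OPTLOCK column; Hibernate increments it on every update
-     // and uses it to detect conflicting modifications.
-     @Version
-     @Column(name = "OPTLOCK")
-     private Integer version;
- 
-     // getters and setters omitted
- }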
- -
- Timestamp
- 
- Timestamps are a less reliable way of optimistic locking than version numbers, but can be used by applications
- for other purposes as well. Timestamping is automatically used if you use the @Version annotation on a
- Date or Calendar.
- 
- Using timestamps for optimistic locking
- 
- Hibernate can retrieve the timestamp value from the database or the JVM, by reading the value you specify for
- the @org.hibernate.annotations.Source annotation. The value can be either
- org.hibernate.annotations.SourceType.DB or
- org.hibernate.annotations.SourceType.VM. The default behavior is to use the database, and is
- also used if you don't specify the annotation at all.
- 
- The timestamp can also be generated by the database instead of Hibernate, if you use the
- @org.hibernate.annotations.Generated(GenerationTime.ALWAYS) annotation.
- 
- The timestamp element in <filename>hbm.xml</filename>
- 
- column - The name of the column which holds the timestamp. Optional, defaults to the property
- name.
- 
- name - The name of a JavaBeans style property of Java type Date or Timestamp of the persistent
- class.
- 
- access - The strategy Hibernate uses to access the property value. Optional, defaults to
- property.
- 
- unsaved-value - Indicates that an instance is newly instantiated and unsaved. This distinguishes it
- from detached instances that were saved or loaded in a previous session. The default value of
- undefined indicates that Hibernate uses the identifier property value.
- 
- source - Whether Hibernate retrieves the timestamp from the database or the current
- JVM. Database-based timestamps incur an overhead because Hibernate needs to query the database each time
- to determine the incremental next value. However, database-derived timestamps are safer to use in a
- clustered environment. Not all database dialects are known to support the retrieval of the database's
- current timestamp. Others may also be unsafe for locking, because of lack of precision.
- 
- generated - Whether the timestamp property value is generated by the database. Optional, defaults to
- never.
- 
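- The original timestamp listing is not reproduced above; a minimal sketch of what it might look like,
- using a hypothetical Order entity and a database-sourced timestamp:
- 
- import java.util.Date;
- import javax.persistence.Entity;
- import javax.persistence.Id;
- import javax.persistence.Version;
- import org.hibernate.annotations.Source;
- import org.hibernate.annotations.SourceType;
- 
- @Entity
- public class Order {
- 
-     @Id
-     private Long id;
- 
-     // Timestamp-based optimistic locking; the value is taken from the database
-     // rather than the JVM because of the Source annotation.
-     @Version
-     @Source(SourceType.DB)
-     private Date lastUpdate;
- 
-     // getters and setters omitted
- }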
- -
- -
- Pessimistic
- 
- Typically, you only need to specify an isolation level for the JDBC connections and let the database handle
- locking issues. If you do need to obtain exclusive pessimistic locks or re-obtain locks at the start of a new
- transaction, Hibernate gives you the tools you need.
- 
- Hibernate always uses the locking mechanism of the database, and never locks objects in memory.
- 
- The <classname>LockMode</classname> class - - The LockMode class defines the different lock levels that Hibernate can acquire. - - - - - - LockMode.WRITE - acquired automatically when Hibernate updates or inserts a row. - - - LockMode.UPGRADE - acquired upon explicit user request using SELECT ... FOR UPDATE on databases - which support that syntax. - - - LockMode.UPGRADE_NOWAIT - acquired upon explicit user request using a SELECT ... FOR UPDATE NOWAIT in - Oracle. - - - LockMode.UPGRADE_SKIPLOCKED - acquired upon explicit user request using a SELECT ... FOR UPDATE SKIP LOCKED in - Oracle, or SELECT ... with (rowlock,updlock,readpast) in SQL Server. - - - LockMode.READ - acquired automatically when Hibernate reads data under Repeatable Read or - Serializable isolation level. It can be re-acquired by explicit user - request. - - - LockMode.NONE - The absence of a lock. All objects switch to this lock mode at the end of a - Transaction. Objects associated with the session via a call to update() or - saveOrUpdate() also start out in this lock mode. - - - - - - The explicit user request mentioned above occurs as a consequence of any of the following actions: - - - - - A call to Session.load(), specifying a LockMode. - - - - - A call to Session.lock(). - - - - - A call to Query.setLockMode(). - - - - - If you call Session.load() with option , - or , and the requested object is not already - loaded by the session, the object is loaded using SELECT ... FOR UPDATE. If you call - load() for an object that is already loaded with a less restrictive lock than the one - you request, Hibernate calls lock() for that object. - - - Session.lock() performs a version number check if the specified lock mode is - READ, UPGRADE, UPGRADE_NOWAIT or - UPGRADE_SKIPLOCKED. In the case of UPGRADE, - UPGRADE_NOWAIT or UPGRADE_SKIPLOCKED, SELECT ... FOR UPDATE - syntax is used. - - - If the requested lock mode is not supported by the database, Hibernate uses an appropriate alternate mode - instead of throwing an exception. This ensures that applications are portable. - -
-
-
diff --git a/documentation/src/main/docbook/integration/en-US/Mapping_Association.xml b/documentation/src/main/docbook/integration/en-US/Mapping_Association.xml deleted file mode 100644 index 089843d1e5..0000000000 --- a/documentation/src/main/docbook/integration/en-US/Mapping_Association.xml +++ /dev/null @@ -1,17 +0,0 @@ - - - - - Mapping associations - - The most basic form of mapping in Hibernate is mapping a persistent entity class to a database table. - You can expand on this concept by mapping associated classes together. - shows a Person class with a - - - diff --git a/documentation/src/main/docbook/integration/en-US/Mapping_Entities.xml b/documentation/src/main/docbook/integration/en-US/Mapping_Entities.xml deleted file mode 100644 index 66f7435eb6..0000000000 --- a/documentation/src/main/docbook/integration/en-US/Mapping_Entities.xml +++ /dev/null @@ -1,15 +0,0 @@ - - - - - Mapping entities -
- Hierarchies - -
-
diff --git a/documentation/src/main/docbook/integration/en-US/appendices/legacy_criteria/Legacy_Criteria.xml b/documentation/src/main/docbook/integration/en-US/appendices/legacy_criteria/Legacy_Criteria.xml deleted file mode 100644 index 2001abea70..0000000000 --- a/documentation/src/main/docbook/integration/en-US/appendices/legacy_criteria/Legacy_Criteria.xml +++ /dev/null @@ -1,549 +0,0 @@ - - - - - - Legacy Hibernate Criteria Queries - - - - - This appendix covers the legacy Hibernate org.hibernate.Criteria API, which - should be considered deprecated. New development should focus on the JPA - javax.persistence.criteria.CriteriaQuery API. Eventually, - Hibernate-specific criteria features will be ported as extensions to the JPA - javax.persistence.criteria.CriteriaQuery. For details on the JPA APIs, see - . - - - This information is copied as-is from the older Hibernate documentation. - - - - - Hibernate features an intuitive, extensible criteria query API. - - -
- Creating a <literal>Criteria</literal> instance - - - The interface org.hibernate.Criteria represents a query against - a particular persistent class. The Session is a factory for - Criteria instances. - - - - -
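- The original listing is not reproduced above; a minimal sketch, assuming an open Session and the Cat
- entity used throughout the legacy examples:
- 
- Criteria crit = session.createCriteria(Cat.class);
- crit.setMaxResults(50);
- List cats = crit.list();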
- -
- Narrowing the result set - - - An individual query criterion is an instance of the interface - org.hibernate.criterion.Criterion. The class - org.hibernate.criterion.Restrictions defines - factory methods for obtaining certain built-in - Criterion types. - - - - - - Restrictions can be grouped logically. - - - - - - - - There are a range of built-in criterion types (Restrictions - subclasses). One of the most useful allows you to specify SQL directly. - - - - - - The {alias} placeholder will be replaced by the row alias - of the queried entity. - - - - You can also obtain a criterion from a - Property instance. You can create a Property - by calling Property.forName(): - - - - -
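- The original listings are not reproduced above; a minimal sketch of the restriction styles just
- described, assuming an open Session, the Cat entity, and placeholder variables (minWeight, maxWeight):
- 
- // Simple built-in restrictions.
- List cats = session.createCriteria(Cat.class)
-     .add(Restrictions.like("name", "Fritz%"))
-     .add(Restrictions.between("weight", minWeight, maxWeight))
-     .list();
- 
- // Grouping restrictions logically.
- List cats2 = session.createCriteria(Cat.class)
-     .add(Restrictions.like("name", "F%"))
-     .add(Restrictions.or(
-         Restrictions.eq("age", new Integer(0)),
-         Restrictions.isNull("age")))
-     .list();
- 
- // Specifying SQL directly; {alias} is replaced by the row alias of the queried entity.
- List cats3 = session.createCriteria(Cat.class)
-     .add(Restrictions.sqlRestriction(
-         "lower({alias}.name) like lower(?)", "Fritz%", StandardBasicTypes.STRING))
-     .list();
- 
- // Obtaining criteria from a Property instance.
- Property age = Property.forName("age");
- List cats4 = session.createCriteria(Cat.class)
-     .add(Restrictions.disjunction()
-         .add(age.isNull())
-         .add(age.eq(new Integer(0)))
-         .add(age.eq(new Integer(1))))
-     .add(Property.forName("name").in(new String[] { "Fritz", "Izi", "Pk" }))
-     .list();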
- -
- Ordering the results - - - You can order the results using org.hibernate.criterion.Order. - - - - - - -
- -
- Associations
- 
- By navigating
- associations using createCriteria() you can specify constraints upon related entities:
- 
- 
- The second createCriteria() returns a new
- instance of Criteria that refers to the elements of
- the kittens collection.
- 
- There is also an alternate form that is useful in certain circumstances:
- 
- 
- (createAlias() does not create a new instance of
- Criteria.)
- 
- The kittens collections held by the Cat instances
- returned by the previous two queries are not pre-filtered
- by the criteria. If you want to retrieve just the kittens that match the
- criteria, you must use a ResultTransformer.
- 
- 
- Additionally, you can manipulate the result set using a left outer join:
- 
- 
- This will return all of the Cats with a mate whose name starts with "good",
- ordered by their mate's age, and all cats who do not have a mate.
- This is useful when there is a need to order or limit in the database
- prior to returning complex/large result sets, and removes many instances where
- multiple queries would have to be performed and the results unioned
- by Java in memory.
- 
- Without this feature, first all of the cats without a mate would need to be loaded in one query.
- 
- A second query would need to retrieve the cats with mates whose name started with "good", sorted by the mate's age.
- 
- Third, the lists would need to be joined manually in memory.
- 
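- The original listings are not reproduced above; a minimal sketch of the three forms just described,
- assuming an open Session and a JoinType-style createAlias overload (the exact overload varies between
- Hibernate versions):
- 
- // Navigating an association with createCriteria().
- List cats = session.createCriteria(Cat.class)
-     .add(Restrictions.like("name", "F%"))
-     .createCriteria("kittens")
-         .add(Restrictions.like("name", "F%"))
-     .list();
- 
- // The alternate form using createAlias(), which does not create a new Criteria instance.
- List cats2 = session.createCriteria(Cat.class)
-     .createAlias("kittens", "kt")
-     .createAlias("mate", "mt")
-     .add(Restrictions.eqProperty("kt.name", "mt.name"))
-     .list();
- 
- // Left outer join: all cats, with the mate pre-filtered by name and ordered by the mate's age.
- List cats3 = session.createCriteria(Cat.class)
-     .createAlias("mate", "mt", JoinType.LEFT_OUTER_JOIN,
-         Restrictions.like("mt.name", "good%"))
-     .addOrder(Order.asc("mt.age"))
-     .list();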
- -
- Dynamic association fetching - - - You can specify association fetching semantics at runtime using - setFetchMode(). - - - - - - This query will fetch both mate and kittens - by outer join. See for more information. - - -
- -
- Components - - - To add a restriction against a property of an embedded component, the component property - name should be prepended to the property name when creating the Restriction. - The criteria object should be created on the owning entity, and cannot be created on the component - itself. For example, suppose the Cat has a component property fullName - with sub-properties firstName and lastName: - - - - - - - Note: this does not apply when querying collections of components, for that see below - - - -
- -
- Collections
- 
- When using criteria against collections, there are two distinct cases. One is if
- the collection contains entities (e.g. <one-to-many/>
- or <many-to-many/>) or components
- (<composite-element/>),
- and the second is if the collection contains scalar values
- (<element/>).
- In the first case, the syntax is as given above in the section
- where we restrict the kittens
- collection. Essentially, we create a Criteria object against the collection
- property and restrict the entity or component properties using that instance.
- 
- For querying a collection of basic values, we still create the Criteria
- object against the collection, but to reference the value, we use the special property
- "elements". For an indexed collection, we can also reference the index property using
- the special property "indices".
- 
- -
- Example queries - - - The class org.hibernate.criterion.Example allows - you to construct a query criterion from a given instance. - - - - - - Version properties, identifiers and associations are ignored. By default, - null valued properties are excluded. - - - - You can adjust how the Example is applied. - - - - - - You can even use examples to place criteria upon associated objects. - - - - -
- -
- Projections, aggregation and grouping - - The class org.hibernate.criterion.Projections is a - factory for Projection instances. You can apply a - projection to a query by calling setProjection(). - - - - - - - - There is no explicit "group by" necessary in a criteria query. Certain - projection types are defined to be grouping projections, - which also appear in the SQL group by clause. - - - - An alias can be assigned to a projection so that the projected value - can be referred to in restrictions or orderings. Here are two different ways to - do this: - - - - - - - - The alias() and as() methods simply wrap a - projection instance in another, aliased, instance of Projection. - As a shortcut, you can assign an alias when you add the projection to a - projection list: - - - - - - - - You can also use Property.forName() to express projections: - - - - - - -
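- The original listings are not reproduced above; a minimal sketch of projections, grouping and aliasing,
- assuming an open Session and the Cat entity:
- 
- // Aggregations and a grouping projection; no explicit "group by" is needed.
- List results = session.createCriteria(Cat.class)
-     .setProjection(Projections.projectionList()
-         .add(Projections.rowCount())
-         .add(Projections.avg("weight"))
-         .add(Projections.max("weight"))
-         .add(Projections.groupProperty("color")))
-     .list();
- 
- // Assigning an alias so the projected value can be used in an ordering.
- List results2 = session.createCriteria(Cat.class)
-     .setProjection(Projections.alias(Projections.groupProperty("color"), "colr"))
-     .addOrder(Order.asc("colr"))
-     .list();
- 
- // The same projection expressed through the Property API.
- List results3 = session.createCriteria(Cat.class)
-     .setProjection(Property.forName("color").group().as("colr"))
-     .addOrder(Order.asc("colr"))
-     .list();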
- -
- Detached queries and subqueries - - The DetachedCriteria class allows you to create a query outside the scope - of a session and then execute it using an arbitrary Session. - - - - - - A DetachedCriteria can also be used to express a subquery. Criterion - instances involving subqueries can be obtained via Subqueries or - Property. - - - - - - - - Correlated subqueries are also possible: - - - - - - Example of multi-column restriction based on a subquery: - - - - -
- - - -
- Queries by natural identifier - - - For most queries, including criteria queries, the query cache is not efficient - because query cache invalidation occurs too frequently. However, there is a special - kind of query where you can optimize the cache invalidation algorithm: lookups by a - constant natural key. In some applications, this kind of query occurs frequently. - The criteria API provides special provision for this use case. - - - - First, map the natural key of your entity using - <natural-id> and enable use of the second-level cache. - - - - - - - - - - - - -]]> - - - This functionality is not intended for use with entities with - mutable natural keys. - - - - Once you have enabled the Hibernate query cache, - the Restrictions.naturalId() allows you to make use of - the more efficient cache algorithm. - - - - -
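- The original listing is not reproduced above; a minimal sketch, assuming a hypothetical User entity
- whose natural id consists of the name and org properties shown in the mapping fragment:
- 
- // Lookup by constant natural key, taking advantage of the query cache.
- session.createCriteria(User.class)
-     .add(Restrictions.naturalId()
-         .set("name", "gavin")
-         .set("org", "hb"))
-     .setCacheable(true)
-     .uniqueResult();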
- -
diff --git a/documentation/src/main/docbook/integration/en-US/appendix-Configuration_Properties.xml b/documentation/src/main/docbook/integration/en-US/appendix-Configuration_Properties.xml deleted file mode 100644 index d340b5e38b..0000000000 --- a/documentation/src/main/docbook/integration/en-US/appendix-Configuration_Properties.xml +++ /dev/null @@ -1,441 +0,0 @@ - - - - - - Configuration properties - -
- Strategy configurations
- 
- Many configuration settings define pluggable strategies that Hibernate uses for various purposes.
- The configuration of many of these strategy type settings accepts definition in various forms. The
- documentation of such configuration settings refers here. The types of forms available in such cases
- include:
- 
- short name (if defined)
- 
- Certain built-in strategy implementations have a corresponding short name.
- 
- strategy instance
- 
- An instance of the strategy implementation to use can be specified.
- 
- strategy Class reference
- 
- A java.lang.Class reference of the strategy implementation to use can
- be specified.
- 
- strategy Class name
- 
- The class name (java.lang.String) of the strategy implementation to
- use can be specified.
- 
- - -
- General Configuration - - - - - - - - hibernate.dialect - A fully-qualified classname - - - The classname of a Hibernate org.hibernate.dialect.Dialect from which Hibernate - can generate SQL optimized for a particular relational database. - - - In most cases Hibernate can choose the correct org.hibernate.dialect.Dialect - implementation based on the JDBC metadata returned by the JDBC driver. - - - - - hibernate.show_sql - true or false - Write all SQL statements to the console. This is an alternative to setting the log category - org.hibernate.SQL to debug. - - - hibernate.format_sql - true or false - Pretty-print the SQL in the log and console. - - - hibernate.default_schema - A schema name - Qualify unqualified table names with the given schema or tablespace in generated SQL. - - - hibernate.default_catalog - A catalog name - Qualifies unqualified table names with the given catalog in generated SQL. - - - hibernate.session_factory_name - A JNDI name - The org.hibernate.SessionFactory is automatically bound to this name in JNDI - after it is created. - - - hibernate.max_fetch_depth - A value between 0 and 3 - Sets a maximum depth for the outer join fetch tree for single-ended associations. A single-ended - assocation is a one-to-one or many-to-one assocation. A value of 0 disables default outer - join fetching. - - - hibernate.default_batch_fetch_size - 4,8, or 16 - Default size for Hibernate batch fetching of associations. - - - hibernate.default_entity_mode - dynamic-map or pojo - Default mode for entity representation for all sessions opened from this - SessionFactory, defaults to pojo. - - - hibernate.order_updates - true or false - Forces Hibernate to order SQL updates by the primary key value of the items being updated. This - reduces the likelihood of transaction deadlocks in highly-concurrent systems. - - - hibernate.order_by.default_null_ordering - none, first or last - Defines precedence of null values in ORDER BY clause. Defaults to - none which varies between RDBMS implementation. - - - hibernate.generate_statistics - true or false - Causes Hibernate to collect statistics for performance tuning. - - - hibernate.use_identifier_rollback - true or false - If true, generated identifier properties are reset to default values when objects are - deleted. - - - hibernate.use_sql_comments - true or false - If true, Hibernate generates comments inside the SQL, for easier debugging. - - - - -
-
- Database configuration - - JDBC properties - - - - Property - Example - Purpose - - - - - hibernate.jdbc.fetch_size - 0 or an integer - A non-zero value determines the JDBC fetch size, by calling - Statement.setFetchSize(). - - - hibernate.jdbc.batch_size - A value between 5 and 30 - A non-zero value causes Hibernate to use JDBC2 batch updates. - - - hibernate.jdbc.batch_versioned_data - true or false - Set this property to true if your JDBC driver returns correct row counts - from executeBatch(). This option is usually safe, but is disabled by default. If - enabled, Hibernate uses batched DML for automatically versioned data. - - - hibernate.jdbc.factory_class - The fully-qualified class name of the factory - Select a custom org.hibernate.jdbc.Batcher. Irrelevant for most - applications. - - - hibernate.jdbc.use_scrollable_resultset - true or false - Enables Hibernate to use JDBC2 scrollable resultsets. This property is only relevant for - user-supplied JDBC connections. Otherwise, Hibernate uses connection metadata. - - - hibernate.jdbc.use_streams_for_binary - true or false - Use streams when writing or reading binary or serializable types to - or from JDBC. This is a system-level property. - - - hibernate.jdbc.use_get_generated_keys - true or false - Allows Hibernate to use JDBC3 PreparedStatement.getGeneratedKeys() to - retrieve natively-generated keys after insert. You need the JDBC3+ driver and JRE1.4+. Disable this property - if your driver has problems with the Hibernate identifier generators. By default, it tries to detect the - driver capabilities from connection metadata. - - - -
- - Cache Properties - - - - - - - Property - Example - Purpose - - - - - hibernate.cache.provider_class - Fully-qualified classname - The classname of a custom CacheProvider. - - - hibernate.cache.use_minimal_puts - true or false - Optimizes second-level cache operation to minimize writes, at the cost of more frequent reads. This - is most useful for clustered caches and is enabled by default for clustered cache implementations. - - - hibernate.cache.use_query_cache - true or false - Enables the query cache. You still need to set individual queries to be cachable. - - - hibernate.cache.use_second_level_cache true or - false Completely disable the second level cache, which is enabled - by default for classes which specify a <cache> mapping. - - - hibernate.cache.query_cache_factory - Fully-qualified classname - A custom QueryCache interface. The default is the built-in - StandardQueryCache. - - - hibernate.cache.region_prefix - A string - A prefix for second-level cache region names. - - - hibernate.cache.use_structured_entries - true or false - Forces Hibernate to store data in the second-level cache in a more human-readable format. - - - hibernate.cache.auto_evict_collection_cache - true or false (default: false) - Enables the automatic eviction of a bi-directional association's collection cache when an element - in the ManyToOne collection is added/updated/removed without properly managing the change on the OneToMany - side. - - - hibernate.cache.use_reference_entries - true or false - Optimizes second-level cache operation to store immutable entities (aka "reference") which do - not have associations into cache directly, this case, lots of disasseble and deep copy operations - can be avoid. - Default value of this property is false. - - - - -
- - Transactions properties - - - - - - - - Property - Example - Purpose - - - - - hibernate.transaction.factory_class - - jdbc or - - - - Names the org.hibernate.engine.transaction.spi.TransactionFactory - strategy implementation to use. See and - - - - - - jta.UserTransaction - A JNDI name - The JTATransactionFactory needs a JNDI name to obtain the JTA - UserTransaction from the application server. - - - hibernate.transaction.manager_lookup_class - A fully-qualified classname - The classname of a TransactionManagerLookup, which is used in - conjunction with JVM-level or the hilo generator in a JTA environment. - - - hibernate.transaction.flush_before_completion - true or false - Causes the session be flushed during the before completion phase of the - transaction. If possible, use built-in and automatic session context management instead. - - - hibernate.transaction.auto_close_session - true or false - Causes the session to be closed during the after completion phase of the - transaction. If possible, use built-in and automatic session context management instead. - - - -
- - - Each of the properties in the following table are prefixed by hibernate.. It has been removed - in the table to conserve space. - - - - Miscellaneous properties - - - - Property - Example - Purpose - - - - - current_session_context_class - One of jta, thread, managed, or - custom.Class - Supply a custom strategy for the scoping of the Current - Session. - - - factory_class - org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory or - org.hibernate.hql.internal.classic.ClassicQueryTranslatorFactory - Chooses the HQL parser implementation. - - - query.substitutions - hqlLiteral=SQL_LITERAL or hqlFunction=SQLFUNC - - Map from tokens in Hibernate queries to SQL tokens, such as function or literal names. - - - hbm2ddl.auto - validate, update, create, - create-drop - Validates or exports schema DDL to the database when the SessionFactory is - created. With create-drop, the database schema is dropped when the - SessionFactory is closed explicitly. - - - -
-
-
- Connection pool properties - - c3p0 connection pool properties - hibernate.c3p0.min_size - hibernate.c3p0.max_size - hibernate.c3p0.timeout - hibernate.c3p0.max_statements - - - Proxool connection pool properties - - - - - - Property - Description - - - - - hibernate.proxool.xml - Configure Proxool provider using an XML file (.xml is appended automatically) - - - hibernate.proxool.properties - Configure the Proxool provider using a properties file (.properties is appended - automatically) - - - hibernate.proxool.existing_pool - Whether to configure the Proxool provider from an existing pool - - - hibernate.proxool.pool_alias - Proxool pool alias to use. Required. - - - -
- - - For information on specific configuration of Proxool, refer to the Proxool documentation available from - . - - -
-
diff --git a/documentation/src/main/docbook/integration/en-US/appendix-Troubleshooting.xml b/documentation/src/main/docbook/integration/en-US/appendix-Troubleshooting.xml deleted file mode 100644 index 829c705f76..0000000000 --- a/documentation/src/main/docbook/integration/en-US/appendix-Troubleshooting.xml +++ /dev/null @@ -1,63 +0,0 @@ - - - - - - Troubleshooting - -
- Log messages - - This section discusses certain log messages you might see from Hibernate and the "meaning" of those - messages. Specifically, it will discuss certain messages having a "message id", which for Hibernate - is always the code HHH followed by a numeric code. The table below is ordered - by this code. - - - Explanation of identified log messages - - - - Key - Explanation - - - - - HHH000002 - - - Indicates that a session was left associated with the - org.hibernate.context.internal.ThreadLocalSessionContext that is used - to implement thread-based current session management. Internally that class uses a - ThreadLocal, and in environments where Threads are pooled this could represent a - potential "bleed through" situation. Consider using a different - org.hibernate.context.spi.CurrentSessionContext - implementation. Otherwise, make sure the sessions always get unbound properly. - - - - - HHH000408 - - - Using workaround for JVM bug in java.sql.Timestamp. Certain - JVMs are known to have a bug in the implementation of - java.sql.Timestamp that causes the following condition to - evaluate to false: new Timestamp(x).getTime() != x. - A huge concern here is to make sure you are not using temporal based optimistic - locking on such JVMs. - - - - - -
-
- -
\ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/events/Events.xml b/documentation/src/main/docbook/integration/en-US/chapters/events/Events.xml deleted file mode 100644 index faefdf73c8..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/events/Events.xml +++ /dev/null @@ -1,300 +0,0 @@ - - - - - - Interceptors and events - - - It is useful for the application to react to certain events that occur inside Hibernate. This allows for the - implementation of generic functionality and the extension of Hibernate functionality. - - -
Interceptors

The org.hibernate.Interceptor interface provides callbacks from the session to the application, allowing the application to inspect and/or manipulate properties of a persistent object before it is saved, updated, deleted or loaded. One possible use for this is to track auditing information. For example, the following Interceptor implementation automatically sets the createTimestamp property when an Auditable entity is created and updates the lastUpdateTimestamp property when an Auditable entity is updated.

You can either implement Interceptor directly or extend org.hibernate.EmptyInterceptor.

An Interceptor can be either Session-scoped or SessionFactory-scoped.

A Session-scoped interceptor is specified when a session is opened; see the registration sketch at the end of this section.

A SessionFactory-scoped interceptor is registered with the Configuration object prior to building the SessionFactory. Unless a session is opened explicitly specifying the interceptor to use, the SessionFactory-scoped interceptor will be applied to all sessions opened from that SessionFactory. SessionFactory-scoped interceptors must be thread-safe. Ensure that you do not store session-specific state, since multiple sessions may use the interceptor concurrently.
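The registration listings themselves are not reproduced here. As a rough sketch, assuming the AuditInterceptor shown later in this chapter and an existing SessionFactory/Configuration, the two scopes might be wired up as follows:

// Session-scoped: the interceptor applies only to the Session being opened.
Session session = sessionFactory.withOptions()
        .interceptor( new AuditInterceptor() )
        .openSession();

// SessionFactory-scoped: registered before the SessionFactory is built.
// The implementation must be thread-safe.
Configuration configuration = new Configuration();
configuration.setInterceptor( new AuditInterceptor() );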
- -
Native Event system

If you have to react to particular events in the persistence layer, you can also use the Hibernate event architecture. The event system can be used in place of or in addition to interceptors.

Many methods of the Session interface correlate to an event type. The full range of defined event types is declared as enum values on org.hibernate.event.spi.EventType. When a request is made of one of these methods, the Session generates an appropriate event and passes it to the configured event listener(s) for that type. Applications are free to implement a customization of one of the listener interfaces (i.e., the LoadEvent is processed by the registered implementation of the LoadEventListener interface), in which case their implementation would be responsible for processing any load() requests made of the Session.

Registering custom event listeners is covered elsewhere in this guide.

The listeners should be considered stateless; they are shared between requests, and should not save any state as instance variables.

A custom listener implements the appropriate interface for the event it wants to process and/or extends one of the convenience base classes (or even one of the default event listeners used by Hibernate out-of-the-box, as these are declared non-final for this purpose). Here is an example of a custom load event listener:

Custom LoadListener example
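The referenced listing is not included here; the following is a minimal sketch of such a listener, assuming a hypothetical SecurityChecker helper for the authorization logic:

import org.hibernate.HibernateException;
import org.hibernate.event.spi.LoadEvent;
import org.hibernate.event.spi.LoadEventListener;

public class SecuredLoadEventListener implements LoadEventListener {
    // Invoked for load() requests made of the Session once this listener is registered.
    @Override
    public void onLoad(LoadEvent event, LoadType loadType) throws HibernateException {
        // SecurityChecker is a placeholder; substitute your own authorization logic.
        if ( !SecurityChecker.isAuthorized( event.getEntityClassName(), event.getEntityId() ) ) {
            throw new HibernateException( "Load not authorized: " + event.getEntityClassName() );
        }
    }
}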
- Hibernate declarative security - - Usually, declarative security in Hibernate applications is managed in a session facade - layer. Hibernate allows certain actions to be permissioned via JACC, and authorized - via JAAS. This is an optional functionality that is built on top of the event architecture. - - - - First, you must configure the appropriate event listeners, to enable the use of JACC authorization. - Again, see for the details. Below is an example of an - appropriate org.hibernate.integrator.spi.Integrator implementation for - this purpose. - - - - - JACC listener registration example - - - - - - You must also decide how to configure your JACC provider. Consult your JACC provider documentation. - -
-
- -
- JPA Callbacks - - JPA also defines a more limited set of callbacks through annotations. - - - - Callback annotations - - - - - - Type - Description - - - - - - @PrePersist - - - Executed before the entity manager persist operation is actually executed or cascaded. - This call is synchronous with the persist operation. - - - - - @PreRemove - - - Executed before the entity manager remove operation is actually executed or cascaded. - This call is synchronous with the remove operation. - - - - - @PostPersist - - - Executed after the entity manager persist operation is actually executed or cascaded. - This call is invoked after the database INSERT is executed. - - - - - @PostRemove - - - Executed after the entity manager remove operation is actually executed or cascaded. - This call is synchronous with the remove operation. - - - - - @PreUpdate - - - Executed before the database UPDATE operation. - - - - - @PostUpdate - - - Executed after the database UPDATE operation. - - - - - @PostLoad - - - Executed after an entity has been loaded into the current persistence context or an entity - has been refreshed. - - - - -
There are two approaches for specifying callback handling:

The first approach is to annotate methods on the entity itself to receive notification of particular entity life cycle event(s).

The second is to use a separate entity listener class. An entity listener is a stateless class with a no-arg constructor. The callback annotations are placed on a method of this class instead of the entity class. The entity listener class is then associated with the entity using the javax.persistence.EntityListeners annotation. A sketch of both approaches is shown at the end of this section.

Example of specifying JPA callbacks

These approaches can be mixed, meaning you can use both together.

Regardless of whether the callback method is defined on the entity or on an entity listener, it must have a void-return signature. The name of the method is irrelevant; it is the placement of the callback annotations that makes the method a callback. In the case of callback methods defined on the entity class, the method must additionally have a no-argument signature. For callback methods defined on an entity listener class, the method must have a single-argument signature; the type of that argument can be either java.lang.Object (to facilitate attachment to multiple entities) or the specific entity type.

A callback method can throw a RuntimeException. If the callback method does throw a RuntimeException, then the current transaction, if any, must be rolled back.

A callback method must not invoke EntityManager or Query methods!

It is possible that multiple callback methods are defined for a particular lifecycle event. When that is the case, the order of execution is well defined by the JPA spec (specifically section 3.5.4):

Any default listeners associated with the entity are invoked first, in the order they were specified in the XML. See the javax.persistence.ExcludeDefaultListeners annotation.

Next, entity listener class callbacks associated with the entity hierarchy are invoked, in the order they are defined in the EntityListeners. If multiple classes in the entity hierarchy define entity listeners, the listeners defined for a superclass are invoked before the listeners defined for its subclasses. See the javax.persistence.ExcludeSuperclassListeners annotation.

Lastly, callback methods defined on the entity hierarchy are invoked. If a callback type is annotated on both an entity and one or more of its superclasses without method overriding, both would be called, the most general superclass first. An entity class is also allowed to override a callback method defined in a superclass, in which case the super callback would not get invoked; the overriding method would get invoked provided it is annotated.
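A sketch of the two approaches described above (the Customer entity and LastUpdateListener class are illustrative names, not part of any particular API):

// Customer.java
import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.EntityListeners;
import javax.persistence.Id;
import javax.persistence.PrePersist;

@Entity
@EntityListeners( LastUpdateListener.class )   // approach 2: external listener class
public class Customer {
    @Id
    private Long id;
    private Date created;
    private Date lastUpdated;

    // Approach 1: a callback on the entity itself; void return, no arguments.
    @PrePersist
    protected void onCreate() {
        created = new Date();
    }

    void setLastUpdated(Date lastUpdated) {
        this.lastUpdated = lastUpdated;
    }
}

// LastUpdateListener.java
// Approach 2: a stateless listener with a no-arg constructor; the callback takes a single argument.
import java.util.Date;
import javax.persistence.PreUpdate;

public class LastUpdateListener {
    @PreUpdate
    public void trackUpdate(Object entity) {
        if ( entity instanceof Customer ) {
            ( (Customer) entity ).setLastUpdated( new Date() );
        }
    }
}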
- -
- diff --git a/documentation/src/main/docbook/integration/en-US/chapters/events/extras/AuditInterceptor.java b/documentation/src/main/docbook/integration/en-US/chapters/events/extras/AuditInterceptor.java deleted file mode 100644 index 76e8b5bde2..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/events/extras/AuditInterceptor.java +++ /dev/null @@ -1,79 +0,0 @@ -import java.io.Serializable; -import java.util.Date; - -import org.hibernate.EmptyInterceptor; -import org.hibernate.Transaction; -import org.hibernate.type.Type; - -public class AuditInterceptor extends EmptyInterceptor { - - private int updates; - private int creates; - private int loads; - - public void onDelete(Object entity, - Serializable id, - Object[] state, - String[] propertyNames, - Type[] types) { - // do nothing - } - - public boolean onFlushDirty(Object entity, - Serializable id, - Object[] currentState, - Object[] previousState, - String[] propertyNames, - Type[] types) { - - if ( entity instanceof Auditable ) { - updates++; - for ( int i=0; i < propertyNames.length; i++ ) { - if ( "lastUpdateTimestamp".equals( propertyNames[i] ) ) { - currentState[i] = new Date(); - return true; - } - } - } - return false; - } - - public boolean onLoad(Object entity, - Serializable id, - Object[] state, - String[] propertyNames, - Type[] types) { - if ( entity instanceof Auditable ) { - loads++; - } - return false; - } - - public boolean onSave(Object entity, - Serializable id, - Object[] state, - String[] propertyNames, - Type[] types) { - - if ( entity instanceof Auditable ) { - creates++; - for ( int i=0; i - - - - - Fetching - - - Fetching, essentially, is the process of grabbing data from the database and making it available to the - application. Tuning how an application does fetching is one of the biggest factors in determining how an - application will perform. Fetching too much data, in terms of width (values/columns) and/or - depth (results/rows), adds unnecessary overhead in terms of both JDBC communication and ResultSet processing. - Fetching too little data causes additional fetches to be needed. Tuning how an application - fetches data presents a great opportunity to influence the application's overall performance. - - -
The basics

The concept of fetching breaks down into two different questions.

When should the data be fetched? Now? Later?

How should the data be fetched?

"now" is generally termed eager or immediate. "later" is generally termed lazy or delayed.

There are a number of scopes for defining fetching:

static - Static definition of fetching strategies is done in the mappings. The statically-defined fetch strategies are used in the absence of any dynamically defined strategies, except in the case of HQL/JPQL; see the section on dynamic fetching via queries below. A sketch of static declaration is shown at the end of this section.

dynamic (sometimes referred to as runtime) - Dynamic definition is really use-case centric. There are two main ways to define dynamic fetching:

fetch profiles - defined in mappings, but can be enabled/disabled on the Session.

HQL/JPQL and both Hibernate and JPA Criteria queries have the ability to specify fetching, specific to said query.

The strategies

SELECT - Performs a separate SQL select to load the data. This can either be EAGER (the second select is issued immediately) or LAZY (the second select is delayed until the data is needed). This is the strategy generally termed N+1.

JOIN - Inherently an EAGER style of fetching. The data to be fetched is obtained through the use of an SQL join.

BATCH - Performs a separate SQL select to load a number of related data items using an IN-restriction as part of the SQL WHERE-clause based on a batch size. Again, this can either be EAGER (the second select is issued immediately) or LAZY (the second select is delayed until the data is needed).

SUBSELECT - Performs a separate SQL select to load associated data based on the SQL restriction used to load the owner. Again, this can either be EAGER (the second select is issued immediately) or LAZY (the second select is delayed until the data is needed).
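As a sketch of declaring strategies statically in annotation mappings (this borrows the Department/Employee model introduced in the next section; @Fetch and @BatchSize are Hibernate-specific annotations):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import org.hibernate.annotations.BatchSize;
import org.hibernate.annotations.Fetch;
import org.hibernate.annotations.FetchMode;

@Entity
public class Department {
    @Id
    private Long id;

    // Statically defined: LAZY, loaded by a separate select per collection (the SELECT strategy),
    // with up to 10 collections initialized per SQL statement via an IN restriction (BATCH).
    @OneToMany( mappedBy = "department", fetch = FetchType.LAZY )
    @Fetch( FetchMode.SELECT )
    @BatchSize( size = 10 )
    private List<Employee> employees;
}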
- -
Applying fetch strategies

Let's consider these topics as they relate to a simple domain model and a few use cases.

Sample domain model

The Hibernate recommendation is to statically mark all associations lazy and to use dynamic fetching strategies for eagerness. This is unfortunately at odds with the JPA specification, which defines that all one-to-one and many-to-one associations should be eagerly fetched by default. Hibernate, as a JPA provider, honors that default.
No fetching - The login use-case

For the first use case, consider the application's login process for an Employee. Let's assume that login only requires access to the Employee information, not Project or Department information.

No fetching example

In this example, the application gets the Employee data. However, because all associations from Employee are declared as LAZY (JPA defines the default for collections as LAZY), no other data is fetched.

If the login process does not need access to the Employee information specifically, another fetching optimization here would be to limit the width of the query results.

No fetching (scalar) example
- -
- Dynamic fetching via queries - The projects for an employee use-case - - - For the second use case, consider a screen displaying the Projects for an Employee. Certainly access - to the Employee is needed, as is the collection of Projects for that Employee. Information - about Departments, other Employees or other Projects is not needed. - - - - Dynamic query fetching example - - - - - - In this example we have an Employee and their Projects loaded in a single query shown both as an HQL - query and a JPA Criteria query. In both cases, this resolves to exactly one database query to get - all that information. - -
- -
Dynamic fetching via profiles - The projects for an employee use-case using natural-id

Suppose we wanted to leverage loading by natural-id to obtain the Employee information in the "projects for an employee" use-case. Loading by natural-id uses the statically defined fetching strategies, but does not expose a means to define load-specific fetching. So we would leverage a fetch profile.

Fetch profile example

Here the Employee is obtained by natural-id lookup and the Employee's Project data is fetched eagerly. If the Employee data is resolved from cache, the Project data is resolved on its own. However, if the Employee data is not resolved in cache, the Employee and Project data is resolved in one SQL query via join, as we saw above.
-
- - - - -
\ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Department.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Department.java deleted file mode 100644 index 06eab06a51..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Department.java +++ /dev/null @@ -1,10 +0,0 @@ -@Entity -public class Department { - @Id - private Long id; - - @OneToMany(mappedBy="department") - private List employees; - - ... -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Employee.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Employee.java deleted file mode 100644 index 62ed3e54e0..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Employee.java +++ /dev/null @@ -1,24 +0,0 @@ -@Entity -public class Employee { - @Id - private Long id; - - @NaturalId - private String userid; - - @Column( name="pswd" ) - @ColumnTransformer( read="decrypt(pswd)" write="encrypt(?)" ) - private String password; - - private int accessLevel; - - @ManyToOne( fetch=LAZY ) - @JoinColumn - private Department department; - - @ManyToMany(mappedBy="employees") - @JoinColumn - private Set projects; - - ... -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/FetchOverrides.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/FetchOverrides.java deleted file mode 100644 index 4144ea27dc..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/FetchOverrides.java +++ /dev/null @@ -1,10 +0,0 @@ -@FetchProfile( - name="employee.projects", - fetchOverrides={ - @FetchOverride( - entity=Employee.class, - association="projects", - mode=JOIN - ) - } -) \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Login.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Login.java deleted file mode 100644 index f916bb847a..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Login.java +++ /dev/null @@ -1,4 +0,0 @@ -String loginHql = "select e from Employee e where e.userid = :userid and e.password = :password"; -Employee employee = (Employee) session.createQuery( loginHql ) - ... - .uniqueResult(); diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/LoginScalar.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/LoginScalar.java deleted file mode 100644 index 8905b0ce4a..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/LoginScalar.java +++ /dev/null @@ -1,4 +0,0 @@ -String loginHql = "select e.accessLevel from Employee e where e.userid = :userid and e.password = :password"; -Employee employee = (Employee) session.createQuery( loginHql ) - ... - .uniqueResult(); diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Project.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Project.java deleted file mode 100644 index 94fe42c0d5..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/Project.java +++ /dev/null @@ -1,10 +0,0 @@ -@Entity -public class Project { - @Id - private Long id; - - @ManyToMany - private Set employees; - - ... 
-} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeCriteria.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeCriteria.java deleted file mode 100644 index 384d964e07..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeCriteria.java +++ /dev/null @@ -1,10 +0,0 @@ -String userid = ...; -CriteriaBuilder cb = entityManager.getCriteriaBuilder(); -CriteriaQuery criteria = cb.createQuery( Employee.class ); -Root root = criteria.from( Employee.class ); -root.fetch( Employee_.projects ); -criteria.select( root ); -criteria.where( - cb.equal( root.get( Employee_.userid ), cb.literal( userid ) ) -); -Employee e = entityManager.createQuery( criteria ).getSingleResult(); diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeFetchProfile.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeFetchProfile.java deleted file mode 100644 index 297cb8cfc6..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeFetchProfile.java +++ /dev/null @@ -1,4 +0,0 @@ -String userid = ...; -session.enableFetchProfile( "employee.projects" ); -Employee e = (Employee) session.bySimpleNaturalId( Employee.class ) - .load( userid ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeHql.java b/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeHql.java deleted file mode 100644 index 11235281d0..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/fetching/extras/ProjectsForAnEmployeeHql.java +++ /dev/null @@ -1,5 +0,0 @@ -String userid = ...; -String hql = "select e from Employee e join fetch e.projects where e.userid = :userid"; -Employee e = (Employee) session.createQuery( hql ) - .setParameter( "userid", userid ) - .uniqueResult(); diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/Multi_Tenancy.xml b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/Multi_Tenancy.xml deleted file mode 100644 index 94ee9ae1b9..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/Multi_Tenancy.xml +++ /dev/null @@ -1,313 +0,0 @@ - - - - - - Multi-tenancy - -
- What is multi-tenancy? - - The term multi-tenancy in general is applied to software development to indicate an architecture in which - a single running instance of an application simultaneously serves multiple clients (tenants). This is - highly common in SaaS solutions. Isolating information (data, customizations, etc) pertaining to the - various tenants is a particular challenge in these systems. This includes the data owned by each tenant - stored in the database. It is this last piece, sometimes called multi-tenant data, on which we will focus. - -
- -
Multi-tenant data approaches

There are 3 main approaches to isolating information in these multi-tenant systems, which go hand-in-hand with different database schema definitions and JDBC setups.

Each approach has pros and cons as well as specific techniques and considerations. Such topics are beyond the scope of this documentation; many resources exist which delve into them in depth.
- Separate database - - - - - - - - - - - - Each tenant's data is kept in a physically separate database instance. JDBC Connections would point - specifically to each database, so any pooling would be per-tenant. A general application approach - here would be to define a JDBC Connection pool per-tenant and to select the pool to use based on the - tenant identifier associated with the currently logged in user. - -
- -
- Separate schema - - - - - - - - - - - - Each tenant's data is kept in a distinct database schema on a single database instance. There are 2 - different ways to define JDBC Connections here: - - - - Connections could point specifically to each schema, as we saw with the - Separate database approach. This is an option provided that - the driver supports naming the default schema in the connection URL or if the - pooling mechanism supports naming a schema to use for its Connections. Using this - approach, we would have a distinct JDBC Connection pool per-tenant where the pool to use - would be selected based on the tenant identifier associated with the - currently logged in user. - - - - - Connections could point to the database itself (using some default schema) but - the Connections would be altered using the SQL SET SCHEMA (or similar) - command. Using this approach, we would have a single JDBC Connection pool for use to - service all tenants, but before using the Connection it would be altered to reference - the schema named by the tenant identifier associated with the currently - logged in user. - - - - -
- -
- Partitioned (discriminator) data - - - - - - - - - - - - All data is kept in a single database schema. The data for each tenant is partitioned by the use of - partition value or discriminator. The complexity of this discriminator might range from a simple - column value to a complex SQL formula. Again, this approach would use a single Connection pool - to service all tenants. However, in this approach the application needs to alter each and every - SQL statement sent to the database to reference the tenant identifier discriminator. - -
-
- -
- Multi-tenancy in Hibernate - - Using Hibernate with multi-tenant data comes down to both an API and then integration piece(s). As - usual Hibernate strives to keep the API simple and isolated from any underlying integration complexities. - The API is really just defined by passing the tenant identifier as part of opening any session. - - - Specifying tenant identifier from <interfacename>SessionFactory</interfacename> - - - - Additionally, when specifying configuration, a org.hibernate.MultiTenancyStrategy - should be named using the hibernate.multiTenancy setting. Hibernate will perform - validations based on the type of strategy you specify. The strategy here correlates to the isolation - approach discussed above. - - - - NONE - - - (the default) No multi-tenancy is expected. In fact, it is considered an error if a tenant - identifier is specified when opening a session using this strategy. - - - - - SCHEMA - - - Correlates to the separate schema approach. It is an error to attempt to open a session without - a tenant identifier using this strategy. Additionally, a - MultiTenantConnectionProvider - must be specified. - - - - - DATABASE - - - Correlates to the separate database approach. It is an error to attempt to open a session without - a tenant identifier using this strategy. Additionally, a - MultiTenantConnectionProvider - must be specified. - - - - - DISCRIMINATOR - - - Correlates to the partitioned (discriminator) approach. It is an error to attempt to open a - session without a tenant identifier using this strategy. This strategy is not yet implemented - in Hibernate as of 4.0 and 4.1. Its support is planned for 5.0. - - - - - -
- <interfacename>MultiTenantConnectionProvider</interfacename> - - When using either the DATABASE or SCHEMA approach, Hibernate needs to be able to obtain Connections - in a tenant specific manner. That is the role of the - MultiTenantConnectionProvider - contract. Application developers will need to provide an implementation of this - contract. Most of its methods are extremely self-explanatory. The only ones which might not be are - getAnyConnection and releaseAnyConnection. It is - important to note also that these methods do not accept the tenant identifier. Hibernate uses these - methods during startup to perform various configuration, mainly via the - java.sql.DatabaseMetaData object. - - - The MultiTenantConnectionProvider to use can be specified in a number of - ways: - - - - - Use the hibernate.multi_tenant_connection_provider setting. It could - name a MultiTenantConnectionProvider instance, a - MultiTenantConnectionProvider implementation class reference or - a MultiTenantConnectionProvider implementation class name. - - - - - Passed directly to the org.hibernate.boot.registry.StandardServiceRegistryBuilder. - - - - - If none of the above options match, but the settings do specify a - hibernate.connection.datasource value, Hibernate will assume it should - use the specific - DataSourceBasedMultiTenantConnectionProviderImpl - implementation which works on a number of pretty reasonable assumptions when running inside of - an app server and using one javax.sql.DataSource per tenant. - See its javadocs for more details. - - - -
- -
- <interfacename>CurrentTenantIdentifierResolver</interfacename> - - org.hibernate.context.spi.CurrentTenantIdentifierResolver is a contract - for Hibernate to be able to resolve what the application considers the current tenant identifier. - The implementation to use is either passed directly to Configuration via its - setCurrentTenantIdentifierResolver method. It can also be specified via - the hibernate.tenant_identifier_resolver setting. - - - There are 2 situations where CurrentTenantIdentifierResolver is used: - - - - - The first situation is when the application is using the - org.hibernate.context.spi.CurrentSessionContext feature in - conjunction with multi-tenancy. In the case of the current-session feature, Hibernate will - need to open a session if it cannot find an existing one in scope. However, when a session - is opened in a multi-tenant environment the tenant identifier has to be specified. This is - where the CurrentTenantIdentifierResolver comes into play; - Hibernate will consult the implementation you provide to determine the tenant identifier to use - when opening the session. In this case, it is required that a - CurrentTenantIdentifierResolver be supplied. - - - - - The other situation is when you do not want to have to explicitly specify the tenant - identifier all the time as we saw in . If a - CurrentTenantIdentifierResolver has been specified, Hibernate - will use it to determine the default tenant identifier to use when opening the session. - - - - - Additionally, if the CurrentTenantIdentifierResolver implementation - returns true for its validateExistingCurrentSessions - method, Hibernate will make sure any existing sessions that are found in scope have a matching - tenant identifier. This capability is only pertinent when the - CurrentTenantIdentifierResolver is used in current-session settings. - -
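A minimal sketch of such a resolver follows; the thread-local holder is an assumption of this example (typically populated by a web filter or similar), not a Hibernate API. The class would then be named via the hibernate.tenant_identifier_resolver setting or passed to Configuration as described above.

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class ThreadLocalTenantIdentifierResolver implements CurrentTenantIdentifierResolver {
    // Hypothetical per-thread holder populated by the application.
    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<String>();

    public static void setCurrentTenant(String tenantIdentifier) {
        CURRENT_TENANT.set( tenantIdentifier );
    }

    @Override
    public String resolveCurrentTenantIdentifier() {
        return CURRENT_TENANT.get();
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        // Ask Hibernate to verify that sessions already in scope use this same tenant identifier.
        return true;
    }
}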
- -
- Caching - - Multi-tenancy support in Hibernate works seamlessly with the Hibernate second level cache. The key - used to cache data encodes the tenant identifier. - -
- -
- Odds and ends - - Currently schema export will not really work with multi-tenancy. That may not change. - - - The JPA expert group is in the process of defining multi-tenancy support for the upcoming 2.1 - version of the specification. - -
-
- -
- Strategies for <interfacename>MultiTenantConnectionProvider</interfacename> implementors - - Implementing MultiTenantConnectionProvider using different connection pools - - - - The approach above is valid for the DATABASE approach. It is also valid for the SCHEMA approach - provided the underlying database allows naming the schema to which to connect in the connection URL. - - - Implementing MultiTenantConnectionProvider using single connection pool - - - - This approach is only relevant to the SCHEMA approach. - -
-
\ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-multi-cp.java b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-multi-cp.java deleted file mode 100644 index 56dcb7a9cd..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-multi-cp.java +++ /dev/null @@ -1,24 +0,0 @@ -/** - * Simplisitc implementation for illustration purposes supporting 2 hard coded providers (pools) and leveraging - * the support class {@link org.hibernate.service.jdbc.connections.spi.AbstractMultiTenantConnectionProvider} - */ -public class MultiTenantConnectionProviderImpl extends AbstractMultiTenantConnectionProvider { - private final ConnectionProvider acmeProvider = ConnectionProviderUtils.buildConnectionProvider( "acme" ); - private final ConnectionProvider jbossProvider = ConnectionProviderUtils.buildConnectionProvider( "jboss" ); - - @Override - protected ConnectionProvider getAnyConnectionProvider() { - return acmeProvider; - } - - @Override - protected ConnectionProvider selectConnectionProvider(String tenantIdentifier) { - if ( "acme".equals( tenantIdentifier ) ) { - return acmeProvider; - } - else if ( "jboss".equals( tenantIdentifier ) ) { - return jbossProvider; - } - throw new HibernateException( "Unknown tenant identifier" ); - } -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-single-cp.java b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-single-cp.java deleted file mode 100644 index 6d41cfd2fd..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/MultiTenantConnectionProviderImpl-single-cp.java +++ /dev/null @@ -1,54 +0,0 @@ -/** - * Simplisitc implementation for illustration purposes showing a single connection pool used to serve - * multiple schemas using "connection altering". Here we use the T-SQL specific USE command; Oracle - * users might use the ALTER SESSION SET SCHEMA command; etc. - */ -public class MultiTenantConnectionProviderImpl - implements MultiTenantConnectionProvider, Stoppable { - private final ConnectionProvider connectionProvider = ConnectionProviderUtils.buildConnectionProvider( "master" ); - - @Override - public Connection getAnyConnection() throws SQLException { - return connectionProvider.getConnection(); - } - - @Override - public void releaseAnyConnection(Connection connection) throws SQLException { - connectionProvider.closeConnection( connection ); - } - - @Override - public Connection getConnection(String tenantIdentifier) throws SQLException { - final Connection connection = getAnyConnection(); - try { - connection.createStatement().execute( "USE " + tenanantIdentifier ); - } - catch ( SQLException e ) { - throw new HibernateException( - "Could not alter JDBC connection to specified schema [" + - tenantIdentifier + "]", - e - ); - } - return connection; - } - - @Override - public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException { - try { - connection.createStatement().execute( "USE master" ); - } - catch ( SQLException e ) { - // on error, throw an exception to make sure the connection is not returned to the pool. 
- // your requirements may differ - throw new HibernateException( - "Could not alter JDBC connection to specified schema [" + - tenantIdentifier + "]", - e - ); - } - connectionProvider.closeConnection( connection ); - } - - ... -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/tenant-identifier-from-SessionFactory.java b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/tenant-identifier-from-SessionFactory.java deleted file mode 100644 index bbc859be3f..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/extras/tenant-identifier-from-SessionFactory.java +++ /dev/null @@ -1,4 +0,0 @@ -Session session = sessionFactory.withOptions() - .tenantIdentifier( yourTenantIdentifier ) - ... - .openSession(); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.png b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.png deleted file mode 100644 index 84822ec4e5..0000000000 Binary files a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.png and /dev/null differ diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.svg b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.svg deleted file mode 100644 index f0d9e7baa8..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_database.svg +++ /dev/null @@ -1,130 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -CUSTOMER ( -ID BIGINT, -NAME VARCHAR, -... -) -CUSTOMER ( -ID BIGINT, -NAME VARCHAR, -... -) -Application - - - - - - - - - - - diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.png b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.png deleted file mode 100644 index dcc3ad6ed9..0000000000 Binary files a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.png and /dev/null differ diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.svg b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.svg deleted file mode 100644 index afb2913326..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_discriminator.svg +++ /dev/null @@ -1,70 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -CUSTOMER ( -ID BIGINT, -NAME VARCHAR, -... 
-TENANT_ID VARCHAR -) -Application - - - - - - diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.png b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.png deleted file mode 100644 index 2756ba2ed6..0000000000 Binary files a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.png and /dev/null differ diff --git a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.svg b/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.svg deleted file mode 100644 index 6fbb7a40e8..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/multitenancy/images/multitenacy_schema.svg +++ /dev/null @@ -1,126 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -CUSTOMER ( -ID BIGINT, -NAME VARCHAR, -... -) -CUSTOMER ( -ID BIGINT, -NAME VARCHAR, -... -) -Application - - - - - - - - - - - - - - - - diff --git a/documentation/src/main/docbook/integration/en-US/chapters/osgi/OSGi.xml b/documentation/src/main/docbook/integration/en-US/chapters/osgi/OSGi.xml deleted file mode 100644 index 90f5eb3f9c..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/osgi/OSGi.xml +++ /dev/null @@ -1,420 +0,0 @@ - - - - - - OSGi - - - The Open Services Gateway initiative (OSGi) specification describes a dynamic, modularized system. "Bundles" - (components) can be installed, activated, deactivated, and uninstalled during runtime, without requiring - a system restart. OSGi frameworks manage bundles' dependencies, packages, and classes. The framework - is also in charge of ClassLoading, managing visibility of packages between bundles. Further, service - registry and discovery is provided through a "whiteboard" pattern. - - - - OSGi environments present numerous, unique challenges. Most notably, the dynamic nature of available - bundles during runtime can require significant architectural considerations. Also, - architectures must allow the OSGi-specific ClassLoading and service registration/discovery. - - - - -
- OSGi Specification and Environment - - - Hibernate targets the OSGi 4.3 spec or later. It was necessary to start with 4.3, over 4.2, due to our - dependency on OSGi's BundleWiring for entity/mapping scanning. - - - - Hibernate supports three types of configurations within OSGi. - - - - Container-Managed JPA: - - - Unmanaged JPA: - - - Unmanaged Native: - - - -
- -
- hibernate-osgi - - - Rather than embed OSGi capabilities into hibernate-core, hibernate-entitymanager, and sub-modules, - hibernate-osgi was created. It's purposefully separated, isolating all OSGi dependencies. It provides an - OSGi-specific ClassLoader (aggregates the container's CL with core and entitymanager CLs), JPA persistence - provider, SF/EMF bootstrapping, entities/mappings scanner, and service management. - -
- -
- Container-Managed JPA - - - The Enterprise OSGi specification includes container-managed JPA. The container is responsible for - discovering persistence units and creating the EntityManagerFactory (one EMF per PU). - It uses the JPA provider (hibernate-osgi) that has registered itself with the OSGi - PersistenceProvider service. - - - - Quickstart tutorial project, demonstrating a container-managed JPA client bundle: - managed-jpa - - -
- Client bundle imports - - Your client bundle's manifest will need to import, at a minimum, - - - javax.persistence - - - - org.hibernate.proxy and javassist.util.proxy, due to - Hibernate's ability to return proxies for lazy initialization (Javassist enhancement - occurs on the entity's ClassLoader during runtime). - - - - -
- -
- JPA 2.1 - - - No Enterprise OSGi JPA container currently supports JPA 2.1 (the spec is not yet released). For - testing, the managed-jpa example makes use of - Brett's fork of Aries. To work - with Hibernate 4.3, clone the fork and build Aries JPA. - -
- -
- DataSource - - Typical Enterprise OSGi JPA usage includes a DataSource installed in the container. The client - bundle's persistence.xml uses the DataSource through JNDI. For an example, - see the QuickStart's DataSource: - datasource-h2.xml - The DataSource is then called out in - - persistence.xml's jta-data-source. - -
- -
- Bundle Ordering - - Hibernate currently requires fairly specific bundle activation ordering. See the managed-jpa - QuickStart's - features.xml - for the best supported sequence. - -
- -
Obtaining an EntityManager

The easiest, and most supported, method of obtaining an EntityManager utilizes OSGi's blueprint.xml. The container takes the name of your persistence unit, then injects an EntityManager instance into your given bean attribute. See the dpService bean in the managed-jpa QuickStart's blueprint.xml for an example.
-
- -
- Unmanaged JPA - - - Hibernate also supports the use of JPA through hibernate-entitymanager, unmanaged by the OSGi - container. The client bundle is responsible for managing the EntityManagerFactory and EntityManagers. - - - - Quickstart tutorial project, demonstrating an unmanaged JPA client bundle: - unmanaged-jpa - - -
- Client bundle imports - - Your client bundle's manifest will need to import, at a minimum, - - - javax.persistence - - - - org.hibernate.proxy and javassist.util.proxy, due to - Hibernate's ability to return proxies for lazy initialization (Javassist enhancement - occurs on the entity's ClassLoader during runtime) - - - - - JDBC driver package (example: org.h2) - - - - - org.osgi.framework, necessary to discover the EMF (described below) - - - - -
- -
- Bundle Ordering - - Hibernate currently requires fairly specific bundle activation ordering. See the unmanaged-jpa - QuickStart's - features.xml - for the best supported sequence. - -
- -
Obtaining an EntityManagerFactory

hibernate-osgi registers an OSGi service, using the JPA PersistenceProvider interface name, that bootstraps and creates an EntityManagerFactory specific to OSGi environments. It is VITAL that your EMF be obtained through the service, rather than creating it manually. The service handles the OSGi ClassLoader, discovered extension points, scanning, etc. Manually creating an EntityManagerFactory is guaranteed to NOT work during runtime!

For an example of how to discover and use the service, see the unmanaged-jpa QuickStart's HibernateUtil.java.
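The referenced HibernateUtil is not reproduced here; the following is an illustrative sketch of discovering the provider service from a client bundle (the persistence-unit name "my-persistence-unit" is a placeholder):

import javax.persistence.EntityManagerFactory;
import javax.persistence.spi.PersistenceProvider;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.ServiceReference;

public class HibernateUtil {
    private static EntityManagerFactory emf;

    public static synchronized EntityManagerFactory getEntityManagerFactory() {
        if ( emf == null ) {
            BundleContext context = FrameworkUtil.getBundle( HibernateUtil.class ).getBundleContext();
            // Look up the PersistenceProvider service registered by hibernate-osgi.
            ServiceReference serviceReference = context.getServiceReference( PersistenceProvider.class.getName() );
            PersistenceProvider provider = (PersistenceProvider) context.getService( serviceReference );
            emf = provider.createEntityManagerFactory( "my-persistence-unit", null );
        }
        return emf;
    }
}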
-
- -
- Unmanaged Native - - - Native Hibernate use is also supported. The client bundle is responsible for managing the - SessionFactory and Sessions. - - - - Quickstart tutorial project, demonstrating an unmanaged native client bundle: - unmanaged-native - - -
- Client bundle imports - - Your client bundle's manifest will need to import, at a minimum, - - - javax.persistence - - - - org.hibernate.proxy and javassist.util.proxy, due to - Hibernate's ability to return proxies for lazy initialization (Javassist enhancement - occurs on the entity's ClassLoader during runtime) - - - - - JDBC driver package (example: org.h2) - - - - - org.osgi.framework, necessary to discover the SF (described below) - - - - - org.hibernate.* packages, as necessary (ex: cfg, criterion, service, etc.) - - - - -
- -
- Bundle Ordering - - Hibernate currently requires fairly specific bundle activation ordering. See the unmanaged-native - QuickStart's - features.xml - for the best supported sequence. - -
- -
Obtaining a SessionFactory

hibernate-osgi registers an OSGi service, using the SessionFactory interface name, that bootstraps and creates a SessionFactory specific to OSGi environments. It is VITAL that your SF be obtained through the service, rather than creating it manually. The service handles the OSGi ClassLoader, discovered extension points, scanning, etc. Manually creating a SessionFactory is guaranteed to NOT work during runtime!

For an example of how to discover and use the service, see the unmanaged-native QuickStart's HibernateUtil.java.
-
- -
Optional Modules

The unmanaged-native QuickStart project demonstrates the use of optional Hibernate modules. Each module adds additional dependency bundles that must first be activated (see features.xml). As of ORM 4.2, Envers is fully supported. Support for C3P0, Proxool, EhCache, and Infinispan was added in 4.3; however, none of their 3rd-party libraries currently work in OSGi (lots of ClassLoader problems, etc.). We're tracking the issues in JIRA.
- -
- Extension Points - - - Multiple contracts exist to allow applications to integrate with and extend Hibernate capabilities. Most - apps utilize JDK services to provide their implementations. hibernate-osgi supports the same - extensions through OSGi services. Implement and register them in any of the three configurations. - hibernate-osgi will discover and integrate them during EMF/SF bootstrapping. Supported extension points - are as follows. The specified interface should be used during service registration. - - - - org.hibernate.integrator.spi.Integrator (as of 4.2) - - - org.hibernate.boot.registry.selector.StrategyRegistrationProvider (as of 4.3) - - - org.hibernate.boot.model.TypeContributor (as of 4.3) - - - JTA's javax.transaction.TransactionManager and - javax.transaction.UserTransaction (as of 4.2), however these are typically - provided by the OSGi container. - - - - - - The easiest way to register extension point implementations is through a blueprint.xml - file. Add OSGI-INF/blueprint/blueprint.xml to your classpath. Envers' blueprint - is a great example: - - - - Example extension point registrations in blueprint.xml - - - - - Extension points can also be registered programmatically with - BundleContext#registerService, typically within your - BundleActivator#start. - -
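As a rough sketch of the programmatic route, registering a custom Integrator from a bundle activator (MyIntegrator stands in for your own implementation):

import org.hibernate.integrator.spi.Integrator;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
    @Override
    public void start(BundleContext context) throws Exception {
        // hibernate-osgi discovers this service while bootstrapping the EMF/SF.
        context.registerService( Integrator.class.getName(), new MyIntegrator(), null );
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        // nothing to clean up in this sketch
    }
}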
- -
- Caveats - - - - - Technically, multiple persistence units are supported by Enterprise OSGi JPA and unmanaged - Hibernate JPA use. However, we cannot currently support this in OSGi. In Hibernate 4, only one - instance of the OSGi-specific ClassLoader is used per Hibernate bundle, mainly due to heavy use of - static TCCL utilities. We hope to support one OSGi ClassLoader per persistence unit in - Hibernate 5. - - - - - Scanning is supported to find non-explicitly listed entities and mappings. However, they MUST be - in the same bundle as your persistence unit (fairly typical anyway). Our OSGi ClassLoader only - considers the "requesting bundle" (hence the requirement on using services to create EMF/SF), - rather than attempting to scan all available bundles. This is primarily for versioning - considerations, collision protections, etc. - - - - - Some containers (ex: Aries) always return true for - PersistenceUnitInfo#excludeUnlistedClasses, - even if your persistence.xml explicitly has exclude-unlisted-classes set - to false. They claim it's to protect JPA providers from having to implement - scanning ("we handle it for you"), even though we still want to support it in many cases. The work - around is to set hibernate.archive.autodetection to, for example, - hbm,class. This tells hibernate to ignore the excludeUnlistedClasses value and - scan for *.hbm.xml and entities regardless. - - - - - Scanning does not currently support annotated packages on package-info.java. - - - - - Currently, Hibernate OSGi is primarily tested using Apache Karaf and Apache Aries JPA. Additional - testing is needed with Equinox, Gemini, and other container providers. - - - - - Hibernate ORM has many dependencies that do not currently provide OSGi manifests. - The QuickStart tutorials make heavy use of 3rd party bundles (SpringSource, ServiceMix) or the - wrap:... operator. - - - - - As previously mentioned, bundle activation is currently order specific. See the QuickStart - tutorials' features.xml for example sequences. - - - - - No Enterprise OSGi JPA container currently supports JPA 2.1 (the spec is not yet released). For - testing, the managed-jpa example makes use of - Brett's fork of Aries. To work - with Hibernate 4.3, clone the fork and build Aries JPA. - - - -
- -
\ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/osgi/extras/extension_point_blueprint.xml b/documentation/src/main/docbook/integration/en-US/chapters/osgi/extras/extension_point_blueprint.xml deleted file mode 100644 index e9a4452914..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/osgi/extras/extension_point_blueprint.xml +++ /dev/null @@ -1,17 +0,0 @@ - - - - - - - - - - \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/Persistence_Context.xml b/documentation/src/main/docbook/integration/en-US/chapters/pc/Persistence_Context.xml deleted file mode 100644 index ef4f98fafe..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/Persistence_Context.xml +++ /dev/null @@ -1,356 +0,0 @@ - - - - - - - Persistence Contexts - - - - Both the org.hibernate.Session API and - javax.persistence.EntityManager API represent a context for dealing with - persistent data. This concept is called a persistence context. Persistent data has a - state in relation to both a persistence context and the underlying database. - - - - Entity states - - - new, or transient - the entity has just been instantiated and is - not associated with a persistence context. It has no persistent representation in the database and no - identifier value has been assigned. - - - - - managed, or persistent - the entity has an associated identifier - and is associated with a persistence context. - - - - - detached - the entity has an associated identifier, but is no longer associated with - a persistence context (usually because the persistence context was closed or the instance was evicted - from the context) - - - - - removed - the entity has an associated identifier and is associated with a persistence - context, however it is scheduled for removal from the database. - - - - - - - - In Hibernate native APIs, the persistence context is defined as the - org.hibernate.Session. In JPA, the persistence context is defined by - javax.persistence.EntityManager. Much of the - org.hibernate.Session and - javax.persistence.EntityManager methods deal with moving entities between these - states. - - -
Making entities persistent

Once you've created a new entity instance (using the standard new operator), it is in the new state. You can make it persistent by associating it with either an org.hibernate.Session or a javax.persistence.EntityManager.

Example of making an entity persistent

org.hibernate.Session also has a method named persist which follows the exact semantics defined in the JPA specification for the persist method. It is this method on org.hibernate.Session to which the Hibernate javax.persistence.EntityManager implementation delegates.

If the DomesticCat entity type has a generated identifier, the value is associated with the instance when save or persist is called. If the identifier is not automatically generated, the application-assigned (usually natural) key value has to be set on the instance before save or persist is called.
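The referenced listings are not included here; a minimal sketch of both APIs, assuming the DomesticCat entity named below exposes a name property, might look like:

// Native API: the new instance becomes managed and is inserted when the session is flushed.
DomesticCat fritz = new DomesticCat();
fritz.setName( "Fritz" );
session.save( fritz );

// JPA: the equivalent operation on an EntityManager.
DomesticCat whiskers = new DomesticCat();
whiskers.setName( "Whiskers" );
entityManager.persist( whiskers );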
- -
- Deleting entities - - Entities can also be deleted. - - - Example of deleting an entity - - - - - It is important to note that Hibernate itself can handle deleting detached state. JPA, however, disallows - it. The implication here is that the entity instance passed to the - org.hibernate.Session delete method can be either - in managed or detached state, while the entity instance passed to remove on - javax.persistence.EntityManager must be in managed state. - -
- -
Obtain an entity reference without initializing its data

Sometimes referred to as lazy loading, the ability to obtain a reference to an entity without having to load its data is hugely important. The most common case is the need to create an association between an entity and another, existing entity.

Example of obtaining an entity reference without initializing its data

The above works on the assumption that the entity is defined to allow lazy loading, generally through the use of runtime proxies. In both cases an exception will be thrown later if the given entity does not refer to actual database state, when the application attempts to use the returned proxy in any way that requires access to its data.
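A sketch of the idea, using hypothetical Customer and Order entities (not part of this chapter's examples):

// Native API: byId(...).getReference(...) returns a proxy without hitting the database.
Customer customer = (Customer) session.byId( Customer.class ).getReference( customerId );
Order order = new Order();
order.setCustomer( customer );
session.save( order );

// JPA equivalent
Customer customerReference = entityManager.getReference( Customer.class, customerId );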
- -
Obtain an entity with its data initialized

It is also quite common to want to obtain an entity along with its data, for display for example.

Example of obtaining an entity reference with its data initialized

In both cases, null is returned if no matching database row was found.
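A sketch under the same assumptions as the previous section:

// Native API: load() returns the initialized entity, or null if no row exists.
Customer customer = (Customer) session.byId( Customer.class ).load( customerId );

// JPA equivalent
Customer sameCustomer = entityManager.find( Customer.class, customerId );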
- -
Obtain an entity by natural-id

In addition to loading by identifier, Hibernate allows applications to load by a declared natural identifier.

Example of simple natural-id access

Example of natural-id access

Just as we saw above, accessing entity data by natural-id allows both the load and getReference forms, with the same semantics.

Accessing persistent data by identifier and by natural-id is consistent in the Hibernate API. Each defines the same two data access methods:

getReference - Should be used in cases where the identifier is assumed to exist, where non-existence would be an actual error. Should never be used to test existence. That is because this method will prefer to create and return a proxy if the data is not already associated with the Session, rather than hit the database. The quintessential use-case for using this method is to create foreign-key based associations.

load - Will return the persistent data associated with the given identifier value, or null if that identifier does not exist.

In addition to those two methods, each also defines a variant accepting an org.hibernate.LockOptions argument. Locking is discussed in a separate chapter.
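A sketch of both forms, reusing the Employee entity (with its userid natural-id) from the fetching chapter; the userid value is a placeholder:

// Simple natural-id: the entity declares exactly one @NaturalId attribute.
Employee employee = (Employee) session.bySimpleNaturalId( Employee.class )
        .load( "jdoe" );

// General natural-id: name each attribute that makes up the natural identifier.
Employee sameEmployee = (Employee) session.byNaturalId( Employee.class )
        .using( "userid", "jdoe" )
        .load();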
- -
Refresh entity state

You can reload an entity instance and its collections at any time.

Example of refreshing entity state

One case where this is useful is when it is known that the database state has changed since the data was read. Refreshing allows the current database state to be pulled into the entity instance and the persistence context.

Another case where this might be useful is when database triggers are used to initialize some of the properties of the entity. Note that only the entity instance and its collections are refreshed unless you specify REFRESH as a cascade style of any associations. However, please note that Hibernate has the capability to handle this automatically through its notion of generated properties; see the discussion of generated properties for more information.
- -
- Modifying managed/persistent state - - - Entities in managed/persistent state may be manipulated by the application and any changes will be - automatically detected and persisted when the persistence context is flushed. There is no need to call a - particular method to make your modifications persistent. - - - - Example of modifying managed state - - - -
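For example, a minimal sketch assuming a Cat entity and a catId value in scope:

Cat cat = session.get( Cat.class, catId );   // cat is now managed
cat.setName( "Garfield" );                   // the change is detected automatically
session.flush();                             // generally not needed explicitly; flushing happens at commit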
- -
- Working with detached data - - Detachment is the process of working with data outside the scope of any persistence context. Data becomes detached in a number of ways. Once the persistence context is closed, all data that was associated with it becomes detached. Clearing the persistence context has the same effect. Evicting a particular entity from the persistence context makes it detached. And finally, serialization will make the deserialized form detached (the original instance is still managed). - - Detached data can still be manipulated; however, the persistence context will no longer automatically know about these modifications, and the application will need to intervene to make the changes persistent. -
- Reattaching detached data - - Reattachment is the process of taking an incoming entity instance that is in detached state and re-associating it with the current persistence context. - - JPA does not provide for this model. This is only available through the Hibernate org.hibernate.Session. - - Example of reattaching a detached entity - - The method name update is a bit misleading here. It does not mean that an SQL UPDATE is immediately performed. It does, however, mean that an SQL UPDATE will be performed when the persistence context is flushed, since Hibernate does not know the entity's previous state against which to compare for changes, unless the entity is mapped with select-before-update, in which case Hibernate will pull the current state from the database and check whether an update is needed. - - Provided the entity is detached, update and saveOrUpdate operate exactly the same. -
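A minimal sketch, assuming someDetachedCat is a detached Cat instance:

// schedules an SQL UPDATE to be executed when the persistence context is flushed
session.update( someDetachedCat );
// or, when it is unknown whether the instance is transient or detached:
session.saveOrUpdate( someDetachedCat );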
- -
- Merging detached data - - Merging is the process of taking an incoming entity instance that is in detached state and copying its data over onto a new instance that is in managed state. - - Visualizing merge - - That is not exactly what happens, but it's a good visualization. - - Example of merging a detached entity - -
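A minimal sketch, again assuming someDetachedCat is a detached Cat instance:

// JPA
Cat managed = entityManager.merge( someDetachedCat );
// Hibernate native API
Cat alsoManaged = (Cat) session.merge( someDetachedCat );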
- -
- - -
- Checking persistent state - - An application can verify the state of entities and collections in relation to the persistence context. - - Examples of verifying managed state - - Examples of verifying laziness - - In JPA there is an alternative means to check laziness using the following javax.persistence.PersistenceUtil pattern. However, the javax.persistence.PersistenceUnitUtil is recommended wherever possible. - - Alternative JPA means to verify laziness - -
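A minimal sketch of these checks, assuming cat and customer are entity instances from earlier examples:

// is the instance associated with the current persistence context?
assert session.contains( cat );
assert entityManager.contains( cat );

// has the lazy state been fetched yet?
if ( Hibernate.isInitialized( customer.getOrders() ) ) {
    // safe to iterate the orders without triggering a fetch
}

PersistenceUnitUtil jpaUtil = entityManager.getEntityManagerFactory().getPersistenceUnitUtil();
if ( jpaUtil.isLoaded( customer, "detailedBio" ) ) {
    // the detailedBio property is already loaded
}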
- -
- Accessing Hibernate APIs from JPA - - JPA defines an incredibly useful method to allow applications access to the APIs of the underlying provider. - - - Usage of EntityManager.unwrap - - -
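For example:

Session session = entityManager.unwrap( Session.class );
SessionImplementor sessionImplementor = entityManager.unwrap( SessionImplementor.class );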
-
\ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithHibernate.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithHibernate.java deleted file mode 100644 index 1c166cdca0..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithHibernate.java +++ /dev/null @@ -1,9 +0,0 @@ -if ( Hibernate.isInitialized( customer.getAddress() ) { - //display address if loaded -} -if ( Hibernate.isInitialized( customer.getOrders()) ) ) { - //display orders if loaded -} -if (Hibernate.isPropertyInitialized( customer, "detailedBio" ) ) { - //display property detailedBio if loaded -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA.java deleted file mode 100644 index 50751d25d3..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA.java +++ /dev/null @@ -1,10 +0,0 @@ -javax.persistence.PersistenceUnitUtil jpaUtil = entityManager.getEntityManagerFactory().getPersistenceUnitUtil(); -if ( jpaUtil.isLoaded( customer.getAddress() ) { - //display address if loaded -} -if ( jpaUtil.isLoaded( customer.getOrders()) ) ) { - //display orders if loaded -} -if (jpaUtil.isLoaded( customer, "detailedBio" ) ) { - //display property detailedBio if loaded -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA2.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA2.java deleted file mode 100644 index 9581b5d0e2..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/CheckingLazinessWithJPA2.java +++ /dev/null @@ -1,10 +0,0 @@ -javax.persistence.PersistenceUtil jpaUtil = javax.persistence.Persistence.getPersistenceUtil(); -if ( jpaUtil.isLoaded( customer.getAddress() ) { - //display address if loaded -} -if ( jpaUtil.isLoaded( customer.getOrders()) ) ) { - //display orders if loaded -} -if (jpaUtil.isLoaded(customer, "detailedBio") ) { - //display property detailedBio if loaded -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithEM.java deleted file mode 100644 index 12aaaff765..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithEM.java +++ /dev/null @@ -1 +0,0 @@ -assert entityManager.contains( cat ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithSession.java deleted file mode 100644 index acb40fed3e..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ContainsWithSession.java +++ /dev/null @@ -1 +0,0 @@ -assert session.contains( cat ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithEM.java deleted file mode 100644 index 0938f5a3bf..0000000000 --- 
a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithEM.java +++ /dev/null @@ -1 +0,0 @@ -entityManager.remove( fritz ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithSession.java deleted file mode 100644 index 25831ee346..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/DeletingWithSession.java +++ /dev/null @@ -1 +0,0 @@ -session.delete( fritz ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithEM.java deleted file mode 100644 index e45609a42f..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithEM.java +++ /dev/null @@ -1,2 +0,0 @@ -Book book = new Book(); -book.setAuthor( entityManager.getReference( Author.class, authorId ) ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithSession.java deleted file mode 100644 index d789191dc9..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/GetReferenceWithSession.java +++ /dev/null @@ -1,2 +0,0 @@ -Book book = new Book(); -book.setAuthor( session.byId( Author.class ).getReference( authorId ) ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithEM.java deleted file mode 100644 index 3c3e56f9ff..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithEM.java +++ /dev/null @@ -1 +0,0 @@ -entityManager.find( Author.class, authorId ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithSession.java deleted file mode 100644 index d9801ef3e6..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/LoadWithSession.java +++ /dev/null @@ -1 +0,0 @@ -session.byId( Author.class ).load( authorId ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithEM.java deleted file mode 100644 index fc6d6f92bb..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithEM.java +++ /dev/null @@ -1,5 +0,0 @@ -DomesticCat fritz = new DomesticCat(); -fritz.setColor( Color.GINGER ); -fritz.setSex( 'M' ); -fritz.setName( "Fritz" ); -entityManager.persist( fritz ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithSession.java deleted file mode 100644 index 05b85c02ff..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MakingPersistentWithSession.java +++ /dev/null @@ -1,5 +0,0 @@ -DomesticCat fritz = new DomesticCat(); 
-fritz.setColor( Color.GINGER ); -fritz.setSex( 'M' ); -fritz.setName( "Fritz" ); -session.save( fritz ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithEM.java deleted file mode 100644 index 49544b6502..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithEM.java +++ /dev/null @@ -1,3 +0,0 @@ -Cat cat = entityManager.find( Cat.class, catId ); -cat.setName( "Garfield" ); -entityManager.flush(); // generally this is not explicitly needed \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithSession.java deleted file mode 100644 index 5e707244c2..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ManagedUpdateWithSession.java +++ /dev/null @@ -1,3 +0,0 @@ -Cat cat = session.get( Cat.class, catId ); -cat.setName( "Garfield" ); -session.flush(); // generally this is not explicitly needed \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithEM.java deleted file mode 100644 index 340af5cddf..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithEM.java +++ /dev/null @@ -1 +0,0 @@ -Cat theManagedInstance = entityManager.merge( someDetachedCat ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithSession.java deleted file mode 100644 index adfad9d69e..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/MergeWithSession.java +++ /dev/null @@ -1 +0,0 @@ -Cat theManagedInstance = session.merge( someDetachedCat ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/NaturalIdLoading.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/NaturalIdLoading.java deleted file mode 100644 index cd5cd43125..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/NaturalIdLoading.java +++ /dev/null @@ -1,31 +0,0 @@ -import java.lang.String; - -@Entity -public class User { - @Id - @GeneratedValue - Long id; - - @NaturalId - String system; - - @NaturalId - String userName; - - ... -} - -// use getReference() to create associations... 
-Resource aResource = (Resource) session.byId( Resource.class ).getReference( 123 ); -User aUser = (User) session.byNaturalId( User.class ) - .using( "system", "prod" ) - .using( "userName", "steve" ) - .getReference(); -aResource.assignTo( user ); - - -// use load() to pull initialzed data -return session.byNaturalId( User.class ) - .using( "system", "prod" ) - .using( "userName", "steve" ) - .load(); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ReattachingWithSession1.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ReattachingWithSession1.java deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ReattachingWithSession2.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ReattachingWithSession2.java deleted file mode 100644 index a26c2c98b0..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/ReattachingWithSession2.java +++ /dev/null @@ -1 +0,0 @@ -session.saveOrUpdate( someDetachedCat ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithEM.java deleted file mode 100644 index 8258af31e6..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithEM.java +++ /dev/null @@ -1,3 +0,0 @@ -Cat cat = entityManager.find( Cat.class, catId ); -... -entityManager.refresh( cat ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithSession.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithSession.java deleted file mode 100644 index 3436fe3f88..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/RefreshWithSession.java +++ /dev/null @@ -1,3 +0,0 @@ -Cat cat = session.get( Cat.class, catId ); -... -session.refresh( cat ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/SimpleNaturalIdLoading.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/SimpleNaturalIdLoading.java deleted file mode 100644 index a07ecf789b..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/SimpleNaturalIdLoading.java +++ /dev/null @@ -1,20 +0,0 @@ -@Entity -public class User { - @Id - @GeneratedValue - Long id; - - @NaturalId - String userName; - - ... -} - -// use getReference() to create associations... 
-Resource aResource = (Resource) session.byId( Resource.class ).getReference( 123 ); -User aUser = (User) session.bySimpleNaturalId( User.class ).getReference( "steve" ); -aResource.assignTo( user ); - - -// use load() to pull initialzed data -return session.bySimpleNaturalId( User.class ).load( "steve" ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/UnwrapWithEM.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/UnwrapWithEM.java deleted file mode 100644 index d0c4fac2aa..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/UnwrapWithEM.java +++ /dev/null @@ -1,2 +0,0 @@ -Session session = entityManager.unwrap( Session.class ); -SessionImplementor sessionImplementor = entityManager.unwrap( SessionImplementor.class ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/VisualizingMerge.java b/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/VisualizingMerge.java deleted file mode 100644 index 3f8697a4ae..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/pc/extras/VisualizingMerge.java +++ /dev/null @@ -1,5 +0,0 @@ -Object detached = ...; -Object managed = entityManager.find( detached.getClass(), detached.getId() ); -managed.setXyz( detached.getXyz() ); -... -return managed; \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/Criteria.xml b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/Criteria.xml deleted file mode 100644 index dfce912537..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/Criteria.xml +++ /dev/null @@ -1,413 +0,0 @@ - - - - - - Criteria - - - Criteria queries offer a type-safe alternative to HQL, JPQL and native-sql queries. - - - - - Hibernate offers an older, legacy org.hibernate.Criteria API which should be - considered deprecated. No feature development will target those APIs. Eventually, Hibernate-specific - criteria features will be ported as extensions to the JPA - javax.persistence.criteria.CriteriaQuery. For details on the - org.hibernate.Criteria API, see . - - - This chapter will focus on the JPA APIs for declaring type-safe criteria queries. - - - - - - Criteria queries are a programmatic, type-safe way to express a query. They are type-safe in terms of - using interfaces and classes to represent various structural parts of a query such as the query itself, - or the select clause, or an order-by, etc. They can also be type-safe in terms of referencing attributes - as we will see in a bit. Users of the older Hibernate org.hibernate.Criteria - query API will recognize the general approach, though we believe the JPA API to be superior - as it represents a clean look at the lessons learned from that API. - - - - Criteria queries are essentially an object graph, where each part of the graph represents an increasing - (as we navigate down this graph) more atomic part of query. The first step in performing a criteria query - is building this graph. The javax.persistence.criteria.CriteriaBuilder - interface is the first thing with which you need to become acquainted to begin using criteria queries. Its - role is that of a factory for all the individual pieces of the criteria. 
You obtain a javax.persistence.criteria.CriteriaBuilder instance by calling the getCriteriaBuilder method of either javax.persistence.EntityManagerFactory or javax.persistence.EntityManager. - - The next step is to obtain a javax.persistence.criteria.CriteriaQuery. This is accomplished using one of the three methods on javax.persistence.criteria.CriteriaBuilder intended for this purpose: - - Each serves a different purpose depending on the expected type of the query results. - - Chapter 6 Criteria API of the JPA Specification already contains a decent amount of reference material pertaining to the various parts of a criteria query. So rather than duplicate all that content here, let's instead look at some of the more widely anticipated usages of the API. - -
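A minimal sketch of obtaining the builder and creating each flavor of query (entityManager is assumed to be an open EntityManager):

CriteriaBuilder builder = entityManager.getCriteriaBuilder();

CriteriaQuery<Person> typedCriteria = builder.createQuery( Person.class );
CriteriaQuery<Tuple> tupleCriteria = builder.createTupleQuery();
CriteriaQuery<Object> untypedCriteria = builder.createQuery();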
- Typed criteria queries - - The type of the criteria query (aka the <T>) indicates the expected types in the query result. This might be an entity, an Integer, or any other object. -
- Selecting an entity - - - This is probably the most common form of query. The application wants to select entity instances. - - - - Selecting the root entity - - - - - The example uses createQuery passing in the Person - class reference as the results of the query will be Person objects. - - - - - The call to the CriteriaQuery.select method in this example is - unnecessary because personRoot will be the implied selection since we - have only a single query root. It was done here only for completeness of an example. - - - The Person_.eyeColor reference is an example of the static form of JPA - metamodel reference. We will use that form exclusively in this chapter. See - the documentation for the Hibernate JPA Metamodel Generator for additional details on - the JPA static metamodel. - - -
- -
- Selecting an expression - - - The simplest form of selecting an expression is selecting a particular attribute from an entity. - But this expression might also represent an aggregation, a mathematical operation, etc. - - - - Selecting an attribute - - - - - In this example, the query is typed as java.lang.Integer because that - is the anticipated type of the results (the type of the Person#age attribute - is java.lang.Integer). Because a query might contain multiple references to - the Person entity, attribute references always need to be qualified. This is accomplished by the - Root#get method call. - -
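A minimal sketch, assuming the Person entity and its generated static metamodel Person_:

CriteriaQuery<Integer> criteria = builder.createQuery( Integer.class );
Root<Person> personRoot = criteria.from( Person.class );
criteria.select( personRoot.get( Person_.age ) );
criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), "brown" ) );

List<Integer> ages = entityManager.createQuery( criteria ).getResultList();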
- - -
- Selecting multiple values - - - There are actually a few different ways to select multiple values using criteria queries. We - will explore 2 options here, but an alternative recommended approach is to use tuples as described in - . Or consider a wrapper query; see - for details. - - - - Selecting an array - - - - - Technically this is classified as a typed query, but you can see from handling the results that - this is sort of misleading. Anyway, the expected result type here is an array. - - - - The example then uses the array method of - javax.persistence.criteria.CriteriaBuilder which explicitly - combines individual selections into a - javax.persistence.criteria.CompoundSelection. - - - - Selecting an array (2) - - - - - Just as we saw in we have a typed criteria - query returning an Object array. Both queries are functionally equivalent. This second example - uses the multiselect method which behaves slightly differently based on - the type given when the criteria query was first built, but in this case it says to select and - return an Object[]. - -
- -
- Selecting a wrapper - - Another alternative to is to select an object that will wrap the multiple values. Going back to the example query there, rather than returning an array of [Person#id, Person#age], declare a class that holds these values and return that instead. - - Selecting a wrapper - - First we see the simple definition of the wrapper object we will be using to wrap our result values. Specifically notice the constructor and its argument types. Since we will be returning PersonWrapper objects, we use PersonWrapper as the type of our criteria query. - - This example illustrates the use of the javax.persistence.criteria.CriteriaBuilder method construct, which is used to build a wrapper expression. For every row in the result we are saying we would like a PersonWrapper instantiated with the remaining arguments by the matching constructor. This wrapper expression is then passed as the select. -
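A minimal sketch of the wrapper approach, assuming the PersonWrapper class described above with a (Long id, Integer age) constructor:

CriteriaQuery<PersonWrapper> criteria = builder.createQuery( PersonWrapper.class );
Root<Person> personRoot = criteria.from( Person.class );
criteria.select(
    builder.construct(
        PersonWrapper.class,
        personRoot.get( Person_.id ),
        personRoot.get( Person_.age )
    )
);
criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), "brown" ) );

List<PersonWrapper> people = entityManager.createQuery( criteria ).getResultList();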
-
- -
- Tuple criteria queries - - A better approach to is to use either a wrapper (which we just saw in ) or the javax.persistence.Tuple contract. - - Selecting a tuple - - This example illustrates accessing the query results through the javax.persistence.Tuple interface. The example uses the explicit createTupleQuery of javax.persistence.criteria.CriteriaBuilder. An alternate approach is to use createQuery passing Tuple.class. - - Again we see the use of the multiselect method, just like in . The difference here is that the type of the javax.persistence.criteria.CriteriaQuery was defined as javax.persistence.Tuple, so the compound selections in this case are interpreted to be the tuple elements. - - The javax.persistence.Tuple contract provides three forms of access to the underlying elements: - - typed - The example illustrates this form of access in the tuple.get( idPath ) and tuple.get( agePath ) calls. This allows typed access to the underlying tuple values based on the javax.persistence.TupleElement expressions used to build the criteria. - - positional - Allows access to the underlying tuple values based on position. The simple Object get(int position) form is very similar to the access illustrated in and . The <X> X get(int position, Class<X> type) form allows typed positional access, based on the explicitly supplied type to which the tuple value must be type-assignable. - - aliased - Allows access to the underlying tuple values based on an (optionally) assigned alias. The example query did not apply an alias. An alias would be applied via the alias method on javax.persistence.criteria.Selection. Just like positional access, there is both an untyped (Object get(String alias)) form and a typed (<X> X get(String alias, Class<X> type)) form. - -
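A minimal sketch of the tuple form, under the same Person/Person_ assumptions:

CriteriaQuery<Tuple> criteria = builder.createTupleQuery();
Root<Person> personRoot = criteria.from( Person.class );
Path<Long> idPath = personRoot.get( Person_.id );
Path<Integer> agePath = personRoot.get( Person_.age );
criteria.multiselect( idPath, agePath );

for ( Tuple tuple : entityManager.createQuery( criteria ).getResultList() ) {
    Long id = tuple.get( idPath );                 // typed access via the TupleElement
    Integer age = tuple.get( 1, Integer.class );   // typed positional access
}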
- -
- FROM clause - -
- JPA Specification, section 6.5.2 Query Roots, pg 262 - - - A CriteriaQuery object defines a query over one or more entity, embeddable, or basic abstract - schema types. The root objects of the query are entities, from which the other types are reached - by navigation. - -
- - - - All the individual parts of the FROM clause (roots, joins, paths) implement the - javax.persistence.criteria.From interface. - - - -
- Roots - - - Roots define the basis from which all joins, paths and attributes are available in the query. - A root is always an entity type. Roots are defined and added to the criteria by the overloaded - from methods on - javax.persistence.criteria.CriteriaQuery: - - - - - - Adding a root - - - - - Criteria queries may define multiple roots, the effect of which is to create a cartesian - product between the newly added root and the others. Here is an example matching all single - men and all single women: - - - - Adding multiple roots - - -
- -
- Joins - - - Joins allow navigation from other javax.persistence.criteria.From - to either association or embedded attributes. Joins are created by the numerous overloaded - join methods of the - javax.persistence.criteria.From interface - - - - Example with Embedded and ManyToOne - - - - - Example with Collections - - -
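A minimal sketch, assuming Person has a collection-valued orders association and Order has lineItems, with the corresponding static metamodel classes:

CriteriaQuery<Person> criteria = builder.createQuery( Person.class );
Root<Person> personRoot = criteria.from( Person.class );
// Person#orders is a collection-valued association
Join<Person, Order> orders = personRoot.join( Person_.orders );
Join<Order, LineItem> lineItems = orders.join( Order_.lineItems );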
- -
- Fetches - - - Just like in HQL and JPQL, criteria queries can specify that associated data be fetched along - with the owner. Fetches are created by the numerous overloaded fetch - methods of the javax.persistence.criteria.From interface. - - - - Example with Embedded and ManyToOne - - - - - - Technically speaking, embedded attributes are always fetched with their owner. However in - order to define the fetching of Address#country we needed a - javax.persistence.criteria.Fetch for its parent path. - - - - - Example with Collections - - -
-
- -
- Path expressions - - - Roots, joins and fetches are themselves paths as well. - - -
- -
- Using parameters - - - Using parameters - - - - - Use the parameter method of - javax.persistence.criteria.CriteriaBuilder to obtain a parameter - reference. Then use the parameter reference to bind the parameter value to the - javax.persistence.Query - -
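A minimal sketch, again assuming the Person entity and its static metamodel:

CriteriaQuery<Person> criteria = builder.createQuery( Person.class );
Root<Person> personRoot = criteria.from( Person.class );
ParameterExpression<String> eyeColorParam = builder.parameter( String.class );
criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), eyeColorParam ) );

TypedQuery<Person> query = entityManager.createQuery( criteria );
query.setParameter( eyeColorParam, "brown" );
List<Person> people = query.getResultList();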
- -
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/CriteriaBuilder_query_creation_snippet.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/CriteriaBuilder_query_creation_snippet.java deleted file mode 100644 index a4dc3ef01b..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/CriteriaBuilder_query_creation_snippet.java +++ /dev/null @@ -1,3 +0,0 @@ - CriteriaQuery createQuery(Class resultClass); -CriteriaQuery createTupleQuery(); -CriteriaQuery createQuery(); diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_fetch_example_embedded_and_many2one.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_fetch_example_embedded_and_many2one.java deleted file mode 100644 index fe68976760..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_fetch_example_embedded_and_many2one.java +++ /dev/null @@ -1,6 +0,0 @@ -CriteriaQuery personCriteria = builder.createQuery( Person.class ); -Root personRoot = person.from( Person.class ); -// Person.address is an embedded attribute -Fetch personAddress = personRoot.fetch( Person_.address ); -// Address.country is a ManyToOne -Fetch addressCountry = personAddress.fetch( Address_.country ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_fetch_example_plural.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_fetch_example_plural.java deleted file mode 100644 index 5a216e28ef..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_fetch_example_plural.java +++ /dev/null @@ -1,4 +0,0 @@ -CriteriaQuery<Person> personCriteria = builder.createQuery( Person.class ); -Root personRoot = person.from( Person.class ); -Fetch orders = personRoot.fetch( Person_.orders ); -Fetch orderLines = orders.fetch( Order_.lineItems ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_join_example_embedded_and_many2one.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_join_example_embedded_and_many2one.java deleted file mode 100644 index 478b649cac..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_join_example_embedded_and_many2one.java +++ /dev/null @@ -1,6 +0,0 @@ -CriteriaQuery personCriteria = builder.createQuery( Person.class ); -Root personRoot = person.from( Person.class ); -// Person.address is an embedded attribute -Join personAddress = personRoot.join( Person_.address ); -// Address.country is a ManyToOne -Join addressCountry = personAddress.join( Address_.country ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_join_example_plural.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_join_example_plural.java deleted file mode 100644 index 1840bad318..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_join_example_plural.java +++ /dev/null @@ -1,4 +0,0 @@ -CriteriaQuery personCriteria = builder.createQuery( Person.class ); -Root personRoot = person.from( Person.class ); -Join orders = personRoot.join( Person_.orders ); -Join 
orderLines = orders.join( Order_.lineItems ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_root_example.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_root_example.java deleted file mode 100644 index 0c73ad04ff..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_root_example.java +++ /dev/null @@ -1,3 +0,0 @@ -CriteriaQuery personCriteria = builder.createQuery( Person.class ); -// create and add the root -person.from( Person.class ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_root_example_multiple.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_root_example_multiple.java deleted file mode 100644 index e37cbc17a4..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_root_example_multiple.java +++ /dev/null @@ -1,12 +0,0 @@ -CriteriaQuery query = builder.createQuery(); -Root men = query.from( Person.class ); -Root women = query.from( Person.class ); -Predicate menRestriction = builder.and( - builder.equal( men.get( Person_.gender ), Gender.MALE ), - builder.equal( men.get( Person_.relationshipStatus ), RelationshipStatus.SINGLE ) -); -Predicate womenRestriction = builder.and( - builder.equal( women.get( Person_.gender ), Gender.FEMALE ), - builder.equal( women.get( Person_.relationshipStatus ), RelationshipStatus.SINGLE ) -); -query.where( builder.and( menRestriction, womenRestriction ) ); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_root_methods.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_root_methods.java deleted file mode 100644 index 4a9c915904..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/from_root_methods.java +++ /dev/null @@ -1,3 +0,0 @@ - Root from(Class); - - Root from(EntityType) \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/parameter_example.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/parameter_example.java deleted file mode 100644 index bcb72ff2b4..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/parameter_example.java +++ /dev/null @@ -1,9 +0,0 @@ -CriteriaQuery criteria = build.createQuery( Person.class ); -Root personRoot = criteria.from( Person.class ); -criteria.select( personRoot ); -ParameterExpression eyeColorParam = builder.parameter( String.class ); -criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), eyeColorParam ) ); - -TypedQuery query = em.createQuery( criteria ); -query.setParameter( eyeColorParam, "brown" ); -List people = query.getResultList(); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_attribute_example.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_attribute_example.java deleted file mode 100644 index 0667a435e9..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_attribute_example.java +++ /dev/null @@ -1,9 +0,0 @@ -CriteriaQuery criteria = builder.createQuery( Integer.class 
); -Root personRoot = criteria.from( Person.class ); -criteria.select( personRoot.get( Person_.age ) ); -criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), "brown" ) ); - -List ages = em.createQuery( criteria ).getResultList(); -for ( Integer age : ages ) { - ... -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_multiple_values_array.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_multiple_values_array.java deleted file mode 100644 index 7e3fefaa18..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_multiple_values_array.java +++ /dev/null @@ -1,13 +0,0 @@ -CriteriaQuery criteria = builder.createQuery( Object[].class ); -Root personRoot = criteria.from( Person.class ); -Path idPath = personRoot.get( Person_.id ); -Path agePath = personRoot.get( Person_.age ); -criteria.select( builder.array( idPath, agePath ) ); -criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), "brown" ) ); - -List valueArray = em.createQuery( criteria ).getResultList(); -for ( Object[] values : valueArray ) { - final Long id = (Long) values[0]; - final Integer age = (Integer) values[1]; - ... -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_multiple_values_array2.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_multiple_values_array2.java deleted file mode 100644 index 75c8feeaa3..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_multiple_values_array2.java +++ /dev/null @@ -1,13 +0,0 @@ -CriteriaQuery criteria = builder.createQuery( Object[].class ); -Root personRoot = criteria.from( Person.class ); -Path idPath = personRoot.get( Person_.id ); -Path agePath = personRoot.get( Person_.age ); -criteria.multiselect( idPath, agePath ); -criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), "brown" ) ); - -List valueArray = em.createQuery( criteria ).getResultList(); -for ( Object[] values : valueArray ) { - final Long id = (Long) values[0]; - final Integer age = (Integer) values[1]; - ... -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_root_entity_example.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_root_entity_example.java deleted file mode 100644 index c60921d116..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_root_entity_example.java +++ /dev/null @@ -1,9 +0,0 @@ -CriteriaQuery criteria = builder.createQuery( Person.class ); -Root personRoot = criteria.from( Person.class ); -criteria.select( personRoot ); -criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), "brown" ) ); - -List people = em.createQuery( criteria ).getResultList(); -for ( Person person : people ) { - ... 
-} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_tuple.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_tuple.java deleted file mode 100644 index dc651c1cf0..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_tuple.java +++ /dev/null @@ -1,13 +0,0 @@ -CriteriaQuery criteria = builder.createTupleQuery(); -Root personRoot = criteria.from( Person.class ); -Path idPath = personRoot.get( Person_.id ); -Path agePath = personRoot.get( Person_.age ); -criteria.multiselect( idPath, agePath ); -criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), "brown" ) ); - -List tuples = em.createQuery( criteria ).getResultList(); -for ( Tuple tuple : valueArray ) { - assert tuple.get( 0 ) == tuple.get( idPath ); - assert tuple.get( 1 ) == tuple.get( agePath ); - ... -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_wrapper.java b/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_wrapper.java deleted file mode 100644 index f82398829f..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_criteria/extras/select_wrapper.java +++ /dev/null @@ -1,27 +0,0 @@ -public class PersonWrapper { - private final Long id; - private final Integer age; - public PersonWrapper(Long id, Integer age) { - this.id = id; - this.age = age; - } - ... -} - -... - -CriteriaQuery criteria = builder.createQuery( PersonWrapper.class ); -Root personRoot = criteria.from( Person.class ); -criteria.select( - builder.construct( - PersonWrapper.class, - personRoot.get( Person_.id ), - personRoot.get( Person_.age ) - ) -); -criteria.where( builder.equal( personRoot.get( Person_.eyeColor ), "brown" ) ); - -List people = em.createQuery( criteria ).getResultList(); -for ( PersonWrapper person : people ) { - ... -} \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_native/Native_SQL.xml b/documentation/src/main/docbook/integration/en-US/chapters/query_native/Native_SQL.xml deleted file mode 100644 index 9d8cdcdc18..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_native/Native_SQL.xml +++ /dev/null @@ -1,1127 +0,0 @@ - - - - - - Native SQL Queries - - - You may also express queries in the native SQL dialect of your database. This is useful if you - want to utilize database specific features such as query hints or the CONNECT BY option in Oracle. - It also provides a clean migration path from a direct SQL/JDBC based application to Hibernate/JPA. - Hibernate also allows you to specify handwritten SQL (including stored procedures) for all - create, update, delete, and load operations. - - -
- Using a <literal>SQLQuery</literal> - - Execution of native SQL queries is controlled via the - SQLQuery interface, which is obtained by calling - Session.createSQLQuery(). The following sections - describe how to use this API for querying. - -
- Scalar queries - - The most basic SQL query is to get a list of scalars (values).

sess.createSQLQuery("SELECT * FROM CATS").list();
sess.createSQLQuery("SELECT ID, NAME, BIRTHDATE FROM CATS").list();

These will return a List of Object arrays (Object[]) with scalar values for each column in the CATS table. Hibernate will use ResultSetMetadata to deduce the actual order and types of the returned scalar values.

To avoid the overhead of using ResultSetMetadata, or simply to be more explicit about what is returned, one can use addScalar():

sess.createSQLQuery("SELECT * FROM CATS")
    .addScalar("ID", Hibernate.LONG)
    .addScalar("NAME", Hibernate.STRING)
    .addScalar("BIRTHDATE", Hibernate.DATE)

This query specified:
- the SQL query string
- the columns and types to return

This will return Object arrays, but now it will not use ResultSetMetadata; it will instead explicitly get the ID, NAME and BIRTHDATE columns as, respectively, a Long, a String and a Date from the underlying resultset. This also means that only these three columns will be returned, even though the query uses * and could return more than the three listed columns.

It is possible to leave out the type information for all or some of the scalars.

sess.createSQLQuery("SELECT * FROM CATS")
    .addScalar("ID", Hibernate.LONG)
    .addScalar("NAME")
    .addScalar("BIRTHDATE")

This is essentially the same query as before, but now ResultSetMetaData is used to determine the type of NAME and BIRTHDATE, whereas the type of ID is explicitly specified.

How the java.sql.Types returned from ResultSetMetaData are mapped to Hibernate types is controlled by the Dialect. If a specific type is not mapped, or does not result in the expected type, it is possible to customize it via calls to registerHibernateType in the Dialect. -
- -
- Entity queries - - The above queries were all about returning scalar values, - basically returning the "raw" values from the resultset. The following - shows how to get entity objects from a native sql query via - addEntity(). - - sess.createSQLQuery("SELECT * FROM CATS").addEntity(Cat.class); -sess.createSQLQuery("SELECT ID, NAME, BIRTHDATE FROM CATS").addEntity(Cat.class); - - - This query specified: - - - - the SQL query string - - - - the entity returned by the query - - - - Assuming that Cat is mapped as a class with the columns ID, NAME - and BIRTHDATE the above queries will both return a List where each - element is a Cat entity. - - If the entity is mapped with a many-to-one to - another entity it is required to also return this when performing the - native query, otherwise a database specific "column not found" error - will occur. The additional columns will automatically be returned when - using the * notation, but we prefer to be explicit as in the following - example for a many-to-one to a - Dog: - - sess.createSQLQuery("SELECT ID, NAME, BIRTHDATE, DOG_ID FROM CATS").addEntity(Cat.class); - - - This will allow cat.getDog() to function properly. -
- -
- Handling associations and collections - - It is possible to eagerly join in the Dog to avoid the possible extra roundtrip for initializing the proxy. This is done via the addJoin() method, which allows you to join in an association or collection.

sess.createSQLQuery("SELECT c.ID, NAME, BIRTHDATE, DOG_ID, D_ID, D_NAME FROM CATS c, DOGS d WHERE c.DOG_ID = d.D_ID")
    .addEntity("cat", Cat.class)
    .addJoin("cat.dog");

In this example, the returned Cats will have their dog property fully initialized without any extra roundtrip to the database. Notice that you added an alias name ("cat") to be able to specify the target property path of the join. It is possible to do the same eager joining for collections, e.g. if the Cat had a one-to-many to Dog instead.

sess.createSQLQuery("SELECT ID, NAME, BIRTHDATE, D_ID, D_NAME, CAT_ID FROM CATS c, DOGS d WHERE c.ID = d.CAT_ID")
    .addEntity("cat", Cat.class)
    .addJoin("cat.dogs");

At this stage you are reaching the limits of what is possible with native queries without starting to enhance the SQL queries to make them usable in Hibernate. Problems can arise when returning multiple entities of the same type or when the default alias/column names are not enough. -
- -
- Returning multiple entities - - Until now, the result set column names are assumed to be the same - as the column names specified in the mapping document. This can be - problematic for SQL queries that join multiple tables, since the same - column names can appear in more than one table. - - Column alias injection is needed in the following query (which - most likely will fail): - - sess.createSQLQuery("SELECT c.*, m.* FROM CATS c, CATS m WHERE c.MOTHER_ID = c.ID") - .addEntity("cat", Cat.class) - .addEntity("mother", Cat.class) - - - The query was intended to return two Cat instances per row: a cat - and its mother. The query will, however, fail because there is a - conflict of names; the instances are mapped to the same column names. - Also, on some databases the returned column aliases will most likely be - on the form "c.ID", "c.NAME", etc. which are not equal to the columns - specified in the mappings ("ID" and "NAME"). - - The following form is not vulnerable to column name - duplication: - - sess.createSQLQuery("SELECT {cat.*}, {m.*} FROM CATS c, CATS m WHERE c.MOTHER_ID = m.ID") - .addEntity("cat", Cat.class) - .addEntity("mother", Cat.class) - - - This query specified: - - - - the SQL query string, with placeholders for Hibernate to - inject column aliases - - - - the entities returned by the query - - - - The {cat.*} and {mother.*} notation used above is a shorthand for - "all properties". Alternatively, you can list the columns explicitly, - but even in this case Hibernate injects the SQL column aliases for each - property. The placeholder for a column alias is just the property name - qualified by the table alias. In the following example, you retrieve - Cats and their mothers from a different table (cat_log) to the one - declared in the mapping metadata. You can even use the property aliases - in the where clause. - - String sql = "SELECT ID as {c.id}, NAME as {c.name}, " + - "BIRTHDATE as {c.birthDate}, MOTHER_ID as {c.mother}, {mother.*} " + - "FROM CAT_LOG c, CAT_LOG m WHERE {c.mother} = c.ID"; - -List loggedCats = sess.createSQLQuery(sql) - .addEntity("cat", Cat.class) - .addEntity("mother", Cat.class).list() - - -
- Alias and property references - - In most cases the above alias injection is needed. For queries relating to more complex mappings, like composite properties, inheritance discriminators, collections etc., you can use specific aliases that allow Hibernate to inject the proper aliases. - - The following table shows the different ways you can use the alias injection. Please note that the alias names in the result are simply examples; each alias will have a unique and probably different name when used. - - Alias injection names -

Description | Syntax | Example
A simple property | {[aliasname].[propertyname]} | A_NAME as {item.name}
A composite property | {[aliasname].[componentname].[propertyname]} | CURRENCY as {item.amount.currency}, VALUE as {item.amount.value}
Discriminator of an entity | {[aliasname].class} | DISC as {item.class}
All properties of an entity | {[aliasname].*} | {item.*}
A collection key | {[aliasname].key} | ORGID as {coll.key}
The id of a collection | {[aliasname].id} | EMPID as {coll.id}
The element of a collection | {[aliasname].element} | XID as {coll.element}
Property of the element in the collection | {[aliasname].element.[propertyname]} | NAME as {coll.element.name}
All properties of the element in the collection | {[aliasname].element.*} | {coll.element.*}
All properties of the collection | {[aliasname].*} | {coll.*}
-
-
-
- -
- Returning non-managed entities - - It is possible to apply a ResultTransformer to native SQL queries, allowing them to return non-managed entities.

sess.createSQLQuery("SELECT NAME, BIRTHDATE FROM CATS")
    .setResultTransformer(Transformers.aliasToBean(CatDTO.class))

This query specified:
- the SQL query string
- a result transformer

The above query will return a list of CatDTO instances which have been instantiated with the values of NAME and BIRTHDATE injected into the corresponding properties or fields. -
- -
- Handling inheritance - - Native SQL queries which query for entities that are mapped as part of an inheritance hierarchy must include all properties for the base class and all of its subclasses. -
- -
- Parameters - - Native SQL queries support positional as well as named - parameters: - - Query query = sess.createSQLQuery("SELECT * FROM CATS WHERE NAME like ?").addEntity(Cat.class); -List pusList = query.setString(0, "Pus%").list(); - -query = sess.createSQLQuery("SELECT * FROM CATS WHERE NAME like :name").addEntity(Cat.class); -List pusList = query.setString("name", "Pus%").list(); -
-
- -
- Named SQL queries - - Named SQL queries can also be defined in the mapping document and - called in exactly the same way as a named HQL query (see ). In this case, you do - not need to call - addEntity(). - - - Named sql query using the <sql-query> maping - element - - <sql-query name="persons"> - <return alias="person" class="eg.Person"/> - SELECT person.NAME AS {person.name}, - person.AGE AS {person.age}, - person.SEX AS {person.sex} - FROM PERSON person - WHERE person.NAME LIKE :namePattern -</sql-query> - - - - Execution of a named query - - List people = sess.getNamedQuery("persons") - .setString("namePattern", namePattern) - .setMaxResults(50) - .list(); - - - The <return-join> element is use to join - associations and the <load-collection> element is - used to define queries which initialize collections, - - - Named sql query with association - - <sql-query name="personsWith"> - <return alias="person" class="eg.Person"/> - <return-join alias="address" property="person.mailingAddress"/> - SELECT person.NAME AS {person.name}, - person.AGE AS {person.age}, - person.SEX AS {person.sex}, - address.STREET AS {address.street}, - address.CITY AS {address.city}, - address.STATE AS {address.state}, - address.ZIP AS {address.zip} - FROM PERSON person - JOIN ADDRESS address - ON person.ID = address.PERSON_ID AND address.TYPE='MAILING' - WHERE person.NAME LIKE :namePattern -</sql-query> - - - A named SQL query may return a scalar value. You must declare the - column alias and Hibernate type using the - <return-scalar> element: - - - Named query returning a scalar - - <sql-query name="mySqlQuery"> - <return-scalar column="name" type="string"/> - <return-scalar column="age" type="long"/> - SELECT p.NAME AS name, - p.AGE AS age, - FROM PERSON p WHERE p.NAME LIKE 'Hiber%' -</sql-query> - - - You can externalize the resultset mapping information in a - <resultset> element which will allow you to - either reuse them across several named queries or through the - setResultSetMapping() API. - - - <resultset> mapping used to externalize mapping - information - - <resultset name="personAddress"> - <return alias="person" class="eg.Person"/> - <return-join alias="address" property="person.mailingAddress"/> -</resultset> - -<sql-query name="personsWith" resultset-ref="personAddress"> - SELECT person.NAME AS {person.name}, - person.AGE AS {person.age}, - person.SEX AS {person.sex}, - address.STREET AS {address.street}, - address.CITY AS {address.city}, - address.STATE AS {address.state}, - address.ZIP AS {address.zip} - FROM PERSON person - JOIN ADDRESS address - ON person.ID = address.PERSON_ID AND address.TYPE='MAILING' - WHERE person.NAME LIKE :namePattern -</sql-query> - - - You can, alternatively, use the resultset mapping information in - your hbm files directly in java code. - - - Programmatically specifying the result mapping information - - - List cats = sess.createSQLQuery( - "select {cat.*}, {kitten.*} from cats cat, cats kitten where kitten.mother = cat.id" - ) - .setResultSetMapping("catAndKitten") - .list(); - - - So far we have only looked at externalizing SQL queries using - Hibernate mapping files. The same concept is also available with - anntations and is called named native queries. You can use - @NamedNativeQuery - (@NamedNativeQueries) in conjunction with - @SqlResultSetMapping - (@SqlResultSetMappings). Like - @NamedQuery, @NamedNativeQuery - and @SqlResultSetMapping can be defined at class level, - but their scope is global to the application. Lets look at a view - examples. 
shows how a resultSetMapping parameter is defined in
@NamedNativeQuery. It represents the name of a defined
@SqlResultSetMapping. The resultset mapping declares
the entities retrieved by this native query. Each field of the entity is
bound to an SQL alias (or column name). All fields of the entity, including
the ones of subclasses and the foreign key columns of related entities,
have to be present in the SQL query. Field definitions are optional
provided that they map to the same column name as the one declared on the
class property. In the example, two entities, Night and
Area, are returned, and each property is declared and
associated to a column name (the column name actually retrieved by the
query).

In , the result
set mapping is implicit. We only describe the entity class of the result
set mapping. The property / column mappings are done using the entity
mapping values. In this case the model property is bound to the model_txt
column.

Finally, if the association to a related entity involves a composite
primary key, a @FieldResult element should be used for
each foreign key column. The @FieldResult name is
composed of the property name for the relationship, followed by a dot
("."), followed by the name of the field or property of the primary key.
This can be seen in .

Named SQL query using <classname>@NamedNativeQuery</classname>
together with <classname>@SqlResultSetMapping</classname>

@NamedNativeQuery(name="night&area", query="select night.id nid, night.night_duration, "
    + " night.night_date, area.id aid, night.area_id, area.name "
    + "from Night night, Area area where night.area_id = area.id",
    resultSetMapping="joinMapping")
@SqlResultSetMapping(name="joinMapping", entities={
    @EntityResult(entityClass=Night.class, fields = {
        @FieldResult(name="id", column="nid"),
        @FieldResult(name="duration", column="night_duration"),
        @FieldResult(name="date", column="night_date"),
        @FieldResult(name="area", column="area_id")
        }, discriminatorColumn="disc"),
    @EntityResult(entityClass=org.hibernate.test.annotations.query.Area.class, fields = {
        @FieldResult(name="id", column="aid"),
        @FieldResult(name="name", column="name")
        })
    }
)

Implicit result set mapping

@Entity
@SqlResultSetMapping(name="implicit",
    entities=@EntityResult(entityClass=SpaceShip.class))
@NamedNativeQuery(name="implicitSample",
    query="select * from SpaceShip",
    resultSetMapping="implicit")
public class SpaceShip {
    private String name;
    private String model;
    private double speed;

    @Id
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Column(name="model_txt")
    public String getModel() {
        return model;
    }

    public void setModel(String model) {
        this.model = model;
    }

    public double getSpeed() {
        return speed;
    }

    public void setSpeed(double speed) {
        this.speed = speed;
    }
}

Using dot notation in @FieldResult for specifying associations

@Entity
@SqlResultSetMapping(name="compositekey",
    entities=@EntityResult(entityClass=SpaceShip.class,
        fields = {
            @FieldResult(name="name", column = "name"),
            @FieldResult(name="model", column = "model"),
            @FieldResult(name="speed", column = "speed"),
            @FieldResult(name="captain.firstname", column = "firstn"),
            @FieldResult(name="captain.lastname", column = "lastn"),
            @FieldResult(name="dimensions.length", column = "length"),
            @FieldResult(name="dimensions.width", column = "width")
            }),
    columns = { @ColumnResult(name = "surface"),
        @ColumnResult(name = "volume") } )
@NamedNativeQuery(name="compositekey",
    query="select name, model, speed, lname as lastn, fname as firstn, length, width, length * width as surface from SpaceShip",
    resultSetMapping="compositekey")
public class SpaceShip {
    private String name;
    private String model;
    private double speed;
    private Captain captain;
    private Dimensions dimensions;

    @Id
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @ManyToOne(fetch= FetchType.LAZY)
    @JoinColumns( {
        @JoinColumn(name="fname", referencedColumnName = "firstname"),
        @JoinColumn(name="lname", referencedColumnName = "lastname")
        } )
    public Captain getCaptain() {
        return captain;
    }

    public void setCaptain(Captain captain) {
        this.captain = captain;
    }

    public String getModel() {
        return model;
    }

    public void setModel(String model) {
        this.model = model;
    }

    public double getSpeed() {
        return speed;
    }

    public void setSpeed(double speed) {
        this.speed = speed;
    }

    public Dimensions getDimensions() {
        return dimensions;
    }

    public void setDimensions(Dimensions dimensions) {
        this.dimensions = dimensions;
    }
}

@Entity
@IdClass(Identity.class)
public class Captain implements Serializable {
    private String firstname;
    private String lastname;

    @Id
    public String getFirstname() {
        return firstname;
    }

    public void setFirstname(String firstname) {
        this.firstname = firstname;
    }

    @Id
    public String getLastname() {
        return lastname;
    }

    public void setLastname(String lastname) {
        this.lastname = lastname;
    }
}

If you retrieve a single entity using the default mapping, you can
specify the resultClass attribute instead of
resultSetMapping:

@NamedNativeQuery(name="implicitSample", query="select * from SpaceShip", resultClass=SpaceShip.class)
public class SpaceShip {

In some of your native queries, you will have to return scalar values,
for example when building report queries. You can map them in the
@SqlResultSetMapping through
@ColumnResult. You can even mix entities and
scalar returns in the same native query (this is probably not that common
though).

Scalar values via <classname>@ColumnResult</classname>

@SqlResultSetMapping(name="scalar", columns=@ColumnResult(name="dimension"))
@NamedNativeQuery(name="scalar", query="select length*width as dimension from SpaceShip", resultSetMapping="scalar")

Another query hint specific to native queries has been introduced:
org.hibernate.callable, which can be true or false
depending on whether the query is a stored procedure or not.
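The named native queries defined in this section are executed like any other
named query. The following is a minimal sketch, assuming the mappings shown
above; the entity names and result shapes are taken from those assumed
examples.

// Each row of the "night&area" query holds one Night and its associated Area,
// because the result set mapping declares two @EntityResult entries.
List<Object[]> rows = session.getNamedQuery("night&area").list();
for ( Object[] row : rows ) {
    Night night = (Night) row[0];
    Area area = (Area) row[1];
    // work with the Night/Area pair here
}

// The "scalar" query maps a single @ColumnResult, so each result is the
// computed dimension value rather than an entity.
List<Number> dimensions = session.getNamedQuery("scalar").list();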
Using return-property to explicitly specify column/alias
names

You can explicitly tell Hibernate what column aliases to use with
<return-property>, instead of using the
{}-syntax to let Hibernate inject its own aliases. For
example:

<sql-query name="mySqlQuery">
    <return alias="person" class="eg.Person">
        <return-property name="name" column="myName"/>
        <return-property name="age" column="myAge"/>
        <return-property name="sex" column="mySex"/>
    </return>
    SELECT person.NAME AS myName,
           person.AGE AS myAge,
           person.SEX AS mySex
    FROM PERSON person WHERE person.NAME LIKE :name
</sql-query>

<return-property> also works with
multiple columns. This solves a limitation of the
{}-syntax, which does not allow fine-grained control of
multi-column properties.

<sql-query name="organizationCurrentEmployments">
    <return alias="emp" class="Employment">
        <return-property name="salary">
            <return-column name="VALUE"/>
            <return-column name="CURRENCY"/>
        </return-property>
        <return-property name="endDate" column="myEndDate"/>
    </return>
    SELECT EMPLOYEE AS {emp.employee}, EMPLOYER AS {emp.employer},
           STARTDATE AS {emp.startDate}, ENDDATE AS {emp.endDate},
           REGIONCODE as {emp.regionCode}, EID AS {emp.id}, VALUE, CURRENCY
    FROM EMPLOYMENT
    WHERE EMPLOYER = :id AND ENDDATE IS NULL
    ORDER BY STARTDATE ASC
</sql-query>

In this example <return-property> was
used in combination with the {}-syntax for injection.
This allows users to choose how they want to refer to columns and
properties.

If your mapping has a discriminator you must use
<return-discriminator> to specify the
discriminator column.
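As an illustrative sketch, a named SQL query such as mySqlQuery above is run
like any other named query; the :name parameter is bound before execution.
The eg.Person result class and the parameter value are assumptions taken from
the example mapping.

List<Person> people = session.getNamedQuery("mySqlQuery")
        .setParameter("name", "Pete%")
        .list();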
- -
- Using stored procedures for querying - - Hibernate provides support for queries via stored procedures and - functions. Most of the following documentation is equivalent for both. - The stored procedure/function must return a resultset as the first - out-parameter to be able to work with Hibernate. An example of such a - stored function in Oracle 9 and higher is as follows: - - CREATE OR REPLACE FUNCTION selectAllEmployments - RETURN SYS_REFCURSOR -AS - st_cursor SYS_REFCURSOR; -BEGIN - OPEN st_cursor FOR - SELECT EMPLOYEE, EMPLOYER, - STARTDATE, ENDDATE, - REGIONCODE, EID, VALUE, CURRENCY - FROM EMPLOYMENT; - RETURN st_cursor; - END; - - To use this query in Hibernate you need to map it via a named - query. - - <sql-query name="selectAllEmployees_SP" callable="true"> - <return alias="emp" class="Employment"> - <return-property name="employee" column="EMPLOYEE"/> - <return-property name="employer" column="EMPLOYER"/> - <return-property name="startDate" column="STARTDATE"/> - <return-property name="endDate" column="ENDDATE"/> - <return-property name="regionCode" column="REGIONCODE"/> - <return-property name="id" column="EID"/> - <return-property name="salary"> - <return-column name="VALUE"/> - <return-column name="CURRENCY"/> - </return-property> - </return> - { ? = call selectAllEmployments() } -</sql-query> - - Stored procedures currently only return scalars and entities. - <return-join> and - <load-collection> are not supported. - -
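Once mapped, the callable named query is executed through the normal query
API, and Hibernate issues the { ? = call selectAllEmployments() } call
underneath. A minimal sketch, assuming the mapping above:

List<Employment> employments = session.getNamedQuery("selectAllEmployees_SP").list();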
Rules/limitations for using stored procedures

Stored procedures and functions must follow certain rules in order to be
usable with Hibernate. Procedures that do not follow those rules can still be
executed via session.connection(), but not through the
query API. The rules differ for each database, since database vendors have
different stored procedure semantics/syntax.

Stored procedure queries cannot be paged with
setFirstResult()/setMaxResults().

The recommended call form is standard SQL92: { ? = call
functionName(<parameters>) } or { ? = call
procedureName(<parameters>) }. Native call syntax is not
supported.

For Oracle the following rules apply:

    A function must return a result set. The first parameter of
    a procedure must be an OUT that returns a
    result set. This is done by using a
    SYS_REFCURSOR type in Oracle 9 or 10. In Oracle
    you need to define a REF CURSOR type. See
    Oracle literature for further information.

For Sybase or MS SQL server the following rules apply:

    The procedure must return a result set. Note that since
    these servers can return multiple result sets and update counts,
    Hibernate will iterate the results and take the first result that
    is a result set as its return value. Everything else will be
    discarded.

    If you can enable SET NOCOUNT ON in your
    procedure it will probably be more efficient, but this is not a
    requirement.
-
-
- -
Custom SQL for create, update and delete

Hibernate can use custom SQL for create, update, and delete
operations. The SQL can be overridden at the statement level or at the
individual column level. This section describes statement overrides. For
columns, see . shows how to define
custom SQL operations using annotations.

Custom CRUD via annotations

@Entity
@Table(name="CHAOS")
@SQLInsert( sql="INSERT INTO CHAOS(size, name, nickname, id) VALUES(?,upper(?),?,?)")
@SQLUpdate( sql="UPDATE CHAOS SET size = ?, name = upper(?), nickname = ? WHERE id = ?")
@SQLDelete( sql="DELETE CHAOS WHERE id = ?")
@SQLDeleteAll( sql="DELETE CHAOS")
@Loader(namedQuery = "chaos")
@NamedNativeQuery(name="chaos", query="select id, size, name, lower( nickname ) as nickname from CHAOS where id= ?", resultClass = Chaos.class)
public class Chaos {
    @Id
    private Long id;
    private Long size;
    private String name;
    private String nickname;

@SQLInsert, @SQLUpdate,
@SQLDelete and @SQLDeleteAll
override the INSERT, UPDATE, DELETE, and DELETE-all
statements respectively. The same can be achieved using Hibernate mapping files and the
<sql-insert>,
<sql-update> and
<sql-delete> nodes. This can be seen in .

Custom CRUD XML

<class name="Person">
    <id name="id">
        <generator class="increment"/>
    </id>
    <property name="name" not-null="true"/>
    <sql-insert>INSERT INTO PERSON (NAME, ID) VALUES ( UPPER(?), ? )</sql-insert>
    <sql-update>UPDATE PERSON SET NAME=UPPER(?) WHERE ID=?</sql-update>
    <sql-delete>DELETE FROM PERSON WHERE ID=?</sql-delete>
</class>

If you expect to call a stored procedure, be sure to set the
callable attribute to true, in
annotations as well as in XML.

To check that the execution happens correctly, Hibernate allows you
to define one of these three strategies:

    none: no check is performed; the stored procedure is expected to
    fail upon issues

    count: use the row count to check that the update was
    successful

    param: like count, but using an output parameter rather than the
    standard mechanism

To define the result check style, use the check
parameter, which is again available in annotations as well as in XML.

You can use the exact same set of annotations (or, respectively, XML nodes)
to override the collection-related statements. See .

Overriding SQL statements for collections using
annotations

@OneToMany
@JoinColumn(name="chaos_fk")
@SQLInsert( sql="UPDATE CASIMIR_PARTICULE SET chaos_fk = ? where id = ?")
@SQLDelete( sql="UPDATE CASIMIR_PARTICULE SET chaos_fk = null where id = ?")
private Set<CasimirParticle> particles = new HashSet<CasimirParticle>();

The parameter order is important and is defined by the order in which
Hibernate handles properties. You can see the expected order by enabling
debug logging for the org.hibernate.persister.entity
level. With this level enabled Hibernate will print out the static SQL
that is used to create, update, delete etc. entities.
(To see the
expected sequence, remember not to include your custom SQL through
annotations or mapping files, as that would override the Hibernate-generated
static SQL.)

Overriding SQL statements for secondary tables is also possible
using @org.hibernate.annotations.Table and one or more of
the attributes sqlInsert,
sqlUpdate and sqlDelete:

Overriding SQL statements for secondary tables

@Entity
@SecondaryTables({
    @SecondaryTable(name = "`Cat nbr1`"),
    @SecondaryTable(name = "Cat2")})
@org.hibernate.annotations.Tables( {
    @Table(appliesTo = "Cat", comment = "My cat table" ),
    @Table(appliesTo = "Cat2", foreignKey = @ForeignKey(name="FK_CAT2_CAT"), fetch = FetchMode.SELECT,
        sqlInsert=@SQLInsert(sql="insert into Cat2(storyPart2, id) values(upper(?), ?)") )
} )
public class Cat implements Serializable {

The previous example also shows that you can give a comment to a
given table (primary or secondary); this comment will be used for DDL
generation.

The SQL is directly executed in your database, so you can use any
dialect you like. This will, however, reduce the portability of your
mapping if you use database specific SQL.

Last but not least, stored procedures are in most cases required to
return the number of rows inserted, updated and deleted. Hibernate always
registers the first statement parameter as a numeric output parameter for
the CUD operations:

Stored procedures and their return value

CREATE OR REPLACE FUNCTION updatePerson (uid IN NUMBER, uname IN VARCHAR2)
    RETURN NUMBER IS
BEGIN

    update PERSON
    set
        NAME = uname
    where
        ID = uid;

    return SQL%ROWCOUNT;

END updatePerson;
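As a hedged illustration, a function like updatePerson above can be wired to
an entity's UPDATE statement with the callable and check attributes;
ResultCheckStyle.PARAM tells Hibernate to read the affected-row count from
the registered output parameter. The entity shown and the placeholder order
are assumptions for this sketch: the parameter order must match the order in
which Hibernate binds properties (properties first, then the id), which may
require adjusting the procedure's signature.

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.ResultCheckStyle;
import org.hibernate.annotations.SQLUpdate;

@Entity
@SQLUpdate(sql = "{ ? = call updatePerson(?, ?) }",
           callable = true,
           check = ResultCheckStyle.PARAM)
public class Person {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted
}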
- -
- Custom SQL for loading - - You can also declare your own SQL (or HQL) queries for entity - loading. As with inserts, updates, and deletes, this can be done at the - individual column level as described in or at the statement level. Here - is an example of a statement level override: - - <sql-query name="person"> - <return alias="pers" class="Person" lock-mode="upgrade"/> - SELECT NAME AS {pers.name}, ID AS {pers.id} - FROM PERSON - WHERE ID=? - FOR UPDATE -</sql-query> - - This is just a named query declaration, as discussed earlier. You - can reference this named query in a class mapping: - - <class name="Person"> - <id name="id"> - <generator class="increment"/> - </id> - <property name="name" not-null="true"/> - <loader query-ref="person"/> -</class> - - This even works with stored procedures. - - You can even define a query for collection loading: - - <set name="employments" inverse="true"> - <key/> - <one-to-many class="Employment"/> - <loader query-ref="employments"/> -</set> - - <sql-query name="employments"> - <load-collection alias="emp" role="Person.employments"/> - SELECT {emp.*} - FROM EMPLOYMENT emp - WHERE EMPLOYER = :id - ORDER BY STARTDATE ASC, EMPLOYEE ASC -</sql-query> - - You can also define an entity loader that loads a collection by join - fetching: - - <sql-query name="person"> - <return alias="pers" class="Person"/> - <return-join alias="emp" property="pers.employments"/> - SELECT NAME AS {pers.*}, {emp.*} - FROM PERSON pers - LEFT OUTER JOIN EMPLOYMENT emp - ON pers.ID = emp.PERSON_ID - WHERE ID=? -</sql-query> - - The annotation equivalent <loader> is the - @Loader annotation as seen in . -
- -
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/HQL_JPQL.xml b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/HQL_JPQL.xml deleted file mode 100644 index 5135a67ab5..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/HQL_JPQL.xml +++ /dev/null @@ -1,1449 +0,0 @@ - - - - - - HQL and JPQL - - - The Hibernate Query Language (HQL) and Java Persistence Query Language (JPQL) are both object model - focused query languages similar in nature to SQL. JPQL is a heavily-inspired-by subset of HQL. A JPQL - query is always a valid HQL query, the reverse is not true however. - - - - Both HQL and JPQL are non-type-safe ways to perform query operations. Criteria queries offer a - type-safe approach to querying. See for more information. - - -
- Case Sensitivity - - - With the exception of names of Java classes and properties, queries are case-insensitive. - So SeLeCT is the same as sELEct is the same as - SELECT, but - org.hibernate.eg.FOO and org.hibernate.eg.Foo are different, as are - foo.barSet and foo.BARSET. - - - - - This documentation uses lowercase keywords as convention in examples. - - -
- -
Statement types

Both HQL and JPQL allow SELECT, UPDATE and DELETE
statements to be performed. HQL additionally allows INSERT statements, in a form
similar to a SQL INSERT-SELECT.

Care should be taken as to when an UPDATE or DELETE statement is
executed.
Section 4.10 of the JPA 2.0 Specification

Caution should be used when executing bulk update or delete operations because they may result in
inconsistencies between the database and the entities in the active persistence context. In general, bulk
update and delete operations should only be performed within a transaction in a new persistence context
or before fetching or accessing entities whose state might be affected by such operations.
-
- -
Select statements

The BNF for SELECT statements in HQL is:

The simplest possible HQL SELECT statement is of the form:

from com.acme.Cat

The select statement in JPQL is exactly the same as for HQL except that JPQL requires a
select_clause, whereas HQL does not. Even though HQL does not require the presence
of a select_clause, it is generally good practice to include one. For simple queries
the intent is clear and so the intended result of the select_clause is easy to
infer. But on more complex queries that is not always the case. It is usually better to explicitly
specify intent. Hibernate does not actually enforce that a select_clause be present
even when parsing JPQL queries, however applications interested in JPA portability should take heed of
this.
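As a minimal sketch, the simple HQL form above can be executed through the
Session API; com.acme.Cat stands in for any mapped entity.

List<Cat> cats = session.createQuery("from com.acme.Cat").list();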
- -
Update statements

The BNF for UPDATE statements is the same in HQL and JPQL:

UPDATE statements, by default, do not affect the version
or the timestamp attribute values for the affected entities. However,
you can force Hibernate to set the version or timestamp attribute
values through the use of a versioned update. This is achieved by adding the
VERSIONED keyword after the UPDATE keyword. Note, however, that
this is a Hibernate specific feature and will not work in a portable manner. Custom version types,
org.hibernate.usertype.UserVersionType, are not allowed in conjunction
with an update versioned statement.

An UPDATE statement is executed using the executeUpdate method
of either org.hibernate.Query or
javax.persistence.Query. The method is named for those familiar with
the JDBC executeUpdate on java.sql.PreparedStatement.
The int value returned by the executeUpdate() method
indicates the number of entities affected by the operation. This may or may not correlate to the number
of rows affected in the database. An HQL bulk operation might result in multiple actual SQL statements
being executed (for joined-subclass, for example). The returned number indicates the number of actual
entities affected by the statement. Using a JOINED inheritance hierarchy, a delete against one of the
subclasses may actually result in deletes against not just the table to which that subclass is mapped,
but also the "root" table and tables in between.

Example UPDATE query statements
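A hedged sketch of executing such a bulk (here, versioned) update; the
Customer entity and its name attribute are assumed for illustration.

int updatedEntities = session.createQuery(
        "update versioned Customer c set c.name = :newName where c.name = :oldName" )
        .setParameter( "oldName", oldName )
        .setParameter( "newName", newName )
        .executeUpdate();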
- - - - Neither UPDATE nor DELETE statements are allowed to - result in what is called an implicit join. Their form already disallows explicit joins. - - - -
- Delete statements - - The BNF for DELETE statements is the same in HQL and JPQL: - - - - A DELETE statement is also executed using the executeUpdate - method of either org.hibernate.Query or - javax.persistence.Query. - -
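For completeness, a minimal sketch of a bulk delete; the return value is
again the number of entities affected, and the Customer entity with an active
attribute is assumed.

int deletedEntities = session.createQuery(
        "delete Customer c where c.active = false" )
        .executeUpdate();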
- -
Insert statements

HQL adds the ability to define INSERT statements as well. There is no JPQL
equivalent to this. The BNF for an HQL INSERT statement is:

The attribute_list is analogous to the column specification in the
SQL INSERT statement. For entities involved in mapped inheritance, only attributes
directly defined on the named entity can be used in the attribute_list. Superclass
properties are not allowed and subclass properties do not make sense. In other words,
INSERT statements are inherently non-polymorphic.

select_statement can be any valid HQL select query, with the caveat that the return
types must match the types expected by the insert. Currently, this is checked during query
compilation rather than allowing the check to relegate to the database. This may cause problems
between Hibernate Types which are equivalent as opposed to
equal. For example, this might lead to issues with mismatches between an
attribute mapped as a org.hibernate.type.DateType and an attribute defined as
a org.hibernate.type.TimestampType, even though the database might not make a
distinction or might be able to handle the conversion.

For the id attribute, the insert statement gives you two options. You can either explicitly specify
the id property in the attribute_list, in which case its value is taken from the
corresponding select expression, or omit it from the attribute_list in which case a
generated value is used. This latter option is only available when using id generators that operate
in the database; attempting to use this option with any in-memory
generators will cause an exception during parsing.

For optimistic locking attributes, the insert statement again gives you two options. You can either
specify the attribute in the attribute_list in which case its value is taken from
the corresponding select expressions, or omit it from the attribute_list in which
case the seed value defined by the corresponding
org.hibernate.type.VersionType is used.

Example INSERT query statements
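A hedged sketch of an HQL INSERT ... SELECT; Partner and Customer are assumed
entities with compatible name attributes, and the id is omitted so that a
database-backed generator supplies it.

int createdEntities = session.createQuery(
        "insert into Partner ( name ) select c.name from Customer c" )
        .executeUpdate();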
-
- -
The <literal>FROM</literal> clause

The FROM clause is responsible for defining the scope of object model types available to
the rest of the query. It is also responsible for defining all the identification variables
available to the rest of the query.
Identification variables

Identification variables are often referred to as aliases. References to object model classes
in the FROM clause can be associated with an identification variable that can then be used to
refer to that type throughout the rest of the query.

In most cases declaring an identification variable is optional, though it is usually good practice to
declare one.

An identification variable must follow the rules for Java identifier validity.

According to JPQL, identification variables must be treated as case insensitive. Good practice
says you should use the same case throughout a query to refer to a given identification variable. In
other words, JPQL says they can be case insensitive and so Hibernate must
be able to treat them as such, but this does not make it good practice.
-
- Root entity references - - A root entity reference, or what JPA calls a range variable declaration, is - specifically a reference to a mapped entity type from the application. It cannot name component/ - embeddable types. And associations, including collections, are handled in a different manner - discussed later. - - - The BNF for a root entity reference is: - - - - Simple query example - - - - We see that the query is defining a root entity reference to the com.acme.Cat - object model type. Additionally, it declares an alias of c to that - com.acme.Cat reference; this is the identification variable. - - - Usually the root entity reference just names the entity name rather than the - entity class FQN. By default the entity name is the unqualified entity class name, - here Cat - - - Simple query using entity name for root entity reference - - - - Multiple root entity references can also be specified. Even naming the same entity! - - - Simple query using multiple root entity references - - - -
-
- Explicit joins - - The FROM clause can also contain explicit relationship joins using the - join keyword. These joins can be either inner - or left outer style joins. - - - Explicit inner join examples - - - - Explicit left (outer) join examples - - - - An important use case for explicit joins is to define FETCH JOINS which override - the laziness of the joined association. As an example, given an entity named Customer - with a collection-valued association named orders - - - Fetch join example - - - - As you can see from the example, a fetch join is specified by injecting the keyword fetch - after the keyword join. In the example, we used a left outer join because we want - to return customers who have no orders also. Inner joins can also be fetched. But inner joins still - filter. In the example, using an inner join instead would have resulted in customers without any orders - being filtered out of the result. - - - - Fetch joins are not valid in sub-queries. - - - Care should be taken when fetch joining a collection-valued association which is in any way further - restricted; the fetched collection will be restricted too! For this reason it is usually considered - best practice to not assign an identification variable to fetched joins except for the purpose - of specifying nested fetch joins. - - - Fetch joins should not be used in paged queries (aka, setFirstResult/ - setMaxResults). Nor should they be used with the HQL - scroll or iterate features. - - - - HQL also defines a WITH clause to qualify the join conditions. Again, this is - specific to HQL; JPQL does not define this feature. - - - with-clause join example - - - - The important distinction is that in the generated SQL the conditions of the - with clause are made part of the on clause in the generated SQL - as opposed to the other queries in this section where the HQL/JPQL conditions are made part of the - where clause in the generated SQL. The distinction in this specific example is - probably not that significant. The with clause is sometimes necessary in more - complicated queries. - - - Explicit joins may reference association or component/embedded attributes. For further information - about collection-valued association references, see . - In the case of component/embedded attributes, the join is simply logical and does not correlate to a - physical (SQL) join. - -
-
- Implicit joins (path expressions) - - Another means of adding to the scope of object model types available to the query is through the - use of implicit joins, or path expressions. - - - Simple implicit join example - - - - An implicit join always starts from an identification variable, followed by - the navigation operator (.), followed by an attribute for the object model type referenced by the - initial identification variable. In the example, the initial - identification variable is c which refers to the - Customer entity. The c.chiefExecutive reference then refers - to the chiefExecutive attribute of the Customer entity. - chiefExecutive is an association type so we further navigate to its - age attribute. - - - - If the attribute represents an entity association (non-collection) or a component/embedded, that - reference can be further navigated. Basic values and collection-valued associations cannot be - further navigated. - - - - As shown in the example, implicit joins can appear outside the FROM clause. However, - they affect the FROM clause. Implicit joins are always treated as inner joins. - Multiple references to the same implicit join always refer to the same logical and physical (SQL) join. - - - Reused implicit join - - - - Just as with explicit joins, implicit joins may reference association or component/embedded attributes. - For further information about collection-valued association references, see - . In the case of component/embedded attributes, - the join is simply logical and does not correlate to a physical (SQL) join. Unlike explicit joins, - however, implicit joins may also reference basic state fields as long as the path expression ends - there. - -
-
- Collection member references - - References to collection-valued associations actually refer to the values of - that collection. - - - Collection references example - - - - In the example, the identification variable o actually refers to the object model - type Order which is the type of the elements of the - Customer#orders association. - - - The example also shows the alternate syntax for specifying collection association joins using the - IN syntax. Both forms are equivalent. Which form an application chooses to use is - simply a matter of taste. - -
- Special case - qualified path expressions - - We said earlier that collection-valued associations actually refer to the values - of that collection. Based on the type of collection, there are also available a set of - explicit qualification expressions. - - - Qualified collection references example - - - - - VALUE - - - Refers to the collection value. Same as not specifying a qualifier. Useful to - explicitly show intent. Valid for any type of collection-valued reference. - - - - - INDEX - - - According to HQL rules, this is valid for both Maps and Lists which specify a - javax.persistence.OrderColumn annotation to refer to - the Map key or the List position (aka the OrderColumn value). JPQL however, reserves - this for use in the List case and adds KEY for the MAP case. - Applications interested in JPA provider portability should be aware of this - distinction. - - - - - KEY - - - Valid only for Maps. Refers to the map's key. If the key is itself an entity, - can be further navigated. - - - - - ENTRY - - - Only valid only for Maps. Refers to the Map's logical - java.util.Map.Entry tuple (the combination of its key - and value). ENTRY is only valid as a terminal path and only valid - in the select clause. - - - - - - See for additional details on collection related - expressions. - -
-
-
Polymorphism

HQL and JPQL queries are inherently polymorphic.

select p from Payment p

This query names the Payment entity explicitly. However, all subclasses of
Payment are also available to the query. So if the
CreditCardPayment entity and WireTransferPayment entity
each extend from Payment, all three types would be available to the query. And
the query would return instances of all three.

The logical extreme

The HQL query from java.lang.Object is totally valid! It returns every
object of every type defined in your application.

This can be altered by using the
org.hibernate.annotations.Polymorphism annotation (global, and
Hibernate-specific) or by limiting the returned types in the query itself using an entity type
expression.
-
- -
- Expressions - - - Essentially expressions are references that resolve to basic or tuple values. - - -
- Identification variable - - See . - -
- -
- Path expressions - - Again, see . - -
- -
Literals

String literals are enclosed in single-quotes. To escape a single-quote within a string literal, use
double single-quotes.

String literal examples

Numeric literals are allowed in a few different forms.

Numeric literal examples

In the scientific notation form, the E is case insensitive.

Specific typing can be achieved through the use of the same suffix approach specified by Java. So,
L denotes a long; D denotes a double; F
denotes a float. The actual suffix is case insensitive.

The boolean literals are TRUE and FALSE, again case-insensitive.

Enums can even be referenced as literals. The fully-qualified enum class name must be used. HQL
can also handle constants in the same manner, though JPQL does not define that as supported.

Entity names can also be used as literals. See .

Date/time literals can be specified using the JDBC escape syntax: {d 'yyyy-mm-dd'}
for dates, {t 'hh:mm:ss'} for times and
{ts 'yyyy-mm-dd hh:mm:ss[.millis]'} (millis optional) for timestamps. These
literals only work if your JDBC driver supports them.
- -
- Parameters - - HQL supports all 3 of the following forms. JPQL does not support the HQL-specific positional - parameters notion. It is good practice to not mix forms in a given query. - -
- Named parameters - - Named parameters are declared using a colon followed by an identifier - - :aNamedParameter. The same named parameter can appear multiple times in a query. - - - Named parameter examples - - -
-
- Positional (JPQL) parameters - - JPQL-style positional parameters are declared using a question mark followed by an ordinal - - ?1, ?2. The ordinals start with 1. Just like with - named parameters, positional parameters can also appear multiple times in a query. - - - Positional (JPQL) parameter examples - - -
-
- Positional (HQL) parameters - - HQL-style positional parameters follow JDBC positional parameter syntax. They are declared using - ? without a following ordinal. There is no way to relate two such - positional parameters as being "the same" aside from binding the same value to each. - - - This form should be considered deprecated and may be removed in the near future. - -
-
- -
- Arithmetic - - Arithmetic operations also represent valid expressions. - - - Numeric arithmetic examples - - - - The following rules apply to the result of arithmetic operations: - - - - - If either of the operands is Double/double, the result is a Double; - - - - - else, if either of the operands is Float/float, the result is a Float; - - - - - else, if either operand is BigDecimal, the result is BigDecimal; - - - - - else, if either operand is BigInteger, the result is BigInteger (except for division, in - which case the result type is not further defined); - - - - - else, if either operand is Long/long, the result is Long (except for division, in - which case the result type is not further defined); - - - - - else, (the assumption being that both operands are of integral type) the result is Integer - (except for division, in which case the result type is not further defined); - - - - - - Date arithmetic is also supported, albeit in a more limited fashion. This is due partially to - differences in database support and partially to the lack of support for INTERVAL - definition in the query language itself. - -
- -
Concatenation (operation)

HQL defines a concatenation operator in addition to supporting the concatenation
(CONCAT) function. This is not defined by JPQL, so portable applications
should avoid its use. The concatenation operator is taken from the SQL concatenation operator
- ||.

Concatenation operation example

See for details on the concat() function.
- -
Aggregate functions

Aggregate functions are also valid expressions in HQL and JPQL. The semantics are the same as their
SQL counterparts. The supported aggregate functions are:

    COUNT (including distinct/all qualifiers) - The result type is always Long.

    AVG - The result type is always Double.

    MIN - The result type is the same as the argument type.

    MAX - The result type is the same as the argument type.

    SUM - The result type of the sum() function depends on
    the type of the values being summed. For integral values (other than BigInteger), the result
    type is Long. For floating point values (other than BigDecimal) the result type is Double. For
    BigInteger values, the result type is BigInteger. For BigDecimal values, the result type is
    BigDecimal.

Aggregate function examples

Aggregations often appear with grouping. For information on grouping see .
- -
- Scalar functions - - Both HQL and JPQL define some standard functions that are available regardless of the underlying - database in use. HQL can also understand additional functions defined by the Dialect as well as the - application. - - -
- Standardized functions - JPQL - - Here are the list of functions defined as supported by JPQL. Applications interested in remaining - portable between JPA providers should stick to these functions. - - - - CONCAT - - - String concatenation function. Variable argument length of 2 or more string values - to be concatenated together. - - - - - SUBSTRING - - - Extracts a portion of a string value. - - - - The second argument denotes the starting position. The third (optional) argument - denotes the length. - - - - - UPPER - - - Upper cases the specified string - - - - - LOWER - - - Lower cases the specified string - - - - - TRIM - - - Follows the semantics of the SQL trim function. - - - - - LENGTH - - - Returns the length of a string. - - - - - LOCATE - - - Locates a string within another string. - - - - The third argument (optional) is used to denote a position from which to start looking. - - - - - ABS - - - Calculates the mathematical absolute value of a numeric value. - - - - - MOD - - - Calculates the remainder of dividing the first argument by the second. - - - - - SQRT - - - Calculates the mathematical square root of a numeric value. - - - - - CURRENT_DATE - - - Returns the database current date. - - - - - CURRENT_TIME - - - Returns the database current time. - - - - - CURRENT_TIMESTAMP - - - Returns the database current timestamp. - - - - -
-
- Standardized functions - HQL - - Beyond the JPQL standardized functions, HQL makes some additional functions available regardless - of the underlying database in use. - - - - BIT_LENGTH - - - Returns the length of binary data. - - - - - CAST - - - Performs a SQL cast. The cast target should name the Hibernate mapping type to use. - See the chapter on data types for more information. - - - - - EXTRACT - - - Performs a SQL extraction on datetime values. An extraction extracts parts of - the datetime (the year, for example). See the abbreviated forms below. - - - - - SECOND - - - Abbreviated extract form for extracting the second. - - - - - MINUTE - - - Abbreviated extract form for extracting the minute. - - - - - HOUR - - - Abbreviated extract form for extracting the hour. - - - - - DAY - - - Abbreviated extract form for extracting the day. - - - - - MONTH - - - Abbreviated extract form for extracting the month. - - - - - YEAR - - - Abbreviated extract form for extracting the year. - - - - - STR - - - Abbreviated form for casting a value as character data. - - - - -
- -
- Non-standardized functions - - Hibernate Dialects can register additional functions known to be available for that particular - database product. These functions are also available in HQL (and JPQL, though only when using - Hibernate as the JPA provider obviously). However, they would only be available when using that - database/Dialect. Applications that aim for database portability should avoid using functions - in this category. - - - Application developers can also supply their own set of functions. This would usually represent - either custom SQL functions or aliases for snippets of SQL. Such function declarations are - made by using the addSqlFunction method of - org.hibernate.cfg.Configuration - -
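As a hedged sketch, a custom function can be registered before building the
SessionFactory; the function name "regexp_matches" and its SQL template here
are purely illustrative.

import org.hibernate.cfg.Configuration;
import org.hibernate.dialect.function.SQLFunctionTemplate;
import org.hibernate.type.StandardBasicTypes;

Configuration cfg = new Configuration();
// ?1 and ?2 in the template are replaced by the arguments used in the HQL call
cfg.addSqlFunction( "regexp_matches",
        new SQLFunctionTemplate( StandardBasicTypes.BOOLEAN, "?1 ~ ?2" ) );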
-
- -
- Collection-related expressions - - There are a few specialized expressions for working with collection-valued associations. Generally - these are just abbreviated forms or other expressions for the sake of conciseness. - - - - SIZE - - - Calculate the size of a collection. Equates to a subquery! - - - - - MAXELEMENT - - - Available for use on collections of basic type. Refers to the maximum value as determined - by applying the max SQL aggregation. - - - - - MAXINDEX - - - Available for use on indexed collections. Refers to the maximum index (key/position) as - determined by applying the max SQL aggregation. - - - - - MINELEMENT - - - Available for use on collections of basic type. Refers to the minimum value as determined - by applying the min SQL aggregation. - - - - - MININDEX - - - Available for use on indexed collections. Refers to the minimum index (key/position) as - determined by applying the min SQL aggregation. - - - - - ELEMENTS - - - Used to refer to the elements of a collection as a whole. Only allowed in the where clause. - Often used in conjunction with ALL, ANY or - SOME restrictions. - - - - - INDICES - - - Similar to elements except that indices refers to - the collections indices (keys/positions) as a whole. - - - - - - Collection-related expressions examples - - - - Elements of indexed collections (arrays, lists, and maps) can be referred to by index operator. - - - Index operator examples - - - - See also as there is a good deal of overlap. - -
- -
Entity type

We can also refer to the type of an entity as an expression. This is mainly useful when dealing
with entity inheritance hierarchies. The type can be expressed using a TYPE function
used to refer to the type of an identification variable representing an entity. The name of the
entity also serves as a way to refer to an entity type. Additionally the entity type can be
parametrized, in which case the entity's Java Class reference would be bound as the parameter
value.

Entity type expression examples

HQL also has a legacy form of referring to an entity type, though that legacy form is considered
deprecated in favor of TYPE. The legacy form would have used p.class
in the examples rather than type(p). It is mentioned only for completeness.
- -
- CASE expressions - - Both the simple and searched forms are supported, as well as the 2 SQL defined abbreviated forms - (NULLIF and COALESCE) - -
- Simple CASE expressions - - The simple form has the following syntax: - - - - Simple case expression example - - -
-
- Searched CASE expressions - - The searched form has the following syntax: - - - - Searched case expression example - - -
-
- NULLIF expressions - - NULLIF is an abbreviated CASE expression that returns NULL if its operands are considered equal. - - - NULLIF example - - -
-
- COALESCE expressions - - COALESCE is an abbreviated CASE expression that returns the first non-null operand. We have seen a - number of COALESCE examples above. - -
-
-
- -
The <literal>SELECT</literal> clause

The SELECT clause identifies which objects and values to return as the query results.
The expressions discussed in are all valid select expressions, except
where otherwise noted. See the section for information on handling the results
depending on the types of values specified in the SELECT clause.

There is a particular expression type that is only valid in the select clause. Hibernate calls this
dynamic instantiation. JPQL supports some of that feature and calls it
a constructor expression.

Dynamic instantiation example - constructor

So rather than dealing with the Object[] (again, see ), here we are wrapping
the values in a type-safe java object that will be returned as the results of the query. The class
reference must be fully qualified and it must have a matching constructor.

The class here need not be mapped. If it does represent an entity, the resulting instances are
returned in the NEW state (not managed!).

That is the part JPQL supports as well. HQL supports additional dynamic instantiation
features. First, the query can specify to return a List rather than an Object[] for scalar results:

Dynamic instantiation example - list

The results from this query will be a List<List> as opposed to a List<Object[]>.

HQL also supports wrapping the scalar results in a Map.

Dynamic instantiation example - map

The results from this query will be a List<Map<String, Object>> as opposed to a
List<Object[]>. The keys of the map are defined by the aliases given to the select
expressions.
- -
- Predicates - - Predicates form the basis of the where clause, the having clause and searched case expressions. - They are expressions which resolve to a truth value, generally TRUE or - FALSE, although boolean comparisons involving NULLs generally resolve to - UNKNOWN. - - -
Relational comparisons

Comparisons involve one of the comparison operators -
=, >, >=, <, <=, <>. HQL also defines
!= as a comparison operator synonymous with <>. The operands should be
of the same type.

Relational comparison examples

Comparisons can also involve subquery qualifiers - ALL, ANY,
SOME. SOME and ANY are synonymous.

The ALL qualifier resolves to true if the comparison is true for all of the values in the result of
the subquery. It resolves to false if the subquery result is empty.

ALL subquery comparison qualifier example

The ANY/SOME qualifier resolves to true if the comparison is true for some of (at least one of) the
values in the result of the subquery. It resolves to false if the subquery result is empty.
- -
- Nullness predicate - - Check a value for nullness. Can be applied to basic attribute references, entity references and - parameters. HQL additionally allows it to be applied to component/embeddable types. - - - Nullness checking examples - - -
- -
Like predicate

Performs a like comparison on string values. The syntax is:

The semantics follow that of the SQL like expression. The pattern_value is the
pattern to attempt to match in the string_expression. Just like SQL,
pattern_value can use _ and % as wildcards. The
meanings are the same. _ matches any single character. % matches
any number of characters.

The optional escape_character is used to specify an escape character used to
escape the special meaning of _ and % in the
pattern_value. This is useful when needing to search on patterns including either
_ or %.

Like predicate examples
- -
Between predicate

Analogous to the SQL between expression. Performs an evaluation that a value is within the range
of two other values. All the operands should have comparable types.

Between predicate examples
- -
- In predicate - - IN predicates performs a check that a particular value is in a list of values. - Its syntax is: - - - - The types of the single_valued_expression and the individual values in the - single_valued_list must be consistent. JPQL limits the valid types here - to string, numeric, date, time, timestamp, and enum types. In JPQL, - single_valued_expression can only refer to: - - - - - state fields, which is its term for simple attributes. Specifically this - excludes association and component/embedded attributes. - - - - - entity type expressions. See - - - - - In HQL, single_valued_expression can refer to a far more broad set of expression - types. Single-valued association are allowed. So are component/embedded attributes, although that - feature depends on the level of support for tuple or row value constructor syntax in - the underlying database. Additionally, HQL does not limit the value type in any way, though - application developers should be aware that different types may incur limited support based on - the underlying database vendor. This is largely the reason for the JPQL limitations. - - - The list of values can come from a number of different sources. In the - constructor_expression and collection_valued_input_parameter, the - list of values must not be empty; it must contain at least one value. - - - In predicate examples - - -
- -
- Exists predicate - - Exists expressions test the existence of results from a subquery. The affirmative form returns true - if the subquery result contains values. The negated form returns true if the subquery - result is empty. - -
- -
- Empty collection predicate - - The IS [NOT] EMPTY expression applies to collection-valued path expressions. It - checks whether the particular collection has any associated values. - - - Empty collection expression examples - - -
- -
- Member-of collection predicate - - The [NOT] MEMBER [OF] expression applies to collection-valued path expressions. It - checks whether a value is a member of the specified collection. - - - Member-of collection expression examples - - -
- -
NOT predicate operator

The NOT operator is used to negate the predicate that follows it. If that
following predicate is true, the NOT resolves to false. If the predicate is false, NOT resolves to
true. If the predicate is unknown, the NOT resolves to unknown as well.
- -
- AND predicate operator - - The AND operator is used to combine 2 predicate expressions. The result of the - AND expression is true if and only if both predicates resolve to true. If either predicate resolves - to unknown, the AND expression resolves to unknown as well. Otherwise, the result is false. - -
- -
- OR predicate operator - - The OR operator is used to combine 2 predicate expressions. The result of the - OR expression is true if either predicate resolves to true. If both predicates resolve to unknown, the - OR expression resolves to unknown. Otherwise, the result is false. - -
-
- -
- The <literal>WHERE</literal> clause - - The WHERE clause of a query is made up of predicates which assert whether values in - each potential row match the predicated checks. Thus, the where clause restricts the results returned - from a select query and limits the scope of update and delete queries. - -
- -
- Grouping - - The GROUP BY clause allows building aggregated results for various value groups. As an - example, consider the following queries: - - - Group-by illustration - - - - The first query retrieves the complete total of all orders. The second retrieves the total for each - customer; grouped by each customer. - - - In a grouped query, the where clause applies to the non aggregated values (essentially it determines whether - rows will make it into the aggregation). The HAVING clause also restricts results, - but it operates on the aggregated values. In the example, - we retrieved order totals for all customers. If that ended up being too much data to deal with, - we might want to restrict the results to focus only on customers with a summed order total of more than - $10,000.00: - - - Having illustration - - - - The HAVING clause follows the same rules as the WHERE clause and is also made up of predicates. HAVING is - applied after the groupings and aggregations have been done; WHERE is applied before. - -
- -
Ordering

The results of the query can also be ordered. The ORDER BY clause is used to specify
the selected values to be used to order the result. The types of expressions considered valid as part
of the order-by clause include:

    state fields

    component/embeddable attributes

    scalar expressions such as arithmetic operations, functions, etc.

    identification variables declared in the select clause for any of the previous expression types

Additionally, JPQL says that all values referenced in the order-by clause must be named in the select
clause. HQL does not mandate that restriction, but applications desiring database portability should be
aware that not all databases support referencing values in the order-by clause that are not referenced
in the select clause.

Individual expressions in the order-by can be qualified with either ASC (ascending) or
DESC (descending) to indicate the desired ordering direction. Null values can be placed
at the front or at the end of the sorted set using the NULLS FIRST or NULLS LAST
clause respectively.

Order-by examples
- -
- Query API -
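As a minimal hedged illustration of the query API, an HQL string is turned
into an executable query through the Session; the Customer entity and its
attributes are assumed for this sketch.

org.hibernate.Query query = session.createQuery(
        "select c from Customer c where c.name like :name order by c.name" );
query.setParameter( "name", "J%" );
query.setMaxResults( 10 );
List customers = query.list();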
-
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/agg_func_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/agg_func_example.txt deleted file mode 100644 index fffa0f6acc..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/agg_func_example.txt +++ /dev/null @@ -1,10 +0,0 @@ -select count(*), sum( o.total ), avg( o.total ), min( o.total ), max( o.total ) -from Order o - -select count( distinct c.name ) -from Customer c - -select c.id, c.name, sum( o.total ) -from Customer c - left join c.orders o -group by c.id, c.name \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/arithmetic_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/arithmetic_example.txt deleted file mode 100644 index 19c8b5c2fb..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/arithmetic_example.txt +++ /dev/null @@ -1,9 +0,0 @@ -select year( current_date() ) - year( c.dateOfBirth ) -from Customer c - -select c -from Customer c -where year( current_date() ) - year( c.dateOfBirth ) < 30 - -select o.customer, o.total + ( o.total * :salesTax ) -from Order o \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/collection_expression_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/collection_expression_example.txt deleted file mode 100644 index a625cf5066..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/collection_expression_example.txt +++ /dev/null @@ -1,36 +0,0 @@ -select cal -from Calendar cal -where maxelement(cal.holidays) > current_date() - -select o -from Order o -where maxindex(o.items) > 100 - -select o -from Order o -where minelement(o.items) > 10000 - -select m -from Cat as m, Cat as kit -where kit in elements(m.kittens) - -// the above query can be re-written in jpql standard way: -select m -from Cat as m, Cat as kit -where kit member of m.kittens - -select p -from NameList l, Person p -where p.name = some elements(l.names) - -select cat -from Cat cat -where exists elements(cat.kittens) - -select p -from Player p -where 3 > all elements(p.scores) - -select show -from Show show -where 'fizard' in indices(show.acts) \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/collection_reference_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/collection_reference_example.txt deleted file mode 100644 index d9f1eaceb3..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/collection_reference_example.txt +++ /dev/null @@ -1,16 +0,0 @@ -select c -from Customer c - join c.orders o - join o.lineItems l - join l.product p -where o.status = 'pending' - and p.status = 'backorder' - -// alternate syntax -select c -from Customer c, - in(c.orders) o, - in(o.lineItems) l - join l.product p -where o.status = 'pending' - and p.status = 'backorder' \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/concat_op_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/concat_op_example.txt deleted file mode 100644 index 645f51dca8..0000000000 --- 
a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/concat_op_example.txt +++ /dev/null @@ -1,3 +0,0 @@ -select 'Mr. ' || c.name.first || ' ' || c.name.last -from Customer c -where c.gender = Gender.MALE \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/ctor_dynamic_instantiation_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/ctor_dynamic_instantiation_example.txt deleted file mode 100644 index c96df422fe..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/ctor_dynamic_instantiation_example.txt +++ /dev/null @@ -1,4 +0,0 @@ -select new Family( mother, mate, offspr ) -from DomesticCat as mother - join mother.mate as mate - left join mother.kittens as offspr \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/empty_collection_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/empty_collection_example.txt deleted file mode 100644 index f6af597a87..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/empty_collection_example.txt +++ /dev/null @@ -1,7 +0,0 @@ -select o -from Order o -where o.lineItems is empty - -select c -from Customer c -where c.pastDueBills is not empty \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/entity_type_exp_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/entity_type_exp_example.txt deleted file mode 100644 index 1f6c58e49c..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/entity_type_exp_example.txt +++ /dev/null @@ -1,8 +0,0 @@ -select p -from Payment p -where type(p) = CreditCardPayment - -select p -from Payment p -where type(p) = :aType - diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/group_by_illustration.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/group_by_illustration.txt deleted file mode 100644 index 598d20c5f8..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/group_by_illustration.txt +++ /dev/null @@ -1,10 +0,0 @@ -// retrieve the total for all orders -select sum( o.total ) -from Order o - -// retrieve the total of all orders -// *grouped by* customer -select c.id, sum( o.total ) -from Order o - inner join o.customer c -group by c.id \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/having_illustration.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/having_illustration.txt deleted file mode 100644 index 60f39b3c4a..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/having_illustration.txt +++ /dev/null @@ -1,5 +0,0 @@ -select c.id, sum( o.total ) -from Order o - inner join o.customer c -group by c.id -having sum( o.total ) > 10000.00 \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/index_operator_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/index_operator_example.txt deleted file mode 100644 index 06f3326c03..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/index_operator_example.txt +++ /dev/null @@ -1,22 
+0,0 @@ -select o -from Order o -where o.items[0].id = 1234 - -select p -from Person p, Calendar c -where c.holidays['national day'] = p.birthDay - and p.nationality.calendar = c - -select i -from Item i, Order o -where o.items[ o.deliveredItemIndices[0] ] = i - and o.id = 11 - -select i -from Item i, Order o -where o.items[ maxindex(o.items) ] = i - and o.id = 11 - -select i -from Item i, Order o -where o.items[ size(o.items) - 1 ] = i \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_explicit_inner.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_explicit_inner.txt deleted file mode 100644 index a809d3fec9..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_explicit_inner.txt +++ /dev/null @@ -1,10 +0,0 @@ -select c -from Customer c - join c.chiefExecutive ceo -where ceo.age < 25 - -// same query but specifying join type as 'inner' explicitly -select c -from Customer c - inner join c.chiefExecutive ceo -where ceo.age < 25 diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_explicit_outer.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_explicit_outer.txt deleted file mode 100644 index 2e8ed33854..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_explicit_outer.txt +++ /dev/null @@ -1,15 +0,0 @@ -// get customers who have orders worth more than $5000 -// or who are in "preferred" status -select distinct c -from Customer c - left join c.orders o -where o.value > 5000.00 - or c.status = 'preferred' - -// functionally the same query but using the -// 'left outer' phrase -select distinct c -from Customer c - left outer join c.orders o -where o.value > 5000.00 - or c.status = 'preferred' diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_fetch.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_fetch.txt deleted file mode 100644 index a882f28816..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_fetch.txt +++ /dev/null @@ -1,3 +0,0 @@ -select c -from Customer c - left join fetch c.orders o \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_implicit.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_implicit.txt deleted file mode 100644 index 4185a64f72..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_implicit.txt +++ /dev/null @@ -1,9 +0,0 @@ -select c -from Customer c -where c.chiefExecutive.age < 25 - -// same as -select c -from Customer c - inner join c.chiefExecutive ceo -where ceo.age < 25 diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_implicit_reused.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_implicit_reused.txt deleted file mode 100644 index 18db817e5e..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_implicit_reused.txt +++ /dev/null @@ -1,19 +0,0 @@ -select c -from Customer c -where c.chiefExecutive.age < 25 - and c.chiefExecutive.address.state = 'TX' - -// same as -select c -from Customer c - inner 
join c.chiefExecutive ceo -where ceo.age < 25 - and ceo.address.state = 'TX' - -// same as -select c -from Customer c - inner join c.chiefExecutive ceo - inner join ceo.address a -where ceo.age < 25 - and a.state = 'TX' diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_with.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_with.txt deleted file mode 100644 index b0441e6216..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/join_example_with.txt +++ /dev/null @@ -1,4 +0,0 @@ -select distinct c -from Customer c - left join c.orders o - with o.value > 5000.00 \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/jpql_positional_parameter_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/jpql_positional_parameter_example.txt deleted file mode 100644 index ca276e93c5..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/jpql_positional_parameter_example.txt +++ /dev/null @@ -1,16 +0,0 @@ -String queryString = - "select c " + - "from Customer c " + - "where c.name = ?1 " + - " or c.nickName = ?1"; - -// HQL - as you can see, handled just like named parameters -// in terms of API -List customers = session.createQuery( queryString ) - .setParameter( "1", theNameOfInterest ) - .list(); - -// JPQL -List customers = entityManager.createQuery( queryString, Customer.class ) - .setParameter( 1, theNameOfInterest ) - .getResultList(); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/list_dynamic_instantiation_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/list_dynamic_instantiation_example.txt deleted file mode 100644 index 1678b5ad2d..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/list_dynamic_instantiation_example.txt +++ /dev/null @@ -1,4 +0,0 @@ -select new list(mother, offspr, mate.name) -from DomesticCat as mother - inner join mother.mate as mate - left outer join mother.kittens as offspr \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/locate_bnf.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/locate_bnf.txt deleted file mode 100644 index c8e0f02d8f..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/locate_bnf.txt +++ /dev/null @@ -1 +0,0 @@ -locate( string_expression, string_expression[, numeric_expression] ) \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/map_dynamic_instantiation_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/map_dynamic_instantiation_example.txt deleted file mode 100644 index ed9bcfc972..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/map_dynamic_instantiation_example.txt +++ /dev/null @@ -1,7 +0,0 @@ -select new map( mother as mother, offspr as offspr, mate as mate ) -from DomesticCat as mother - inner join mother.mate as mate - left outer join mother.kittens as offspr - -select new map( max(c.bodyWeight) as max, min(c.bodyWeight) as min, count(*) as n ) -from Cat c \ No newline at end of file diff --git 
a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/member_of_collection_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/member_of_collection_example.txt deleted file mode 100644 index e0969a3d0b..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/member_of_collection_example.txt +++ /dev/null @@ -1,8 +0,0 @@ -select p -from Person p -where 'John' member of p.nickNames - -select p -from Person p -where p.name.first = 'Joseph' - and 'Joey' not member of p.nickNames diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/multiple_root_entity_ref_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/multiple_root_entity_ref_example.txt deleted file mode 100644 index 173c8b3879..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/multiple_root_entity_ref_example.txt +++ /dev/null @@ -1,5 +0,0 @@ -// build a product between customers and active mailing campaigns so we can spam! -select distinct cust, camp -from Customer cust, Campaign camp -where camp.type = 'mail' - and current_timestamp() between camp.activeRange.start and camp.activeRange.end \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/multiple_root_entity_ref_example2.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/multiple_root_entity_ref_example2.txt deleted file mode 100644 index 5e2231e433..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/multiple_root_entity_ref_example2.txt +++ /dev/null @@ -1,5 +0,0 @@ -// retrieve all customers with headquarters in the same state as Acme's headquarters -select distinct c1 -from Customer c1, Customer c2 -where c1.address.state = c2.address.state - and c2.name = 'Acme' \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/named_parameter_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/named_parameter_example.txt deleted file mode 100644 index 0ca77a503d..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/named_parameter_example.txt +++ /dev/null @@ -1,15 +0,0 @@ -String queryString = - "select c " + - "from Customer c " + - "where c.name = :name " + - " or c.nickName = :name"; - -// HQL -List customers = session.createQuery( queryString ) - .setParameter( "name", theNameOfInterest ) - .list(); - -// JPQL -List customers = entityManager.createQuery( queryString, Customer.class ) - .setParameter( "name", theNameOfInterest ) - .getResultList(); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/nullif_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/nullif_example.txt deleted file mode 100644 index 6610b5764c..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/nullif_example.txt +++ /dev/null @@ -1,8 +0,0 @@ -// return customers who have changed their last name -select nullif( c.previousName.last, c.name.last ) -from Customer c - -// equivalent CASE expression -select case when c.previousName.last = c.name.last then null - else c.previousName.last end -from Customer c diff --git 
a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/numeric_literals_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/numeric_literals_example.txt deleted file mode 100644 index 1732edfec2..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/numeric_literals_example.txt +++ /dev/null @@ -1,29 +0,0 @@ -// simple integer literal -select o -from Order o -where o.referenceNumber = 123 - -// simple integer literal, typed as a long -select o -from Order o -where o.referenceNumber = 123L - -// decimal notation -select o -from Order o -where o.total > 5000.00 - -// decimal notation, typed as a float -select o -from Order o -where o.total > 5000.00F - -// scientific notation -select o -from Order o -where o.total > 5e+3 - -// scientific notation, typed as a float -select o -from Order o -where o.total > 5e+3F \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/order_by_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/order_by_example.txt deleted file mode 100644 index 71b6024fee..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/order_by_example.txt +++ /dev/null @@ -1,10 +0,0 @@ -// legal because p.name is implicitly part of p -select p -from Person p -order by p.name - -select c.id, sum( o.total ) as t -from Order o - inner join o.customer c -group by c.id -order by t diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_between_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_between_example.txt deleted file mode 100644 index f445b48d46..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_between_example.txt +++ /dev/null @@ -1,19 +0,0 @@ -select p -from Customer c - join c.paymentHistory p -where c.id = 123 - and index(p) between 0 and 9 - -select c -from Customer c -where c.president.dateOfBirth - between {d '1945-01-01'} - and {d '1965-01-01'} - -select o -from Order o -where o.total between 500 and 5000 - -select p -from Person p -where p.name between 'A' and 'E' \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_comparison_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_comparison_example.txt deleted file mode 100644 index ff52fa7932..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_comparison_example.txt +++ /dev/null @@ -1,34 +0,0 @@ -// numeric comparison -select c -from Customer c -where c.chiefExecutive.age < 30 - -// string comparison -select c -from Customer c -where c.name = 'Acme' - -// datetime comparison -select c -from Customer c -where c.inceptionDate < {d '2000-01-01'} - -// enum comparison -select c -from Customer c -where c.chiefExecutive.gender = com.acme.Gender.MALE - -// boolean comparison -select c -from Customer c -where c.sendEmail = true - -// entity type comparison -select p -from Payment p -where type(p) = WireTransferPayment - -// entity value comparison -select c -from Customer c -where c.chiefExecutive = c.chiefTechnologist \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_comparison_example_using_all.txt 
b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_comparison_example_using_all.txt deleted file mode 100644 index 2c9aa10ed3..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_comparison_example_using_all.txt +++ /dev/null @@ -1,9 +0,0 @@ -// select all players that scored at least 3 points -// in every game. -select p -from Player p -where 3 > all ( - select spg.points - from StatsPerGame spg - where spg.player = p -) \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_in_bnf.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_in_bnf.txt deleted file mode 100644 index a8f3daec41..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_in_bnf.txt +++ /dev/null @@ -1,8 +0,0 @@ -in_expression ::= single_valued_expression - [NOT] IN single_valued_list - -single_valued_list ::= constructor_expression | - (subquery) | - collection_valued_input_parameter - -constructor_expression ::= (expression[, expression]*) \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_in_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_in_example.txt deleted file mode 100644 index 8e0f3b97f5..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_in_example.txt +++ /dev/null @@ -1,36 +0,0 @@ -select p -from Payment p -where type(p) in (CreditCardPayment, WireTransferPayment) - -select c -from Customer c -where c.hqAddress.state in ('TX', 'OK', 'LA', 'NM') - -select c -from Customer c -where c.hqAddress.state in ? - -select c -from Customer c -where c.hqAddress.state in ( - select dm.state - from DeliveryMetadata dm - where dm.salesTax is not null -) - -// Not JPQL compliant! -select c -from Customer c -where c.name in ( - ('John','Doe'), - ('Jane','Doe') -) - -// Not JPQL compliant! -select c -from Customer c -where c.chiefExecutive in ( - select p - from Person p - where ... 
-) \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_like_bnf.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_like_bnf.txt deleted file mode 100644 index abac134a49..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_like_bnf.txt +++ /dev/null @@ -1,4 +0,0 @@ -like_expression ::= - string_expression - [NOT] LIKE pattern_value - [ESCAPE escape_character] diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_like_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_like_example.txt deleted file mode 100644 index 348c21bae3..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_like_example.txt +++ /dev/null @@ -1,13 +0,0 @@ -select p -from Person p -where p.name like '%Schmidt' - -select p -from Person p -where p.name not like 'Jingleheimmer%' - -// find any with name starting with "sp_" -select sp -from StoredProcedureMetadata sp -where sp.name like 'sp|_%' escape '|' - diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_nullness_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_nullness_example.txt deleted file mode 100644 index bf36fe0230..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/predicate_nullness_example.txt +++ /dev/null @@ -1,9 +0,0 @@ -// select everyone with an associated address -select p -from Person p -where p.address is not null - -// select everyone without an associated address -select p -from Person p -where p.address is null \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/qualified_path_expressions_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/qualified_path_expressions_example.txt deleted file mode 100644 index 5bb7eab416..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/qualified_path_expressions_example.txt +++ /dev/null @@ -1,33 +0,0 @@ -// Product.images is a Map : key = a name, value = file path - -// select all the image file paths (the map value) for Product#123 -select i -from Product p - join p.images i -where p.id = 123 - -// same as above -select value(i) -from Product p - join p.images i -where p.id = 123 - -// select all the image names (the map key) for Product#123 -select key(i) -from Product p - join p.images i -where p.id = 123 - -// select all the image names and file paths (the 'Map.Entry') for Product#123 -select entry(i) -from Product p - join p.images i -where p.id = 123 - -// total the value of the initial line items for all orders for a customer -select sum( li.amount ) -from Customer c - join c.orders o - join o.lineItems li -where c.id = 123 - and index(li) = 1 \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/root_entity_ref_bnf.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/root_entity_ref_bnf.txt deleted file mode 100644 index 9e5b71cca0..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/root_entity_ref_bnf.txt +++ /dev/null @@ -1 +0,0 @@ -root_entity_reference ::= entity_name [AS] identification_variable \ No newline at end 
of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/searched_case_bnf.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/searched_case_bnf.txt deleted file mode 100644 index 211fa3be08..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/searched_case_bnf.txt +++ /dev/null @@ -1 +0,0 @@ -CASE [ WHEN {test_conditional} THEN {match_result} ]* ELSE {miss_result} END \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/searched_case_exp_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/searched_case_exp_example.txt deleted file mode 100644 index 34d80d9cc1..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/searched_case_exp_example.txt +++ /dev/null @@ -1,9 +0,0 @@ -select case when c.name.first is not null then c.name.first - when c.nickName is not null then c.nickName - else '' end -from Customer c - -// Again, the abbreviated form coalesce can handle this a -// little more succinctly -select coalesce( c.name.first, c.nickName, '' ) -from Customer c diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simple_case_bnf.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simple_case_bnf.txt deleted file mode 100644 index f4b03b45fa..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simple_case_bnf.txt +++ /dev/null @@ -1 +0,0 @@ -CASE {operand} WHEN {test_value} THEN {match_result} ELSE {miss_result} END \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simple_case_exp_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simple_case_exp_example.txt deleted file mode 100644 index f09820d506..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simple_case_exp_example.txt +++ /dev/null @@ -1,16 +0,0 @@ -select case c.nickName when null then '' else c.nickName end -from Customer c - -// This NULL checking is such a common case that most dbs -// define an abbreviated CASE form. 
For example: -select nvl( c.nickName, '' ) -from Customer c - -// or: -select isnull( c.nickName, '' ) -from Customer c - -// the standard coalesce abbreviated form can be used -// to achieve the same result: -select coalesce( c.nickName, '' ) -from Customer c diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simplest_query.java b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simplest_query.java deleted file mode 100644 index 10b1a0b4bf..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simplest_query.java +++ /dev/null @@ -1 +0,0 @@ -select c from com.acme.Cat c \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simplest_query2.java b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simplest_query2.java deleted file mode 100644 index 4b56392fad..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/simplest_query2.java +++ /dev/null @@ -1 +0,0 @@ -select c from Cat c \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_delete_bnf.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_delete_bnf.txt deleted file mode 100644 index 087d1ed4da..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_delete_bnf.txt +++ /dev/null @@ -1,3 +0,0 @@ -delete_statement ::= delete_clause [where_clause] - -delete_clause ::= DELETE FROM entity_name [[AS] identification_variable] diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_insert_bnf.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_insert_bnf.txt deleted file mode 100644 index 95bb029cce..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_insert_bnf.txt +++ /dev/null @@ -1,5 +0,0 @@ -insert_statement ::= insert_clause select_statement - -insert_clause ::= INSERT INTO entity_name (attribute_list) - -attribute_list ::= state_field[, state_field ]* \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_insert_example_named_id.java b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_insert_example_named_id.java deleted file mode 100644 index 1a27846de6..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_insert_example_named_id.java +++ /dev/null @@ -1,2 +0,0 @@ -String hqlInsert = "insert into DelinquentAccount (id, name) select c.id, c.name from Customer c where ..."; -int createdEntities = s.createQuery( hqlInsert ).executeUpdate(); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_select_bnf.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_select_bnf.txt deleted file mode 100644 index e4aac0d1c3..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_select_bnf.txt +++ /dev/null @@ -1,7 +0,0 @@ -select_statement :: = - [select_clause] - from_clause - [where_clause] - [groupby_clause] - [having_clause] - [orderby_clause] \ No newline at end of file diff --git 
a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_bnf.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_bnf.txt deleted file mode 100644 index 6ffbb48ed2..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_bnf.txt +++ /dev/null @@ -1,11 +0,0 @@ -update_statement ::= update_clause [where_clause] - -update_clause ::= UPDATE entity_name [[AS] identification_variable] - SET update_item {, update_item}* - -update_item ::= [identification_variable.]{state_field | single_valued_object_field} - = new_value - -new_value ::= scalar_expression | - simple_entity_expression | - NULL diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_example_hql.java b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_example_hql.java deleted file mode 100644 index 779ea5121f..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_example_hql.java +++ /dev/null @@ -1,8 +0,0 @@ -String hqlUpdate = - "update Customer c " + - "set c.name = :newName " + - "where c.name = :oldName"; -int updatedEntities = session.createQuery( hqlUpdate ) - .setString( "newName", newName ) - .setString( "oldName", oldName ) - .executeUpdate(); diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_example_hql_versioned.java b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_example_hql_versioned.java deleted file mode 100644 index aece8f8595..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_example_hql_versioned.java +++ /dev/null @@ -1,8 +0,0 @@ -String hqlVersionedUpdate = - "update versioned Customer c " + - "set c.name = :newName " + - "where c.name = :oldName"; -int updatedEntities = s.createQuery( hqlUpdate ) - .setString( "newName", newName ) - .setString( "oldName", oldName ) - .executeUpdate(); \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_example_jpql.java b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_example_jpql.java deleted file mode 100644 index 99a5012548..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/statement_update_example_jpql.java +++ /dev/null @@ -1,8 +0,0 @@ -String jpqlUpdate = - "update Customer c " + - "set c.name = :newName " + - "where c.name = :oldName"; -int updatedEntities = entityManager.createQuery( jpqlUpdate ) - .setString( "newName", newName ) - .setString( "oldName", oldName ) - .executeUpdate(); diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/string_literals_example.txt b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/string_literals_example.txt deleted file mode 100644 index d2f95cb50f..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/string_literals_example.txt +++ /dev/null @@ -1,8 +0,0 @@ -select c -from Customer c -where c.name = 'Acme' - -select c -from Customer c -where c.name = 'Acme''s Pretzel Logic' - diff --git a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/substring_bnf.txt 
b/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/substring_bnf.txt deleted file mode 100644 index 81302e9a4b..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/query_ql/extras/substring_bnf.txt +++ /dev/null @@ -1 +0,0 @@ -substring( string_expression, numeric_expression [, numeric_expression] ) \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/transactions/Transactions.xml b/documentation/src/main/docbook/integration/en-US/chapters/transactions/Transactions.xml deleted file mode 100644 index e12e40dd91..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/transactions/Transactions.xml +++ /dev/null @@ -1,540 +0,0 @@ - - - - - - Transactions and concurrency control - -
- Defining Transaction - - It is important to understand that the term transaction has many different yet related meanings in regard - to persistence and Object/Relational Mapping. In most use-cases these definitions align, but that is not - always the case. - - - - - Might refer to the physical transaction with the database. - - - - - Might refer to the logical notion of a transaction as related to a persistence context. - - - - - Might refer to the application notion of a Unit-of-Work, as defined by the archetypal pattern. - - - - - - This documentation largely treats the physical and logical notions of transaction as one and the same. - - -
- -
- Physical Transactions - - Hibernate uses the JDBC API for persistence. In the world of Java there are two well-defined mechanisms - for dealing with transactions in JDBC: JDBC itself and JTA. Hibernate supports both mechanisms for - integrating with transactions and allowing applications to manage physical transactions. - - The first concept in understanding Hibernate transaction support is the - org.hibernate.engine.transaction.spi.TransactionFactory interface, which - serves two main functions: - - - - - It allows Hibernate to understand the transaction semantics of the environment. Are we operating - in a JTA environment? Is a physical transaction already currently active? etc. - - - - - It acts as a factory for org.hibernate.Transaction instances which - are used to allow applications to manage and check the state of transactions. - org.hibernate.Transaction is Hibernate's notion of a logical - transaction. JPA has a similar notion in the - javax.persistence.EntityTransaction interface. - - - - - - - javax.persistence.EntityTransaction is only available when using - resource-local transactions. Hibernate allows access to - org.hibernate.Transaction regardless of environment. - - - - - org.hibernate.engine.transaction.spi.TransactionFactory is a standard - Hibernate service. See for details. - -
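To make the relationship between the logical org.hibernate.Transaction and the underlying physical transaction concrete, here is a minimal usage sketch. It assumes a SessionFactory has already been configured and built during bootstrap; the actual persistence work is elided.

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class TransactionUsageSketch {

    private final SessionFactory sessionFactory;

    public TransactionUsageSketch(SessionFactory sessionFactory) {
        // the SessionFactory is assumed to have been configured and built elsewhere
        this.sessionFactory = sessionFactory;
    }

    public void doSomeWork() {
        Session session = sessionFactory.openSession();
        Transaction tx = null;
        try {
            // begin the logical transaction; the TransactionFactory in use decides
            // whether this maps onto a JDBC or a JTA physical transaction
            tx = session.beginTransaction();

            // ... perform persistence operations with the session ...

            tx.commit();
        }
        catch (RuntimeException e) {
            if ( tx != null ) {
                tx.rollback();
            }
            throw e;
        }
        finally {
            session.close();
        }
    }
}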
- Physical Transactions - JDBC - - JDBC-based transaction management leverages the JDBC defined methods - java.sql.Connection.commit() and - java.sql.Connection.rollback() (JDBC does not define an explicit - method of beginning a transaction). In Hibernate, this approach is represented by the - org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory class. - -
- -
- Physical Transactions - JTA - - A JTA-based transaction approach that leverages the - javax.transaction.UserTransaction interface, as obtained from the - org.hibernate.engine.transaction.jta.platform.spi.JtaPlatform API. This approach - is represented by the - org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory class. - - See for information on integration with the underlying JTA - system. -
- - -
- Physical Transactions - CMT - - Another JTA-based transaction approach, this one leveraging the JTA - javax.transaction.TransactionManager interface, as obtained from the - org.hibernate.engine.transaction.jta.platform.spi.JtaPlatform API. This approach - is represented by the - org.hibernate.engine.transaction.internal.jta.CMTTransactionFactory class. In - an actual JEE CMT environment, access to the - javax.transaction.UserTransaction is restricted. - - - The term CMT is potentially misleading here. The important point is simply that the physical JTA - transactions are being managed by something other than the Hibernate transaction API. - - - - See for information on integration with the underlying JTA - system. -
- -
- Physical Transactions - Custom - - It is also possible to plug in a custom transaction approach by implementing the - org.hibernate.engine.transaction.spi.TransactionFactory contract. - The default service initiator has built-in support for understanding custom transaction approaches - via the hibernate.transaction.factory_class setting, which can name either (as illustrated in the sketch below): - - - - - An instance of org.hibernate.engine.transaction.spi.TransactionFactory - to use. - - - - - The name of a class implementing - org.hibernate.engine.transaction.spi.TransactionFactory - to use. The expectation is that the implementation class has a no-argument constructor. - - - -
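As a rough illustration of the configuration involved, the snippet below names a transaction factory at bootstrap via the hibernate.transaction.factory_class setting. The com.example.MyTransactionFactory class is purely hypothetical; in practice the value is usually one of the built-in factories named in the previous sections.

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class TransactionFactoryBootstrapSketch {

    public SessionFactory build() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml as usual

        // Built-in options include:
        //   org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory
        //   org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory
        //   org.hibernate.engine.transaction.internal.jta.CMTTransactionFactory
        //
        // A custom approach names its own implementation instead.  The class below is
        // hypothetical: it must implement TransactionFactory and have a no-argument constructor.
        cfg.setProperty(
                "hibernate.transaction.factory_class",
                "com.example.MyTransactionFactory"
        );

        return cfg.buildSessionFactory();
    }
}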
- -
- Physical Transactions - Legacy - - During development of 4.0, most of the classes named here were moved to new packages. To - facilitate upgrading, Hibernate will also recognize the legacy names for a short period of time. - - - - org.hibernate.transaction.JDBCTransactionFactory is mapped to - org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory - - - - - org.hibernate.transaction.JTATransactionFactory is mapped to - org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory - - - - - org.hibernate.transaction.CMTTransactionFactory is mapped to - org.hibernate.engine.transaction.internal.jta.CMTTransactionFactory - - - -
- -
- - -
- Hibernate Transaction Usage - - Hibernate uses JDBC connections and JTA resources directly, without adding any additional locking behavior. - It is important for you to become familiar with the JDBC, ANSI SQL, and transaction isolation specifics - of your database management system. - - Hibernate does not lock objects in memory. The behavior defined by the isolation level of your database - transactions does not change when you use Hibernate. The Hibernate - org.hibernate.Session acts as a transaction-scoped cache providing - repeatable reads for lookup by identifier and queries that result in loading entities. - - - - To reduce lock contention in the database, the physical database transaction needs to be as short as - possible. Long database transactions prevent your application from scaling to a highly concurrent load. - Do not hold a database transaction open during end-user-level work, but open it after the end-user-level - work is finished. This concept is referred to as transactional write-behind. - -
- -
- Transactional patterns (and anti-patterns) - -
- Session-per-operation anti-pattern - - This is an anti-pattern of opening and closing a Session for each database call - in a single thread. It is also an anti-pattern in terms of database transactions. Group your database - calls into a planned sequence. In the same way, do not auto-commit after every SQL statement in your - application. Hibernate disables, or expects the application server to disable, auto-commit mode - immediately. Database transactions are never optional. All communication with a database must - be encapsulated by a transaction. Avoid auto-commit behavior for reading data, because many small - transactions are unlikely to perform better than one clearly-defined unit of work, and are more - difficult to maintain and extend. - - - - Using auto-commit does not circumvent database transactions. Instead, when in auto-commit mode, - JDBC drivers simply perform each call in an implicit transaction call. It is as if your application - called commit after each and every JDBC call. - - -
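To make the contrast concrete, here is a rough sketch assuming a simple Customer entity with an identifier and a name property. The first method opens a Session and transaction per database call; the second groups the same work into one planned unit of work.

// Anti-pattern: a new Session and transaction for every single database call
public void renameCustomerOnePerCall(SessionFactory sessionFactory, Long customerId, String newName) {
    Session first = sessionFactory.openSession();
    first.beginTransaction();
    Customer customer = (Customer) first.get( Customer.class, customerId );
    first.getTransaction().commit();
    first.close();

    Session second = sessionFactory.openSession();
    second.beginTransaction();
    customer.setName( newName );
    second.update( customer );
    second.getTransaction().commit();
    second.close();
}

// Preferred: group the calls into a single, clearly-defined unit of work
public void renameCustomer(SessionFactory sessionFactory, Long customerId, String newName) {
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    try {
        Customer customer = (Customer) session.get( Customer.class, customerId );
        customer.setName( newName );
        tx.commit();    // one flush, one commit, one database transaction
    }
    catch (RuntimeException e) {
        tx.rollback();
        throw e;
    }
    finally {
        session.close();
    }
}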
- -
- Session-per-request pattern - - This is the most common transaction pattern. The term request here relates to the concept of a system - that reacts to a series of requests from a client/user. Web applications are a prime example of this - type of system, though certainly not the only one. At the beginning of handling such a request, the - application opens a Hibernate Session, starts a transaction, performs - all data related work, ends the transaction and closes the Session. - The crux of the pattern is the one-to-one relationship between the transaction and the - Session. - - - - Within this pattern there is a common technique of defining a current session to - simplify the need of passing this Session around to all the application - components that may need access to it. Hibernate provides support for this technique through the - getCurrentSession method of the SessionFactory. - The concept of a "current" session has to have a scope that defines the bounds in which the notion - of "current" is valid. This is purpose of the - org.hibernate.context.spi.CurrentSessionContext contract. There are 2 - reliable defining scopes: - - - - - First is a JTA transaction because it allows a callback hook to know when it is ending which - gives Hibernate a chance to close the Session and clean up. - This is represented by the - org.hibernate.context.internal.JTASessionContext implementation of - the org.hibernate.context.spi.CurrentSessionContext contract. - Using this implementation, a Session will be opened the first - time getCurrentSession is called within that transaction. - - - - - Secondly is this application request cycle itself. This is best represented with the - org.hibernate.context.internal.ManagedSessionContext implementation of - the org.hibernate.context.spi.CurrentSessionContext contract. - Here an external component is responsible for managing the lifecycle and scoping of a "current" - session. At the start of such a scope, ManagedSessionContext's - bind method is called passing in the - Session. At the end, its unbind - method is called. - - - Some common examples of such "external components" include: - - - - - javax.servlet.Filter implementation - - - - - AOP interceptor with a pointcut on the service methods - - - - - A proxy/interception container - - - - - - - - The getCurrentSession() method has one downside in a JTA environment. If - you use it, after_statement connection release mode is also used by default. Due to a limitation of - the JTA specification, Hibernate cannot automatically clean up any unclosed - ScrollableResults or Iterator - instances returned by scroll() or iterate(). - Release the underlying database cursor by calling ScrollableResults.close() - or Hibernate.close(Iterator) explicitly from a - finally block. - - -
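As an illustration of the second scope, the following is a minimal servlet filter sketch built around ManagedSessionContext. It assumes hibernate.current_session_context_class has been set to managed and that the application stores its SessionFactory in the servlet context at startup; the filter name and attribute key are placeholders.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.context.internal.ManagedSessionContext;

public class CurrentSessionFilter implements Filter {

    private SessionFactory sessionFactory;

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // obtain the SessionFactory built at application startup (application specific)
        sessionFactory = (SessionFactory) filterConfig.getServletContext().getAttribute( "sessionFactory" );
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        Session session = sessionFactory.openSession();
        try {
            // make this Session the "current" one for the duration of the request
            ManagedSessionContext.bind( session );
            session.beginTransaction();

            // downstream components simply call sessionFactory.getCurrentSession()
            chain.doFilter( request, response );

            session.getTransaction().commit();
        }
        catch (RuntimeException e) {
            session.getTransaction().rollback();
            throw e;
        }
        finally {
            ManagedSessionContext.unbind( sessionFactory );
            session.close();
        }
    }

    @Override
    public void destroy() {
    }
}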
- -
- Conversations - - The session-per-request pattern is not the only valid way of designing units of work. - Many business processes require a whole series of interactions with the user that are interleaved with - database accesses. In web and enterprise applications, it is not acceptable for a database transaction - to span a user interaction. Consider the following example: - - - An example of a long-running conversation - - - The first screen of a dialog opens. The data seen by the user is loaded in a particular - Session and database transaction. The user is free to modify the objects. - - - - - The user uses a UI element to save their work after five minutes of editing. The modifications - are made persistent. The user also expects to have exclusive access to the data during the edit - session. - - - - - - Even though we have multiple databases access here, from the point of view of the user, this series of - steps represents a single unit of work. There are many ways to implement this in your application. - - - - A first naive implementation might keep the Session and database transaction open - while the user is editing, using database-level locks to prevent other users from modifying the same - data and to guarantee isolation and atomicity. This is an anti-pattern, because lock contention is a - bottleneck which will prevent scalability in the future. - - - Several database transactions are used to implement the conversation. In this case, maintaining - isolation of business processes becomes the partial responsibility of the application tier. A single - conversation usually spans several database transactions. These multiple database accesses can only - be atomic as a whole if only one of these database transactions (typically the last one) stores the - updated data. All others only read data. A common way to receive this data is through a wizard-style - dialog spanning several request/response cycles. Hibernate includes some features which make this easy - to implement. - - - - - - - - - Automatic Versioning - - - - - Hibernate can perform automatic optimistic concurrency control for you. It can - automatically detect if a concurrent modification occurred during user think time. - Check for this at the end of the conversation. - - - - - - - Detached Objects - - - - - If you decide to use the session-per-request pattern, all loaded instances will be - in the detached state during user think time. Hibernate allows you to reattach the - objects and persist the modifications. The pattern is called - session-per-request-with-detached-objects. Automatic versioning is used to isolate - concurrent modifications. - - - - - - - Extended Session - - - - - The Hibernate Session can be disconnected from the - underlying JDBC connection after the database transaction has been committed and - reconnected when a new client request occurs. This pattern is known as - session-per-conversation and makes even reattachment unnecessary. Automatic - versioning is used to isolate concurrent modifications and the - Session will not be allowed to flush automatically, - only explicitly. - - - - - - - - - Session-per-request-with-detached-objects and session-per-conversation - each have advantages and disadvantages. - -
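A rough sketch of the session-per-request-with-detached-objects variant follows. It assumes a versioned Item entity (an attribute mapped as a version) so that Hibernate's automatic optimistic checks can detect a conflicting edit when the detached instance is reattached; the entity and attribute names are only illustrative.

// request 1: load the entity, then close the Session (the instance becomes detached)
Session first = sessionFactory.openSession();
first.beginTransaction();
Item item = (Item) first.get( Item.class, itemId );
first.getTransaction().commit();
first.close();

// ... user "think time": the detached instance is edited outside of any Session ...
item.setDescription( newDescription );

// request 2: reattach the detached instance and persist the changes
Session second = sessionFactory.openSession();
second.beginTransaction();
try {
    second.update( item );                  // reattach the detached instance
    second.getTransaction().commit();       // version checked at flush/commit
}
catch (StaleObjectStateException e) {
    // another conversation changed the row in the meantime; the version check failed
    second.getTransaction().rollback();
    // notify the user and restart the conversation
}
finally {
    second.close();
}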
- -
- Session-per-application - - Discussion coming soon.. - -
-
- -
- Object identity - - An application can concurrently access the same persistent state (database row) in two different Sessions. - However, an instance of a persistent class is never shared between two - Session instances. Two different notions of identity exist and come into - play here: Database identity and JVM identity. - - Database identity - - - - JVM identity - - - - For objects attached to a particular Session, the two notions are - equivalent, and JVM identity for database identity is guaranteed by Hibernate. The application might - concurrently access a business object with the same identity in two different sessions, but the two - instances are actually different, in terms of JVM identity. Conflicts are resolved using an optimistic - approach and automatic versioning at flush/commit time. - - This approach places responsibility for concurrency on Hibernate and the database. It also provides the - best scalability, since expensive locking is not needed to guarantee identity in single-threaded units - of work. The application does not need to synchronize on any business object, as long as it maintains - a single thread per Session. While not recommended, within a - Session the application could safely use the == - operator to compare objects. - - - However, an application that uses the == operator outside of a - Session - may introduce problems. If you put two detached instances into the same Set, they might - use the same database identity, which means they represent the same row in the database. They would not be - guaranteed to have the same JVM identity if they are in a detached state. Override the - equals and hashCode methods in persistent classes, so that - they have their own notion of object equality. Never use the database identifier to implement equality. Instead, - use a business key that is a combination of unique, typically immutable, attributes. The database identifier - changes if a transient object is made persistent. If the transient instance, together with detached instances, - is held in a Set, changing the hash-code breaks the contract of the - Set. Attributes for business keys can be less stable than database primary keys. You only - need to guarantee stability as long as the objects are in the same Set. This is not a - Hibernate issue, but relates to Java's implementation of object identity and equality. - -
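For example, a persistent class might use a unique, rarely-changing attribute such as a username as its business key; the class and attribute names here are only illustrative.

public class User {

    private Long id;            // database identifier - never used for equality
    private String username;    // business key: unique and effectively immutable

    // constructors, getters and setters omitted

    @Override
    public boolean equals(Object other) {
        if ( this == other ) {
            return true;
        }
        if ( !( other instanceof User ) ) {
            return false;
        }
        User that = (User) other;
        return username.equals( that.username );
    }

    @Override
    public int hashCode() {
        return username.hashCode();
    }
}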
- - -
- Common issues - - - Both the session-per-user-session and session-per-application - anti-patterns are susceptible to the following issues. Some of the issues might also arise within the - recommended patterns, so ensure that you understand the implications before making a design decision: - - - - - - A Session is not thread-safe. Things that work concurrently, like - HTTP requests, session beans, or Swing workers, will cause race conditions if a - Session instance is shared. If you keep your Hibernate - Session in your - javax.servlet.http.HttpSession (this is discussed later in the - chapter), you should consider synchronizing access to your - HttpSession; otherwise, a user that clicks reload fast enough can use - the same Session in two concurrently running threads. - - - - - An exception thrown by Hibernate means you have to rollback your database transaction - and close the Session immediately (this is discussed in more detail - later in the chapter). If your Session is bound to the application, - you have to stop the application. Rolling back the database transaction does not put your business - objects back into the state they were at the start of the transaction. This means that the - database state and the business objects will be out of sync. Usually this is not a - problem, because exceptions are not recoverable and you will have to start over after - rollback anyway. - - - - - The Session caches every object that is in a persistent state - (watched and checked for changes by Hibernate). If you keep it open for a long time or simply load - too much data, it will grow endlessly until you get an OutOfMemoryException. One solution is to - call clear() and evict() to manage the - Session cache, but you should consider an alternate means of dealing - with large amounts of data such as a Stored Procedure. Java is simply not the right tool for these - kind of operations. Some solutions are shown in . Keeping a - Session open for the duration of a user session also means a higher - probability of stale data. - - - - -
- -
diff --git a/documentation/src/main/docbook/integration/en-US/chapters/transactions/extras/database-identity.java b/documentation/src/main/docbook/integration/en-US/chapters/transactions/extras/database-identity.java deleted file mode 100644 index 79e273346b..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/transactions/extras/database-identity.java +++ /dev/null @@ -1 +0,0 @@ -foo.getId().equals( bar.getId() ) \ No newline at end of file diff --git a/documentation/src/main/docbook/integration/en-US/chapters/transactions/extras/jvm-identity.java b/documentation/src/main/docbook/integration/en-US/chapters/transactions/extras/jvm-identity.java deleted file mode 100644 index 5e48a72052..0000000000 --- a/documentation/src/main/docbook/integration/en-US/chapters/transactions/extras/jvm-identity.java +++ /dev/null @@ -1 +0,0 @@ -foo==bar \ No newline at end of file diff --git a/documentation/src/main/docbook/manual/en-US/Hibernate_Manual.xml b/documentation/src/main/docbook/manual/en-US/Hibernate_Manual.xml index 3a1d003b33..c0fbda987b 100644 --- a/documentation/src/main/docbook/manual/en-US/Hibernate_Manual.xml +++ b/documentation/src/main/docbook/manual/en-US/Hibernate_Manual.xml @@ -12,7 +12,7 @@ - Hibernate Reference Manual + Hibernate User Guide Hibernate - Relational Persistence for Idiomatic Java &version; Hibernate ORM diff --git a/documentation/src/main/docbook/manual/en-US/Preface.xml b/documentation/src/main/docbook/manual/en-US/Preface.xml index 062a36192c..25797073bc 100644 --- a/documentation/src/main/docbook/manual/en-US/Preface.xml +++ b/documentation/src/main/docbook/manual/en-US/Preface.xml @@ -58,10 +58,6 @@ - - This documentation is intended as a reference manual. As such, it is very detailed and great when you - know what to look for. - If you are just getting started with using Hibernate you may want to start with the Hibernate Getting Started Guide available from the diff --git a/documentation/src/main/docbook/manual/en-US/chapters/bootstrap/Bootstrap.xml b/documentation/src/main/docbook/manual/en-US/chapters/bootstrap/Bootstrap.xml index 2108bafdfa..cdfafd306b 100644 --- a/documentation/src/main/docbook/manual/en-US/chapters/bootstrap/Bootstrap.xml +++ b/documentation/src/main/docbook/manual/en-US/chapters/bootstrap/Bootstrap.xml @@ -1,3 +1,5 @@ + + Bootstrap diff --git a/documentation/src/main/docbook/manual/en-US/chapters/transactions/Transactions.xml b/documentation/src/main/docbook/manual/en-US/chapters/transactions/Transactions.xml index 04805a9e19..5b20dcda70 100644 --- a/documentation/src/main/docbook/manual/en-US/chapters/transactions/Transactions.xml +++ b/documentation/src/main/docbook/manual/en-US/chapters/transactions/Transactions.xml @@ -8,7 +8,7 @@ --> + xmlns:xi="http://www.w3.org/2001/XInclude"> Transactions and concurrency control @@ -234,6 +234,39 @@ checks with the underling transaction system if needed, so care should be taken to minimize its use; it can have a big performance impact in certain JTA set ups. + + + Lets take a look at using the Transaction API in the various environments. + + + + Using Transaction API in JDBC + + + + + Using Transaction API in JTA (CMT) + + + + + Using Transaction API in JTA (BMT) + + + + + In the CMT case we really could have omitted all of the Transaction calls. But the point of + the examples was to show that the Transaction API really does insulate your code from the underlying + transaction mechanism. 
In fact, if you strip away the comments and the single configuration + setting supplied at bootstrap, the code is exactly the same in all 3 examples. In other words, + we could develop that code and drop it, as-is, in any of the 3 transaction environments. + + + + The Transaction API tries hard to make the experience consistent across all environments. To that end, + it generally defers to the JTA specification when there are differences (for example, automatically attempting + rollback on a failed commit). +
diff --git a/documentation/src/main/docbook/manual/en-US/chapters/transactions/extras/bmt.java b/documentation/src/main/docbook/manual/en-US/chapters/transactions/extras/bmt.java
new file mode 100644
index 0000000000..9b3c84d6b0
--- /dev/null
+++ b/documentation/src/main/docbook/manual/en-US/chapters/transactions/extras/bmt.java
@@ -0,0 +1,41 @@
+public void doSomeWork() {
+    StandardServiceRegistry ssr = new StandardServiceRegistryBuilder()
+            .applySetting( AvailableSettings.TRANSACTION_COORDINATOR_STRATEGY, "jta" )
+            ...;
+
+    // Note: depending on the JtaPlatform used and some optional settings,
+    // the underlying transactions here will be controlled through either
+    // the JTA TransactionManager or UserTransaction
+
+    SessionFactory sessionFactory = ...;
+
+    Session session = sessionFactory.openSession();
+    try {
+        // Assuming a JTA transaction is not already active,
+        // this calls the TM/UT begin method.  If a JTA
+        // transaction is already active, we remember that
+        // the Transaction associated with the Session did
+        // not "initiate" the JTA transaction and will later
+        // no-op the commit and rollback calls...
+        session.getTransaction().begin();
+
+        doTheWork();
+
+        // calls the TM/UT commit method, assuming we are the initiator.
+        session.getTransaction().commit();
+    }
+    catch (Exception e) {
+        // we may need to rollback depending on
+        // where the exception happened
+        if ( session.getTransaction().getStatus() == ACTIVE
+                || session.getTransaction().getStatus() == MARKED_ROLLBACK ) {
+            // calls the TM/UT rollback method, assuming we are the initiator;
+            // otherwise marks the JTA transaction for rollback only
+            session.getTransaction().rollback();
+        }
+        // handle the underlying error
+    }
+    finally {
+        session.close();
+    }
+}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/manual/en-US/chapters/transactions/extras/cmt.java b/documentation/src/main/docbook/manual/en-US/chapters/transactions/extras/cmt.java
new file mode 100644
index 0000000000..9a144dec84
--- /dev/null
+++ b/documentation/src/main/docbook/manual/en-US/chapters/transactions/extras/cmt.java
@@ -0,0 +1,38 @@
+public void doSomeWork() {
+    StandardServiceRegistry ssr = new StandardServiceRegistryBuilder()
+            .applySetting( AvailableSettings.TRANSACTION_COORDINATOR_STRATEGY, "jta" )
+            ...;
+
+    // Note: depending on the JtaPlatform used and some optional settings,
+    // the underlying transactions here will be controlled through either
+    // the JTA TransactionManager or UserTransaction
+
+    SessionFactory sessionFactory = ...;
+
+    Session session = sessionFactory.openSession();
+    try {
+        // Since we are in CMT, a JTA transaction would
+        // already have been started.  This call essentially
+        // no-ops
+        session.getTransaction().begin();
+
+        doTheWork();
+
+        // Since we did not start the transaction (CMT),
+        // we also will not end it.  This call essentially
+        // no-ops in terms of transaction handling.
+        session.getTransaction().commit();
+    }
+    catch (Exception e) {
+        // again, the rollback call here would no-op (aside from
+        // marking the underlying CMT transaction for rollback only).
+        if ( session.getTransaction().getStatus() == ACTIVE
+                || session.getTransaction().getStatus() == MARKED_ROLLBACK ) {
+            session.getTransaction().rollback();
+        }
+        // handle the underlying error
+    }
+    finally {
+        session.close();
+    }
+}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/manual/en-US/chapters/transactions/extras/jdbc.java b/documentation/src/main/docbook/manual/en-US/chapters/transactions/extras/jdbc.java
new file mode 100644
index 0000000000..9cf59f174e
--- /dev/null
+++ b/documentation/src/main/docbook/manual/en-US/chapters/transactions/extras/jdbc.java
@@ -0,0 +1,33 @@
+public void doSomeWork() {
+    StandardServiceRegistry ssr = new StandardServiceRegistryBuilder()
+            // "jdbc" is the default, but for explicitness
+            .applySetting( AvailableSettings.TRANSACTION_COORDINATOR_STRATEGY, "jdbc" )
+            ...;
+
+    SessionFactory sessionFactory = ...;
+
+    Session session = sessionFactory.openSession();
+    try {
+        // calls Connection#setAutoCommit(false) to
+        // signal start of transaction
+        session.getTransaction().begin();
+
+        doTheWork();
+
+        // calls Connection#commit(); if an error
+        // happens we attempt a rollback
+        session.getTransaction().commit();
+    }
+    catch (Exception e) {
+        // we may need to rollback depending on
+        // where the exception happened
+        if ( session.getTransaction().getStatus() == ACTIVE
+                || session.getTransaction().getStatus() == MARKED_ROLLBACK ) {
+            session.getTransaction().rollback();
+        }
+        // handle the underlying error
+    }
+    finally {
+        session.close();
+    }
+}
\ No newline at end of file
diff --git a/documentation/src/main/docbook/mapping/en-US/Hibernate_Mapping.xml b/documentation/src/main/docbook/mapping/en-US/Hibernate_Mapping.xml
index b134d0fa6e..7babd3a1b1 100644
--- a/documentation/src/main/docbook/mapping/en-US/Hibernate_Mapping.xml
+++ b/documentation/src/main/docbook/mapping/en-US/Hibernate_Mapping.xml
@@ -12,7 +12,7 @@
-    Hibernate Domain Model Mapping Manual
+    Hibernate Domain Model Mapping Guide
     Hibernate - Relational Persistence for Idiomatic Java
     &version;
     Hibernate ORM
diff --git a/hibernate-core/src/main/java/org/hibernate/cfg/AvailableSettings.java b/hibernate-core/src/main/java/org/hibernate/cfg/AvailableSettings.java
index 97efeb96ac..e67a9f6432 100644
--- a/hibernate-core/src/main/java/org/hibernate/cfg/AvailableSettings.java
+++ b/hibernate-core/src/main/java/org/hibernate/cfg/AvailableSettings.java
@@ -176,7 +176,7 @@ public interface AvailableSettings {
	 * Can be
	 * <ul>
	 *     <li>TransactionCoordinatorBuilder instance</li>
	 *     <li>TransactionCoordinatorBuilder implementation {@link Class} reference</li>
-	 *     <li>TransactionCoordinatorBuilder implementation class name (FQN)</li>
+	 *     <li>TransactionCoordinatorBuilder implementation class name (FQN) or short-name</li>
	 * </ul>
	 *
	 * @since 5.0