HHH-12892 - Fix spelling issues in the User Guide

This commit is contained in:
Vlad Mihalcea 2018-08-08 13:29:16 +03:00
parent c55f3def03
commit 6c5e172609
53 changed files with 419 additions and 418 deletions

View File

@ -11,7 +11,7 @@ It will also delve into the ways third-party integrators and applications can le
=== What is a Service?
A service provides a certain type of functionality, in a pluggable manner.
Specifically they are interfaces defining certain functionality and then implementations of those `Service` contract interfaces.
Specifically, they are interfaces defining certain functionality and then implementations of those `Service` contract interfaces.
The interface is known as the `Service` role; the implementation class is known as the `Service` implementation.
The pluggability comes from the fact that the `Service` implementation adheres to the contract defined by the interface of the `Service` role and that consumers of the `Service` program to the `Service` role, not the implementation.
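As a rough sketch of that split, assuming a made-up service named `GreetingService` (only `org.hibernate.service.Service` is a real Hibernate type here):

[source, java]
----
// The Service role: an interface extending the org.hibernate.service.Service marker.
public interface GreetingService extends org.hibernate.service.Service {
    String greet(String name);
}

// One possible Service implementation; consumers only ever program against the GreetingService role.
public class FriendlyGreetingService implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}
----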

View File

@ -3,7 +3,7 @@
[preface]
== Preface
Working with both Object-Oriented software and Relational Databases can be cumbersome and time consuming.
Working with both Object-Oriented software and Relational Databases can be cumbersome and time-consuming.
Development costs are significantly higher due to a paradigm mismatch between how data is represented in objects
versus relational databases. Hibernate is an Object/Relational Mapping (ORM) solution for Java environments. The
term Object/Relational Mapping refers to the technique of mapping data from an object model representation to

View File

@ -58,7 +58,7 @@ There are other ways to specify configuration properties, including:
* Place a file named hibernate.properties in a root directory of the classpath.
* Place a file named hibernate.properties in a root directory of the classpath.
* Pass an instance of java.util.Properties to `Configuration#setProperties`.
* Set System properties using java `-Dproperty=value`.
* Set System properties using Java `-Dproperty=value`.
* Include `<property/>` elements in `hibernate.cfg.xml`

View File

@ -4,7 +4,7 @@
This guide discusses the process of bootstrapping a Hibernate `org.hibernate.SessionFactory`. It also
discusses the ways in which applications and integrators can hook into and affect that process. This
bootstrapping process is defined in 2 distinct steps. The first step is the building of a ServiceRegistry
holding the services Hibernate will need at bootstrap- and run-time. The second step is the building of
holding the services Hibernate will need at bootstrap- and runtime. The second step is the building of
a Metadata object representing the mapping information for the application's model and its mapping to
the database.
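A minimal sketch of those two steps, assuming `hibernate.cfg.xml` is on the classpath and `Person` is one of the application's annotated entities, might look like this:

[source, java]
----
// Step 1: build the ServiceRegistry holding the bootstrap- and run-time services.
StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
        .configure() // reads hibernate.cfg.xml from the classpath
        .build();

// Step 2: build the Metadata for the application model, then the SessionFactory.
Metadata metadata = new MetadataSources( registry )
        .addAnnotatedClass( Person.class )
        .buildMetadata();

SessionFactory sessionFactory = metadata.buildSessionFactory();
----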

View File

@ -23,11 +23,11 @@ Ultimately all enhancement is handled by the `org.hibernate.bytecode.enhance.spi
enhancement can certainly be crafted on top of Enhancer, but that is beyond the scope of this guide. Here we
will focus on the means Hibernate already exposes for performing these enhancements.
=== Run-time enhancement
=== Runtime enhancement
Currently run-time enhancement of the domain model is only supported in managed JPA environments following the JPA defined SPI for performing class transformations.
Currently runtime enhancement of the domain model is only supported in managed JPA environments following the JPA defined SPI for performing class transformations.
Even then, this support is disabled by default. To enable run-time enhancement, specify one of the following configuration properties:
Even then, this support is disabled by default. To enable runtime enhancement, specify one of the following configuration properties:
`*hibernate.enhancer.enableDirtyTracking*` (e.g. `true` or `false` (default value))::
Enable dirty tracking feature in runtime bytecode enhancement.

View File

@ -187,7 +187,7 @@ http://sourceforge.net/projects/hibernate/files/hibernate4[SourceForge].
In most cases the annotation processor will automatically run provided
the processor jar is added to the build classpath and a JDK >6 is used.
This happens due to Java's Service Provider contract and the fact
the the Hibernate Static Metamodel Generator jar files contains the
the Hibernate Static Metamodel Generator jar file contains the
file _javax.annotation.processing.Processor_ in the _META-INF/services_ directory.
The fully qualified name of the processor itself is:

View File

@ -12,7 +12,7 @@ applications can leverage and customize Services and Registries.
== What is a Service?
Services provide various types of functionality, in a pluggable manner. Specifically they are interfaces defining
Services provide various types of functionality, in a pluggable manner. Specifically, they are interfaces defining
certain functionality and then implementations of those service contract interfaces. The interface is known as the
service role; the implementation class is known as the service implementation. The pluggability comes from the fact
that the service implementation adheres to the contract defined by the interface of the service role and that consumers

View File

@ -266,9 +266,9 @@ By convention all modules included with WildFly use the "main" slot, while the m
will use a slot name which matches the version, and also provide an alias to match its "major.minor" version.
Our suggestion is to depend on the module using the "major.minor" representation, as this simplifies rolling out bugfix
releases (micro version updates) of Hibernate ORM without changing application configuration (micro versions are always expected to be backwards compatible and released as bugfix only).
releases (micro version updates) of Hibernate ORM without changing application configuration (micro versions are always expected to be backward compatible and released as bugfix only).
For example if your application wants to use the latest version of Hibernate ORM version {majorMinorVersion}.x it should declare to use the module _org.hibernate:{majorMinorVersion}_. You can of course decide to use the full version instead for more precise control, in case an application requires a very specific version.
For example, if your application wants to use the latest version of Hibernate ORM version {majorMinorVersion}.x it should declare to use the module _org.hibernate:{majorMinorVersion}_. You can of course decide to use the full version instead for more precise control, in case an application requires a very specific version.
== Switch to a different Hibernate ORM slot
@ -311,7 +311,7 @@ you might want to check it out as it lists several other useful properties.
When using the custom modules provided by the feature packs you're going to give up on some of the integration which the application server normally automates.
For example enabling an Infinispan 2nd level cache is straight forward when using the default Hibernate ORM
For example, enabling an Infinispan 2nd level cache is straightforward when using the default Hibernate ORM
module, as WildFly will automatically set up the dependency to the Infinispan and clustering components.
When using these custom modules such integration will no longer work automatically: you can still
enable all normally available features but these will require explicit configuration, as if you were

View File

@ -1,10 +1,10 @@
[[preface]]
== Preface
Working with both Object-Oriented software and Relational Databases can be cumbersome and time consuming.
Working with both Object-Oriented software and Relational Databases can be cumbersome and time-consuming.
Development costs are significantly higher due to a paradigm mismatch between how data is represented in objects versus relational databases.
Hibernate is an Object/Relational Mapping solution for Java environments.
The term http://en.wikipedia.org/wiki/Object-relational_mapping[Object/Relational Mapping] refers to the technique of mapping data from an object model representation to a relational data model representation (and visa versa).
The term http://en.wikipedia.org/wiki/Object-relational_mapping[Object/Relational Mapping] refers to the technique of mapping data from an object model representation to a relational data model representation (and vice versa).
Hibernate not only takes care of the mapping from Java classes to database tables (and from Java data types to SQL data types), but also provides data query and retrieval facilities.
It can significantly reduce development time otherwise spent with manual data handling in SQL and JDBC.

View File

@ -56,7 +56,7 @@ See the <<chapters/caching/Caching.adoc#caching,Caching>> chapter for more info.
[[annotations-jpa-collectiontable]]
==== `@CollectionTable`
The http://docs.oracle.com/javaee/7/api/javax/persistence/CollectionTable.html[`@CollectionTable`] annotation is used to specify the database table that stores the values of a basic or an embeddable type collection.
The http://docs.oracle.com/javaee/7/api/javax/persistence/CollectionTable.html[`@CollectionTable`] annotation is used to specify the database table that stores the values of a basic or embeddable type collection.
See the <<chapters/domain/embeddables.adoc#embeddable-collections,Collections of embeddable types>> section for more info.
@ -84,7 +84,7 @@ See the <<chapters/query/native/Native.adoc#sql-multiple-scalar-values-dto-Named
[[annotations-jpa-convert]]
==== `@Convert`
The http://docs.oracle.com/javaee/7/api/javax/persistence/Convert.html[`@Convert`] annotation is used to specify the http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeConverter.html[`AttributeConverter`] implementation used to convert the current annotated basic attribute.
The http://docs.oracle.com/javaee/7/api/javax/persistence/Convert.html[`@Convert`] annotation is used to specify the http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeConverter.html[`AttributeConverter`] implementation used to convert the currently annotated basic attribute.
If the `AttributeConverter` uses http://docs.oracle.com/javaee/7/api/javax/persistence/Converter.html#autoApply--[`autoApply`], then all entity attributes with the same target type are going to be converted automatically.
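For illustration, a converter and the attribute it applies to could look roughly like this (the `YesNoConverter` name and the `Account` entity are made up):

[source, java]
----
// Maps a Boolean attribute to a 'Y'/'N' database column and back.
@Converter
public class YesNoConverter implements AttributeConverter<Boolean, String> {

    @Override
    public String convertToDatabaseColumn(Boolean attribute) {
        return Boolean.TRUE.equals( attribute ) ? "Y" : "N";
    }

    @Override
    public Boolean convertToEntityAttribute(String dbData) {
        return "Y".equals( dbData );
    }
}

@Entity
public class Account {

    @Id
    private Long id;

    @Convert(converter = YesNoConverter.class)
    private Boolean active;
}
----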
@ -116,7 +116,7 @@ See the <<chapters/domain/inheritance.adoc#entity-inheritance-discriminator, Dis
[[annotations-jpa-discriminatorvalue]]
==== `@DiscriminatorValue`
The http://docs.oracle.com/javaee/7/api/javax/persistence/DiscriminatorValue.html[`@DiscriminatorValue`] annotation is used to specify what value of the discriminator column is used for mapping the current annotated entity.
The http://docs.oracle.com/javaee/7/api/javax/persistence/DiscriminatorValue.html[`@DiscriminatorValue`] annotation is used to specify what value of the discriminator column is used for mapping the currently annotated entity.
See the <<chapters/domain/inheritance.adoc#entity-inheritance-discriminator, Discriminator>> section for more info.
@ -159,7 +159,7 @@ See the <<chapters/domain/entity.adoc#entity, Entity>> section for more info.
[[annotations-jpa-entitylisteners]]
==== `@EntityListeners`
The http://docs.oracle.com/javaee/7/api/javax/persistence/EntityListeners.html[`@EntityListeners`] annotation is used to specify an array of callback listener classes that are used by the current annotated entity.
The http://docs.oracle.com/javaee/7/api/javax/persistence/EntityListeners.html[`@EntityListeners`] annotation is used to specify an array of callback listener classes that are used by the currently annotated entity.
See the <<chapters/events/Events.adoc#events-jpa-callbacks-example, JPA callbacks>> section for more info.
@ -180,14 +180,14 @@ See the <<chapters/domain/basic_types.adoc#basic-enums-Enumerated, `@Enumerated`
[[annotations-jpa-excludedefaultlisteners]]
==== `@ExcludeDefaultListeners`
The http://docs.oracle.com/javaee/7/api/javax/persistence/ExcludeDefaultListeners.html[`@ExcludeDefaultListeners`] annotation is used to specify that the current annotated entity skips the invocation of any default listener.
The http://docs.oracle.com/javaee/7/api/javax/persistence/ExcludeDefaultListeners.html[`@ExcludeDefaultListeners`] annotation is used to specify that the currently annotated entity skips the invocation of any default listener.
See the <<chapters/events/Events.adoc#events-exclude-default-listener, Exclude default entity listeners>> section for more info.
[[annotations-jpa-excludesuperclasslisteners]]
==== `@ExcludeSuperclassListeners`
The http://docs.oracle.com/javaee/7/api/javax/persistence/ExcludeSuperclassListeners.html[`@ExcludeSuperclassListeners`] annotation is used to specify that the current annotated entity skips the invocation of listeners declared by its superclass.
The http://docs.oracle.com/javaee/7/api/javax/persistence/ExcludeSuperclassListeners.html[`@ExcludeSuperclassListeners`] annotation is used to specify that the currently annotated entity skips the invocation of listeners declared by its superclass.
See the <<chapters/events/Events.adoc#events-exclude-default-listener, Exclude default entity listeners>> section for more info.
@ -266,7 +266,7 @@ See the <<chapters/domain/collections.adoc#collections-map-unidirectional-exampl
[[annotations-jpa-lob]]
==== `@Lob`
The http://docs.oracle.com/javaee/7/api/javax/persistence/Lob.html[`@Lob`] annotation is used to specify that the current annotated entity attribute represents a large object type.
The http://docs.oracle.com/javaee/7/api/javax/persistence/Lob.html[`@Lob`] annotation is used to specify that the currently annotated entity attribute represents a large object type.
See the <<chapters/domain/basic_types.adoc#basic-blob-example, `BLOB` mapping>> section for more info.
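A hedged usage sketch (entity and attribute names are illustrative):

[source, java]
----
@Entity
public class Book {

    @Id
    private Long id;

    // Mapped to a CLOB column because @Lob is placed on a String attribute.
    @Lob
    private String fullText;

    // Mapped to a BLOB column because @Lob is placed on a byte[] attribute.
    @Lob
    private byte[] cover;
}
----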
@ -303,7 +303,7 @@ See the <<chapters/domain/collections.adoc#collections-map-key-class, `@MapKeyCl
The http://docs.oracle.com/javaee/7/api/javax/persistence/MapKeyColumn.html[`@MapKeyColumn`] annotation is used to specify the database column which stores the key of a `java.util.Map` association for which the map key is a basic type.
See the <<chapters/domain/collections.adoc#collections-map-custom-key-type-mapping-example, `@MapKeyType` mapping section>> section for an example of `@MapKeyColumn` annotation usage.
See the <<chapters/domain/collections.adoc#collections-map-custom-key-type-mapping-example, `@MapKeyType` mapping section>> for an example of `@MapKeyColumn` annotation usage.
[[annotations-jpa-mapkeyenumerated]]
==== `@MapKeyEnumerated`
@ -335,14 +335,14 @@ See the <<chapters/domain/collections.adoc#collections-map-unidirectional-exampl
[[annotations-jpa-mappedsuperclass]]
==== `@MappedSuperclass`
The http://docs.oracle.com/javaee/7/api/javax/persistence/MappedSuperclass.html[`@MappedSuperclass`] annotation is used to specify that the current annotated type attributes are inherited by any subclass entity.
The http://docs.oracle.com/javaee/7/api/javax/persistence/MappedSuperclass.html[`@MappedSuperclass`] annotation is used to specify that the currently annotated type attributes are inherited by any subclass entity.
See the <<chapters/domain/inheritance.adoc#entity-inheritance-mapped-superclass, `@MappedSuperclass`>> section for more info.
[[annotations-jpa-mapsid]]
==== `@MapsId`
The http://docs.oracle.com/javaee/7/api/javax/persistence/MapsId.html[`@MapsId`] annotation is used to specify that the entity identifier is mapped by the current annotated `@ManyToOne` or `@OneToOne` association.
The http://docs.oracle.com/javaee/7/api/javax/persistence/MapsId.html[`@MapsId`] annotation is used to specify that the entity identifier is mapped by the currently annotated `@ManyToOne` or `@OneToOne` association.
See the <<chapters/domain/identifiers.adoc#identifiers-derived-mapsid, `@MapsId` mapping>> section for more info.
@ -427,7 +427,7 @@ See the <<chapters/domain/associations.adoc#associations-one-to-one, `@OneToOne`
[[annotations-jpa-orderby]]
==== `@OrderBy`
The http://docs.oracle.com/javaee/7/api/javax/persistence/OrderBy.html[`@OrderBy`] annotation is used to specify the entity attributes used for sorting when fetching the current annotated collection.
The http://docs.oracle.com/javaee/7/api/javax/persistence/OrderBy.html[`@OrderBy`] annotation is used to specify the entity attributes used for sorting when fetching the currently annotated collection.
See the <<chapters/domain/collections.adoc#collections-unidirectional-ordered-list, `@OrderBy` mapping>> section for more info.
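As an illustrative sketch (the `Person`/`Phone` model and the `number` attribute are assumptions), the annotation takes a JPQL ordering fragment:

[source, java]
----
@Entity
public class Person {

    @Id
    private Long id;

    // Phones are returned ordered by their number when the collection is fetched.
    @OneToMany(mappedBy = "person")
    @OrderBy("number ASC")
    private List<Phone> phones = new ArrayList<>();
}
----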
@ -521,7 +521,7 @@ See the <<chapters/events/Events.adoc#events-jpa-callbacks-example, JPA callback
[[annotations-jpa-primarykeyjoincolumn]]
==== `@PrimaryKeyJoinColumn`
The http://docs.oracle.com/javaee/7/api/javax/persistence/PrimaryKeyJoinColumn.html[`@PrimaryKeyJoinColumn`] annotation is used to specify that the primary key column of the current annotated entity is also a foreign key to some other entity
The http://docs.oracle.com/javaee/7/api/javax/persistence/PrimaryKeyJoinColumn.html[`@PrimaryKeyJoinColumn`] annotation is used to specify that the primary key column of the currently annotated entity is also a foreign key to some other entity
(e.g. a base class table in a `JOINED` inheritance strategy, the primary table in a secondary table mapping, or the parent table in a `@OneToOne` relationship).
See the <<chapters/domain/identifiers.adoc#identifiers-derived-primarykeyjoincolumn, `@PrimaryKeyJoinColumn` mapping>> section for more info.
@ -541,7 +541,7 @@ See the <<chapters/query/hql/HQL.adoc#jpa-read-only-entities-native-example, `@Q
[[annotations-jpa-secondarytable]]
==== `@SecondaryTable`
The http://docs.oracle.com/javaee/7/api/javax/persistence/SecondaryTable.html[`@SecondaryTable`] annotation is used to specify a secondary table for the current annotated entity.
The http://docs.oracle.com/javaee/7/api/javax/persistence/SecondaryTable.html[`@SecondaryTable`] annotation is used to specify a secondary table for the currently annotated entity.
See the <<chapters/query/native/Native.adoc#sql-custom-crud-secondary-table-example, `@SecondaryTable` mapping>> section for more info.
@ -553,7 +553,7 @@ The http://docs.oracle.com/javaee/7/api/javax/persistence/SecondaryTables.html[`
[[annotations-jpa-sequencegenerator]]
==== `@SequenceGenerator`
The http://docs.oracle.com/javaee/7/api/javax/persistence/SequenceGenerator.html[`@SequenceGenerator`] annotation is used to specify the database sequence used by the identifier generator of the current annotated entity.
The http://docs.oracle.com/javaee/7/api/javax/persistence/SequenceGenerator.html[`@SequenceGenerator`] annotation is used to specify the database sequence used by the identifier generator of the currently annotated entity.
See the <<chapters/domain/identifiers.adoc#identifiers-generators-sequence-configured,`@SequenceGenerator` mapping>> section for more info.
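A typical sketch, with made-up entity, generator, and sequence names:

[source, java]
----
@Entity
public class Department {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "department_seq")
    @SequenceGenerator(name = "department_seq", sequenceName = "department_sequence", allocationSize = 1)
    private Long id;

    private String name;
}
----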
@ -579,21 +579,21 @@ See the <<chapters/query/native/Native.adoc#sql-sp-named-query, Using named quer
[[annotations-jpa-table]]
==== `@Table`
The http://docs.oracle.com/javaee/7/api/javax/persistence/Table.html[`@Table`] annotation is used to specify the primary table of the current annotated entity.
The http://docs.oracle.com/javaee/7/api/javax/persistence/Table.html[`@Table`] annotation is used to specify the primary table of the currently annotated entity.
See the <<chapters/query/native/Native.adoc#sql-custom-crud-secondary-table-example, `@Table` mapping>> section for more info.
[[annotations-jpa-tablegenerator]]
==== `@TableGenerator`
The http://docs.oracle.com/javaee/7/api/javax/persistence/TableGenerator.html[`@TableGenerator`] annotation is used to specify the database table used by the identity generator of the current annotated entity.
The http://docs.oracle.com/javaee/7/api/javax/persistence/TableGenerator.html[`@TableGenerator`] annotation is used to specify the database table used by the identifier generator of the currently annotated entity.
See the <<chapters/domain/identifiers.adoc#identifiers-generators-table-configured-mapping-example,`@TableGenerator` mapping>> section for more info.
[[annotations-jpa-temporal]]
==== `@Temporal`
The http://docs.oracle.com/javaee/7/api/javax/persistence/Temporal.html[`@Temporal`] annotation is used to specify the `TemporalType` of the current annotated `java.util.Date` or `java.util.Calendar` entity attribute.
The http://docs.oracle.com/javaee/7/api/javax/persistence/Temporal.html[`@Temporal`] annotation is used to specify the `TemporalType` of the currently annotated `java.util.Date` or `java.util.Calendar` entity attribute.
See the <<chapters/domain/basic_types.adoc#basic-datetime,Basic temporal types>> chapter for more info.
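For example (entity and attribute names are illustrative):

[source, java]
----
@Entity
public class Meeting {

    @Id
    private Long id;

    // Only the date portion of the java.util.Date value is persisted.
    @Temporal(TemporalType.DATE)
    private java.util.Date scheduledOn;
}
----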
@ -607,7 +607,7 @@ See the <<chapters/events/Events.adoc#events-jpa-callbacks-example, `@Transient`
[[annotations-jpa-uniqueconstraint]]
==== `@UniqueConstraint`
The http://docs.oracle.com/javaee/7/api/javax/persistence/UniqueConstraint.html[`@UniqueConstraint`] annotation is used to specify a unique constraint to be included by the automated schema generator for the primary or secondary table associated with the current annotated entity.
The http://docs.oracle.com/javaee/7/api/javax/persistence/UniqueConstraint.html[`@UniqueConstraint`] annotation is used to specify a unique constraint to be included by the automated schema generator for the primary or secondary table associated with the currently annotated entity.
See the <<chapters/schema/Schema.adoc#schema-generation-columns-unique-constraint, Columns unique constraint>> chapter for more info.
@ -712,7 +712,7 @@ The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibern
The same behavior can be achieved using the `columnDefinition` attribute of the JPA <<annotations-jpa-column>> annotation.
See the <<chapters/schema/Schema.adoc#schema-generation-column-default-value,Default value for database column>> chapter for more info.
See the <<chapters/schema/Schema.adoc#schema-generation-column-default-value,Default value for a database column>> chapter for more info.
[[annotations-hibernate-columns]]
==== `@Columns`
@ -736,7 +736,7 @@ The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibern
[[annotations-hibernate-creationtimestamp]]
==== `@CreationTimestamp`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/CreationTimestamp.html[`@CreationTimestamp`] annotation is used to specify that the current annotated temporal type must be initialized with the current JVM timestamp value.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/CreationTimestamp.html[`@CreationTimestamp`] annotation is used to specify that the currently annotated temporal type must be initialized with the current JVM timestamp value.
See the <<chapters/domain/basic_types.adoc#mapping-generated-CreationTimestamp,`@CreationTimestamp` mapping>> section for more info.
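A brief sketch (the `Invoice` entity is made up):

[source, java]
----
@Entity
public class Invoice {

    @Id
    private Long id;

    // Set from the JVM clock when the entity is first persisted, and never updated afterwards.
    @CreationTimestamp
    private java.util.Date createdOn;
}
----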
@ -787,7 +787,7 @@ The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibern
[[annotations-hibernate-fetch]]
==== `@Fetch`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Fetch.html[`@Fetch`] annotation is used to specify the Hibernate specific https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/FetchMode.html[`FetchMode`] (e.g. `JOIN`, `SELECT`, `SUBSELECT`) used for the current annotated association:
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Fetch.html[`@Fetch`] annotation is used to specify the Hibernate specific https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/FetchMode.html[`FetchMode`] (e.g. `JOIN`, `SELECT`, `SUBSELECT`) used for the currently annotated association:
See the <<chapters/fetching/Fetching.adoc#fetching-fetch-annotation, `@Fetch` mapping>> section for more info.
@ -861,7 +861,7 @@ See the <<chapters/domain/basic_types.adoc#mapping-column-formula-example,`@Form
[[annotations-hibernate-generated]]
==== `@Generated`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Generated.html[`@Generated`] annotation is used to specify that the current annotated entity attribute is generated by the database.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Generated.html[`@Generated`] annotation is used to specify that the currently annotated entity attribute is generated by the database.
See the <<chapters/domain/basic_types.adoc#mapping-generated-Generated,`@Generated` mapping>> section for more info.
@ -869,7 +869,7 @@ See the <<chapters/domain/basic_types.adoc#mapping-generated-Generated,`@Generat
==== `@GeneratorType`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/GeneratorType.html[`@GeneratorType`] annotation is used to provide a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/tuple/ValueGenerator.html[`ValueGenerator`]
and a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/GenerationTime.html[`GenerationTime`] for the current annotated generated attribute.
and a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/GenerationTime.html[`GenerationTime`] for the currently annotated generated attribute.
See the <<chapters/domain/basic_types.adoc#mapping-generated-GeneratorType-example,`@GeneratorType` mapping>> section for more info.
@ -1044,14 +1044,14 @@ See the <<chapters/query/hql/HQL.adoc#jpql-api-hibernate-named-query-example, `@
[[annotations-hibernate-nationalized]]
==== `@Nationalized`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Nationalized.html[`@Nationalized`] annotation is used to specify that the current annotated attribute is a character type (e.g. `String`, `Character`, `Clob`) that is stored in a nationalized column type (`NVARCHAR`, `NCHAR`, `NCLOB`).
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Nationalized.html[`@Nationalized`] annotation is used to specify that the currently annotated attribute is a character type (e.g. `String`, `Character`, `Clob`) that is stored in a nationalized column type (`NVARCHAR`, `NCHAR`, `NCLOB`).
See the <<chapters/domain/basic_types.adoc#basic-nationalized-example,`@Nationalized` mapping>> section for more info.
[[annotations-hibernate-naturalid]]
==== `@NaturalId`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/NaturalId.html[`@NaturalId`] annotation is used to specify that the current annotated attribute is part of the natural id of the entity.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/NaturalId.html[`@NaturalId`] annotation is used to specify that the currently annotated attribute is part of the natural id of the entity.
See the <<chapters/domain/natural_id.adoc#naturalid,Natural Ids>> section for more info.
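As a sketch, assuming a `Book` entity whose ISBN acts as the business key:

[source, java]
----
@Entity
public class Book {

    @Id
    private Long id;

    // The immutable business key; can later be loaded via session.byNaturalId( Book.class ).
    @NaturalId
    private String isbn;
}
----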
@ -1077,7 +1077,7 @@ See the <<chapters/domain/associations.adoc#associations-not-found,`@NotFound` m
[[annotations-hibernate-ondelete]]
==== `@OnDelete`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OnDelete.html[`@OnDelete`] annotation is used to specify the delete strategy employed by the current annotated collection, array or joined subclasses.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OnDelete.html[`@OnDelete`] annotation is used to specify the delete strategy employed by the currently annotated collection, array or joined subclasses.
This annotation is used by the automated schema generation tool to generate the appropriate FOREIGN KEY DDL cascade directive.
The two possible strategies are defined by the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OnDeleteAction.html[`OnDeleteAction`] enumeration:
@ -1090,14 +1090,14 @@ See the <<chapters/pc/PersistenceContext.adoc#pc-cascade-on-delete, `@OnDelete`
[[annotations-hibernate-optimisticlock]]
==== `@OptimisticLock`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OptimisticLock.html[`@OptimisticLock`] annotation is used to specify if the current annotated attribute will trigger an entity version increment upon being modified.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OptimisticLock.html[`@OptimisticLock`] annotation is used to specify if the currently annotated attribute will trigger an entity version increment upon being modified.
See the <<chapters/locking/Locking.adoc#locking-optimistic-exclude-attribute, Excluding attributes>> section for more info.
[[annotations-hibernate-optimisticlocking]]
==== `@OptimisticLocking`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OptimisticLocking.html[`@OptimisticLocking`] annotation is used to specify the current annotated an entity optimistic locking strategy.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OptimisticLocking.html[`@OptimisticLocking`] annotation is used to specify the optimistic locking strategy of the currently annotated entity.
The four possible strategies are defined by the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OptimisticLockType.html[`OptimisticLockType`] enumeration:
@ -1111,7 +1111,7 @@ See the <<chapters/locking/Locking.adoc#locking-optimistic-versionless, Versionl
[[annotations-hibernate-orderby]]
==== `@OrderBy`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OrderBy.html[`@OrderBy`] annotation is used to specify a *SQL* ordering directive for sorting the current annotated collection.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OrderBy.html[`@OrderBy`] annotation is used to specify a *SQL* ordering directive for sorting the currently annotated collection.
It differs from the JPA <<annotations-jpa-orderby>> annotation because the JPA annotation expects a JPQL order-by fragment, not an SQL directive.
@ -1127,13 +1127,13 @@ See the <<chapters/domain/basic_types.adoc#mapping-filter-example,Filter mapping
[[annotations-hibernate-parameter]]
==== `@Parameter`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Parameter.html[`@Parameter`] annotation is generic parameter (basically a key/value combination) tused to parametrize other annotations,
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Parameter.html[`@Parameter`] annotation is a generic parameter (basically a key/value combination) used to parametrize other annotations,
like <<annotations-hibernate-collectiontype>>, <<annotations-hibernate-genericgenerator>>, <<annotations-hibernate-type>>, and <<annotations-hibernate-typedef>>.
[[annotations-hibernate-parent]]
==== `@Parent`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Parent.html[`@Parent`] annotation is used to specify that the current annotated embeddable attribute references back the owning entity.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Parent.html[`@Parent`] annotation is used to specify that the currently annotated embeddable attribute references back the owning entity.
See the <<chapters/domain/basic_types.adoc#mapping-Parent,`@Parent` mapping>> section for more info.
@ -1155,15 +1155,15 @@ The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibern
There are two possible `PolymorphismType` options:
EXPLICIT:: The current annotated entity is retrieved only if explicitly asked.
IMPLICIT:: The current annotated entity is retrieved if any of its super entity are retrieved. This is the default option.
EXPLICIT:: The currently annotated entity is retrieved only if explicitly asked.
IMPLICIT:: The currently annotated entity is retrieved if any of its super entities are retrieved. This is the default option.
See the <<chapters/domain/inheritance.adoc#entity-inheritance-polymorphism, `@Polymorphism`>> section for more info.
[[annotations-hibernate-proxy]]
==== `@Proxy`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Proxy.html[`@Proxy`] annotation is used to specify a custom proxy implementation for the current annotated entity.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Proxy.html[`@Proxy`] annotation is used to specify a custom proxy implementation for the currently annotated entity.
See the <<chapters/domain/entity.adoc#entity-proxy, `@Proxy` mapping>> section for more info.
@ -1180,7 +1180,7 @@ See the <<chapters/domain/identifiers.adoc#identifiers-rowid, `@RowId` mapping>>
[[annotations-hibernate-selectbeforeupdate]]
==== `@SelectBeforeUpdate`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/SelectBeforeUpdate.html[`@SelectBeforeUpdate`] annotation is used to specify that the current annotated entity state be selected from the database when determining whether to perform an update when the detached entity is reattached.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/SelectBeforeUpdate.html[`@SelectBeforeUpdate`] annotation is used to specify that the currently annotated entity state be selected from the database when determining whether to perform an update when the detached entity is reattached.
See the <<chapters/domain/entity.adoc#locking-optimistic-lock-type-dirty-example, `OptimisticLockType.DIRTY` mapping>> section for more info on how `@SelectBeforeUpdate` works.
@ -1219,14 +1219,14 @@ See the <<chapters/locking/Locking.adoc#locking-optimistic-version-timestamp-sou
[[annotations-hibernate-sqldelete]]
==== `@SQLDelete`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/SQLDelete.html[`@SQLDelete`] annotation is used to specify a custom SQL `DELETE` statement for the current annotated entity or collection.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/SQLDelete.html[`@SQLDelete`] annotation is used to specify a custom SQL `DELETE` statement for the currently annotated entity or collection.
See the <<chapters/query/native/Native.adoc#sql-custom-crud-example, Custom CRUD mapping>> section for more info.
[[annotations-hibernate-sqldeleteall]]
==== `@SQLDeleteAll`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/SQLDeleteAll.html[`@SQLDeleteAll`] annotation is used to specify a custom SQL `DELETE` statement when removing all elements of the current annotated collection.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/SQLDeleteAll.html[`@SQLDeleteAll`] annotation is used to specify a custom SQL `DELETE` statement when removing all elements of the currently annotated collection.
See the <<chapters/query/native/Native.adoc#sql-custom-crud-example, Custom CRUD mapping>> section for more info.
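One common use is emulating a soft delete by overriding the generated `DELETE` statement; a sketch with made-up table and column names:

[source, java]
----
@Entity
@SQLDelete(sql = "UPDATE account SET active = false WHERE id = ?")
public class Account {

    @Id
    private Long id;

    private boolean active = true;
}
----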
@ -1242,14 +1242,14 @@ See the <<chapters/domain/basic_types.adoc#mapping-column-filter-sql-fragment-al
[[annotations-hibernate-sqlinsert]]
==== `@SQLInsert`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/SQLInsert.html[`@SQLInsert`] annotation is used to specify a custom SQL `INSERT` statement for the current annotated entity or collection.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/SQLInsert.html[`@SQLInsert`] annotation is used to specify a custom SQL `INSERT` statement for the currently annotated entity or collection.
See the <<chapters/query/native/Native.adoc#sql-custom-crud-example, Custom CRUD mapping>> section for more info.
[[annotations-hibernate-sqlupdate]]
==== `@SQLUpdate`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/SQLUpdate.html[`@SQLUpdate`] annotation is used to specify a custom SQL `UPDATE` statement for the current annotated entity or collection.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/SQLUpdate.html[`@SQLUpdate`] annotation is used to specify a custom SQL `UPDATE` statement for the currently annotated entity or collection.
See the <<chapters/query/native/Native.adoc#sql-custom-crud-example, Custom CRUD mapping>> section for more info.
@ -1285,14 +1285,14 @@ The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibern
[[annotations-hibernate-target]]
==== `@Target`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Target.html[`@Target`] annotation is used to specify an explicit target implementation when the current annotated association is using an interface type.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Target.html[`@Target`] annotation is used to specify an explicit target implementation when the currently annotated association is using an interface type.
See the <<chapters/domain/basic_types.adoc#mapping-Target,`@Target` mapping>> section for more info.
[[annotations-hibernate-tuplizer]]
==== `@Tuplizer`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Tuplizer.html[`@Tuplizer`] annotation is used to specify a custom tuplizer for the current annotated entity or embeddable.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Tuplizer.html[`@Tuplizer`] annotation is used to specify a custom tuplizer for the currently annotated entity or embeddable.
For entities, the tuplizer must implement the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/tuple/entity/EntityTuplizer.html[`EntityTuplizer`] interface.
@ -1308,7 +1308,7 @@ The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibern
[[annotations-hibernate-type]]
==== `@Type`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Type.html[`@Type`] annotation is used to specify the Hibernate https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/type/Type.html[`@Type`] used by the current annotated basic attribute.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Type.html[`@Type`] annotation is used to specify the Hibernate https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/type/Type.html[`@Type`] used by the currently annotated basic attribute.
See the <<chapters/domain/basic_types.adoc#basic-custom-type-BitSetType-mapping-example, `@Type` mapping>> section for more info.
@ -1327,7 +1327,7 @@ The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibern
[[annotations-hibernate-updatetimestamp]]
==== `@UpdateTimestamp`
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/UpdateTimestamp.html[`@UpdateTimestamp`] annotation is used to specify that the current annotated timestamp attribute should be updated with the current JVM timestamp whenever the owning entity gets modified.
The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/UpdateTimestamp.html[`@UpdateTimestamp`] annotation is used to specify that the currently annotated timestamp attribute should be updated with the current JVM timestamp whenever the owning entity gets modified.
- `java.util.Date`
- `java.util.Calendar`

View File

@ -215,12 +215,12 @@ If you need to fetch multiple collections, to avoid a Cartesian Product, you sho
Hibernate has two caching layers:
- the first-level cache (Persistence Context) which is a application-level repeatable reads.
- the first-level cache (Persistence Context) which provides application-level repeatable reads.
- the second-level cache which, unlike application-level caches, doesn't store entity aggregates but normalized dehydrated entity entries.
The first-level cache is not a caching solution "per se", being more useful for ensuring `READ COMMITTED` isolation level.
While the first-level cache is short lived, being cleared when the underlying `EntityManager` is closed, the second-level cache is tied to an `EntityManagerFactory`.
While the first-level cache is short-lived, being cleared when the underlying `EntityManager` is closed, the second-level cache is tied to an `EntityManagerFactory`.
Some second-level caching providers offer support for clusters. Therefore, a node needs only to store a subset of the whole cached data.
Although the second-level cache can reduce transaction response time since entities are retrieved from the cache rather than from the database,
@ -233,8 +233,8 @@ and you should consider these alternatives prior to jumping to a second-level ca
After properly tuning the database, to further reduce the average response time and increase the system throughput, application-level caching becomes inevitable.
Topically, a key-value application-level cache like https://memcached.org/[Memcached] or http://redis.io/[Redis] is a common choice to store data aggregates.
If you can duplicate all data in the key-value store, you have the option of taking down the database system for maintenance without completely loosing availability since read-only traffic can still be served from the cache.
Typically, a key-value application-level cache like https://memcached.org/[Memcached] or http://redis.io/[Redis] is a common choice to store data aggregates.
If you can duplicate all data in the key-value store, you have the option of taking down the database system for maintenance without completely losing availability since read-only traffic can still be served from the cache.
One of the main challenges of using an application-level cache is ensuring data consistency across entity aggregates.
That's where the second-level cache comes to the rescue.
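A hedged sketch of a second-level cacheable entity, assuming a `RegionFactory`/cache provider has already been configured:

[source, java]
----
@Entity
@Cacheable
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Person {

    @Id
    private Long id;

    private String name;
}
----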

View File

@ -5,8 +5,8 @@
=== Strategy configurations
Many configuration settings define pluggable strategies that Hibernate uses for various purposes.
The configuration of many of these strategy type settings accept definition in various forms.
The documentation of such configuration settings refer here.
The configurations of many of these strategy type settings accept definition in various forms.
The documentation of such configuration settings refers here.
The types of forms available in such cases include:
short name (if defined)::
@ -22,9 +22,9 @@ strategy Class name::
=== General configuration
`*hibernate.dialect*` (e.g. `org.hibernate.dialect.PostgreSQL94Dialect`)::
The classname of a Hibernate https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/dialect/Dialect.html[`Dialect`] from which Hibernate can generate SQL optimized for a particular relational database.
The class name of a Hibernate https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/dialect/Dialect.html[`Dialect`] from which Hibernate can generate SQL optimized for a particular relational database.
+
In most cases Hibernate can choose the correct https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/dialect/Dialect.html[`Dialect`] implementation based on the JDBC metadata returned by the JDBC driver.
In most cases, Hibernate can choose the correct https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/dialect/Dialect.html[`Dialect`] implementation based on the JDBC metadata returned by the JDBC driver.
+
`*hibernate.current_session_context_class*` (e.g. `jta`, `thread`, `managed`, or a custom class implementing `org.hibernate.context.spi.CurrentSessionContext`)::
+
@ -32,7 +32,7 @@ Supply a custom strategy for the scoping of the _current_ `Session`.
+
The definition of what exactly _current_ means is controlled by the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/context/spi/CurrentSessionContext.html[`CurrentSessionContext`] implementation in use.
+
Note that for backwards compatibility, if a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/context/spi/CurrentSessionContext.html[`CurrentSessionContext`] is not configured but JTA is configured this will default to the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/context/internal/JTASessionContext.html[`JTASessionContext`].
Note that for backward compatibility, if a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/context/spi/CurrentSessionContext.html[`CurrentSessionContext`] is not configured but JTA is configured this will default to the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/context/internal/JTASessionContext.html[`JTASessionContext`].
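For example, such settings can also be supplied programmatically when building the `StandardServiceRegistry` (the values shown are purely illustrative):

[source, java]
----
StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
        .applySetting( "hibernate.dialect", "org.hibernate.dialect.PostgreSQL94Dialect" )
        .applySetting( "hibernate.current_session_context_class", "thread" )
        .build();
----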
[[configurations-jpa-compliance]]
=== JPA compliance
@ -45,7 +45,7 @@ since it extends the JPA one.
Controls whether Hibernate's handling of `javax.persistence.Query` (JPQL, Criteria and native-query) should strictly follow the JPA spec.
+
This includes both in terms of parsing or translating a query as well as calls to the `javax.persistence.Query` methods throwing spec
defined exceptions where as Hibernate might not.
defined exceptions whereas Hibernate might not.
`*hibernate.jpa.compliance.list*` (e.g. `true` or `false` (default value))::
Controls whether Hibernate should recognize what it considers a "bag" (`org.hibernate.collection.internal.PersistentBag`)
@ -58,7 +58,7 @@ is just missing (and its defaults will apply).
JPA defines specific exceptions upon calling specific methods on `javax.persistence.EntityManager` and `javax.persistence.EntityManagerFactory`
objects which have been closed previously.
+
This setting controls whether the JPA spec defined behavior or the Hibernate behavior will be used.
This setting controls whether the JPA spec-defined behavior or the Hibernate behavior will be used.
+
If enabled, Hibernate will operate in the JPA specified way, throwing exceptions when the spec says it should.
@ -105,13 +105,13 @@ See discussion of `hibernate.connection.provider_disables_autocommit` as well.
`*hibernate.connection.provider_disables_autocommit*` (e.g. `true` or `false` (default value))::
Indicates a promise by the user that Connections that Hibernate obtains from the configured ConnectionProvider
have auto-commit disabled when they are obtained from that provider, whether that provider is backed by
a DataSource or some other Connection pooling mechanism. Generally this occurs when:
a DataSource or some other Connection pooling mechanism. Generally, this occurs when:
* Hibernate is configured to get Connections from an underlying DataSource, and that DataSource is already configured to disable auto-commit on its managed Connections
* Hibernate is configured to get Connections from a non-DataSource connection pool and that connection pool is already configured to disable auto-commit.
For the Hibernate-provided implementation, this will depend on the value of the `hibernate.connection.autocommit` setting.
+
Hibernate uses this assurance as an opportunity to opt-out of certain operations that may have a performance
impact (although this impact is general negligible). Specifically, when a transaction is started via the
impact (although this impact is generally negligible). Specifically, when a transaction is started via the
Hibernate or JPA transaction APIs, Hibernate will generally immediately acquire a Connection from the
provider and:
* check whether the Connection is initially in auto-commit mode via a call to `Connection#getAutoCommit` to know how to clean up the Connection when released.
@ -146,7 +146,7 @@ Can reference:
** a fully qualified name of a class implementing `ConnectionProvider`
+
The term `class` appears in the setting name due to legacy reasons; however it can accept instances.
The term `class` appears in the setting name due to legacy reasons. However, it can accept instances.
`*hibernate.jndi.class*`::
Names the JNDI `javax.naming.InitialContext` class.
@ -196,7 +196,7 @@ The number of seconds between two consecutive pool validations. During validatio
Maximum size of C3P0 statement cache. Refers to http://www.mchange.com/projects/c3p0/#maxStatements[c3p0 `maxStatements` setting].
`*hibernate.c3p0.acquire_increment*` (e.g. 2)::
Number of connections acquired at a time when there's no connection available in the pool. Refers to http://www.mchange.com/projects/c3p0/#acquireIncrement[c3p0 `acquireIncrement` setting].
The number of connections acquired at a time when there's no connection available in the pool. Refers to http://www.mchange.com/projects/c3p0/#acquireIncrement[c3p0 `acquireIncrement` setting].
`*hibernate.c3p0.idle_test_period*` (e.g. 5)::
Idle time before a C3P0 pooled connection is validated. Refers to http://www.mchange.com/projects/c3p0/#idleConnectionTestPeriod[c3p0 `idleConnectionTestPeriod` setting].
@ -289,7 +289,7 @@ The following short names are defined for this setting:
`legacy-hbm`::: Uses the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/boot/model/naming/ImplicitNamingStrategyLegacyHbmImpl.html[`ImplicitNamingStrategyLegacyHbmImpl`]
`component-path`::: Uses the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/boot/model/naming/ImplicitNamingStrategyComponentPathImpl.html[`ImplicitNamingStrategyComponentPathImpl`]
+
If this property happens to be empty, the fallback is to use `default` strategy.
If this property happens to be empty, the fallback is to use the `default` strategy.
`*hibernate.physical_naming_strategy*` (e.g. `org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl` (default value))::
Used to specify the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/boot/model/naming/PhysicalNamingStrategy.html[`PhysicalNamingStrategy`] class to use.
@ -336,7 +336,7 @@ Therefore, when setting `exclude-unlisted-classes` to true, only the classes tha
Used to specify the order in which metadata sources should be processed.
Value is a delimited-list whose elements are defined by https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/cfg/MetadataSourceType.html[`MetadataSourceType`].
+
Default is `hbm,class"`, therefore `hbm.xml` files are processed first, followed by annotations (combined with `orm.xml` mappings).
The default is `hbm,class`, therefore `hbm.xml` files are processed first, followed by annotations (combined with `orm.xml` mappings).
+
When using JPA, the XML mapping overrides a conflicting annotation mapping that targets the same entity attribute.
@ -460,7 +460,7 @@ Can reference a
`StatementInspector` implementation class name (fully-qualified class name).
`*hibernate.query.validate_parameters*` (e.g. `true` (default value) or `false`)::
This configuration property can be used to disable parameters validation performed by `org.hibernate.query.Query#setParameter` when the the Session is bootstrapped via JPA
This configuration property can be used to disable parameters validation performed by `org.hibernate.query.Query#setParameter` when the Session is bootstrapped via JPA
`javax.persistence.EntityManagerFactory`
`*hibernate.criteria.literal_handling_mode*` (e.g. `AUTO` (default value), `BIND` or `INLINE`)::
@ -491,19 +491,19 @@ Provide a custom https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javado
`*hibernate.hql.bulk_id_strategy.persistent.drop_tables*` (e.g. `true` or `false` (default value))::
This configuration property is used by the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/hql/spi/id/persistent/PersistentTableBulkIdStrategy.html[`PersistentTableBulkIdStrategy`], that mimics temporary tables for databases which do not support temporary tables.
It follows a pattern similar to the ANSI SQL definition of global temporary table using a "session id" column to segment rows from the various sessions.
It follows a pattern similar to the ANSI SQL definition of the global temporary table using a "session id" column to segment rows from the various sessions.
+
This configuration property allows you to DROP the tables used for multi-table bulk HQL operations when the `SessionFactory` or the `EntityManagerFactory` is closed.
`*hibernate.hql.bulk_id_strategy.persistent.schema*` (e.g. Database schema name. By default, the `hibernate.default_schema` is used.)::
This configuration property is used by the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/hql/spi/id/persistent/PersistentTableBulkIdStrategy.html[`PersistentTableBulkIdStrategy`], that mimics temporary tables for databases which do not support temporary tables.
It follows a pattern similar to the ANSI SQL definition of global temporary table using a "session id" column to segment rows from the various sessions.
It follows a pattern similar to the ANSI SQL definition of the global temporary table using a "session id" column to segment rows from the various sessions.
+
This configuration property defines the database schema used for storing the temporary tables used for bulk HQL operations.
`*hibernate.hql.bulk_id_strategy.persistent.catalog*` (e.g. Database catalog name. By default, the `hibernate.default_catalog` is used.)::
This configuration property is used by the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/hql/spi/id/persistent/PersistentTableBulkIdStrategy.html[`PersistentTableBulkIdStrategy`], that mimics temporary tables for databases which do not support temporary tables.
It follows a pattern similar to the ANSI SQL definition of global temporary table using a "session id" column to segment rows from the various sessions.
It follows a pattern similar to the ANSI SQL definition of the global temporary table using a "session id" column to segment rows from the various sessions.
+
This configuration property defines the database catalog used for storing the temporary tables used for bulk HQL operations.
@ -515,9 +515,9 @@ Legacy 4.x behavior favored performing pagination in-memory by avoiding the use
In 5.x, the limit handler behavior favors performance, thus, if the dialect doesn't support offsets, an exception is thrown instead.
`*hibernate.query.conventional_java_constants*` (e.g. `true` (default value) or `false`)::
Setting which indicates whether or not Java constant follow the https://docs.oracle.com/javase/tutorial/java/nutsandbolts/variables.html[Java Naming conventions].
Setting which indicates whether or not Java constants follow the https://docs.oracle.com/javase/tutorial/java/nutsandbolts/variables.html[Java Naming conventions].
+
Default is `true`.
The default is `true`.
Existing applications may want to disable this (set it to `false`) if non-conventional Java constants are used.
However, there is a significant performance overhead for using non-conventional Java constants
since Hibernate cannot determine if aliases should be treated as Java constants or not.
@ -546,17 +546,17 @@ Names the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/
+
Can specify either the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/loader/BatchFetchStyle.html[`BatchFetchStyle`] name (case-insensitively), or a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/loader/BatchFetchStyle.html[`BatchFetchStyle`] instance. `LEGACY` is the default value.
`*hibernate.jdbc.batch.builder*` (e.g. The fully qualified name of an https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/engine/jdbc/batch/spi/BatchBuilder.html[`BatchBuilder`] implementation class type or an actual object instance)::
`*hibernate.jdbc.batch.builder*` (e.g. The fully qualified name of a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/engine/jdbc/batch/spi/BatchBuilder.html[`BatchBuilder`] implementation class type or an actual object instance)::
Names the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/engine/jdbc/batch/spi/BatchBuilder.html[`BatchBuilder`] implementation to use.
[[configurations-database-fetch]]
==== Fetching properties
`*hibernate.max_fetch_depth*` (e.g. A value between `0` and `3`)::
Sets a maximum depth for the outer join fetch tree for single-ended associations. A single-ended association is a one-to-one or many-to-one assocation. A value of `0` disables default outer join fetching.
Sets a maximum depth for the outer join fetch tree for single-ended associations. A single-ended association is a one-to-one or many-to-one association. A value of `0` disables default outer join fetching.
`*hibernate.default_batch_fetch_size*` (e.g. `4`,`8`, or `16`)::
Default size for Hibernate Batch fetching of associations (lazily fetched associations can be fetched in batches to prevent N+1 query problems).
The default size for Hibernate Batch fetching of associations (lazily fetched associations can be fetched in batches to prevent N+1 query problems).
`*hibernate.jdbc.fetch_size*` (e.g. `0` or an integer)::
A non-zero value determines the JDBC fetch size by calling `Statement.setFetchSize()`.
@ -576,7 +576,7 @@ Enable wrapping of JDBC result sets in order to speed up column name lookups for
`*hibernate.enable_lazy_load_no_trans*` (e.g. `true` or `false` (default value))::
Initialize Lazy Proxies or Collections outside a given Transactional Persistence Context.
+
Although enabling this configuration can make `LazyInitializationException` go away, it's better to use a fetch plan that guarantees that all properties are properly initialised before the Session is closed.
Although enabling this configuration can make `LazyInitializationException` go away, it's better to use a fetch plan that guarantees that all properties are properly initialized before the Session is closed.
+
In reality, you probably shouldn't enable this setting anyway.
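+
Instead, prefer a fetch plan that initializes what the use case needs inside the transaction. The following sketch assumes a hypothetical `Author` entity with a lazy `books` collection and is only an illustration of the recommended approach:
+
[source, JAVA, indent=0]
----
// Fetch the lazy collection together with its owner, inside the transaction,
// so that no lazy loading needs to happen once the persistence context is closed.
Author author = entityManager.createQuery(
	"select a from Author a join fetch a.books where a.id = :id", Author.class )
.setParameter( "id", authorId )
.getSingleResult();

// The collection is already initialized here, even after the EntityManager is closed.
author.getBooks().forEach( book -> System.out.println( book.getTitle() ) );
----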
@ -599,7 +599,7 @@ If true, Hibernate generates comments inside the SQL, for easier debugging.
`*hibernate.generate_statistics*` (e.g. `true` or `false`)::
Causes Hibernate to collect statistics for performance tuning.
`*hibernate.stats.factory*` (e.g. the fully qualified name of an https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/stat/spi/StatisticsFactory.html[`StatisticsFactory`] implementation or an actual instance)::
`*hibernate.stats.factory*` (e.g. the fully qualified name of a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/stat/spi/StatisticsFactory.html[`StatisticsFactory`] implementation or an actual instance)::
The `StatisticsFactory` allows you to customize how the Hibernate Statistics are collected.
`*hibernate.session.events.log*` (e.g. `true` or `false`)::
@ -626,7 +626,7 @@ Enables the query cache. You still need to set individual queries to be cachable
`*hibernate.cache.use_second_level_cache*` (e.g. `true` (default value) or `false`)::
Enable/disable the second level cache, which is enabled by default, although the default `RegionFactory` is `NoCachingRegionFactory` (meaning there is no actual caching implementation).
`*hibernate.cache.query_cache_factory*` (e.g. Fully-qualified classname)::
`*hibernate.cache.query_cache_factory*` (e.g. Fully-qualified class name)::
A custom https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/cache/spi/QueryCacheFactory.html[`QueryCacheFactory`] interface. The default is the built-in `StandardQueryCacheFactory`.
`*hibernate.cache.region_prefix*` (e.g. A string)::
@ -639,7 +639,7 @@ Forces Hibernate to store data in the second-level cache in a more human-readabl
Enables the automatic eviction of a bi-directional association's collection cache when an element in the `ManyToOne` collection is added/updated/removed without properly managing the change on the `OneToMany` side.
`*hibernate.cache.use_reference_entries*` (e.g. `true` or `false`)::
Optimizes second-level cache operation to store immutable entities (aka "reference") which do not have associations into cache directly, this case, lots of disasseble and deep copy operations can be avoid. Default value of this property is `false`.
Optimizes second-level cache operation to store immutable entities (aka "references") which do not have associations directly into the cache; in this case, disassembling and deep copy operations can be avoided. The default value of this property is `false`.
`*hibernate.ejb.classcache*` (e.g. `hibernate.ejb.classcache.org.hibernate.ejb.test.Item` = `read-write`)::
Sets the associated entity class cache concurrency strategy for the designated region. Caching configuration should follow the pattern `hibernate.ejb.classcache.<fully.qualified.Classname>` = usage[, region], where usage is the cache strategy used and region is the cache region name.
@ -788,21 +788,21 @@ Specifies the minor version of the underlying database, as would be returned by
This value is used to help more precisely determine how to perform schema generation tasks for the underlying database in cases where `javax.persistence.database-product-name` and `javax.persistence.database-major-version` do not provide enough distinction.
`*javax.persistence.schema-generation.create-source*`::
Specifies whether schema generation commands for schema creation are to be determine based on object/relational mapping metadata, DDL scripts, or a combination of the two.
Specifies whether schema generation commands for schema creation are to be determined based on object/relational mapping metadata, DDL scripts, or a combination of the two.
See https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/tool/schema/SourceType.html[`SourceType`] for the valid set of values.
+
If no value is specified, a default is assumed as follows:
+
* if source scripts are specified (per `javax.persistence.schema-generation.create-script-source`), then `scripts` is assumed
* if source scripts are specified (per `javax.persistence.schema-generation.create-script-source`), then `script` is assumed
* otherwise, `metadata` is assumed
`*javax.persistence.schema-generation.drop-source*`::
Specifies whether schema generation commands for schema dropping are to be determine based on object/relational mapping metadata, DDL scripts, or a combination of the two.
Specifies whether schema generation commands for schema dropping are to be determined based on object/relational mapping metadata, DDL scripts, or a combination of the two.
See https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/tool/schema/SourceType.html[`SourceType`] for the valid set of values.
+
If no value is specified, a default is assumed as follows:
+
* if source scripts are specified (per `javax.persistence.schema-generation.create-script-source`), then `scripts` is assumed
* if source scripts are specified (per `javax.persistence.schema-generation.drop-script-source`), then the `script` option is assumed
* otherwise, `metadata` is assumed
`*javax.persistence.schema-generation.create-script-source*`::
@ -821,7 +821,7 @@ For cases where the `javax.persistence.schema-generation.scripts.action` value i
`*javax.persistence.hibernate.hbm2ddl.import_files*` (e.g. `import.sql` (default value))::
Comma-separated names of the optional files containing SQL DML statements executed during the `SessionFactory` creation.
File order matters, the statements of a give file are executed before the statements of the following one.
File order matters: the statements of a given file are executed before the statements of the following one.
+
These statements are only executed if the schema is created, meaning that `hibernate.hbm2ddl.auto` is set to `create`, `create-drop`, or `update`.
`javax.persistence.schema-generation.create-script-source` / `javax.persistence.schema-generation.drop-script-source` should be preferred.
@ -999,7 +999,7 @@ Names the `ClassLoader` used to load user application classes.
Names the `ClassLoader` Hibernate should use to perform resource loading.
`*hibernate.classLoader.hibernate*`::
Names the `ClassLoader` responsible for loading Hibernate classes. By default this is the `ClassLoader` that loaded this class.
Names the `ClassLoader` responsible for loading Hibernate classes. By default, this is the `ClassLoader` that loaded this class.
`*hibernate.classLoader.environment*`::
Names the `ClassLoader` used when Hibernate is unable to locate classes on the `hibernate.classLoader.application` or `hibernate.classLoader.hibernate`.
@ -1008,13 +1008,13 @@ Names the `ClassLoader` used when Hibernate is unable to locates classes on the
=== Bootstrap properties
`*hibernate.integrator_provider*` (e.g. The fully qualified name of an https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/jpa/boot/spi/IntegratorProvider.html[`IntegratorProvider`])::
Used to define a list of https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/integrator/spi/Integrator.html[`Integrator`] which are used during bootstrap process to integrate various services.
Used to define a list of https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/integrator/spi/Integrator.html[`Integrator`] which is used during the bootstrap process to integrate various services.
`*hibernate.strategy_registration_provider*` (e.g. The fully qualified name of an https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/jpa/boot/spi/StrategyRegistrationProviderList.html[`StrategyRegistrationProviderList`])::
Used to define a list of https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/boot/registry/selector/StrategyRegistrationProvider.html[`StrategyRegistrationProvider`] which are used during bootstrap process to provide registrations of strategy selector(s).
Used to define a list of https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/boot/registry/selector/StrategyRegistrationProvider.html[`StrategyRegistrationProvider`] which is used during the bootstrap process to provide registrations of strategy selector(s).
`*hibernate.type_contributors*` (e.g. The fully qualified name of an https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/jpa/boot/spi/TypeContributorList.html[`TypeContributorList`])::
Used to define a list of https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/boot/model/TypeContributor.html[`TypeContributor`] which are used during bootstrap process to contribute types.
Used to define a list of https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/boot/model/TypeContributor.html[`TypeContributor`] which is used during the bootstrap process to contribute types.
`*hibernate.persister.resolver*` (e.g. The fully qualified name of a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/persister/spi/PersisterClassResolver.html[`PersisterClassResolver`] or a `PersisterClassResolver` instance)::
Used to define an implementation of the `PersisterClassResolver` interface which can be used to customize how an entity or a collection is being persisted.
@ -1025,8 +1025,8 @@ Like a `PersisterClassResolver`, the `PersisterFactory` can be used to customize
`*hibernate.service.allow_crawling*` (e.g. `true` (default value) or `false`)::
Crawl all available service bindings for an alternate registration of a given Hibernate `Service`.
`*hibernate.metadata_builder_contributor*` (e.g. The instance, the class or the fully qualified class name of an https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/jpa/boot/spi/MetadataBuilderContributor.html[`MetadataBuilderContributor`])::
Used to define a instance, the class or the fully qualified class name of an https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/jpa/boot/spi/MetadataBuilderContributor.html[`MetadataBuilderContributor`] which can be used to configure the `MetadataBuilder` when bootstrapping via the JPA `EntityManagerFactory`.
`*hibernate.metadata_builder_contributor*` (e.g. The instance, the class or the fully qualified class name of a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/jpa/boot/spi/MetadataBuilderContributor.html[`MetadataBuilderContributor`])::
Used to define an instance, the class or the fully qualified class name of a https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/jpa/boot/spi/MetadataBuilderContributor.html[`MetadataBuilderContributor`] which can be used to configure the `MetadataBuilder` when bootstrapping via the JPA `EntityManagerFactory`.
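+
As an illustration only, such a contributor could register an extra SQL function. The sketch below assumes the `instr` function and the `MetadataBuilderContributor` contract linked above; it is a minimal outline rather than a complete implementation:
+
[source, JAVA, indent=0]
----
// Referenced via the hibernate.metadata_builder_contributor configuration property.
public class SqlFunctionMetadataBuilderContributor implements MetadataBuilderContributor {

	@Override
	public void contribute(MetadataBuilder metadataBuilder) {
		// register an extra SQL function so it can be used in JPQL/HQL queries
		metadataBuilder.applySqlFunction(
			"instr",
			new StandardSQLFunction( "instr", StandardBasicTypes.INTEGER )
		);
	}
}
----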
[[configurations-misc]]
=== Miscellaneous properties
@ -1043,7 +1043,7 @@ If `hibernate.session_factory_name_is_jndi` is set to `true`, this is also the n
`*hibernate.session_factory_name_is_jndi*` (e.g. `true` (default value) or `false`)::
Does the value defined by `hibernate.session_factory_name` represent a JNDI namespace into which the `org.hibernate.SessionFactory` should be bound and made accessible?
+
Defaults to `true` for backwards compatibility. Set this to `false` if naming a SessionFactory is needed for serialization purposes, but no writable JNDI context exists in the runtime environment or if the user simply does not want JNDI to be used.
Defaults to `true` for backward compatibility. Set this to `false` if naming a SessionFactory is needed for serialization purposes, but no writable JNDI context exists in the runtime environment or if the user simply does not want JNDI to be used.
`*hibernate.ejb.entitymanager_factory_name*` (e.g. By default, the persistence unit name is used, otherwise a randomly generated UUID)::
Internally, Hibernate keeps track of all `EntityManagerFactory` instances using the `EntityManagerFactoryRegistry`. The name is used as a key to identify a given `EntityManagerFactory` reference.

View File

@ -41,7 +41,7 @@ There are other ways to specify Configuration information, including:
* Place a file named hibernate.properties in a root directory of the classpath
* Pass an instance of java.util.Properties to `Configuration#setProperties`
* Via a `hibernate.cfg.xml` file
* System properties using java `-Dproperty=value`
* System properties using Java `-Dproperty=value`
== Migration

View File

@ -26,7 +26,7 @@ List cats = crit.list();
----
[[criteria-entity-name]]
=== JPA vs Hibernate entity name
=== JPA vs. Hibernate entity name
When using the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/SharedSessionContract.html#createCriteria-java.lang.String-[`Session#createCriteria(String entityName)` or `StatelessSession#createCriteria(String entityName)`],
the *entityName* means the fully-qualified name of the underlying entity and not the name denoted by the `name` attribute of the JPA `@Entity` annotation.
@ -228,7 +228,7 @@ List cats = session.createCriteria( Cat.class )
This will return all of the `Cat`s with a mate whose name starts with "good" ordered by their mate's age, and all cats who do not have a mate.
This is useful when there is a need to order or limit in the database prior to returning complex/large result sets,
and removes many instances where multiple queries would have to be performed and the results unioned by java in memory.
and removes many instances where multiple queries would have to be performed and the results unioned by Java in memory.
Without this feature, first all of the cats without a mate would need to be loaded in one query.
@ -275,7 +275,7 @@ When using criteria against collections, there are two distinct cases.
One is if the collection contains entities (e.g. `<one-to-many/>` or `<many-to-many/>`) or components (`<composite-element/>`),
and the second is if the collection contains scalar values (`<element/>`).
In the first case, the syntax is as given above in the section <<criteria-associations>> where we restrict the `kittens` collection.
Essentially we create a `Criteria` object against the collection property and restrict the entity or component properties using that instance.
Essentially, we create a `Criteria` object against the collection property and restrict the entity or component properties using that instance.
For querying a collection of basic values, we still create the `Criteria` object against the collection,
but to reference the value, we use the special property "elements".

View File

@ -37,7 +37,7 @@ include::{sourcedir}/timestamp_version.xml[]
|column |The name of the column which holds the timestamp. Optional, defaults to the property name
|name |The name of a JavaBeans style property of Java type `Date` or `Timestamp` of the persistent class.
|access |The strategy Hibernate uses to access the property value. Optional, defaults to `property`.
|unsaved-value |A version property which indicates than instance is newly instantiated, and unsaved.
|unsaved-value |A version property which indicates that the instance is newly instantiated and unsaved.
This distinguishes it from detached instances that were saved or loaded in a previous session.
The default value of `undefined` indicates that Hibernate uses the identifier property value.
|source |Whether Hibernate retrieves the timestamp from the database or the current JVM.

View File

@ -102,7 +102,7 @@ You can externalize the resultset mapping information in a `<resultset>` element
----
====
You can, alternatively, use the resultset mapping information in your hbm files directly in java code.
You can, alternatively, use the resultset mapping information in your hbm files directly in Java code.
.Programmatically specifying the result mapping information
====

View File

@ -16,7 +16,7 @@ As a JPA provider, Hibernate implements the Java Persistence API specifications
image:images/architecture/JPA_Hibernate.svg[image]
SessionFactory (`org.hibernate.SessionFactory`):: A thread-safe (and immutable) representation of the mapping of the application domain model to a database.
Acts as a factory for `org.hibernate.Session` instances. The `EntityManagerFactory` is the JPA equivalent of a `SessionFactory` and basically those two converge into the same `SessionFactory` implementation.
Acts as a factory for `org.hibernate.Session` instances. The `EntityManagerFactory` is the JPA equivalent of a `SessionFactory` and basically, those two converge into the same `SessionFactory` implementation.
+
A `SessionFactory` is very expensive to create, so, for any given database, the application should have only one associated `SessionFactory`.
The `SessionFactory` maintains services that Hibernate uses across all `Session(s)` such as second level caches, connection pools, transaction system integrations, etc.

View File

@ -67,7 +67,7 @@ include::{sourcedir}/BatchTest.java[tags=batch-session-batch-example]
There are several problems associated with this example:
. Hibernate caches all the newly inserted `Customer` instances in the session-level cache, so, when the transaction ends, 100 000 entities are managed by the persistence context.
If the maximum memory allocated to the JVM is rather low, this example could fails with an `OutOfMemoryException`.
If the maximum memory allocated to the JVM is rather low, this example could fail with an `OutOfMemoryException`.
The Java 1.8 JVM allocates either 1/4 of the available RAM or 1 GB by default, which can easily accommodate 100 000 objects on the heap.
. Long-running transactions can deplete a connection pool, so other transactions don't get a chance to proceed.
. JDBC batching is not enabled by default, so every insert statement requires a database roundtrip.
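A commonly recommended way to address these problems is to enable JDBC batching (e.g. via `hibernate.jdbc.batch_size`), use reasonably sized transactions, and periodically flush and clear the persistence context. The following sketch assumes a batch size of 25 and the `Customer` entity used by the surrounding examples:

.Flushing and clearing the Session periodically (illustrative sketch)
====
[source, JAVA, indent=0]
----
int batchSize = 25;

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

for ( int i = 0; i < 100_000; i++ ) {
	Customer customer = new Customer( String.format( "Customer %d", i ) );
	session.persist( customer );

	if ( i > 0 && i % batchSize == 0 ) {
		// flush the pending inserts as a JDBC batch and detach the managed entities
		session.flush();
		session.clear();
	}
}

tx.commit();
session.close();
----
====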
@ -118,7 +118,7 @@ However, it is good practice to close the `ScrollableResults` explicitly.
`StatelessSession` is a command-oriented API provided by Hibernate.
Use it to stream data to and from the database in the form of detached objects.
A `StatelessSession` has no persistence context associated with it and does not provide many of the higher-level life cycle semantics.
A `StatelessSession` has no persistence context associated with it and does not provide many of the higher-level lifecycle semantics.
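For instance, a stateless bulk insert might look like the following sketch, which reuses the `Customer` entity from the earlier examples purely as an assumption:

.Inserting with a StatelessSession (illustrative sketch)
====
[source, JAVA, indent=0]
----
StatelessSession statelessSession = sessionFactory.openStatelessSession();
Transaction tx = statelessSession.beginTransaction();

for ( int i = 0; i < 100_000; i++ ) {
	Customer customer = new Customer( String.format( "Customer %d", i ) );
	// the insert is issued immediately; there is no persistence context and no dirty checking
	statelessSession.insert( customer );
}

tx.commit();
statelessSession.close();
----
====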
Some of the things not provided by a `StatelessSession` include:
@ -243,8 +243,8 @@ include::{sourcedir}/BatchTest.java[tags=batch-bulk-hql-delete-example]
----
====
Method `Query.executeUpdate()` returns an `int` value, which indicates the number of entities effected by the operation.
This may or may not correlate to the number of rows effected in the database.
Method `Query.executeUpdate()` returns an `int` value, which indicates the number of entities affected by the operation.
This may or may not correlate to the number of rows affected in the database.
A JPQL/HQL bulk operation might result in multiple SQL statements being executed, such as for joined-subclass.
In the example of joined-subclass, a `DELETE` against one of the subclasses may actually result in deletes in the tables underlying the join, or further down the inheritance hierarchy.
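For instance, capturing that count for a bulk delete could be sketched as follows (the `Customer` entity name is only an assumption):

.Reading the count returned by executeUpdate (illustrative sketch)
====
[source, JAVA, indent=0]
----
int deletedEntities = session.createQuery(
	"delete from Customer c where c.name like :name" )
.setParameter( "name", "Customer%" )
.executeUpdate();

// the count reflects affected entities, which may differ from the number of affected table rows
System.out.println( deletedEntities + " Customer entities were deleted" );
----
====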
@ -282,7 +282,7 @@ Otherwise, Hibernate throws an exception during parsing.
Available in-database generators are `org.hibernate.id.SequenceGenerator` and its subclasses, and objects which implement `org.hibernate.id.PostInsertIdentifierGenerator`.
For properties mapped as either version or timestamp, the insert statement gives you two options.
You can either specify the property in the properties_list, in which case its value is taken from the corresponding select expressions, or omit it from the properties_list,
You can either specify the property in the properties_list, in which case its value is taken from the corresponding select expressions, or omit it from the properties_list,
in which case the seed value defined by the org.hibernate.type.VersionType is used.
[[batch-bulk-hql-insert-example]]

View File

@ -26,7 +26,7 @@ During the bootstrap process, you might want to customize Hibernate behavior so
=== Native Bootstrapping
This section discusses the process of bootstrapping a Hibernate `SessionFactory`.
Specifically it discusses the bootstrapping APIs as redesigned in 5.0.
Specifically, it addresses the bootstrapping APIs as redesigned in 5.0.
For a discussion of the legacy bootstrapping API, see <<appendices/Legacy_Bootstrap.adoc#appendix-legacy-bootstrap,Legacy Bootstrapping>>
[[bootstrap-native-registry]]
@ -110,7 +110,7 @@ include::{sourcedir}/BootstrapTest.java[tags=bootstrap-event-listener-registrati
[[bootstrap-native-metadata]]
==== Building the Metadata
The second step in native bootstrapping is the building of a `org.hibernate.boot.Metadata` object containing the parsed representations of an application domain model and its mapping to a database.
The second step in native bootstrapping is the building of an `org.hibernate.boot.Metadata` object containing the parsed representations of an application domain model and its mapping to a database.
The first thing we obviously need to build a parsed representation is the source information to be parsed (annotated classes, `hbm.xml` files, `orm.xml` files).
This is the purpose of `org.hibernate.boot.MetadataSources`:
@ -133,7 +133,7 @@ If you are ok with the default behavior in building the Metadata then you can si
====
Notice that a `ServiceRegistry` can be passed at a number of points in this bootstrapping process.
The suggested approach is to build a `StandardServiceRegistry` yourself and pass that along to the `MetadataSources` constructor.
From there, `MetadataBuilder`, `Metadata`, `SessionFactoryBuilder` and `SessionFactory` will all pick up that same `StandardServiceRegistry`.
From there, `MetadataBuilder`, `Metadata`, `SessionFactoryBuilder`, and `SessionFactory` will all pick up that same `StandardServiceRegistry`.
====
However, if you wish to adjust the process of building `Metadata` from `MetadataSources`,
@ -156,7 +156,7 @@ include::{sourcedir}/BootstrapTest.java[tags=bootstrap-native-metadata-builder-e
The final step in native bootstrapping is to build the `SessionFactory` itself.
Much like discussed above, if you are ok with the default behavior of building a `SessionFactory` from a `Metadata` reference, you can simply call the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/boot/Metadata.html#buildSessionFactory--[`buildSessionFactory`] method on the `Metadata` object.
However, if you would like to adjust that building process you will need to use `SessionFactoryBuilder` as obtained via [`Metadata#getSessionFactoryBuilder`. Again, see its https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/boot/Metadata.html#getSessionFactoryBuilder--[Javadocs] for more details.
However, if you would like to adjust that building process, you will need to use `SessionFactoryBuilder` as obtained via `Metadata#getSessionFactoryBuilder`. Again, see its https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/boot/Metadata.html#getSessionFactoryBuilder--[Javadocs] for more details.
[[bootstrap-native-SessionFactory-example]]
.Native Bootstrapping - Putting it all together
@ -291,22 +291,22 @@ JPA offers two mapping options:
- annotations
- XML mappings
Although annotations are much more common, there are projects were XML mappings are preferred.
Although annotations are much more common, there are projects where XML mappings are preferred.
You can even mix annotations and XML mappings so that you can override annotation mappings with XML configurations that can be easily changed without recompiling the project source code.
This is possible because if there are two conflicting mappings, the XML mappings takes precedence over its annotation counterpart.
This is possible because if there are two conflicting mappings, the XML mapping takes precedence over its annotation counterpart.
The JPA specifications requires the XML mappings to be located on the class path:
The JPA specification requires the XML mappings to be located on the classpath:
[quote, Section 8.2.1.6.2 of the JPA 2.1 Specification]
____
An object/relational mapping XML file named `orm.xml` may be specified in the `META-INF` directory in the root of the persistence unit or in the `META-INF` directory of any jar file referenced by the `persistence.xml`.
Alternatively, or in addition, one or more mapping files may be referenced by the mapping-file elements of the persistence-unit element. These mapping files may be present anywhere on the class path.
Alternatively, or in addition, one or more mapping files may be referenced by the mapping-file elements of the persistence-unit element. These mapping files may be present anywhere on the classpath.
____
Therefore, the mapping files can reside in the application jar artifacts, or they can be stored in an external folder location with the cogitation that that location be included in the class path.
Therefore, the mapping files can reside in the application jar artifacts, or they can be stored in an external folder location with the caveat that that location be included in the classpath.
Hibernate is more lenient in this regard so you can use any external location even outside of the application configured class path.
Hibernate is more lenient in this regard so you can use any external location even outside of the application configured classpath.
[[bootstrap-jpa-compliant-persistence-xml-external-mappings-example]]
.META-INF/persistence.xml configuration file for external XML mappings

View File

@ -35,9 +35,9 @@ Detailed information is provided later in this chapter.
Besides specific provider configuration, there are a number of configurations options on the Hibernate side of the integration that control various caching behaviors:
`hibernate.cache.use_second_level_cache`::
Enable or disable second level caching overall. Default is true, although the default region factory is `NoCachingRegionFactory`.
Enable or disable second level caching overall. The default is true, although the default region factory is `NoCachingRegionFactory`.
`hibernate.cache.use_query_cache`::
Enable or disable second level caching of query results. Default is false.
Enable or disable second level caching of query results. The default is false.
`hibernate.cache.query_cache_factory`::
Query result caching is handled by a special contract that deals with staleness-based invalidation of the results.
The default implementation does not allow stale results at all. Use this for applications that would like to relax that.
@ -48,7 +48,7 @@ Besides specific provider configuration, there are a number of configurations op
Defines a name to be used as a prefix to all second-level cache region names.
`hibernate.cache.default_cache_concurrency_strategy`::
In Hibernate second-level caching, all regions can be configured differently including the concurrency strategy to use when accessing that particular region.
This setting allows to define a default strategy to be used.
This setting allows defining a default strategy to be used.
This setting is very rarely required as the pluggable providers do specify the default strategy to use.
Valid values include:
* read-only,
@ -61,12 +61,12 @@ Besides specific provider configuration, there are a number of configurations op
`hibernate.cache.auto_evict_collection_cache`::
Enables or disables the automatic eviction of a bidirectional association's collection cache entry when the association is changed just from the owning side.
This is disabled by default, as it has a performance impact to track this state.
However if your application does not manage both sides of bidirectional association where the collection side is cached,
However, if your application does not manage both sides of a bidirectional association where the collection side is cached,
the alternative is to have stale data in that collection cache.
`hibernate.cache.use_reference_entries`::
Enable direct storage of entity references into the second level cache for read-only or immutable entities.
`hibernate.cache.keys_factory`::
When storing entries into second-level cache as key-value pair, the identifiers can be wrapped into tuples
When storing entries into the second-level cache as a key-value pair, the identifiers can be wrapped into tuples
<entity type, tenant, identifier> to guarantee uniqueness in case the second-level cache stores all entities
in a single space. These tuples are then used as keys in the cache. When the second-level cache implementation
(incl. its configuration) guarantees that different entity types are stored separately and multi-tenancy is not
@ -380,7 +380,7 @@ When using http://docs.oracle.com/javaee/7/api/javax/persistence/CacheStoreMode.
Hibernate will selectively force the results cached in that particular region to be refreshed.
This is particularly useful in cases where underlying data may have been updated via a separate process
and is a far more efficient alternative to bulk eviction of the region via `SessionFactory` eviction which looks as follows:
and is a far more efficient alternative to the bulk eviction of the region via `SessionFactory` eviction which looks as follows:
[source, JAVA, indent=0]
----
@ -402,11 +402,11 @@ The relationship between Hibernate and JPA cache modes can be seen in the follow
[cols=",,",options="header",]
|======================================
|Hibernate | JPA | Description
|`CacheMode.NORMAL` |`CacheStoreMode.USE` and `CacheRetrieveMode.USE` | Default. Reads/writes data from/into cache
|`CacheMode.NORMAL` |`CacheStoreMode.USE` and `CacheRetrieveMode.USE` | Default. Reads/writes data from/into the cache
|`CacheMode.REFRESH` |`CacheStoreMode.REFRESH` and `CacheRetrieveMode.BYPASS` | Doesn't read from cache, but writes to the cache upon loading from the database
|`CacheMode.PUT` |`CacheStoreMode.USE` and `CacheRetrieveMode.BYPASS` | Doesn't read from cache, but writes to the cache as it reads from the database
|`CacheMode.GET` |`CacheStoreMode.BYPASS` and `CacheRetrieveMode.USE` | Read from the cache, but doesn't write to cache
|`CacheMode.IGNORE` |`CacheStoreMode.BYPASS` and `CacheRetrieveMode.BYPASS` | Doesn't read/write data from/into cache
|`CacheMode.IGNORE` |`CacheStoreMode.BYPASS` and `CacheRetrieveMode.BYPASS` | Doesn't read/write data from/into the cache
|======================================
Setting the cache mode can be done either when loading entities directly or when executing a query.
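For instance, the sketch below shows both the native `CacheMode` API and the corresponding JPA hints on a query; the `Person` entity is assumed for illustration only:

.Setting the cache mode (illustrative sketch)
====
[source, JAVA, indent=0]
----
// Native API: refresh the second-level cache entry while loading
session.setCacheMode( CacheMode.REFRESH );
Person person = session.get( Person.class, personId );

// JPA API: the corresponding store/retrieve mode hints on a query
List<Person> persons = entityManager.createQuery(
	"select p from Person p", Person.class )
.setHint( "javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS )
.setHint( "javax.persistence.cache.storeMode", CacheStoreMode.REFRESH )
.getResultList();
----
====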
@ -507,7 +507,7 @@ include::{sourcedir}/SecondLevelCacheTest.java[tags=caching-statistics-example]
[NOTE]
====
Use of the build-in integration for https://jcp.org/en/jsr/detail?id=107[JCache] requires that the `hibernate-jcache` module jar (and all of its dependencies) are on the classpath.
Use of the built-in integration for https://jcp.org/en/jsr/detail?id=107[JCache] requires that the `hibernate-jcache` module jar (and all of its dependencies) are on the classpath.
In addition, a JCache implementation needs to be added.
A list of compatible implementations can be found https://jcp.org/aboutJava/communityprocess/implementations/jsr107/index.html[on the JCP website].
An alternative source of compatible implementations can be found through https://github.com/cruftex/jsr107-test-zoo[the JSR-107 test zoo].
@ -585,7 +585,7 @@ and also log a warning about the missing cache.
Note that caches created this way may be very badly configured (unlimited size and no eviction in particular)
unless the cache provider was explicitly configured to use a specific configuration for default caches.
Ehcache in particular allows to set such default configuration using cache templates,
Ehcache, in particular, allows setting such a default configuration using cache templates,
see http://www.ehcache.org/documentation/3.0/107.html#supplement-jsr-107-configurations
====
@ -596,7 +596,7 @@ This integration covers Ehcache 2.x, in order to use Ehcache 3.x as second level
[NOTE]
====
Use of the build-in integration for http://www.ehcache.org/[Ehcache] requires that the `hibernate-ehcache` module jar (and all of its dependencies) are on the classpath.
Use of the built-in integration for http://www.ehcache.org/[Ehcache] requires that the `hibernate-ehcache` module jar (and all of its dependencies) are on the classpath.
====
[[caching-provider-ehcache-region-factory]]

View File

@ -123,7 +123,7 @@ include::{extrasdir}/associations-one-to-many-bidirectional-example.sql[]
[IMPORTANT]
====
Whenever a bidirectional association is formed, the application developer must make sure both sides are in-sync at all times.
The `addPhone()` and `removePhone()` are utilities methods that synchronize both ends whenever a child element is added or removed.
The `addPhone()` and `removePhone()` are utility methods that synchronize both ends whenever a child element is added or removed.
====
Because the `Phone` class has a `@NaturalId` column (the phone number being unique),
@ -146,7 +146,7 @@ include::{extrasdir}/associations-one-to-many-bidirectional-lifecycle-example.sq
Unlike the unidirectional `@OneToMany`, the bidirectional association is much more efficient when managing the collection persistence state.
Every element removal only requires a single update (in which the foreign key column is set to `NULL`), and,
if the child entity lifecycle is bound to its owning parent so that the child cannot exist without its parent,
then we can annotate the association with the `orphan-removal` attribute and disassociating the child will trigger a delete statement on the actual child table row as well.
then we can annotate the association with the `orphan-removal` attribute, and dissociating the child will trigger a delete statement on the actual child table row as well.
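Such utility methods typically take the following shape, sketched here under the assumption of a `Person` parent owning a bidirectional `phones` collection:

.Synchronizing both sides of a bidirectional association (illustrative sketch)
====
[source, JAVA, indent=0]
----
public void addPhone(Phone phone) {
	// keep both ends in sync: add to the collection and set the owning side
	phones.add( phone );
	phone.setPerson( this );
}

public void removePhone(Phone phone) {
	// keep both ends in sync: remove from the collection and clear the owning side
	phones.remove( phone );
	phone.setPerson( null );
}
----
====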
[[associations-one-to-one]]
==== `@OneToOne`
@ -254,7 +254,7 @@ see the <<chapters/pc/BytecodeEnhancement.adoc#BytecodeEnhancement, BytecodeEnha
==== `@ManyToMany`
The `@ManyToMany` association requires a link table that joins two entities.
Like the `@OneToMany` association, `@ManyToMany` can be a either unidirectional or bidirectional.
Like the `@OneToMany` association, `@ManyToMany` can be either unidirectional or bidirectional.
[[associations-many-to-many-unidirectional]]
===== Unidirectional `@ManyToMany`

View File

@ -84,8 +84,8 @@ Internally Hibernate uses a registry of basic types when it needs to resolve a s
[cols=",,,",options="header",]
|=================================================================================================
|Hibernate type (org.hibernate.spatial package) |JDBC type |Java type |BasicTypeRegistry key(s)
|JTSGeometryType |depends on the dialect | com.vividsolutions.jts.geom.Geometry |jts_geometry, or the classname of Geometry or any of its subclasses
|GeolatteGeometryType |depends on the dialect | org.geolatte.geom.Geometry |geolatte_geometry, or the classname of Geometry or any of its subclasses
|JTSGeometryType |depends on the dialect | com.vividsolutions.jts.geom.Geometry |jts_geometry, or the class name of Geometry or any of its subclasses
|GeolatteGeometryType |depends on the dialect | org.geolatte.geom.Geometry |geolatte_geometry, or the class name of Geometry or any of its subclasses
|=================================================================================================
[NOTE]
@ -151,7 +151,7 @@ The `@Basic` annotation defines 2 attributes.
JPA defines this as "a hint", which essentially means that its effect is specifically required.
As long as the type is not primitive, Hibernate takes this to mean that the underlying column should be `NULLABLE`.
`fetch` - FetchType (defaults to EAGER):: Defines whether this attribute should be fetched eagerly or lazily.
JPA says that EAGER is a requirement to the provider (Hibernate) that the value should be fetched when the owner is fetched, while LAZY is merely a hint that the value be fetched when the attribute is accessed.
JPA says that EAGER is a requirement to the provider (Hibernate) that the value should be fetched when the owner is fetched, while LAZY is merely a hint that the value is fetched when the attribute is accessed.
Hibernate ignores this setting for basic types unless you are using bytecode enhancement.
See the <<chapters/pc/BytecodeEnhancement.adoc#BytecodeEnhancement,BytecodeEnhancement>> for additional information on fetching and on bytecode enhancement.
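For example, an attribute combining both hints might be declared as in this sketch (the `Product` entity is an assumption used for illustration):

.Using the @Basic attributes (illustrative sketch)
====
[source, JAVA, indent=0]
----
@Entity(name = "Product")
public class Product {

	@Id
	private Long id;

	// non-primitive and optional = false, so the column is expected to be NOT NULL;
	// LAZY is honored for basic types only when bytecode enhancement is used
	@Basic(optional = false, fetch = FetchType.LAZY)
	private String description;

	//Getters and setters are omitted for brevity
}
----
====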
@ -188,7 +188,7 @@ or its `org.hibernate.type.IntegerType` for mapping `java.lang.Integer` attribut
The answer lies in a service inside Hibernate called the `org.hibernate.type.BasicTypeRegistry`, which essentially maintains a map of `org.hibernate.type.BasicType` (a `org.hibernate.type.Type` specialization) instances keyed by a name.
We will see later, in the <<basic-type-annotation>> section, that we can explicitly tell Hibernate which BasicType to use for a particular attribute.
But first let's explore how implicit resolution works and how applications can adjust implicit resolution.
But first, let's explore how implicit resolution works and how applications can adjust the implicit resolution.
[NOTE]
====
@ -214,7 +214,7 @@ For more details, see <<basic-custom-type>> section.
Sometimes you want a particular attribute to be handled differently.
Occasionally Hibernate will implicitly pick a `BasicType` that you do not want (and for some reason you do not want to adjust the `BasicTypeRegistry`).
In these cases you must explicitly tell Hibernate the `BasicType` to use, via the `org.hibernate.annotations.Type` annotation.
In these cases, you must explicitly tell Hibernate the `BasicType` to use, via the `org.hibernate.annotations.Type` annotation.
[[basic-type-annotation-example]]
.Using `@org.hibernate.annotations.Type`
@ -315,7 +315,7 @@ include::{sourcedir}/basic/BitSetTypeTest.java[tags=basic-custom-type-BitSetType
----
====
Alternatively, use can use a `@TypeDef` ans skip the registration phase:
Alternatively, you can use the `@TypeDef` and skip the registration phase:
[[basic-custom-type-BitSetTypeDef-mapping-example]]
.Using `@TypeDef` to register a custom Type
@ -424,7 +424,7 @@ Hibernate supports the mapping of Java enums as basic value types in a number of
[[basic-enums-Enumerated]]
===== `@Enumerated`
The original JPA-compliant way to map enums was via the `@Enumerated` and `@MapKeyEnumerated` for map keys annotations which works on the principle that the enum values are stored according to one of 2 strategies indicated by `javax.persistence.EnumType`:
The original JPA-compliant way to map enums was via the `@Enumerated` or `@MapKeyEnumerated` for map keys annotations, working on the principle that the enum values are stored according to one of 2 strategies indicated by `javax.persistence.EnumType`:
`ORDINAL`::
stored according to the enum value's ordinal position within the enum class, as indicated by `java.lang.Enum#ordinal`
@ -487,7 +487,7 @@ include::{sourcedir}/basic/PhoneTypeEnumeratedStringTest.java[tags=basic-enums-E
----
====
Persisting the same entity like in the `@Enumerated(ORDINAL)` example, Hibernate generates the following SQL statement:
Persisting the same entity as in the `@Enumerated(ORDINAL)` example, Hibernate generates the following SQL statement:
[[basic-enums-Enumerated-string-persistence-example]]
.Persisting an entity with an `@Enumerated(STRING)` mapping
@ -504,7 +504,7 @@ include::{extrasdir}/basic/basic-enums-Enumerated-string-persistence-example.sql
Let's consider the following `Gender` enum which stores its values using the `'M'` and `'F'` codes.
[[basic-enums-converter-example]]
.Enum with custom constructor
.Enum with a custom constructor
====
[source, JAVA, indent=0]
----
@ -684,7 +684,7 @@ Mapping LOBs (database Large Objects) come in 2 forms, those using the JDBC loca
JDBC LOB locators exist to allow efficient access to the LOB data.
They allow the JDBC driver to stream parts of the LOB data as needed, potentially freeing up memory space.
However they can be unnatural to deal with and have certain limitations.
However, they can be unnatural to deal with and have certain limitations.
For example, a LOB locator is only portably valid during the duration of the transaction in which it was obtained.
The idea of materialized LOBs is to trade off the potential efficiency (not all drivers handle LOB data efficiently) for a more natural programming paradigm using familiar Java types, such as `String` or `byte[]`, for these LOBs.
@ -698,7 +698,7 @@ The JDBC LOB locator types include:
* `java.sql.NClob`
Mapping materialized forms of these LOB values would use more familiar Java types such as `String`, `char[]`, `byte[]`, etc.
The trade off for _more familiar_ is usually performance.
The trade-off for _more familiar_ is usually performance.
[[basic-clob]]
===== Mapping CLOB
@ -843,7 +843,7 @@ include::{sourcedir}/basic/BlobByteArrayTest.java[tags=basic-blob-byte-array-exa
==== Mapping Nationalized Character Data
JDBC 4 added the ability to explicitly handle nationalized character data.
To this end it added specific nationalized character data types.
To this end, it added specific nationalized character data types:
* `NCHAR`
* `NVARCHAR`
@ -894,7 +894,7 @@ include::{sourcedir}/basic/NClobTest.java[tags=basic-nclob-example]
----
====
To persist such an entity, you have to create a `NClob` using the `NClobProxy` Hibernate utility:
To persist such an entity, you have to create an `NClob` using the `NClobProxy` Hibernate utility:
[[basic-nclob-persist-example]]
.Persisting a `java.sql.NClob` entity
@ -952,7 +952,7 @@ Hibernate also allows you to map UUID values, again in a number of ways.
[NOTE]
====
The default UUID mapping is as binary because it represents more efficient storage.
However many applications prefer the readability of character storage.
However, many applications prefer the readability of character storage.
To switch the default mapping, simply call `MetadataBuilder.applyBasicType( UUIDCharType.INSTANCE, UUID.class.getName() )`.
====
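A sketch of that override during native bootstrapping could look as follows; the `standardServiceRegistry` variable and the `Event` entity are assumptions for illustration purposes:

.Switching the default UUID mapping to character storage (illustrative sketch)
====
[source, JAVA, indent=0]
----
MetadataSources sources = new MetadataSources( standardServiceRegistry );
sources.addAnnotatedClass( Event.class );

Metadata metadata = sources.getMetadataBuilder()
	// store java.util.UUID attributes as CHAR/VARCHAR instead of BINARY
	.applyBasicType( UUIDCharType.INSTANCE, UUID.class.getName() )
	.build();
----
====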
@ -961,7 +961,7 @@ To switch the default mapping, simply call `MetadataBuilder.applyBasicType( UUID
As mentioned, this is the default mapping for UUID attributes.
It maps the UUID to a `byte[]` using `java.util.UUID#getMostSignificantBits` and `java.util.UUID#getLeastSignificantBits` and stores that as `BINARY` data.
Chosen as the default simply because it is generally more efficient from storage perspective.
Chosen as the default simply because it is generally more efficient from a storage perspective.
==== UUID as (var)char
@ -980,7 +980,7 @@ Note that this can cause difficulty as the driver chooses to map many different
==== UUID as identifier
Hibernate supports using UUID values as identifiers, and they can even be generated on user's behalf.
Hibernate supports using UUID values as identifiers, and they can even be generated on the user's behalf.
For details, see the discussion of generators in <<chapters/domain/identifiers.adoc#identifiers,_Identifier generators_>>.
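For instance, a UUID identifier that Hibernate generates automatically might be sketched as follows (the `Book` entity is an assumption):

.UUID identifier with a generated value (illustrative sketch)
====
[source, JAVA, indent=0]
----
@Entity(name = "Book")
public class Book {

	// with a java.util.UUID identifier, @GeneratedValue lets Hibernate pick its UUID generator
	@Id
	@GeneratedValue
	private UUID id;

	private String title;

	//Getters and setters are omitted for brevity
}
----
====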
[[basic-datetime]]
@ -1127,7 +1127,7 @@ Programmatically::
TimeZone.setDefault( TimeZone.getTimeZone( "UTC" ) );
----
However, as explained in http://in.relation.to/2016/09/12/jdbc-time-zone-configuration-property/[this article], this is not always practical especially for front-end nodes.
However, as explained in http://in.relation.to/2016/09/12/jdbc-time-zone-configuration-property/[this article], this is not always practical, especially for front-end nodes.
For this reason, Hibernate offers the `hibernate.jdbc.time_zone` configuration property which can be configured:
Declaratively, at the `SessionFactory` level::
@ -1200,7 +1200,7 @@ include::{extrasdir}/basic/basic-jpa-convert-period-string-converter-sql-example
In cases when the Java type specified for the "database side" of the conversion (the second `AttributeConverter` bind parameter) is not known,
Hibernate will fall back to a `java.io.Serializable` type.
If the Java type is not know to Hibernate, you will encounter the following message:
If the Java type is not known to Hibernate, you will encounter the following message:
> HHH000481: Encountered Java type for which we could not locate a JavaTypeDescriptor and which does not appear to implement equals and/or hashCode.
> This can lead to significant performance problems when performing equality/dirty checking involving this Java type.
@ -1291,7 +1291,7 @@ include::{sourcedir}/basic/JpaQuotingTest.java[tags=basic-jpa-quoting-example]
----
====
Because `name` and `number` are reserved words, the `Product` entity mapping uses backtricks to quote these column names.
Because `name` and `number` are reserved words, the `Product` entity mapping uses backticks to quote these column names.
When saving the following `Product entity`, Hibernate generates the following SQL insert statement:
@ -1360,8 +1360,8 @@ Properties marked as generated must additionally be _non-insertable_ and _non-up
Only `@Version` and `@Basic` types can be marked as generated.
`NEVER` (the default):: the given property value is not generated within the database.
`INSERT`:: the given property value is generated on insert, but is not regenerated on subsequent updates. Properties like _creationTimestamp_ fall into this category.
`ALWAYS`:: the property value is generated both on insert and on update.
`INSERT`:: the given property value is generated on insert but is not regenerated on subsequent updates. Properties like _creationTimestamp_ fall into this category.
`ALWAYS`:: the property value is generated both on insert and update.
To mark a property as generated, use the Hibernate-specific `@Generated` annotation.
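A typical use, sketched here for a creation timestamp whose column is assumed to have a database default such as `CURRENT_TIMESTAMP`, looks like this:

.Marking a property as generated on insert (illustrative sketch)
====
[source, JAVA, indent=0]
----
// generated by the database on insert, so the property must be non-insertable and non-updatable
@Generated(GenerationTime.INSERT)
@Column(insertable = false, updatable = false)
@Temporal(TemporalType.TIMESTAMP)
private Date createdOn;
----
====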
@ -1682,7 +1682,7 @@ include::{extrasdir}/basic/mapping-column-read-and-write-composite-type-persiste
==== `@Formula`
Sometimes, you want the database to do some computation for you rather than in the JVM; you might also create some kind of virtual column.
You can use a SQL fragment (aka formula) instead of mapping a property into a column. This kind of property is read only (its value is calculated by your formula fragment)
You can use a SQL fragment (aka formula) instead of mapping a property into a column. This kind of property is read-only (its value is calculated by your formula fragment)
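For example, a derived attribute could be sketched as follows, assuming a hypothetical `Account` entity with `credit` and `debit` columns:

.Using @Formula for a computed, read-only property (illustrative sketch)
====
[source, JAVA, indent=0]
----
@Entity(name = "Account")
public class Account {

	@Id
	private Long id;

	private Double credit;

	private Double debit;

	// read-only value computed by the database using the SQL fragment below
	@Formula("credit - debit")
	private Double balance;

	//Getters and setters are omitted for brevity
}
----
====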
[NOTE]
====
@ -1847,7 +1847,7 @@ include::{sourcedir}/basic/FilterTest.java[tags=mapping-filter-Account-example]
====
Notice that the `active` property is mapped to the `active_status` column.
This mapping was done to show you that the `@Filter` condition uses a SQL condition, and not a JPQL filtering criteria.
This mapping was done to show you that the `@Filter` condition uses a SQL condition and not a JPQL filtering predicate.
====
As already explained, we can also apply the `@Filter` annotation for collections as illustrated by the `Client` entity:
@ -2033,7 +2033,7 @@ include::{extrasdir}/basic/mapping-no-filter-join-table-collection-query-example
----
====
If we enable the filter and set the `maxOrderId` to `1`, when fetching the `accounts` collections, Hibernate is going to apply the `@FilterJoinTable` clause filtering criteria, and we will get just
If we enable the filter and set the `maxOrderId` to `1` when fetching the `accounts` collections, Hibernate is going to apply the `@FilterJoinTable` clause filtering criteria, and we will get just
`2` `Account` entities, with the `order_id` values of `0` and `1`.
[[mapping-filter-join-table-collection-query-example]]
@ -2366,7 +2366,7 @@ http://docs.oracle.com/javaee/7/api/javax/persistence/ManyToOne.html[`@ManyToOne
http://docs.oracle.com/javaee/7/api/javax/persistence/OneToOne.html[`@OneToOne`],
http://docs.oracle.com/javaee/7/api/javax/persistence/OneToMany.html[`@OneToMany`], and
http://docs.oracle.com/javaee/7/api/javax/persistence/ManyToMany.html[`@ManyToMany`]
feature a http://docs.oracle.com/javaee/7/api/javax/persistence/ManyToOne.html#targetEntity--[`targetEntity`] attribute to specify the actual class of the entiity association when an interface is used for the mapping.
feature a http://docs.oracle.com/javaee/7/api/javax/persistence/ManyToOne.html#targetEntity--[`targetEntity`] attribute to specify the actual class of the entity association when an interface is used for the mapping.
The http://docs.oracle.com/javaee/7/api/javax/persistence/ElementCollection.html[`@ElementCollection`] association has a http://docs.oracle.com/javaee/7/api/javax/persistence/ElementCollection.html#targetClass--[`targetClass`] attribute for the same purpose.
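A sketch of this, assuming a hypothetical `Payment` interface implemented by a `CardPayment` entity, might look like:

.Using targetEntity with an interface-typed association (illustrative sketch)
====
[source, JAVA, indent=0]
----
// the attribute is typed by the Payment interface,
// so targetEntity tells Hibernate which entity class actually backs the association
@ManyToOne(targetEntity = CardPayment.class)
private Payment payment;
----
====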

View File

@ -3,11 +3,11 @@
:sourcedir: ../../../../../test/java/org/hibernate/userguide/collections
:extrasdir: extras/collections
Naturally Hibernate also allows to persist collections.
These persistent collections can contain almost any other Hibernate type, including: basic types, custom types, embeddables and references to other entities.
Naturally Hibernate also allows persisting collections.
These persistent collections can contain almost any other Hibernate type, including basic types, custom types, embeddables, and references to other entities.
In this context, the distinction between value and reference semantics is very important.
An object in a collection might be handled with _value_ semantics (its life cycle being fully depends on the collection owner),
or it might be a reference to another entity with its own life cycle.
An object in a collection might be handled with _value_ semantics (its lifecycle being fully dependent on the collection owner),
or it might be a reference to another entity with its own lifecycle.
In the latter case, only the _link_ between the two objects is considered to be a state held by the collection.
The owner of the collection is always an entity, even if the collection is defined by an embeddable type.
@ -46,7 +46,7 @@ The persistent collections injected by Hibernate behave like `ArrayList`, `HashS
[[collections-synopsis]]
==== Collections as a value type
Value and embeddable type collections have a similar behavior as simple value types because they are automatically persisted when referenced by a persistent object and automatically deleted when unreferenced.
Value and embeddable type collections behave similarly to simple value types because they are automatically persisted when referenced by a persistent object and automatically deleted when unreferenced.
If a collection is passed from one persistent object to another, its elements might be moved from one table to another.
[IMPORTANT]
@ -170,7 +170,7 @@ In the following sections, we will go through all these collection types and dis
[[collections-bag]]
==== Bags
Bags are unordered lists and we can have unidirectional bags or bidirectional ones.
Bags are unordered lists, and we can have unidirectional bags or bidirectional ones.
[[collections-unidirectional-bag]]
===== Unidirectional bags
@ -270,7 +270,7 @@ include::{extrasdir}/collections-bidirectional-bag-orphan-removal-example.sql[]
----
====
When rerunning the previous example, the child will get removed because the parent-side propagates the removal upon disassociating the child entity reference.
When rerunning the previous example, the child will get removed because the parent-side propagates the removal upon dissociating the child entity reference.
[[collections-list]]
==== Ordered Lists
@ -418,7 +418,7 @@ http://docs.oracle.com/javaee/7/api/javax/persistence/OrderBy.html[`@OrderBy`] a
when fetching the current annotated collection, the Hibernate specific
https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OrderBy.html[`@OrderBy`] annotation is used to specify a *SQL* clause instead.
In the following example, the `@OrderBy` annotations uses the `CHAR_LENGTH` SQL function to order the `Article` entities
In the following example, the `@OrderBy` annotation uses the `CHAR_LENGTH` SQL function to order the `Article` entities
by the number of characters of the `name` attribute.
[[collections-customizing-ordered-by-sql-clause-mapping-example]]
@ -541,7 +541,7 @@ include::{sourcedir}/UnidirectionalComparatorSortedSetTest.java[lines=75..77,ind
[[collections-map]]
==== Maps
A `java.util.Map` is a ternary association because it requires a parent entity, a map key and a value.
A `java.util.Map` is a ternary association because it requires a parent entity, a map key, and a value.
An entity can either be a map key or a map value, depending on the mapping.
Hibernate allows using the following map keys:
@ -601,7 +601,7 @@ include::{extrasdir}/collections-map-custom-key-type-sql-example.sql[]
----
The `call_register` records the call history for every `person`.
The `call_timestamp_epoch` column stores the phone call timestamp as a Unix timestamp since epoch.
The `call_timestamp_epoch` column stores the phone call timestamp as a Unix timestamp since the Unix epoch.
[NOTE]
====
@ -700,7 +700,7 @@ include::{extrasdir}/collections-map-key-class-fetch-example.sql[]
A unidirectional map exposes a parent-child association from the parent-side only.
The following example shows a unidirectional map which also uses a `@MapKeyTemporal` annotation.
The map key is a timestamp and it's taken from the child entity table.
The map key is a timestamp, and it's taken from the child entity table.
[NOTE]
====
@ -851,7 +851,7 @@ The reason why the `Queue` interface is not used for the entity attribute is bec
- `java.util.SortedSet`
- `java.util.SortedMap`
However, the custom collection type can still be customized as long as the base type is one of the aformentioned persistent types.
However, the custom collection type can still be customized as long as the base type is one of the aforementioned persistent types.
====
This way, the `Phone` collection can be used as a `java.util.Queue`:

View File

@ -19,7 +19,7 @@ With this approach, you do not write persistent classes, only mapping files.
A given entity has just one entity mode within a given SessionFactory.
This is a change from previous versions which allowed defining multiple entity modes for an entity and selecting which one to load.
Entity modes can now be mixed within a domain model; a dynamic entity might reference a POJO entity, and vice versa.
Entity modes can now be mixed within a domain model; a dynamic entity might reference a POJO entity and vice versa.
[[mapping-model-dynamic-example]]
.Dynamic domain model Hibernate mapping
@ -60,8 +60,8 @@ include::{extrasdir}/dynamic/mapping-model-dynamic-example.sql[indent=0]
[NOTE]
====
The main advantage of dynamic models is quick turnaround time for prototyping without the need for entity class implementation.
The main down-fall is that you lose compile-time type checking and will likely deal with many exceptions at runtime.
The main advantage of dynamic models is the quick turnaround time for prototyping without the need for entity class implementation.
The main downfall is that you lose compile-time type checking and will likely deal with many exceptions at runtime.
However, as a result of the Hibernate mapping, the database schema can easily be normalized and sound, allowing a proper domain model implementation to be added on top later on.
It is also interesting to note that dynamic models are great for certain integration use cases.

View File

@ -5,17 +5,17 @@
Historically Hibernate called these components.
JPA calls them embeddables.
Either way the concept is the same: a composition of values.
Either way, the concept is the same: a composition of values.
For example we might have a `Publisher` class that is a composition of `name` and `country`,
For example, we might have a `Publisher` class that is a composition of `name` and `country`,
or a `Location` class that is a composition of `country` and `city`.
.Usage of the word _embeddable_
[NOTE]
====
To avoid any confusion with the annotation that marks a given embeddable type, the annotation will be further referred as `@Embeddable`.
To avoid any confusion with the annotation that marks a given embeddable type, the annotation will be further referred to as `@Embeddable`.
Throughout this chapter and thereafter, for brevity sake, embeddable types may also be referred as _embeddable_.
Throughout this chapter and thereafter, for brevity's sake, embeddable types may also be referred to as _embeddable_.
====
[[embeddable-type-mapping-example]]
@ -27,7 +27,7 @@ include::{sourcedir}/NestedEmbeddableTest.java[tag=embeddable-type-mapping-examp
----
====
An embeddable type is another form of value type, and its lifecycle is bound to a parent entity type, therefore inheriting the attribute access from its parent (for details on attribute access, see <<chapters/domain/entity.adoc#access-embeddable-types,Access strategies>>).
An embeddable type is another form of a value type, and its lifecycle is bound to a parent entity type, therefore inheriting the attribute access from its parent (for details on attribute access, see <<chapters/domain/entity.adoc#access-embeddable-types,Access strategies>>).
Embeddable types can be made up of basic values as well as associations, with the caveat that, when used as collection elements, they cannot define collections themselves.
@ -36,7 +36,7 @@ Embeddable types can be made up of basic values as well as associations, with th
Most often, embeddable types are used to group multiple basic type mappings and reuse them across several entities.
[[simple-embeddable-type-mapping-example]]
.Simple Embeddedable
.Simple Embeddable
====
[source,java]
----
@ -62,7 +62,7 @@ So, the embeddable type is represented by the `Publisher` class and
the parent entity makes use of it through the `book#publisher` object composition.
The composed values are mapped to the same table as the parent table.
Composition is part of good Object-oriented data modeling (idiomatic Java).
Composition is part of good object-oriented data modeling (idiomatic Java).
In fact, that table could also be mapped by the following entity type instead.
[[alternative-to-embeddable-type-mapping-example]]
@ -74,13 +74,13 @@ include::{sourcedir}/SimpleEmbeddableEquivalentTest.java[tag=embeddable-type-map
----
====
The composition form is certainly more Object-oriented, and that becomes more evident as we work with multiple embeddable types.
The composition form is certainly more object-oriented, and that becomes more evident as we work with multiple embeddable types.
[[embeddable-multiple]]
==== Multiple embeddable types
Although from an object-oriented perspective, it's much more convenient to work with embeddable types, this example doesn't work as-is.
When the same embeddable type is included multiple times in the same parent entity type, the JPA specification demands setting the associated column names explicitly.
When the same embeddable type is included multiple times in the same parent entity type, the JPA specification demands to set the associated column names explicitly.
This requirement is due to how object properties are mapped to database columns.
By default, JPA expects a database column to have the same name as its associated object property.
@ -94,10 +94,10 @@ We have a few options to handle this issue.
JPA defines the `@AttributeOverride` annotation to handle this scenario.
This way, the mapping conflict is resolved by setting up explicit name-based property-column type mappings.
If an Embeddabe type is used multiple times in some entity, you need to use the
If an Embeddable type is used multiple times in some entity, you need to use the
http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeOverride.html[`@AttributeOverride`] and
http://docs.oracle.com/javaee/7/api/javax/persistence/AssociationOverride.html[`@AssociationOverride`] annotations
to override the default column names definied by the Embeddable.
to override the default column names defined by the Embeddable.
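As a quick, generic illustration (the `Contact` and `Address` names are hypothetical), overriding the columns of a repeated embeddable might look like this:

[source,java]
----
@Entity
public class Contact {

    @Id
    private Long id;

    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name = "street", column = @Column(name = "home_street")),
        @AttributeOverride(name = "city", column = @Column(name = "home_city"))
    })
    private Address homeAddress;

    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name = "street", column = @Column(name = "work_street")),
        @AttributeOverride(name = "city", column = @Column(name = "work_city"))
    })
    private Address workAddress;
}
----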
Considering you have the following `Publisher` embeddable type
which defines a `@ManyToOne` association with the `Country` entity:
@ -179,17 +179,17 @@ You could even develop your own naming strategy to do other types of implicit na
[[embeddable-collections]]
==== Collections of embeddable types
Collections of embeddable types are specifically value collections (as embeddable types are a value type).
Collections of embeddable types are specifically value collections (as embeddable types are a value type).
Value collections are covered in detail in <<chapters/domain/collections.adoc#collections-value,Collections of value types>>.
[[embeddable-mapkey]]
==== Embeddable types as Map key
==== Embeddable type as a Map key
Embeddable types can also be used as `Map` keys.
This topic is covered in detail in <<chapters/domain/collections.adoc#collections-map,Map - key>>.
[[embeddable-identifier]]
==== Embeddable types as identifiers
==== Embeddable type as identifier
Embeddable types can also be used as entity type identifiers.
This usage is covered in detail in <<chapters/domain/identifiers.adoc#identifiers-composite,Composite identifiers>>.

View File

@ -10,9 +10,9 @@
[NOTE]
====
The entity type describes the mapping between the actual persistable domain model object and a database table row.
To avoid any confusion with the annotation that marks a given entity type, the annotation will be further referred as `@Entity`.
To avoid any confusion with the annotation that marks a given entity type, the annotation will be further referred to as `@Entity`.
Throughout this chapter and thereafter, entity types will be simply referred as _entity_.
Throughout this chapter and thereafter, entity types will be simply referred to as _entity_.
====
[[entity-pojo]]
@ -71,17 +71,17 @@ That said, the constructor should be defined with at least package visibility if
[[entity-pojo-accessors]]
==== Declare getters and setters for persistent attributes
The JPA specification requires this, otherwise the model would prevent accessing the entity persistent state fields directly from outside the entity itself.
The JPA specification requires this, otherwise, the model would prevent accessing the entity persistent state fields directly from outside the entity itself.
Although Hibernate does not require it, it is recommended to follow the JavaBean conventions and define getters and setters for entity persistent attributes.
Nevertheless, you can still tell Hibernate to directly access the entity fields.
Attributes (whether fields or getters/setters) need not be declared public.
Hibernate can deal with attributes declared with public, protected, package or private visibility.
Hibernate can deal with attributes declared with the public, protected, package or private visibility.
Again, if wanting to use runtime proxy generation for lazy loading, the getter/setter should grant access to at least package visibility.
[[entity-pojo-identifier]]
==== Provide identifier attribute(s)
==== Providing identifier attribute(s)
[IMPORTANT]
====
@ -210,11 +210,11 @@ include::{sourcedir-mapping}/identifier/SimpleEntityTest.java[tag=entity-pojo-mu
----
====
Specifically the outcome in this last example will depend on whether the `Book` class
Specifically, the outcome in this last example will depend on whether the `Book` class
implemented equals/hashCode, and, if so, how.
If the `Book` class did not override the default equals/hashCode,
then the two `Book` object reference are not going to be equal since their references are different.
then the two `Book` object references are not going to be equal since their references are different.
Consider yet another case:
@ -253,7 +253,7 @@ include::{sourcedir-mapping}/identifier/NaiveEqualsHashCodeEntityTest.java[tag=e
----
====
The issue here is a conflict between the use of generated identifier, the contract of `Set` and the equals/hashCode implementations.
The issue here is a conflict between the use of the generated identifier, the contract of `Set`, and the equals/hashCode implementations.
`Set` says that the equals/hashCode value for an object should not change while the object is part of the `Set`.
But that is exactly what happened here because the equals/hashCode are based on the (generated) id, which was not set until the JPA transaction is committed.
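One commonly used workaround, sketched here for a hypothetical `Book` entity rather than taken from the guide's examples, is to compare only non-null identifiers in `equals()` and to return a constant, class-based `hashCode()` so the value does not change once the identifier gets generated:

[source,java]
----
@Override
public boolean equals(Object o) {
    if (this == o) {
        return true;
    }
    if (!(o instanceof Book)) {
        return false;
    }
    Book other = (Book) o;
    return id != null && id.equals(other.getId());
}

@Override
public int hashCode() {
    //Constant per class, so the value stays stable before and after id generation
    return getClass().hashCode();
}
----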
@ -328,7 +328,7 @@ To find the `Account` balance, we need to query the `AccountSummary` which share
However, the `AccountSummary` is not mapped to a physical table, but to an SQL query.
So, if we have the following `AccountTransaction` record, the `AccountSummary` balance will mach the proper amount of money in this `Account`.
So, if we have the following `AccountTransaction` record, the `AccountSummary` balance will match the proper amount of money in this `Account`.
[[mapping-Subselect-entity-find-example]]
.Finding a `@Subselect` entity
@ -356,7 +356,7 @@ The goal of the `@Synchronize` annotation in the `AccountSummary` entity mapping
underlying `@Subselect` SQL query. This is because, unlike JPQL and HQL queries, Hibernate cannot parse the underlying native SQL query.
With the `@Synchronize` annotation in place,
when executing a HQL or JPQL which selects from the `AccountSummary` entity,
when executing an HQL or JPQL which selects from the `AccountSummary` entity,
Hibernate will trigger a Persistence Context flush if there are pending `Account`, `Client` or `AccountTransaction` entity state transitions.
====
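A hedged sketch of such a mapping (the query and the column names are illustrative, not the guide's exact example) could look like this:

[source,java]
----
@Entity
@Immutable
@Subselect(
    "select a.id as id, a.client_id as client_id, sum(t.amount) as balance " +
    "from account a " +
    "join account_transaction t on a.id = t.account_id " +
    "group by a.id, a.client_id"
)
@Synchronize({"account", "account_transaction"})
public class AccountSummary {

    @Id
    private Long id;

    @Column(name = "client_id")
    private Long clientId;

    private BigDecimal balance;
}
----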

View File

@ -21,7 +21,7 @@ See <<chapters/domain/natural_id.adoc#naturalid,Natural Ids>>.
====
Technically the identifier does not have to map to the column(s) physically defined as the table primary key.
They just need to map to column(s) that uniquely identify each row.
However this documentation will continue to use the terms identifier and primary key interchangeably.
However, this documentation will continue to use the terms identifier and primary key interchangeably.
====
Every entity must define an identifier. For entity inheritance hierarchies, the identifier must be defined just on the entity that is the root of the hierarchy.
@ -219,7 +219,7 @@ For discussion of generated values for non-identifier attributes, see <<chapters
Hibernate supports identifier value generation across a number of different types.
Remember that JPA portably defines identifier value generation just for integer types.
Identifier value generation is indicates using the `javax.persistence.GeneratedValue` annotation.
Identifier value generation is indicated using the `javax.persistence.GeneratedValue` annotation.
The most important piece of information here is the specified `javax.persistence.GenerationType` which indicates how values will be generated.
[NOTE]
@ -241,7 +241,7 @@ The rest of the discussion here assumes this setting is enabled (true).
How a persistence provider interprets the AUTO generation type is left up to the provider.
The default behavior is to look at the java type of the identifier attribute.
The default behavior is to look at the Java type of the identifier attribute.
If the identifier type is UUID, Hibernate is going to use a <<identifiers-generators-uuid, UUID identifier>>.
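For instance, a minimal sketch (the entity name is illustrative) where `AUTO` resolves to the UUID generator purely because of the identifier's Java type:

[source,java]
----
@Entity
public class ApiToken {

    @Id
    @GeneratedValue //AUTO is the default strategy
    private UUID id;
}
----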
@ -249,7 +249,7 @@ If the identifier type is numerical (e.g. `Long`, `Integer`), then Hibernate is
The `IdGeneratorStrategyInterpreter` has two implementations:
`FallbackInterpreter`::
This is the default strategy since Hibernate 5.0. For older versions, this strategy is enabled through the <<appendices/Configurations.adoc#configurations-mapping,`hibernate.id.new_generator_mappings`>> configuration property .
This is the default strategy since Hibernate 5.0. For older versions, this strategy is enabled through the <<appendices/Configurations.adoc#configurations-mapping,`hibernate.id.new_generator_mappings`>> configuration property.
When using this strategy, `AUTO` always resolves to `SequenceStyleGenerator`.
If the underlying database supports sequences, then a SEQUENCE generator is used. Otherwise, a TABLE generator is going to be used instead.
`LegacyFallbackInterpreter`::
@ -288,7 +288,7 @@ include::{sourcedir}/SequenceGeneratorNamedTest.java[tag=identifiers-generators-
----
====
The `javax.persistence.SequenceGenerator` annotataion allows you to specify additional configurations as well.
The `javax.persistence.SequenceGenerator` annotation allows you to specify additional configurations as well.
[[identifiers-generators-sequence-configured]]
.Configured sequence
@ -303,7 +303,7 @@ include::{sourcedir}/SequenceGeneratorConfiguredTest.java[tag=identifiers-genera
==== Using IDENTITY columns
For implementing identifier value generation based on IDENTITY columns,
Hibernate makes use of its `org.hibernate.id.IdentityGenerator` id generator which expects the identifier to generated by INSERT into the table.
Hibernate makes use of its `org.hibernate.id.IdentityGenerator` id generator which expects the identifier to be generated by INSERT into the table.
IdentityGenerator understands 3 different ways that the INSERT-generated value might be retrieved:
* If Hibernate believes the JDBC environment supports `java.sql.Statement#getGeneratedKeys`, then that approach will be used for extracting the IDENTITY generated keys.
@ -314,18 +314,18 @@ IdentityGenerator understands 3 different ways that the INSERT-generated value m
====
It is important to realize that this imposes a runtime behavior where the entity row *must* be physically inserted prior to the identifier value being known.
This can mess up extended persistence contexts (conversations).
Because of the runtime imposition/inconsistency Hibernate suggest other forms of identifier value generation be used.
Because of the runtime imposition/inconsistency, Hibernate suggests other forms of identifier value generation be used.
====
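Mapping-wise, a minimal sketch of an IDENTITY-based identifier (the entity name is illustrative) is simply:

[source,java]
----
@Entity
public class Invoice {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
}
----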
[NOTE]
====
There is yet another important runtime impact of choosing IDENTITY generation: Hibernate will not be able to use JDBC batching for inserts of the entities that use IDENTITY generation.
The importance of this depends on the application specific use cases.
The importance of this depends on the application-specific use cases.
If the application is not usually creating many new instances of a given type of entity that uses IDENTITY generation, then this is not an important impact since batching would not have been helpful anyway.
====
[[identifiers-generators-table]]
==== Using table identifier generator
==== Using the table identifier generator
Hibernate achieves table-based identifier generation based on its `org.hibernate.id.enhanced.TableGenerator` which defines a table capable of holding multiple named value segments for any number of entities.
@ -392,7 +392,7 @@ This is supported through its `org.hibernate.id.UUIDGenerator` id generator.
`UUIDGenerator` supports pluggable strategies for exactly how the UUID is generated.
These strategies are defined by the `org.hibernate.id.UUIDGenerationStrategy` contract.
The default strategy is a version 4 (random) strategy according to IETF RFC 4122.
Hibernate does ship with an alternative strategy which is a RFC 4122 version 1 (time-based) strategy (using ip address rather than mac address).
Hibernate does ship with an alternative strategy which is a RFC 4122 version 1 (time-based) strategy (using IP address rather than mac address).
[[identifiers-generators-uuid-mapping-example]]
.Implicitly using the random UUID strategy
@ -427,7 +427,7 @@ Which is, in fact, the role of these optimizers.
none:: No optimization is performed. We communicate with the database each and every time an identifier value is needed from the generator.
pooled-lo:: The pooled-lo optimizer works on the principle that the increment-value is encoded into the database table/sequence structure.
In sequence-terms this means that the sequence is defined with a greater-that-1 increment size.
In sequence-terms, this means that the sequence is defined with a greater-than-1 increment size.
+
For example, consider a brand new sequence defined as `create sequence m_sequence start with 1 increment by 20`.
This sequence essentially defines a "pool" of 20 usable id values each and every time we ask it for its next-value.
@ -483,7 +483,7 @@ include::{extrasdir}/id/identifiers-generators-pooled-lo-optimizer-persist-examp
----
====
As you can see from the list of generated SQL statements, you can insert 3 entities for one database sequence call.
As you can see from the list of generated SQL statements, you can insert 3 entities with just one database sequence call.
This way, the pooled and the pooled-lo optimizers allow you to reduce the number of database roundtrips, therefore reducing the overall transaction response time.
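As an illustration (the entity and generator names are hypothetical), a sequence-based identifier whose `allocationSize` matches the database increment of 20 could be mapped as follows; whether the pooled or the pooled-lo variant is preferred can typically be controlled with the `hibernate.id.optimizer.pooled.prefer_lo` setting.

[source,java]
----
@Entity
public class PurchaseOrder {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "order_seq")
    @SequenceGenerator(name = "order_seq", sequenceName = "m_sequence", allocationSize = 20)
    private Long id;
}
----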
[[identifiers-derived]]

View File

@ -5,7 +5,7 @@
Although relational database systems don't provide support for inheritance, Hibernate provides several strategies to leverage this object-oriented trait onto domain model entities:
MappedSuperclass:: Inheritance is implemented in domain model only without reflecting it in the database schema. See <<entity-inheritance-mapped-superclass>>.
MappedSuperclass:: Inheritance is implemented in the domain model only without reflecting it in the database schema. See <<entity-inheritance-mapped-superclass>>.
Single table:: The domain model class hierarchy is materialized into a single table which contains entities belonging to different class types. See <<entity-inheritance-single-table>>.
Joined table:: The base class and all the subclasses have their own database tables and fetching a subclass entity requires a join with the parent table as well. See <<entity-inheritance-joined-table>>.
Table per class:: Each subclass has its own table containing both the subclass and the base class properties. See <<entity-inheritance-table-per-class>>.
@ -13,11 +13,11 @@ Table per class:: Each subclass has its own table containing both the subclass a
[[entity-inheritance-mapped-superclass]]
==== MappedSuperclass
In the following domain model class hierarchy, a 'DebitAccount' and a 'CreditAccount' share the same 'Account' base class.
In the following domain model class hierarchy, a `DebitAccount` and a `CreditAccount` share the same `Account` base class.
image:images/domain/inheritance/inheritance_class_diagram.svg[Inheritance class diagram]
When using `MappedSuperclass`, the inheritance is visible in the domain model only and each database table contains both the base class and the subclass properties.
When using `MappedSuperclass`, the inheritance is visible in the domain model only, and each database table contains both the base class and the subclass properties.
[[entity-inheritance-mapped-superclass-example]]
.`@MappedSuperclass` inheritance
@ -35,7 +35,7 @@ include::{extrasdir}/entity-inheritance-mapped-superclass-example.sql[]
[NOTE]
====
Because the `@MappedSuperclass` inheritance model is not mirrored at database level,
Because the `@MappedSuperclass` inheritance model is not mirrored at the database level,
it's not possible to use polymorphic queries (fetching subclasses by their base class).
====
@ -123,7 +123,7 @@ Both `@DiscriminatorColumn` and `@DiscriminatorFormula` are to be set on the roo
The available options are `force` and `insert`.
The `force` attribute is useful if the table contains rows with _extra_ discriminator values that are not mapped to a persistent class.
This could for example occur when working with a legacy database.
This could, for example, occur when working with a legacy database.
If `force` is set to true Hibernate will specify the allowed discriminator values in the SELECT query, even when retrieving all instances of the root class.
The second option, `insert`, tells Hibernate whether or not to include the discriminator column in SQL INSERTs.
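A hedged sketch of how these options might be applied on the root entity of a single-table hierarchy (the names are illustrative):

[source,java]
----
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "account_type")
@DiscriminatorOptions(force = true) //insert = false would omit the column from SQL INSERTs
public class Account {

    @Id
    private Long id;
}
----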

View File

@ -83,7 +83,7 @@ to specify the ImplicitNamingStrategy to use. See
[[PhysicalNamingStrategy]]
==== PhysicalNamingStrategy
Many organizations define rules around the naming of database objects (tables, columns, foreign-keys, etc).
Many organizations define rules around the naming of database objects (tables, columns, foreign keys, etc).
The idea of a PhysicalNamingStrategy is to help implement such naming rules without having to hard-code them
into the mapping via explicit names.
@ -94,8 +94,8 @@ would be, for example, to say that the physical column name should instead be ab
[NOTE]
====
It is true that the resolution to `acct_num` could have been handled in an ImplicitNamingStrategy in this case.
But the point is separation of concerns. The PhysicalNamingStrategy will be applied regardless of whether
the attribute explicitly specified the column name or whether we determined that implicitly. The
But the point is separation of concerns. The PhysicalNamingStrategy will be applied regardless of whether
the attribute explicitly specified the column name or whether we determined that implicitly. The
ImplicitNamingStrategy would only be applied if an explicit name was not given. So it depends on needs
and intent.
====
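A minimal sketch of such a strategy (the abbreviation rule is hypothetical) could extend the standard implementation and be registered through the `hibernate.physical_naming_strategy` setting:

[source,java]
----
public class AbbreviatingPhysicalNamingStrategy extends PhysicalNamingStrategyStandardImpl {

    @Override
    public Identifier toPhysicalColumnName(Identifier name, JdbcEnvironment context) {
        //Hypothetical rule: shorten "number" to "num" in every physical column name
        String abbreviated = name.getText().replace("number", "num");
        return Identifier.toIdentifier(abbreviated, name.isQuoted());
    }
}
----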

View File

@ -10,8 +10,7 @@ As we will see later, Hibernate provides a dedicated, efficient API for loading
[[naturalid-mapping]]
==== Natural Id Mapping
Natural ids are defined in terms of on
e or more persistent attributes.
Natural ids are defined in terms of one or more persistent attributes.
[[naturalid-simple-basic-attribute-mapping-example]]
.Natural id using single basic attribute

View File

@ -6,7 +6,7 @@
Hibernate understands both the Java and JDBC representations of application data.
The ability to read/write this data from/to the database is the function of a Hibernate _type_.
A type, in this usage, is an implementation of the `org.hibernate.type.Type` interface.
This Hibernate type also describes various aspects of behavior of the Java type such as how to check for equality, how to clone values, etc.
This Hibernate type also describes various behavioral aspects of the Java type such as how to check for equality, how to clone values, etc.
.Usage of the word _type_
[NOTE]
@ -20,7 +20,7 @@ When you encounter the term type in discussions of Hibernate, it may refer to th
To help understand the type categorizations, let's look at a simple table and domain model that we wish to map.
[[mapping-types-basic-example]]
.Simple table and domain model
.A simple table and domain model
====
[source, SQL, indent=0]
----

View File

@ -113,7 +113,7 @@ The `REVTYPE` column value is taken from the https://docs.jboss.org/hibernate/or
|2 | `DEL` |A database table row was deleted.
|=================================
The audit (history) of an entity can be accessed using the `AuditReader` interface, which can be obtained having an open `EntityManager` or `Session` via the `AuditReaderFactory`.
The audit (history) of an entity can be accessed using the `AuditReader` interface, which can be obtained by having an open `EntityManager` or `Session` via the `AuditReaderFactory`.
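In code, obtaining the reader might look like this brief sketch (reusing the `Customer` entity and an assumed identifier of `1L`):

[source,java]
----
AuditReader auditReader = AuditReaderFactory.get(entityManager);

List<Number> revisions = auditReader.getRevisions(Customer.class, 1L);

//Load the entity state as of the first revision
Customer oldCustomer = auditReader.find(Customer.class, 1L, revisions.get(0));
----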
[[envers-audited-revisions-example]]
.Getting a list of revisions for the `Customer` entity
@ -148,11 +148,11 @@ include::{extrasdir}/envers-audited-rev1-example.sql[]
When executing the aforementioned SQL query, there are two parameters:
revision_number::
The first parameter marks the revision number we are interested in or the latest one that exist up to this particular revision.
The first parameter marks the revision number we are interested in or the latest one that exists up to this particular revision.
revision_type::
The second parameter specifies that we are not interested in `DEL` `RevisionType` so that deleted entries are filtered out.
The same goes for the second revision associated to the `UPDATE` statement.
The same goes for the second revision associated with the `UPDATE` statement.
[[envers-audited-rev2-example]]
.Getting the second revision for the `Customer` entity
@ -210,7 +210,7 @@ Name of a field in the audit entity that will hold the revision number.
Name of a field in the audit entity that will hold the type of the revision (currently, this can be: `add`, `mod`, `del`).
`*org.hibernate.envers.revision_on_collection_change*` (default: `true` )::
Should a revision be generated when a not-owned relation field changes (this can be either a collection in a one-to-many relation, or the field using `mappedBy` attribute in a one-to-one relation).
Should a revision be generated when a not-owned relation field changes (this can be either a collection in a one-to-many relation or the field using `mappedBy` attribute in a one-to-one relation).
`*org.hibernate.envers.do_not_audit_optimistic_locking_field*` (default: `true` )::
When true, properties to be used for optimistic locking, annotated with `@Version`, will not be automatically audited (their history won't be stored; it normally doesn't make sense to store it).
@ -221,14 +221,14 @@ Should the entity data be stored in the revision when the entity is deleted (ins
This is not normally needed, as the data is present in the last-but-one revision.
Sometimes, however, it is easier and more efficient to access it in the last revision (then the data that the entity contained before deletion is stored twice).
`*org.hibernate.envers.default_schema*` (default: `null` - same schema as table being audited)::
`*org.hibernate.envers.default_schema*` (default: `null` - same schema as the table being audited)::
The default schema name that should be used for audit tables.
+
Can be overridden using the `@AuditTable( schema="..." )` annotation.
+
If not present, the schema will be the same as the schema of the table being audited.
`*org.hibernate.envers.default_catalog*` (default: `null` - same catalog as table being audited)::
`*org.hibernate.envers.default_catalog*` (default: `null` - same catalog as the table being audited)::
The default catalog name that should be used for audit tables.
+
Can be overridden using the `@AuditTable( catalog="..." )` annotation.
@ -261,7 +261,7 @@ Only used if the `ValidityAuditStrategy` is used, and `org.hibernate.envers.audi
Boolean flag that determines the strategy of revision number generation.
Default implementation of revision entity uses native identifier generator.
+
If current database engine does not support identity columns, users are advised to set this property to false.
If the current database engine does not support identity columns, users are advised to set this property to false.
+
In this case revision numbers are created by preconfigured `org.hibernate.id.enhanced.SequenceStyleGenerator`.
See: `org.hibernate.envers.DefaultRevisionEntity` and `org.hibernate.envers.enhanced.SequenceIdRevisionEntity`.
@ -284,7 +284,7 @@ For more information, refer to <<envers-tracking-properties-changes>> and <<enve
`*org.hibernate.envers.modified_flag_suffix*` (default: `_MOD` )::
The suffix for columns storing "Modified Flags".
+
For example: a property called "age", will by default get modified flag with column name "age_MOD".
For example, a property called "age" will, by default, get a modified flag with the column name "age_MOD".
`*org.hibernate.envers.embeddable_set_ordinal_field_name*` (default: `SETORDINAL` )::
Name of column used for storing ordinal of the change in sets of embeddable elements.
@ -356,7 +356,7 @@ IMPORTANT: These subqueries are notoriously slow and difficult to index.
. The alternative is a validity audit strategy.
This strategy stores the start-revision and the end-revision of audit information.
For each row inserted, updated or deleted in an audited table, one or more rows are inserted in the audit tables, together with the start revision of its validity.
But at the same time the end-revision field of the previous audit rows (if available) are set to this revision.
But at the same time, the end-revision field of the previous audit rows (if available) is set to this revision.
Queries on the audit information can then use 'between start and end revision' instead of subqueries as used by the default audit strategy.
+
The consequence of this strategy is that persisting audit information will be a bit slower because of the extra updates involved,
@ -405,7 +405,7 @@ include::{extrasdir}/envers-audited-validity-mapping-example.sql[]
As you can see, the `REVEND` column is added, as well as its foreign key to the `REVINFO` table.
When rerunning thee previous `Customer` audit log queries against the `ValidityAuditStrategy`,
When rerunning the previous `Customer` audit log queries against the `ValidityAuditStrategy`,
we get the following results:
[[envers-audited-validity-rev1-example]]
@ -430,15 +430,15 @@ When Envers starts a new revision, it creates a new revision entity which stores
By default, that includes just:
revision number::
An integral value (`int/Integer` or `long/Long`). Essentially the primary key of the revision
An integral value (`int/Integer` or `long/Long`). Essentially, the primary key of the revision
revision timestamp::
Either a `long/Long` or `java.util.Date` value representing the instant at which the revision was made.
When using a `java.util.Date`, instead of a `long/Long` for the revision timestamp, take care not to store it to a column data type which will loose precision.
When using a `java.util.Date`, instead of a `long/Long` for the revision timestamp, take care not to store it to a column data type which will lose precision.
Envers handles this information as an entity.
By default it uses its own internal class to act as the entity, mapped to the `REVINFO` table.
You can, however, supply your own approach to collecting this information which might be useful to capture additional details such as who made a change
or the ip address from which the request came.
or the IP address from which the request came.
There are two things you need to make this work:
. First, you will need to tell Envers about the entity you wish to use.
@ -457,9 +457,9 @@ method of the `org.hibernate.envers.RevisionListener` interface.
You tell Envers which custom `org.hibernate.envers.RevisionListener` implementation to use by specifying it on the `@org.hibernate.envers.RevisionEntity` annotation, using the value attribute.
If your `RevisionListener` class is inaccessible from `@RevisionEntity` (e.g. it exists in a different module),
set `org.hibernate.envers.revision_listener` property to its fully qualified class name.
Class name defined by the configuration parameter overrides revision entity's value attribute.
Class name defined by the configuration parameter overrides the revision entity's value attribute.
Considering we have a `CurrentUser` utility which stores the current logged user:
Considering we have a `CurrentUser` utility which stores the currently logged-in user:
[[envers-revisionlog-CurrentUser-example]]
.`CurrentUser` utility
@ -553,7 +553,7 @@ implementation is supplied, the `RevisionListener` will be constructed without i
=== Tracking entity names modified during revisions
By default, entity types that have been changed in each revision are not being tracked.
This implies the necessity to query all tables storing audited data in order to retrieve changes made during specified revision.
This implies the necessity to query all tables storing audited data in order to retrieve changes made during the specified revision.
Envers provides a simple mechanism that creates a `REVCHANGES` table which stores the entity names of modified persistent objects.
A single record encapsulates the revision identifier (foreign key to the `REVINFO` table) and a string value.
@ -607,7 +607,7 @@ include::{extrasdir}/envers-tracking-modified-entities-revchanges-after-rename-e
Users who have chosen one of the approaches listed above
can retrieve all entities modified in a specified revision by utilizing the API described in <<envers-tracking-modified-entities-queries>>.
Users are also allowed to implement custom mechanism of tracking modified entity types.
Users are also allowed to implement custom mechanisms of tracking modified entity types.
In this case, they shall pass their own implementation of `org.hibernate.envers.EntityTrackingRevisionListener`
interface as the value of `@org.hibernate.envers.RevisionEntity` annotation.
@ -657,10 +657,10 @@ include::{sourcedir}/EntityTypeChangeAuditTrackingRevisionListenerTest.java[tags
====
[[envers-tracking-properties-changes]]
=== Tracking entity changes at property level
=== Tracking entity changes at the property level
By default, the only information stored by Envers consists of the revisions of modified entities.
This approach lets user create audit queries based on historical values of entity properties.
This approach lets users create audit queries based on historical values of entity properties.
Sometimes it is useful to store additional metadata for each revision, when you are also interested in the type of changes, not only in the resulting values.
The feature described in <<envers-tracking-modified-entities-revchanges>> makes it possible to tell which entities were modified in a given revision.
@ -668,7 +668,7 @@ The feature described in <<envers-tracking-modified-entities-revchanges>> makes
The feature described here takes it one step further.
_Modification Flags_ enable Envers to track which properties of audited entities were modified in a given revision.
Tracking entity changes at property level can be enabled by:
Tracking entity changes at the property level can be enabled by:
. setting `org.hibernate.envers.global_with_modified_flag` configuration property to `true`.
This global switch will cause modification flags to be stored for all audited properties of all audited entities.
@ -677,11 +677,11 @@ Tracking entity changes at property level can be enabled by:
The trade-off coming with this functionality is an increased size of audit tables and a very small, almost negligible, performance drop during audit writes.
This is due to the fact that every tracked property has to have an accompanying boolean column in the schema that stores information about the property modifications.
Of course it is Envers job to fill these columns accordingly - no additional work by the developer is required.
Of course, it is Envers' job to fill these columns accordingly - no additional work by the developer is required.
Because of the costs mentioned, it is recommended to enable the feature selectively, when needed, using the granular configuration means described above.
[[envers-tracking-properties-changes-mapping-example]]
.Mapping for tracking entity changes at property level
.Mapping for tracking entity changes at the property level
====
[source, JAVA, indent=0]
----
@ -697,7 +697,7 @@ include::{extrasdir}/envers-tracking-properties-changes-mapping-example.sql[]
As you can see, every property features a `_MOD` column (e.g. `createdOn_MOD`) in the audit log.
[[envers-tracking-properties-changes-example]]
.Tracking entity changes at property level example
.Tracking entity changes at the property level example
====
[source, JAVA, indent=0]
----
@ -724,14 +724,14 @@ The queries in Envers are similar to Hibernate Criteria queries, so if you are c
The main limitation of the current queries implementation is that you cannot traverse relations.
You can only specify constraints on the ids of the related entities, and only on the "owning" side of the relation.
This however will be changed in future releases.
This, however, will be changed in future releases.
[NOTE]
====
The queries on the audited data will be in many cases much slower than corresponding queries on "live" data,
as, especially for the default audit strategy, they involve correlated subselects.
Queries are improved both in terms of speed and possibilities, when using the validity audit strategy,
Queries are improved both in terms of speed and possibilities when using the validity audit strategy,
which stores both start and end revisions for entities. See <<envers-audit-ValidityAuditStrategy>>.
====
@ -907,7 +907,7 @@ In other words, the result set would contain a list of `Customer` instances, one
hold the audited property data at the _maximum_ revision number for each `Customer` primary key.
[[envers-tracking-properties-changes-queries]]
=== Querying for revisions of entity that modified a given property
=== Querying for entity revisions that modified a given property
For the two types of queries described above, it's possible to use special `Audit` criteria called `hasChanged()` and `hasNotChanged()`
that make use of the functionality described in <<envers-tracking-properties-changes>>.
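For instance, a hedged sketch of selecting only the `Customer` revisions in which `lastName` changed (assuming modification flags are enabled for that property):

[source,java]
----
List<Customer> customers = AuditReaderFactory.get(entityManager)
    .createQuery()
    .forRevisionsOfEntity(Customer.class, true, false)
    .add(AuditEntity.property("lastName").hasChanged())
    .getResultList();
----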
@ -946,7 +946,7 @@ Using this query we won't get all other revisions in which `lastName` wasn't tou
From the SQL query you can see that the `lastName_MOD` column is being used in the WHERE clause,
hence the aforementioned requirement for tracking modification flags.
Of course, nothing prevents user from combining `hasChanged` condition with some additional criteria.
Of course, nothing prevents users from combining `hasChanged` condition with some additional criteria.
[[envers-tracking-properties-changes-queries-hasChanged-and-hasNotChanged-example]]
.Getting all `Customer` revisions for which the `lastName` attribute has changed and the `firstName` attribute has not changed
@ -1196,7 +1196,7 @@ include::{extrasdir}/envers-querying-entity-relation-nested-join-multiple-restri
[[envers-querying-revision-entities]]
=== Querying for revision information without loading entities
It may sometimes be useful to load information about revisions to find out who performed specific revisions or
Sometimes, it may be useful to load information about revisions to find out who performed specific revisions or
to know which entity names were modified, but the change log about the related audited entities isn't needed.
This API provides an efficient way to get the revision information entity log without instantiating the actual
entities themselves.
@ -1213,7 +1213,7 @@ AuditQuery query = getAuditReader().createQuery()
This query will return all revision information entities for revisions between 1 and 25 including those which are
related to deletions. If deletions are not of interest, you would pass `false` as the second argument.
Note this this query uses the `DefaultRevisionEntity` class type. The class provided will vary depending on the
Note that this query uses the `DefaultRevisionEntity` class type. The class provided will vary depending on the
configuration properties used to configure Envers or if you supply your own revision entity. Typically, users who
use this API will likely be providing a custom revision entity implementation to obtain custom information
maintained per revision.
@ -1257,24 +1257,24 @@ The audit table contains the following columns:
id:: `id` of the original entity (this can be more than one column in the case of composite primary keys)
revision number:: an integer, which matches to the revision number in the revision entity table.
revision type:: The `org.hibernate.envers.RevisionType` enumeration ordinal stating if the change represent an INSERT, UPDATE or DELETE.
revision type:: The `org.hibernate.envers.RevisionType` enumeration ordinal stating if the change represents an INSERT, UPDATE or DELETE.
audited fields:: properties from the original entity being audited
The primary key of the audit table is the combination of the original id of the entity and the revision number,
so there can be at most one historic entry for a given entity instance at a given revision.
The current entity data is stored in the original table and in the audit table.
This is a duplication of data, however as this solution makes the query system much more powerful, and as memory is cheap, hopefully this won't be a major drawback for the users.
This is a duplication of data, however as this solution makes the query system much more powerful, and as memory is cheap, hopefully, this won't be a major drawback for the users.
A row in the audit table with entity id `ID`, revision `N` and data `D` means: entity with id `ID` has data `D` from revision `N` upwards.
A row in the audit table with entity id `ID`, revision `N`, and data `D` means: entity with id `ID` has data `D` from revision `N` upwards.
Hence, if we want to find an entity at revision `M`, we have to search for a row in the audit table that has a revision number smaller than or equal to `M`, but as large as possible.
If no such row is found, or a row with a "deleted" marker is found, it means that the entity didn't exist at that revision.
The "revision type" field can currently have three values: `0`, `1` and `2`, which means `ADD`, `MOD` and `DEL`, respectively.
The "revision type" field can currently have three values: `0`, `1` and `2`, which means `ADD`, `MOD`, and `DEL`, respectively.
A row with a revision of type `DEL` will only contain the id of the entity and no data (all fields `NULL`), as it only serves as a marker saying "this entity was deleted at that revision".
Additionally, there is a revision entity table which contains the information about the global revision.
By default the generated table is named `REVINFO` and contains just two columns: `ID` and `TIMESTAMP`.
By default, the generated table is named `REVINFO` and contains just two columns: `ID` and `TIMESTAMP`.
A row is inserted into this table on each new revision, that is, on each commit of a transaction that changes audited data.
The name of this table and the names of its columns can be configured, and additional columns can be added, as discussed in <<envers-revisionlog>>.
@ -1283,7 +1283,7 @@ The name of this table can be configured, the name of its columns as well as add
While global revisions are a good way to provide correct auditing of relations, some people have pointed out that this may be a bottleneck in systems where data is modified very often.
One viable solution is to introduce an option to have an entity "locally revisioned", that is, revisions would be created for it independently.
This woulld not enable correct versioning of relations, but it would work without the `REVINFO` table.
This would not enable correct versioning of relations, but it would work without the `REVINFO` table.
Another possibility is to introduce a notion of "revisioning groups", which would group entities sharing the same revision numbering.
Each such group would have to consist of one or more strongly connected components belonging to the entity graph induced by relations between entities.
@ -1326,7 +1326,7 @@ Bags are not supported because they can contain non-unique elements.
Persisting a bag of `String`s violates the relational database principle that each table is a set of tuples.
In the case of bags, however (which require a join table), if there is a duplicate element, the two tuples corresponding to the elements will be the same.
Hibernate allows this, however Envers (or more precisely: the database connector) will throw an exception when trying to persist two identical elements because of a unique constraint violation.
Although Hibernate allows this, Envers (or more precisely: the database connector) will throw an exception when trying to persist two identical elements because of a unique constraint violation.
There are at least two ways out if you need bag semantics:
@ -1344,7 +1344,7 @@ Envers, however, has to do this so that when you read the revisions in which the
To be able to name the additional join table, there is a special annotation: `@AuditJoinTable`, which has similar semantics to JPA `@JoinTable`.
One special case are relations mapped with `@OneToMany` with `@JoinColumn` on the one side, and `@ManyToOne` and `@JoinColumn( insertable=false, updatable=false`) on the many side.
One special case is to have relations mapped with `@OneToMany` with `@JoinColumn` on the one side, and `@ManyToOne` and `@JoinColumn( insertable=false, updatable=false`) on the many side.
Such relations are, in fact, bidirectional, but the owning side is the collection.
To properly audit such relations with Envers, you can use the `@AuditMappedBy` annotation.
@ -1370,7 +1370,7 @@ SQL table partitioning offers a lot of advantages including, but certainly not l
=== Suitable columns for audit table partitioning
Generally, SQL tables must be partitioned on a column that exists within the table.
As a rule it makes sense to use either the _end revision_ or the _end revision timestamp_ column for partitioning of audit tables.
As a rule, it makes sense to use either the _end revision_ or the _end revision timestamp_ column for partitioning of audit tables.
[NOTE]
====
@ -1442,14 +1442,14 @@ The following audit information is available, sorted on in order of occurrence:
To partition this data, the _level of relevancy_ must be defined. Consider the following:
. For fiscal year 2006 there is only one revision.
. For the fiscal year 2006, there is only one revision.
It has the oldest _revision timestamp_ of all audit rows,
but should still be regarded as relevant because it's the latest modification for this fiscal year in the salary table (its _end revision timestamp_ is null).
+
Also, note that it would be very unfortunate if in 2011 there would be an update of the salary for fiscal year 2006 (which is possible in until at least 10 years after the fiscal year),
Also, note that it would be very unfortunate if in 2011 there would be an update of the salary for the fiscal year 2006 (which is possible until at least 10 years after the fiscal year),
and the audit information would have been moved to a slow disk (based on the age of the __revision timestamp__).
Remember that, in this case, Envers will have to update the _end revision timestamp_ of the most recent audit row.
. There are two revisions in the salary of fiscal year 2007 which both have nearly the same _revision timestamp_ and a different __end revision timestamp__.
. There are two revisions in the salary of the fiscal year 2007 which both have nearly the same _revision timestamp_ and a different __end revision timestamp__.
At first sight, it is evident that the first revision was a mistake and probably not relevant.
The only relevant revision for 2007 is the one with _end revision timestamp_ null.

View File

@ -42,7 +42,7 @@ include::{sourcedir}/InterceptorTest.java[tags=events-interceptors-session-scope
A `SessionFactory`-scoped interceptor is registered with the `Configuration` object prior to building the `SessionFactory`.
Unless a session is opened explicitly specifying the interceptor to use, the `SessionFactory`-scoped interceptor will be applied to all sessions opened from that `SessionFactory`.
`SessionFactory`-scoped interceptors must be thread safe.
`SessionFactory`-scoped interceptors must be thread-safe.
Ensure that you do not store session-specific state since multiple sessions will potentially use this interceptor concurrently.
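A hedged sketch of such a thread-safe interceptor (the logging behavior is purely illustrative) that could be registered, for example, via `Configuration#setInterceptor`:

[source,java]
----
import java.io.Serializable;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

//Stateless by design, so it is safe to share across all Sessions of a SessionFactory
public class LoggingInterceptor extends EmptyInterceptor {

    @Override
    public boolean onSave(Object entity, Serializable id, Object[] state,
            String[] propertyNames, Type[] types) {
        System.out.printf("Saving %s#%s%n", entity.getClass().getSimpleName(), id);
        return false; //the entity state was not modified
    }
}
----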
[[events-interceptors-session-factory-scope-example]]
@ -63,8 +63,8 @@ Many methods of the `Session` interface correlate to an event type.
The full range of defined event types is declared as enum values on `org.hibernate.event.spi.EventType`.
When a request is made of one of these methods, the Session generates an appropriate event and passes it to the configured event listener(s) for that type.
Applications are free to implement a customization of one of the listener interfaces (i.e., the `LoadEvent` is processed by the registered implementation of the `LoadEventListener` interface), in which case their implementation would
be responsible for processing any `load()` requests made of the `Session`.
Applications can customize the listener interfaces (i.e., the `LoadEvent` is processed by the registered implementation of the `LoadEventListener` interface), in which case their implementations would
be responsible for processing the `load()` requests made of the `Session`.
[NOTE]
====
@ -94,7 +94,7 @@ When you want to customize the entity state transition behavior, you have to opt
For example, the `Interceptor#onSave()` method is invoked by Hibernate's `AbstractSaveEventListener`.
Or, the `Interceptor#onLoad()` method is called by the `DefaultPreLoadEventListener`.
. you can replace any given default event listener with your own implementation.
When doing this, you should probably extend the default listeners because otherwise you'd have to take care of all the low-level entity state transition logic.
When doing this, you should probably extend the default listeners because otherwise, you'd have to take care of all the low-level entity state transition logic.
For example, if you replace the `DefaultPreLoadEventListener` with your own implementation, then, only if you call the `Interceptor#onLoad()` method explicitly, you can mix the custom load event listener with a custom Hibernate interceptor.
[[events-declarative-security]]
@ -140,7 +140,7 @@ JPA also defines a more limited set of callbacks through annotations.
There are two available approaches defined for specifying callback handling:
* The first approach is to annotate methods on the entity itself to receive notification of particular entity life cycle event(s).
* The first approach is to annotate methods on the entity itself to receive notifications of a particular entity lifecycle event(s).
* The second is to use a separate entity listener class.
An entity listener is a stateless class with a no-arg constructor.
The callback annotations are placed on a method of this class instead of the entity class.
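A small sketch of the first approach (the entity and attribute names are illustrative):

[source,java]
----
@Entity
public class Person {

    @Id
    private Long id;

    private String name;

    private LocalDateTime lastUpdate;

    @PrePersist
    @PreUpdate
    private void trackModification() {
        //Refreshed automatically before every INSERT or UPDATE of this entity
        lastUpdate = LocalDateTime.now();
    }
}
----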

View File

@ -8,7 +8,7 @@ Tuning how an application does fetching is one of the biggest factors in determi
Fetching too much data, in terms of width (values/columns) and/or depth (results/rows),
adds unnecessary overhead in terms of both JDBC communication and ResultSet processing.
Fetching too little data might cause additional fetching to be needed.
Tuning how an application fetches data presents a great opportunity to influence the application overall performance.
Tuning how an application fetches data presents a great opportunity to influence the overall application performance.
[[fetching-basics]]
=== The basics
@ -27,7 +27,7 @@ There are a number of scopes for defining fetching:
_static_::
Static definition of fetching strategies is done in the mappings.
The statically-defined fetch strategies is used in the absence of any dynamically defined strategies
The statically-defined fetch strategies are used in the absence of any dynamically defined strategies
SELECT:::
Performs a separate SQL select to load the data. This can either be EAGER (the second select is issued immediately) or LAZY (the second select is delayed until the data is needed).
This is the strategy generally termed N+1.
@ -40,13 +40,13 @@ _static_::
Performs a separate SQL select to load associated data based on the SQL restriction used to load the owner.
Again, this can either be EAGER (the second select is issued immediately) or LAZY (the second select is delayed until the data is needed).
_dynamic_ (sometimes referred to as runtime)::
Dynamic definition is really use-case centric. There are multiple ways to define dynamic fetching:
The dynamic definition is really use-case centric. There are multiple ways to define dynamic fetching:
_fetch profiles_::: defined in mappings, but can be enabled/disabled on the `Session`.
HQL/JPQL::: both Hibernate and JPA Criteria queries have the ability to specify fetching specific to said query.
entity graphs::: Starting in Hibernate 4.2 (JPA 2.1) this is also an option.
[[fetching-direct-vs-query]]
=== Direct fetching vs entity queries
=== Direct fetching vs. entity queries
To see the difference between direct fetching and entity queries in regard to eagerly fetched associations, consider the following entities:
@ -308,7 +308,7 @@ include::{extrasdir}/fetching-batch-fetching-example.sql[]
----
====
As you can see in the example above, there are only two SQL statements used to fetch the `Employee` entities associated to multiple `Department` entities.
As you can see in the example above, there are only two SQL statements used to fetch the `Employee` entities associated with multiple `Department` entities.
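A hedged sketch of how such batch fetching might be configured on the collection side (the batch size value is arbitrary, and an `Employee#department` attribute is assumed):

[source,java]
----
@Entity
public class Department {

    @Id
    private Long id;

    @OneToMany(mappedBy = "department")
    @BatchSize(size = 10)
    private List<Employee> employees = new ArrayList<>();
}
----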
[TIP]
====
@ -388,7 +388,7 @@ include::{sourcedir}/FetchModeSubselectTest.java[tags=fetching-strategies-fetch-
----
====
Now, we are going to fetch all `Department` entities that match a given filtering criteria
Now, we are going to fetch all `Department` entities that match a given filtering predicate
and then navigate their `employees` collections.
Hibernate is going to avoid the N+1 query issue by generating a single SQL statement to initialize all `employees` collections

View File

@ -7,7 +7,7 @@ Flushing is the process of synchronizing the state of the persistence context wi
The `EntityManager` and the Hibernate `Session` expose a set of methods, through which the application developer can change the persistent state of an entity.
The persistence context acts as a transactional write-behind cache, queuing any entity state change.
Like any write-behind cache, changes are first applied in-memory and synchronized with the database during flush time.
Like any write-behind cache, changes are first applied in-memory and synchronized with the database during the flush time.
The flush operation takes every entity state change and translates it to an `INSERT`, `UPDATE` or `DELETE` statement.
[NOTE]
@ -21,7 +21,7 @@ Although JPA defines only two flushing strategies (https://javaee.github.io/java
Hibernate has a much broader spectrum of flush types:
ALWAYS:: Flushes the `Session` before every query.
AUTO:: This is the default mode and it flushes the `Session` only if necessary.
AUTO:: This is the default mode, and it flushes the `Session` only if necessary.
COMMIT:: The `Session` tries to delay the flush until the current `Transaction` is committed, although it might flush prematurely too.
MANUAL:: The `Session` flushing is delegated to the application, which must call `Session.flush()` explicitly in order to apply the persistence context changes.
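For example, a minimal sketch of switching a particular `Session` to the `COMMIT` mode through the Hibernate-native API:

[source,java]
----
Session session = entityManager.unwrap(Session.class);
session.setHibernateFlushMode(FlushMode.COMMIT);
----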
@ -36,7 +36,7 @@ By default, Hibernate uses the `AUTO` flush mode which triggers a flush in the f
==== `AUTO` flush on commit
In the following example, an entity is persisted and then the transaction is committed.
In the following example, an entity is persisted, and then the transaction is committed.
[[flushing-auto-flush-commit-example]]
.Automatic flushing on commit
@ -79,7 +79,7 @@ include::{extrasdir}/flushing-auto-flush-jpql-example.sql[]
----
====
The reason why the `Advertisement` entity query didn't trigger a flush is because there's no overlapping between the `Advertisement` and the `Person` tables:
The reason why the `Advertisement` entity query didn't trigger a flush is that there's no overlapping between the `Advertisement` and the `Person` tables:
[[flushing-auto-flush-jpql-entity-example]]
.Automatic flushing on JPQL/HQL entities
@ -106,7 +106,7 @@ include::{extrasdir}/flushing-auto-flush-jpql-overlap-example.sql[]
----
====
This time, the flush was triggered by a JPQL query because the pending entity persist action overlaps with the query being executed.
This time, the flush was triggered by a JPQL query because the pending entity persist action overlaps with the query being executed.
==== `AUTO` flush on native SQL query
@ -214,7 +214,7 @@ include::{extrasdir}/flushing-always-flush-sql-example.sql[]
=== `MANUAL` flush
Both the `EntityManager` and the Hibernate `Session` define a `flush()` method that, when called, triggers a manual flush.
Hibernate also defines a `MANUAL` flush mode so the persistence context can only be flushed manually.
Hibernate also provides a `MANUAL` flush mode so the persistence context can only be flushed manually.
[[flushing-manual-flush-example]]
.`MANUAL` flushing
@ -234,14 +234,14 @@ The `INSERT` statement was not executed because the persistence context because
[NOTE]
====
This mode is useful when using multi-request logical transactions and only the last request should flush the persistence context.
This mode is useful when using multi-request logical transactions, and only the last request should flush the persistence context.
====
[[flushing-order]]
=== Flush operation order
From a database perspective, a row state can be altered using either an `INSERT`, an `UPDATE` or a `DELETE` statement.
Because entity state changes are automatically converted to SQL statements, it's important to know which entity actions are associated to a given SQL statement.
Because entity state changes are automatically converted to SQL statements, it's important to know which entity actions are associated with a given SQL statement.
`INSERT`:: The `INSERT` statement is generated either by the `EntityInsertAction` or `EntityIdentityInsertAction`. These actions are scheduled by the `persist` operation, either explicitly or through cascading the `PersistEvent` from a parent to a child entity.
`DELETE`:: The `DELETE` statement is generated by the `EntityDeleteAction` or `OrphanRemovalAction`.

View File

@ -59,7 +59,7 @@ Any settings prefixed with `hibernate.connection.` (other than the "special ones
`hibernate.c3p0.max_size` or `c3p0.maxPoolSize`:: The maximum size of the c3p0 pool. See http://www.mchange.com/projects/c3p0/#maxPoolSize[c3p0 maxPoolSize]
`hibernate.c3p0.timeout` or `c3p0.maxIdleTime`:: The Connection idle time. See http://www.mchange.com/projects/c3p0/#maxIdleTime[c3p0 maxIdleTime]
`hibernate.c3p0.max_statements` or `c3p0.maxStatements`:: Controls the c3p0 PreparedStatement cache size (if using). See http://www.mchange.com/projects/c3p0/#maxStatements[c3p0 maxStatements]
`hibernate.c3p0.acquire_increment` or `c3p0.acquireIncrement`:: Number of connections c3p0 should acquire at a time when pool is exhausted. See http://www.mchange.com/projects/c3p0/#acquireIncrement[c3p0 acquireIncrement]
`hibernate.c3p0.acquire_increment` or `c3p0.acquireIncrement`:: Number of connections c3p0 should acquire at a time when the pool is exhausted. See http://www.mchange.com/projects/c3p0/#acquireIncrement[c3p0 acquireIncrement]
`hibernate.c3p0.idle_test_period` or `c3p0.idleConnectionTestPeriod`:: Idle time before a c3p0 pooled connection is validated. See http://www.mchange.com/projects/c3p0/#idleConnectionTestPeriod[c3p0 idleConnectionTestPeriod]
`hibernate.c3p0.initialPoolSize`:: The initial c3p0 pool size. If not specified, default is to use the min pool size. See http://www.mchange.com/projects/c3p0/#initialPoolSize[c3p0 initialPoolSize]
Any other settings prefixed with `hibernate.c3p0.`:: Will have the `hibernate.` portion stripped and be passed to c3p0.
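
As a rough sketch only (the persistence unit name and values below are made up), these settings could be passed programmatically when bootstrapping JPA:

[source, JAVA, indent=0]
----
Map<String, Object> settings = new HashMap<>();
settings.put( "hibernate.c3p0.min_size", "5" );
settings.put( "hibernate.c3p0.max_size", "20" );
settings.put( "hibernate.c3p0.timeout", "300" );
settings.put( "hibernate.c3p0.max_statements", "50" );

EntityManagerFactory entityManagerFactory =
        Persistence.createEntityManagerFactory( "my-persistence-unit", settings );
----
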
@ -194,7 +194,7 @@ Although SQL is relatively standardized, each database vendor uses a subset and
This is referred to as the database's dialect.
Hibernate handles variations across these dialects through its `org.hibernate.dialect.Dialect` class and the various subclasses for each database vendor.
In most cases Hibernate will be able to determine the proper Dialect to use by asking some questions of the JDBC Connection during bootstrap.
In most cases, Hibernate will be able to determine the proper Dialect to use by asking some questions of the JDBC Connection during bootstrap.
For information on Hibernate's ability to determine the proper Dialect to use (and your ability to influence that resolution), see <<chapters/portability/Portability.adoc#portability-dialectresolver,Dialect resolution>>.
If for some reason it is not able to determine the proper one or you want to use a custom Dialect, you will need to set the `hibernate.dialect` setting.
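
For illustration, a minimal sketch of setting the dialect explicitly when building the registry by hand (the `H2Dialect` value is just one possible choice):

[source, JAVA, indent=0]
----
StandardServiceRegistry standardRegistry = new StandardServiceRegistryBuilder()
        .applySetting( "hibernate.dialect", "org.hibernate.dialect.H2Dialect" )
        .build();
----
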
@ -229,8 +229,8 @@ If for some reason it is not able to determine the proper one or you want to use
|MySQL5 |Support for the MySQL database, version 5.x
|MySQL5InnoDB |Support for the MySQL database, version 5.x preferring the InnoDB storage engine when exporting tables.
|MySQL57InnoDB |Support for the MySQL database, version 5.7 preferring the InnoDB storage engine when exporting tables. May work with newer versions
|MariaDB |Support for the MariadB database. May work with newer versions
|MariaDB53 |Support for the MariadB database, version 5.3 and newer.
|MariaDB |Support for the MariaDB database. May work with newer versions
|MariaDB53 |Support for the MariaDB database, version 5.3 and newer.
|Oracle8i |Support for the Oracle database, version 8i
|Oracle9i |Support for the Oracle database, version 9i
|Oracle10g |Support for the Oracle database, version 10g

View File

@ -100,7 +100,7 @@ If the version number is generated by the database, such as a trigger, use the a
[[locking-optimistic-timestamp]]
===== Timestamp
Timestamps are a less reliable way of optimistic locking than version numbers, but can be used by applications for other purposes as well.
Timestamps are a less reliable way of optimistic locking than version numbers but can be used by applications for other purposes as well.
Timestamping is automatically used if you use the `@Version` annotation on a `Date` or `Calendar` property type.
[[locking-optimistic-version-timestamp-example]]
@ -114,7 +114,7 @@ include::{sourcedir}/OptimisticLockingTest.java[tags=locking-optimistic-version-
Hibernate can retrieve the timestamp value from the database or the JVM, by reading the value you specify for the `@org.hibernate.annotations.Source` annotation.
The value can be either `org.hibernate.annotations.SourceType.DB` or `org.hibernate.annotations.SourceType.VM`.
The default behavior is to use the database, and is also used if you don't specify the annotation at all.
The default behavior is to use the database and is also used if you don't specify the annotation at all.
The timestamp can also be generated by the database instead of Hibernate
if you use the `@org.hibernate.annotations.Generated(GenerationTime.ALWAYS)` or the `@Source` annotation.
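
A hypothetical sketch of such a mapping (the entity and attribute names are invented for illustration):

[source, JAVA, indent=0]
----
@Entity
public class OrderRequest {

    @Id
    @GeneratedValue
    private Long id;

    // the timestamp-based version value is read from the database
    @Version
    @Source(SourceType.DB)
    private Date lastUpdate;
}
----
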
@ -161,7 +161,7 @@ include::{sourcedir}/OptimisticLockTest.java[tags=locking-optimistic-exclude-att
----
====
This way, if one tread modifies the `Phone` number while a second thread increments the `callCount` attribute,
This way, if one thread modifies the `Phone` number while a second thread increments the `callCount` attribute,
the two concurrent transactions are not going to conflict as illustrated by the following example.
[[locking-optimistic-exclude-attribute-example]]
@ -198,7 +198,7 @@ sometimes, you need rely on the actual database row column values to prevent *lo
Hibernate supports a form of optimistic locking that does not require a dedicated "version attribute".
This is also useful for use with modeling legacy schemas.
The idea is that you can get Hibernate to perform "version checks" using either all of the entity's attributes, or just the attributes that have changed.
The idea is that you can get Hibernate to perform "version checks" using either all of the entity's attributes or just the attributes that have changed.
This is achieved through the use of the
https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/OptimisticLocking.html[`@OptimisticLocking`]
annotation which defines a single attribute of type
@ -322,7 +322,7 @@ JPA comes with its own http://docs.oracle.com/javaee/7/api/javax/persistence/Loc
|`READ` and `OPTIMISTIC`|`READ` | The entity version is checked towards the end of the currently running transaction.
|`WRITE` and `OPTIMISTIC_FORCE_INCREMENT`|`WRITE` | The entity version is incremented automatically even if the entity has not changed.
|`PESSIMISTIC_FORCE_INCREMENT`|`PESSIMISTIC_FORCE_INCREMENT` | The entity is locked pessimistically and its version is incremented automatically even if the entity has not changed.
|`PESSIMISTIC_READ`|`PESSIMISTIC_READ` | The entity is locked pessimistically using a shared lock, if the database supports such a feature. Otherwise, an explicit lock is used.
|`PESSIMISTIC_READ`|`PESSIMISTIC_READ` | The entity is locked pessimistically using a shared lock if the database supports such a feature. Otherwise, an explicit lock is used.
|`PESSIMISTIC_WRITE`|`PESSIMISTIC_WRITE`, `UPGRADE` | The entity is locked using an explicit lock.
|`PESSIMISTIC_WRITE` with a `javax.persistence.lock.timeout` setting of 0 |`UPGRADE_NOWAIT` | The lock acquisition request fails fast if the row is already locked.
|`PESSIMISTIC_WRITE` with a `javax.persistence.lock.timeout` setting of -2 |`UPGRADE_SKIPLOCKED` | The lock acquisition request skips the already locked rows. It uses a `SELECT ... FOR UPDATE SKIP LOCKED` in Oracle and PostgreSQL 9.5, or `SELECT ... with (rowlock, updlock, readpast)` in SQL Server.
@ -385,7 +385,7 @@ The `javax.persistence.lock.scope` is https://hibernate.atlassian.net/browse/HHH
Traditionally, Hibernate offered the `Session#lock()` method for acquiring an optimistic or a pessimistic lock on a given entity.
Because varying the locking options was difficult when using a single `LockMode` parameter, Hibernate has added the `Session#buildLockRequest()` method API.
The following example shows how to obtain shared database lock without waiting for the lock acquisition request.
The following example shows how to obtain a shared database lock without waiting for the lock acquisition request.
[[locking-buildLockRequest-example]]
.`buildLockRequest` example
@ -448,8 +448,8 @@ include::{extrasdir}/locking-follow-on-secondary-query-example.sql[]
The lock request was moved from the original query to a secondary one which takes the previously fetched entities to lock their associated database records.
Prior to Hibernate 5.2.1, the the follow-on-locking mechanism was applied uniformly to any locking query executing on Oracle.
Since 5.2.1, the Oracle Dialect tries to figure out if the current query demand the follow-on-locking mechanism.
Prior to Hibernate 5.2.1, the follow-on-locking mechanism was applied uniformly to any locking query executing on Oracle.
Since 5.2.1, the Oracle Dialect tries to figure out if the current query demands the follow-on-locking mechanism.
More importantly, you can overrule the default follow-on-locking detection logic and explicitly enable or disable it on a per-query basis.
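
As an illustrative sketch of that per-query control (assuming a Hibernate `Session` named `session` and a hypothetical `Person` entity), the `LockOptions#setFollowOnLocking` switch is the relevant knob:

[source, JAVA, indent=0]
----
Query<Person> query = session.createQuery( "select p from Person p", Person.class );
query.setLockOptions(
        new LockOptions( LockMode.PESSIMISTIC_WRITE )
            .setFollowOnLocking( false ) );
List<Person> persons = query.getResultList();
----
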
@ -469,6 +469,6 @@ include::{extrasdir}/locking-follow-on-explicit-example.sql[]
[NOTE]
====
The follow-on-locking mechanism should be explicitly enabled only if the current executing query fails because the `FOR UPDATE` clause cannot be applied, meaning that the Dialect resolving mechanism needs to be further improved.
The follow-on-locking mechanism should be explicitly enabled only if the currently executing query fails because the `FOR UPDATE` clause cannot be applied, meaning that the Dialect resolving mechanism needs to be further improved.
====

View File

@ -75,7 +75,7 @@ include::{sourcedir}/AbstractMultiTenancyTest.java[tags=multitenacy-hibernate-se
Additionally, when specifying the configuration, an `org.hibernate.MultiTenancyStrategy` should be named using the `hibernate.multiTenancy` setting.
Hibernate will perform validations based on the type of strategy you specify.
The strategy here correlates to the isolation approach discussed above.
The strategy here correlates with the isolation approach discussed above.
NONE::
(the default) No multitenancy is expected.
@ -111,7 +111,7 @@ The `MultiTenantConnectionProvider` to use can be specified in a number of ways:
* Use the `hibernate.multi_tenant_connection_provider` setting.
It could name a `MultiTenantConnectionProvider` instance, a `MultiTenantConnectionProvider` implementation class reference or a `MultiTenantConnectionProvider` implementation class name.
* Passed directly to the `org.hibernate.boot.registry.StandardServiceRegistryBuilder`.
* If none of the above options match, but the settings do specify a `hibernate.connection.datasource` value,
* If none of the above options matches, but the settings do specify a `hibernate.connection.datasource` value,
Hibernate will assume it should use the specific `DataSourceBasedMultiTenantConnectionProviderImpl` implementation which works on a number of pretty reasonable assumptions when running inside of an app server and using one `javax.sql.DataSource` per tenant.
See its https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/engine/jdbc/connections/spi/DataSourceBasedMultiTenantConnectionProviderImpl.html[Javadocs] for more details.
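
For orientation only, a sketch of wiring these pieces together programmatically; the `multiTenantConnectionProvider` and `currentTenantIdentifierResolver` instances stand for hypothetical application-supplied implementations:

[source, JAVA, indent=0]
----
Map<String, Object> settings = new HashMap<>();
settings.put( "hibernate.multiTenancy", MultiTenancyStrategy.SCHEMA.name() );
settings.put( "hibernate.multi_tenant_connection_provider", multiTenantConnectionProvider );
settings.put( "hibernate.tenant_identifier_resolver", currentTenantIdentifierResolver );

StandardServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
        .applySettings( settings )
        .build();
----
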

View File

@ -53,10 +53,10 @@ In order to utilize container-managed JPA, an Enterprise OSGi JPA container must
In Karaf, this means Aries JPA, which is included out-of-the-box (simply activate the `jpa` and `transaction` features).
Originally, we intended to include those dependencies within our own `features.xml`.
However, after guidance from the Karaf and Aries teams, it was pulled out.
This allows Hibernate OSGi to be portable and not be directly tied to Aries versions, instead having the user choose which to use.
This allows Hibernate OSGi to be portable and not directly tied to Aries versions, letting the user choose which to use instead.
That being said, the QuickStart/Demo projects include a sample https://github.com/hibernate/hibernate-demos/tree/master/hibernate-orm/osgi/managed-jpa/features.xml[features.xml]
showing which features need activated in Karaf in order to support this environment.
showing which features need to be activated in Karaf in order to support this environment.
As mentioned, use this purely as a reference!
=== persistence.xml
@ -186,7 +186,7 @@ include::{sourcedir}/_native/HibernateUtil.java[tag=osgi-discover-SessionFactory
The https://github.com/hibernate/hibernate-demos/tree/master/hibernate-orm/osgi/unmanaged-native[unmanaged-native] demo project displays the use of optional Hibernate modules.
Each module adds additional dependency bundles that must first be activated, either manually or through an additional feature.
As of ORM 4.2, Envers is fully supported.
Support for C3P0, Proxool, EhCache, and Infinispan were added in 4.3, however none of their 3rd party libraries currently work in OSGi (lots of `ClassLoader` problems, etc.).
Support for C3P0, Proxool, EhCache, and Infinispan was added in 4.3. However, none of their 3rd party libraries currently work in OSGi (lots of `ClassLoader` problems, etc.).
We're tracking the issues in JIRA.
=== Extension Points
@ -201,7 +201,7 @@ The specified interface should be used during service registration.
`org.hibernate.integrator.spi.Integrator`:: (as of 4.2)
`org.hibernate.boot.registry.selector.StrategyRegistrationProvider`:: (as of 4.3)
`org.hibernate.boot.model.TypeContributor`:: (as of 4.3)
JTA's:: `javax.transaction.TransactionManager` and `javax.transaction.UserTransaction` (as of 4.2), however these are typically provided by the OSGi container.
JTA's:: `javax.transaction.TransactionManager` and `javax.transaction.UserTransaction` (as of 4.2). However, these are typically provided by the OSGi container.
The easiest way to register extension point implementations is through a `blueprint.xml` file.
Add `OSGI-INF/blueprint/blueprint.xml` to your classpath. Envers' blueprint is a great example:
@ -225,10 +225,10 @@ Extension points can also be registered programmatically with `BundleContext#reg
* Scanning is supported to find non-explicitly listed entities and mappings.
However, they MUST be in the same bundle as your persistence unit (fairly typical anyway).
Our OSGi `ClassLoader` only considers the "requesting bundle" (hence the requirement on using services to create `EntityManagerFactory`/`SessionFactory`), rather than attempting to scan all available bundles.
This is primarily for versioning considerations, collision protections, etc.
This is primarily for versioning considerations, collision protection, etc.
* Some containers (ex: Aries) always return true for `PersistenceUnitInfo#excludeUnlistedClasses`, even if your `persistence.xml` explicitly has `exclude-unlisted-classes` set to `false`.
They claim it's to protect JPA providers from having to implement scanning ("we handle it for you"), even though we still want to support it in many cases.
The work around is to set `hibernate.archive.autodetection` to, for example, `hbm,class`.
The workaround is to set `hibernate.archive.autodetection` to, for example, `hbm,class`.
This tells Hibernate to ignore the `excludeUnlistedClasses` value and scan for `*.hbm.xml` and entities regardless.
* Scanning does not currently support annotated packages on `package-info.java`.
* Currently, Hibernate OSGi is primarily tested using Apache Karaf and Apache Aries JPA. Additional testing is needed with Equinox, Gemini, and other container providers.

View File

@ -17,11 +17,11 @@ Hibernate supports the enhancement of an application Java domain model for the p
===== Lazy attribute loading
Think of this as partial loading support.
Essentially you can tell Hibernate that only part(s) of an entity should be loaded upon fetching from the database and when the other part(s) should be loaded as well.
Note that this is very much different from proxy-based idea of lazy loading which is entity-centric where the entity's state is loaded at once as needed.
Essentially, you can tell Hibernate that only part(s) of an entity should be loaded upon fetching from the database and when the other part(s) should be loaded as well.
Note that this is very much different from the proxy-based idea of lazy loading which is entity-centric where the entity's state is loaded at once as needed.
With bytecode enhancement, individual attributes or groups of attributes are loaded as needed.
Lazy attributes can be designated to be loaded together and this is called a "lazy group".
Lazy attributes can be designated to be loaded together, and this is called a "lazy group".
By default, all singular attributes are part of a single group, meaning that when one lazy singular attribute is accessed all lazy singular attributes are loaded.
Lazy plural attributes, by default, are each a lazy group by themselves.
This behavior is explicitly controllable through the `@org.hibernate.annotations.LazyGroup` annotation.
@ -35,9 +35,9 @@ include::{sourcedir}/BytecodeEnhancementTest.java[tags=BytecodeEnhancement-lazy-
----
====
In the above example we have 2 lazy attributes: `accountsPayableXrefId` and `image`.
In the above example, we have 2 lazy attributes: `accountsPayableXrefId` and `image`.
Each is part of a different fetch group (`accountsPayableXrefId` is part of the default fetch group),
which means that accessing `accountsPayableXrefId` will not force the loading of image, and vice-versa.
which means that accessing `accountsPayableXrefId` will not force the loading of the `image` attribute, and vice-versa.
[NOTE]
====
@ -52,11 +52,11 @@ Historically Hibernate only supported diff-based dirty calculation for determini
This essentially means that Hibernate would keep track of the last known state of an entity in regards to the database (typically the last read or write).
Then, as part of flushing the persistence context, Hibernate would walk every entity associated with the persistence context and check its current state against that "last known database state".
This is by far the most thorough approach to dirty checking because it accounts for data-types that can change their internal state (`java.util.Date` is the prime example of this).
However, in a persistence context with a large number of associated entities it can also be a performance-inhibiting approach.
However, in a persistence context with a large number of associated entities, it can also be a performance-inhibiting approach.
If your application does not need to care about "internal state changing data-type" use cases, bytecode-enhanced dirty tracking might be a worthwhile alternative to consider, especially in terms of performance.
In this approach Hibernate will manipulate the bytecode of your classes to add "dirty tracking" directly to the entity, allowing the entity itself to keep track of which of its attributes have changed.
During flush time, Hibernate simply asks your entity what has changed rather that having to perform the state-diff calculations.
At flush time, Hibernate asks your entity what has changed rather than having to perform the state-diff calculations.
[[BytecodeEnhancement-dirty-tracking-bidirectional]]
===== Bidirectional association management
@ -105,11 +105,11 @@ These are hard to discuss without diving into a discussion of Hibernate internal
==== Performing enhancement
[[BytecodeEnhancement-enhancement-runtime]]
===== Run-time enhancement
===== Runtime enhancement
Currently, run-time enhancement of the domain model is only supported in managed JPA environments following the JPA-defined SPI for performing class transformations.
Currently, runtime enhancement of the domain model is only supported in managed JPA environments following the JPA-defined SPI for performing class transformations.
Even then, this support is disabled by default. To enable run-time enhancement, specify one of the following configuration properties:
Even then, this support is disabled by default. To enable runtime enhancement, specify one of the following configuration properties:
`*hibernate.enhancer.enableDirtyTracking*` (e.g. `true` or `false` (default value))::
Enable dirty tracking feature in runtime bytecode enhancement.
@ -122,14 +122,14 @@ Enable association management feature in runtime bytecode enhancement which auto
[NOTE]
====
Also, at the moment, only annotated classes are supported for run-time enhancement.
Also, at the moment, only annotated classes are supported for runtime enhancement.
====
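
As an illustrative sketch (the persistence unit name below is made up), these flags can be passed as properties when creating the `EntityManagerFactory`:

[source, JAVA, indent=0]
----
Map<String, Object> properties = new HashMap<>();
properties.put( "hibernate.enhancer.enableDirtyTracking", "true" );
properties.put( "hibernate.enhancer.enableLazyInitialization", "true" );

EntityManagerFactory entityManagerFactory =
        Persistence.createEntityManagerFactory( "my-persistence-unit", properties );
----
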
[[BytecodeEnhancement-enhancement-gradle]]
===== Gradle plugin
Hibernate provides a Gradle plugin that is capable of providing build-time enhancement of the domain model classes as they are compiled as part of a Gradle build.
To use the plugin a project would first need to apply it:
To use the plugin, a project would first need to apply it:
.Apply the Gradle plugin
====
@ -157,7 +157,7 @@ Hibernate provides a Maven plugin capable of providing build-time enhancement of
See the section on the <<BytecodeEnhancement-enhancement-gradle>> for details on the configuration settings. Again, the default for those 3 is `false`.
The Maven plugin supports one additional configuration setting: `failOnError`, which controls what happens in case of error.
Default behavior is to fail the build, but it can be set so that only a warning is issued.
The default behavior is to fail the build, but it can be set so that only a warning is issued.
.Apply the Maven plugin
====

View File

@ -12,8 +12,8 @@ Persistent data has a state in relation to both a persistence context and the un
It has no persistent representation in the database and typically no identifier value has been assigned (unless the _assigned_ generator was used).
`managed`, or `persistent`:: the entity has an associated identifier and is associated with a persistence context.
It may or may not physically exist in the database yet.
`detached`:: the entity has an associated identifier, but is no longer associated with a persistence context (usually because the persistence context was closed or the instance was evicted from the context)
`removed`:: the entity has an associated identifier and is associated with a persistence context, however it is scheduled for removal from the database.
`detached`:: the entity has an associated identifier but is no longer associated with a persistence context (usually because the persistence context was closed or the instance was evicted from the context)
`removed`:: the entity has an associated identifier and is associated with a persistence context, however, it is scheduled for removal from the database.
Much of the `org.hibernate.Session` and `javax.persistence.EntityManager` methods deal with moving entities between these states.
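
As a purely illustrative walk-through (assuming a JPA `EntityManager` named `entityManager` and a hypothetical `Person` entity), an instance might pass through these states as follows:

[source, JAVA, indent=0]
----
Person person = new Person();           // transient: not associated with any persistence context
entityManager.persist( person );        // managed: scheduled for insertion
entityManager.detach( person );         // detached: no longer managed by the persistence context
person = entityManager.merge( person ); // managed again: state copied onto a managed instance
entityManager.remove( person );         // removed: scheduled for deletion from the database
----
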
@ -78,7 +78,7 @@ include::{sourcedir}/PersistenceContextTest.java[tags=pc-remove-jpa-example]
====
[[pc-remove-native-example]]
.Deleting an entity with Hibernate API
.Deleting an entity with the Hibernate API
====
[source, JAVA, indent=0]
----
@ -91,7 +91,7 @@ include::{sourcedir}/PersistenceContextTest.java[tags=pc-remove-native-example]
Hibernate itself can handle deleting detached state.
JPA, however, disallows it.
The implication here is that the entity instance passed to the `org.hibernate.Session` delete method can be either in managed or detached state,
while the entity instance passed to remove on `javax.persistence.EntityManager` must be in managed state.
while the entity instance passed to remove on `javax.persistence.EntityManager` must be in the managed state.
====
[[pc-get-reference]]
@ -177,7 +177,7 @@ include::{sourcedir}/PersistenceContextTest.java[tags=pc-find-optional-by-id-nat
[[pc-find-natural-id]]
=== Obtain an entity by natural-id
In addition to allowing to load by identifier, Hibernate allows applications to load by declared natural identifier.
In addition to allowing to load the entity by its identifier, Hibernate allows applications to load entities by the declared natural identifier.
[[pc-find-by-natural-id-entity-example]]
.Natural-id mapping
@ -219,23 +219,23 @@ include::{sourcedir}/PersistenceContextTest.java[tags=pc-find-optional-by-simple
----
====
Hibernate offer a consistent API for accessing persistent data by identifier or by the natural-id. Each of these defines the same two data access methods:
Hibernate offers a consistent API for accessing persistent data by identifier or by the natural-id. Each of these defines the same two data access methods:
getReference::
Should be used in cases where the identifier is assumed to exist, where non-existence would be an actual error.
Should never be used to test existence.
That is because this method will prefer to create and return a proxy if the data is not already associated with the Session rather than hit the database.
The quintessential use-case for using this method is to create foreign-key based associations.
The quintessential use-case for using this method is to create foreign key based associations.
load::
Will return the persistent data associated with the given identifier value or null if that identifier does not exist.
Each of these two methods define an overloading variant accepting a `org.hibernate.LockOptions` argument.
Each of these two methods defines an overloading variant accepting a `org.hibernate.LockOptions` argument.
Locking is discussed in a separate <<chapters/locking/Locking.adoc#locking,chapter>>.
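
A hedged sketch of this loading API (assuming a Hibernate `Session` named `session` and a hypothetical `Book` entity whose `isbn` attribute is mapped as a natural-id):

[source, JAVA, indent=0]
----
Book reference = session.byId( Book.class ).getReference( 1L );   // may return an uninitialized proxy
Book loaded = session.byId( Book.class ).load( 1L );              // null if no such row exists

Book byIsbn = session.bySimpleNaturalId( Book.class ).load( "978-9730228236" );
----
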
[[pc-managed-state]]
=== Modifying managed/persistent state
Entities in managed/persistent state may be manipulated by the application and any changes will be automatically detected and persisted when the persistence context is flushed.
Entities in managed/persistent state may be manipulated by the application, and any changes will be automatically detected and persisted when the persistence context is flushed.
There is no need to call a particular method to make your modifications persistent.
[[pc-managed-state-jpa-example]]
@ -320,7 +320,7 @@ include::{sourcedir}/DynamicUpdateTest.java[tags=pc-managed-state-dynamic-update
----
====
This time, when reruning the previous test case, Hibernate generates the following SQL UPDATE statement:
This time, when rerunning the previous test case, Hibernate generates the following SQL UPDATE statement:
[[pc-managed-state-dynamic-update-example]]
.Modifying the `Product` entity with a dynamic update
@ -416,12 +416,12 @@ Clearing the persistence context has the same effect.
Evicting a particular entity from the persistence context makes it detached.
And finally, serialization will make the deserialized form be detached (the original instance is still managed).
Detached data can still be manipulated, however the persistence context will no longer automatically know about these modification and the application will need to intervene to make the changes persistent again.
Detached data can still be manipulated, however, the persistence context will no longer automatically know about these modifications, and the application will need to intervene to make the changes persistent again.
[[pc-detach-reattach]]
==== Reattaching detached data
Reattachment is the process of taking an incoming entity instance that is in detached state and re-associating it with the current persistence context.
Reattachment is the process of taking an incoming entity instance that is in the detached state and re-associating it with the current persistence context.
[IMPORTANT]
====
@ -459,7 +459,7 @@ Provided the entity is detached, `update` and `saveOrUpdate` operate exactly the
[[pc-merge]]
==== Merging detached data
Merging is the process of taking an incoming entity instance that is in detached state and copying its data over onto a new managed instance.
Merging is the process of taking an incoming entity instance that is in the detached state and copying its data over onto a new managed instance.
Although not an exact depiction, the following example is a good visualization of the `merge` operation internals.

View File

@ -24,7 +24,7 @@ Originally, Hibernate would always require that users specify which dialect to u
Generally, this required their users to configure the Hibernate dialect or to define their own method of setting that value.
Starting with version 3.2, Hibernate introduced the notion of automatically detecting the dialect to use based on the `java.sql.DatabaseMetaData` obtained from a `java.sql.Connection` to that database.
This was much better, expect that this resolution was limited to databases Hibernate know about ahead of time and was in no way configurable or overrideable.
This was much better, except that this resolution was limited to databases Hibernate knew about ahead of time and was in no way configurable or overridable.
Starting with version 3.3, Hibernate has a far more powerful way to automatically determine which dialect should be used, by relying on a series of delegates which implement the `org.hibernate.dialect.resolver.DialectResolver`, which defines only a single method:
@ -35,7 +35,7 @@ public Dialect resolveDialect(DatabaseMetaData metaData) throws JDBCConnectionEx
The basic contract here is that if the resolver 'understands' the given database metadata, then it returns the corresponding Dialect; if not, it returns null and the process continues to the next resolver.
The signature also identifies `org.hibernate.exception.JDBCConnectionException` as possibly being thrown.
A `JDBCConnectionException` here is interpreted to imply a "non transient" (aka non-recoverable) connection problem and is used to indicate an immediate stop to resolution attempts.
A `JDBCConnectionException` here is interpreted to imply a __non-transient__ (aka non-recoverable) connection problem and is used to indicate an immediate stop to resolution attempts.
All other exceptions result in a warning and continuing on to the next resolver.
The cool part about these resolvers is that users can also register their own custom resolvers which will be processed ahead of the built-in Hibernate ones.
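
For example, a hypothetical resolver honoring the contract quoted above might look as follows; the database name and the custom `Dialect` class are invented for illustration, and returning `null` lets resolution continue with the next resolver:

[source, JAVA, indent=0]
----
public class MyLegacyDbDialectResolver implements DialectResolver {

    @Override
    public Dialect resolveDialect(DatabaseMetaData metaData) throws JDBCConnectionException {
        try {
            if ( "MyLegacyDB".equals( metaData.getDatabaseProductName() ) ) {
                return new MyLegacyDbDialect(); // hypothetical custom Dialect subclass
            }
        }
        catch (SQLException e) {
            // not interpreted as a fatal connection problem: let the next resolver try
        }
        return null;
    }
}
----
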
@ -50,14 +50,14 @@ To register one or more resolvers, simply specify them (separated by commas, tab
=== Identifier generation
When considering portability between databases, another important decision is selecting the identifier generation strategy you want to use.
Originally Hibernate provided the _native_ generator for this purpose, which was intended to select between a __sequence__, __identity__, or _table_ strategy depending on the capability of the underlying database.
Originally, Hibernate provided the _native_ generator for this purpose, which was intended to select between a __sequence__, __identity__, or _table_ strategy depending on the capability of the underlying database.
However, an insidious implication of this approach comes about when targeting some databases which support _identity_ generation and some which do not.
_identity_ generation relies on the SQL definition of an IDENTITY (or auto-increment) column to manage the identifier value.
It is what is known as a _post-insert_ generation strategy because the insert must actually happen before we can know the identifier value.
Because Hibernate relies on this identifier value to uniquely reference entities within a persistence context,
it must then issue the insert immediately when the users requests that the entity be associated with the session (e.g. like via `save()` or `persist()`) , regardless of current transactional semantics.
it must then issue the insert immediately when the user requests that the entity be associated with the session (e.g. like via `save()` or `persist()`), regardless of current transactional semantics.
[NOTE]
====

View File

@ -18,9 +18,9 @@ They are type-safe in terms of using interfaces and classes to represent various
They can also be type-safe in terms of referencing attributes as we will see in a bit.
Users of the older Hibernate `org.hibernate.Criteria` query API will recognize the general approach, though we believe the JPA API to be superior as it represents a clean look at the lessons learned from that API.
Criteria queries are essentially an object graph, where each part of the graph represents an increasing (as we navigate down this graph) more atomic part of query.
Criteria queries are essentially an object graph, where each part of the graph represents an increasingly (as we navigate down this graph) more atomic part of the query.
The first step in performing a criteria query is building this graph.
The `javax.persistence.criteria.CriteriaBuilder` interface is the first thing with which you need to become acquainted to begin using criteria queries.
The `javax.persistence.criteria.CriteriaBuilder` interface is the first thing you need to become acquainted with to begin using criteria queries.
Its role is that of a factory for all the individual pieces of the criteria.
You obtain a `javax.persistence.criteria.CriteriaBuilder` instance by calling the `getCriteriaBuilder()` method of either `javax.persistence.EntityManagerFactory` or `javax.persistence.EntityManager`.
@ -148,7 +148,7 @@ Specifically, notice the constructor and its argument types.
Since we will be returning `PersonWrapper` objects, we use `PersonWrapper` as the type of our criteria query.
This example illustrates the use of the `javax.persistence.criteria.CriteriaBuilder` method construct which is used to build a wrapper expression.
For every row in the result we are saying we would like a `PersonWrapper` instantiated with the remaining arguments by the matching constructor.
For every row in the result, we are saying we would like a `PersonWrapper` instantiated with the remaining arguments by the matching constructor.
This wrapper expression is then passed as the select.
[[criteria-tuple]]
@ -273,7 +273,7 @@ include::{sourcedir}/CriteriaTest.java[tags=criteria-from-fetch-example]
[NOTE]
====
Technically speaking, embedded attributes are always fetched with their owner.
However in order to define the fetching of _Phone#addresses_ we needed a `javax.persistence.criteria.Fetch` because element collections are `LAZY` by default.
However, in order to define the fetching of _Phone#addresses_ we needed a `javax.persistence.criteria.Fetch` because element collections are `LAZY` by default.
====
[[criteria-path]]

View File

@ -6,7 +6,7 @@
The Hibernate Query Language (HQL) and Java Persistence Query Language (JPQL) are both object model focused query languages similar in nature to SQL.
JPQL is a heavily-inspired-by subset of HQL.
A JPQL query is always a valid HQL query, the reverse is not true however.
A JPQL query is always a valid HQL query; the reverse is not true, however.
Both HQL and JPQL are non-type-safe ways to perform query operations.
Criteria queries offer a type-safe approach to querying. See <<chapters/query/criteria/Criteria.adoc#criteria,Criteria>> for more information.
@ -15,7 +15,7 @@ Criteria queries offer a type-safe approach to querying. See <<chapters/query/c
=== Query API
[[hql-examples-domain-model]]
=== Examples domain model
=== Example domain model
To better understand the HQL and JPQL examples that follow, it's time to familiarize yourself with the domain model entities that are used in all the examples featured in this chapter.
@ -130,7 +130,7 @@ Relying on provider specific hints limits your applications portability to some
The final thing that needs to happen before the query can be executed is to bind the values for any defined parameters.
JPA defines a simplified set of parameter binding methods.
Essentially it supports setting the parameter value (by name/position) and a specialized form for `Calendar`/`Date` types additionally accepting a `TemporalType`.
Essentially, it supports setting the parameter value (by name/position) and a specialized form for `Calendar`/`Date` types additionally accepting a `TemporalType`.
[[jpql-api-parameter-example]]
.JPA name parameter binding
@ -337,7 +337,7 @@ Hibernate offers additional, specialized methods for scrolling the query and han
The `Query#scroll` method is overloaded:
* Then main form accepts a single argument of type `org.hibernate.ScrollMode` which indicates the type of scrolling to be used.
* The main form accepts a single argument of type `org.hibernate.ScrollMode` which indicates the type of scrolling to be used.
See the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/ScrollMode.html[Javadocs] for the details on each.
* The second form takes no argument and will use the `ScrollMode` indicated by `Dialect#defaultScrollMode`.
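
A minimal, illustrative use of the first form (assuming a Hibernate `Session` named `session` and a hypothetical `Person` entity); the `ScrollableResults` must be closed when done:

[source, JAVA, indent=0]
----
ScrollableResults scrollableResults = session
        .createQuery( "select p from Person p" )
        .scroll( ScrollMode.FORWARD_ONLY );
try {
    while ( scrollableResults.next() ) {
        Person person = (Person) scrollableResults.get( 0 );
        // process each Person as it is scrolled
    }
}
finally {
    scrollableResults.close();
}
----
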
@ -422,7 +422,7 @@ So `SeLeCT` is the same as `sELEct` is the same as `SELECT`, but `org.hibernate.
[NOTE]
====
This documentation uses lowercase keywords as convention in examples.
This documentation uses lowercase keywords as a convention in examples.
====
[[hql-statement-types]]
@ -433,14 +433,14 @@ HQL additionally allows `INSERT` statements, in a form similar to a SQL `INSERT
[IMPORTANT]
====
Care should be taken as to when a `UPDATE` or `DELETE` statement is executed.
Care should be taken as to when an `UPDATE` or `DELETE` statement is executed.
[quote, Section 4.10 of the JPA 2.0 Specification]
____
Caution should be used when executing bulk update or delete operations because they may result in
inconsistencies between the database and the entities in the active persistence context. In general, bulk
update and delete operations should only be performed within a transaction in a new persistence con
text or before fetching or accessing entities whose state might be affected by such operations.
update and delete operations should only be performed within a transaction in a new persistence context
or before fetching or accessing entities whose state might be affected by such operations.
____
====
@ -477,11 +477,11 @@ include::{sourcedir}/HQLTest.java[tags=hql-select-simplest-jpql-example]
----
Even though HQL does not require the presence of a `select_clause`, it is generally good practice to include one.
For simple queries the intent is clear and so the intended result of the `select_clause` is east to infer.
For simple queries the intent is clear and so the intended result of the `select_clause` is easy to infer.
But on more complex queries that is not always the case.
It is usually better to explicitly specify intent.
Hibernate does not actually enforce that a `select_clause` be present even when parsing JPQL queries, however applications interested in JPA portability should take heed of this.
Hibernate does not actually enforce that a `select_clause` be present even when parsing JPQL queries, however, applications interested in JPA portability should take heed of this.
====
[[hql-update]]
@ -497,7 +497,7 @@ include::{extrasdir}/statement_update_bnf.txt[]
----
====
`UPDATE` statements, by default, do not effect the `version` or the `timestamp` attribute values for the affected entities.
`UPDATE` statements, by default, do not affect the `version` or the `timestamp` attribute values for the affected entities.
However, you can force Hibernate to set the `version` or `timestamp` attribute values through the use of a `versioned update`.
This is achieved by adding the `VERSIONED` keyword after the `UPDATE` keyword.
@ -506,14 +506,14 @@ This is achieved by adding the `VERSIONED` keyword after the `UPDATE` keyword.
====
This is a Hibernate-specific feature and will not work in a portable manner.
Custom version types, `org.hibernate.usertype.UserVersionType`, are not allowed in conjunction with a `update versioned` statement.
Custom version types, `org.hibernate.usertype.UserVersionType`, are not allowed in conjunction with an `update versioned` statement.
====
An `UPDATE` statement is executed using the `executeUpdate()` of either `org.hibernate.query.Query` or `javax.persistence.Query`.
The method is named for those familiar with the JDBC `executeUpdate()` on `java.sql.PreparedStatement`.
The `int` value returned by the `executeUpdate()` method indicates the number of entities effected by the operation.
This may or may not correlate to the number of rows effected in the database.
The `int` value returned by the `executeUpdate()` method indicates the number of entities affected by the operation.
This may or may not correlate to the number of rows affected in the database.
An HQL bulk operation might result in multiple actual SQL statements being executed (for joined-subclass, for example).
The returned number indicates the number of actual entities affected by the statement.
Using a JOINED inheritance hierarchy, a delete against one of the subclasses may actually result in deletes against not just the table to which that subclass is mapped, but also the "root" table and tables "in between".
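
To make the returned count concrete, a small sketch (assuming a JPA `EntityManager` named `entityManager` and a hypothetical `Person` entity):

[source, JAVA, indent=0]
----
int updatedEntities = entityManager.createQuery(
        "update Person p set p.nickName = :newNickName where p.nickName = :oldNickName" )
    .setParameter( "oldNickName", "JD" )
    .setParameter( "newNickName", "John Doe" )
    .executeUpdate();
// the count reflects affected entities, which may differ from the number of
// rows touched for a JOINED inheritance hierarchy
----
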
@ -587,7 +587,7 @@ You can either explicitly specify the id property in the `attribute_list`, in wh
This latter option is only available when using id generators that operate "in the database"; attempting to use this option with any "in memory" type generators will cause an exception during parsing.
For optimistic locking attributes, the insert statement again gives you two options.
You can either specify the attribute in the `attribute_list` in which case its value is taken from the corresponding select expressions, or omit it from the `attribute_list` in which case the `seed value` defined by the corresponding `org.hibernate.type.VersionType` is used.
You can either specify the attribute in the `attribute_list` in which case its value is taken from the corresponding select expressions or omit it from the `attribute_list` in which case the `seed value` defined by the corresponding `org.hibernate.type.VersionType` is used.
[[hql-insert-example]]
.INSERT query statements
@ -601,8 +601,8 @@ include::{sourcedir}/../batch/BatchTest.java[tags=batch-bulk-hql-insert-example]
[[hql-from-clause]]
=== The `FROM` clause
The `FROM` clause is responsible defining the scope of object model types available to the rest of the query.
It also is responsible for defining all the "identification variables" available to the rest of the query.
The `FROM` clause is responsible for defining the scope of object model types available to the rest of the query.
It is also responsible for defining all the "identification variables" available to the rest of the query.
[[hql-identification-variables]]
=== Identification variables
@ -863,7 +863,7 @@ include::{sourcedir}/SelectDistinctTest.java[tags=hql-distinct-entity-query-exam
----
====
In this case, `DISTINCT` is used because there can be multiple `Books` entities associated to a given `Person`.
In this case, `DISTINCT` is used because there can be multiple `Books` entities associated with a given `Person`.
If there are 3 `Persons` in the database and each person has 2 `Books`, without `DISTINCT` this query will return 6 `Persons` since
the SQL-level result-set size is given by the number of joined `Book` records.
@ -951,7 +951,7 @@ ENTRY::
Only valid for `Maps`. Refers to the map's logical `java.util.Map.Entry` tuple (the combination of its key and value).
`ENTRY` is only valid as a terminal path and it's applicable to the `SELECT` clause only.
See <<hql-collection-expressions>> for additional details on collection related expressions.
See <<hql-collection-expressions>> for additional details on collection-related expressions.
[[hql-polymorphism]]
=== Polymorphism
@ -987,7 +987,7 @@ It returns every object of every entity type defined by your application mapping
[[hql-expressions]]
=== Expressions
Essentially expressions are references that resolve to basic or tuple values.
Essentially, expressions are references that resolve to basic or tuple values.
[[hql-identification-variable]]
=== Identification variable
@ -1036,7 +1036,7 @@ The actual suffix is case-insensitive.
The boolean literals are `TRUE` and `FALSE`, again case-insensitive.
Enums can even be referenced as literals. The fully-qualified enum class name must be used.
HQL can also handle constants in the same manner, though JPQL does not define that as supported.
HQL can also handle constants in the same manner, though JPQL does not define that as being supported.
Entity names can also be used as literals. See <<hql-entity-type-exp>>.
@ -1046,7 +1046,7 @@ Date/time literals can be specified using the JDBC escape syntax:
* `{t 'hh:mm:ss'}` for times
* `{ts 'yyyy-mm-dd hh:mm:ss[.millis]'}` (millis optional) for timestamps.
These Date/time literals only work if you JDBC drivers supports them.
These Date/time literals only work if the underlying JDBC driver supports them.
====
[[hql-numeric-arithmetic]]
@ -1073,13 +1073,13 @@ The following rules apply to the result of arithmetic operations:
* else, (the assumption being that both operands are of integral type) the result is `Integer` (except for division, in which case the result type is not further defined)
Date arithmetic is also supported, albeit in a more limited fashion.
This is due partially to differences in database support and partially to the lack of support for `INTERVAL` definition in the query language itself.
This is due partly to differences in database support and partly to the lack of support for `INTERVAL` definition in the query language itself.
[[hql-concatenation]]
=== Concatenation (operation)
HQL defines a concatenation operator in addition to supporting the concatenation (`CONCAT`) function.
This is not defined by JPQL, so portable applications should avoid it use.
This is not defined by JPQL, so portable applications should avoid its use.
The concatenation operator is taken from the SQL concatenation operator (e.g. `||`).
[[hql-concatenation-example]]
@ -1378,7 +1378,7 @@ ELEMENTS::
Only allowed in the where clause.
Often used in conjunction with `ALL`, `ANY` or `SOME` restrictions.
INDICES::
Similar to `elements` except that `indices` refers to the collections indices (keys/positions) as a whole.
Similar to `elements` except that the `indices` expression refers to the collection's indices (keys/positions) as a whole.
[[hql-collection-expressions-example]]
.Collection-related expressions examples
@ -1407,7 +1407,7 @@ See also <<hql-collection-qualification>> as there is a good deal of overlap.
We can also refer to the type of an entity as an expression.
This is mainly useful when dealing with entity inheritance hierarchies.
The type can expressed using a `TYPE` function used to refer to the type of an identification variable representing an entity.
The type can be expressed using a `TYPE` function used to refer to the type of an identification variable representing an entity.
The name of the entity also serves as a way to refer to an entity type.
Additionally, the entity type can be parameterized, in which case the entity's Java Class reference would be bound as the parameter value.
@ -1501,7 +1501,7 @@ There is a particular expression type that is only valid in the select clause.
Hibernate calls this "dynamic instantiation".
JPQL supports some of that feature and calls it a "constructor expression".
So rather than dealing with the `Object[]` (again, see <<hql-api>>) here we are wrapping the values in a type-safe java object that will be returned as the results of the query.
So rather than dealing with the `Object[]` (again, see <<hql-api>>) here, we are wrapping the values in a type-safe Java object that will be returned as the results of the query.
[[hql-select-clause-dynamic-instantiation-example]]
.Dynamic HQL and JPQL instantiation example
@ -1558,7 +1558,7 @@ If the user doesn't assign aliases, the key will be the index of each particular
=== Predicates
Predicates form the basis of the where clause, the having clause and searched case expressions.
They are expressions which resolve to a truth value, generally `TRUE` or `FALSE`, although boolean comparisons involving `NULL` generally resolve to `UNKNOWN`.
They are expressions which resolve to a truth value, generally `TRUE` or `FALSE`, although boolean comparisons involving `NULL` typically resolve to `UNKNOWN`.
[[hql-relational-comparisons]]
=== Relational comparisons
@ -1596,8 +1596,8 @@ It resolves to false if the subquery result is empty.
[[hql-null-predicate]]
=== Nullness predicate
Check a value for nullness.
Can be applied to basic attribute references, entity references and parameters.
It checks a value for nullness.
It can be applied to basic attribute references, entity references, and parameters.
HQL additionally allows it to be applied to component/embeddable types.
[[hql-null-predicate-example]]
@ -1678,9 +1678,9 @@ include::{extrasdir}/predicate_in_bnf.txt[]
The types of the `single_valued_expression` and the individual values in the `single_valued_list` must be consistent.
JPQL limits the valid types here to string, numeric, date, time, timestamp, and enum types, and , in JPQL, `single_valued_expression` can only refer to:
JPQL limits the valid types here to string, numeric, date, time, timestamp, and enum types, and, in JPQL, `single_valued_expression` can only refer to:
* "state fields", which is its term for simple attributes. Specifically this excludes association and component/embedded attributes.
* "state fields", which is its term for simple attributes. Specifically, this excludes association and component/embedded attributes.
* entity type expressions. See <<hql-entity-type-exp>>
In HQL, `single_valued_expression` can refer to a far broader set of expression types.
@ -1752,7 +1752,7 @@ If the predicate is true, NOT resolves to false. If the predicate is unknown (e.
The `AND` operator is used to combine 2 predicate expressions.
The result of the AND expression is true if and only if both predicates resolve to true.
If either predicates resolves to unknown, the AND expression resolves to unknown as well. Otherwise, the result is false.
If either predicate resolves to unknown, the AND expression resolves to unknown as well. Otherwise, the result is false.
[[hql-or-predicate]]
=== OR predicate operator
@ -1817,7 +1817,7 @@ The types of expressions considered valid as part of the `ORDER BY` clause inclu
Additionally, JPQL says that all values referenced in the `ORDER BY` clause must be named in the `SELECT` clause.
HQL does not mandate that restriction, but applications desiring database portability should be aware that not all databases support referencing values in the `ORDER BY` clause that are not referenced in the select clause.
Individual expressions in the order-by can be qualified with either `ASC` (ascending) or `DESC` (descending) to indicated the desired ordering direction.
Individual expressions in the order-by can be qualified with either `ASC` (ascending) or `DESC` (descending) to indicate the desired ordering direction.
Null values can be placed at the front or at the end of the sorted set using the `NULLS FIRST` or `NULLS LAST` clause, respectively.
[[hql-order-by-example]]

View File

@ -5,7 +5,7 @@
:extrasdir: extras
You may also express queries in the native SQL dialect of your database.
This is useful if you want to utilize database specific features such as window functions, Common Table Expressions (CTE) or the `CONNECT BY` option in Oracle.
This is useful if you want to utilize database-specific features such as window functions, Common Table Expressions (CTE) or the `CONNECT BY` option in Oracle.
It also provides a clean migration path from a direct SQL/JDBC based application to Hibernate/JPA.
Hibernate also allows you to specify handwritten SQL (including stored procedures) for all create, update, delete, and retrieve operations.
@ -84,7 +84,7 @@ include::{sourcedir}/SQLTest.java[tags=sql-hibernate-scalar-query-partial-explic
----
====
This is essentially the same query as before, but now `ResultSetMetaData` is used to determine the type of `name`, where as the type of `id` is explicitly specified.
This is essentially the same query as before, but now `ResultSetMetaData` is used to determine the type of `name`, whereas the type of `id` is explicitly specified.
How the `java.sql.Types` returned from `ResultSetMetaData` is mapped to Hibernate types is controlled by the `Dialect`.
If a specific type is not mapped, or does not result in the expected type, it is possible to customize it via calls to `registerHibernateType` in the Dialect.
@ -112,7 +112,7 @@ include::{sourcedir}/SQLTest.java[tags=sql-hibernate-entity-query-example]
----
====
Assuming that `Person` is mapped as a class with the columns `id`, `name`, `nickName`, `address`, `createdOn` and `version`,
Assuming that `Person` is mapped as a class with the columns `id`, `name`, `nickName`, `address`, `createdOn`, and `version`,
the following query will also return a `List` where each element is a `Person` entity.
[[sql-jpa-entity-query-explicit-result-set-example]]
@ -138,7 +138,7 @@ include::{sourcedir}/SQLTest.java[tags=sql-hibernate-entity-query-explicit-resul
If the entity is mapped with a `many-to-one` or a child-side `one-to-one` to another entity,
it is required to also return this when performing the native query,
otherwise a database specific _column not found_ error will occur.
otherwise, a database-specific _column not found_ error will occur.
[[sql-jpa-entity-associations-query-many-to-one-example]]
.JPA native query selecting entities with many-to-one association
@ -230,7 +230,7 @@ include::{extrasdir}/sql-hibernate-entity-associations-query-one-to-many-join-ex
----
====
At this stage you are reaching the limits of what is possible with native queries, without starting to enhance the sql queries to make them usable in Hibernate.
At this stage, you are reaching the limits of what is possible with native queries, without starting to enhance the sql queries to make them usable in Hibernate.
Problems can arise when returning multiple entities of the same type or when the default alias/column names are not enough.
[[sql-multi-entity-query]]
@ -261,7 +261,7 @@ include::{sourcedir}/SQLTest.java[tags=sql-hibernate-multi-entity-query-example]
The query was intended to return all `Person` and `Partner` instances with the same name.
The query fails because there is a conflict of names since the two entities are mapped to the same column names (e.g. `id`, `name`, `version`).
Also, on some databases the returned column aliases will most likely be on the form `pr.id`, `pr.name`, etc.
Also, on some databases, the returned column aliases will most likely be in the form `pr.id`, `pr.name`, etc.
which are not equal to the columns specified in the mappings (`id` and `name`).
The following form is not vulnerable to column name duplication:
@ -281,13 +281,13 @@ There's no such equivalent in JPA because the `Query` interface doesn't define a
====
The `{pr.*}` and `{pt.*}` notation used above is shorthand for "all properties".
Alternatively, you can list the columns explicitly, but even in this case Hibernate injects the SQL column aliases for each property.
Alternatively, you can list the columns explicitly, but even in this case, Hibernate injects the SQL column aliases for each property.
The placeholder for a column alias is just the property name qualified by the table alias.
[[sql-alias-references]]
=== Alias and property references
In most cases the above alias injection is needed.
In most cases, the above alias injection is needed.
For queries relating to more complex mappings, like composite properties, inheritance discriminators, collections etc., you can use specific aliases that allow Hibernate to inject the proper aliases.
The following table shows the different ways you can use the alias injection.
@ -409,7 +409,7 @@ and the Hibernate `org.hibernate.annotations.NamedNativeQuery` annotation extend
`timeout()`::
The query timeout (in seconds). By default, there's no timeout.
`callable()`::
Does the SQL query represent a call to a procedure/function? Default is false.
Does the SQL query represent a call to a procedure/function? The default is false.
`comment()`::
A comment added to the SQL query for tuning the execution plan.
`cacheMode()`::
@ -670,7 +670,7 @@ Fortunately, Hibernate allows you to resolve the current global catalog and sche
{h-schema}:: resolves the current `hibernate.default_schema` configuration property value.
{h-domain}:: resolves the current `hibernate.default_catalog` and `hibernate.default_schema` configuration property values (e.g. catalog.schema).
Withe these placeholders, you can imply the catalog, schema, or both catalog and schema for every native query.
With these placeholders, you can imply the catalog, schema, or both catalog and schema for every native query.
So, when running the following native query:
@ -863,15 +863,15 @@ include::{sourcedir}/OracleStoredProcedureTest.java[tags=sql-jpa-call-sp-ref-cur
====
[[sql-crud]]
=== Custom SQL for create, update, and delete
=== Custom SQL for CRUD (Create, Read, Update and Delete)
Hibernate can use custom SQL for create, update, and delete operations.
Hibernate can use custom SQL for CRUD operations.
The SQL can be overridden at the statement level or individual column level.
This section describes statement overrides.
For columns, see <<chapters/domain/basic_types.adoc#mapping-column-read-and-write,Column transformers: read and write expressions>>.
The following example shows how to define custom SQL operations using annotations.
`@SQLInsert`, `@SQLUpdate` and `@SQLDelete` override the INSERT, UPDATE, DELETE statements of a given entity.
`@SQLInsert`, `@SQLUpdate`, and `@SQLDelete` override the INSERT, UPDATE, DELETE statements of a given entity.
For the SELECT clause, a `@Loader` must be defined along with a `@NamedNativeQuery` used for loading the underlying table record.
For collections, Hibernate allows defining a custom `@SQLDeleteAll` which is used for removing all child records associated with a given parent entity.
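
As a hedged sketch of what such a mapping can look like (the column lists are illustrative and must match your actual mapping, and the Hibernate flavor of `@NamedNativeQuery`, which accepts JDBC-style `?` parameters, is assumed):

[source, JAVA, indent=0]
----
@Entity
@SQLInsert(sql = "INSERT INTO person (name, id) VALUES (?, ?)")
@SQLUpdate(sql = "UPDATE person SET name = ? WHERE id = ?")
@SQLDelete(sql = "DELETE FROM person WHERE id = ?")
@Loader(namedQuery = "find_person_by_id")
@NamedNativeQuery(
    name = "find_person_by_id",
    query = "SELECT id, name FROM person WHERE id = ?",
    resultClass = Person.class
)
public class Person {

    @Id
    private Long id;

    private String name;

    //Getters and setters omitted for brevity
}
----

The parameter order in the custom statements has to match the order in which Hibernate binds the mapped columns, so it is worth enabling SQL logging or comments to verify the generated statements.
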
@ -894,7 +894,8 @@ The same is done for the `phones` collection. The `@SQLDeleteAll` and the `SQLIn
[NOTE]
====
You also call a store procedure using the custom CRUD statements; the only requirement is to set the `callable` attribute to `true`.
You can also call a stored procedure using the custom CRUD statements.
The only requirement is to set the `callable` attribute to `true`.
====
To check that the execution happens correctly, Hibernate allows you to define one of those three strategies:
@ -927,7 +928,7 @@ include::{sourcedir}/CustomSQLSecondaryTableTest.java[tags=sql-custom-crud-secon
[TIP]
====
The SQL is directly executed in your database, so you can use any dialect you like.
This will, however, reduce the portability of your mapping if you use database specific SQL.
This will, however, reduce the portability of your mapping if you use database-specific SQL.
====
You can also use stored procedures for customizing the CRUD statements.
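
For example, a stored-procedure-based variant could be sketched as follows (the procedure names are hypothetical placeholders, not part of any shipped schema):

[source, JAVA, indent=0]
----
@Entity
@SQLInsert(
    sql = "{ call sp_insert_person(?, ?) }", // hypothetical stored procedure
    callable = true
)
@SQLDelete(
    sql = "{ call sp_delete_person(?) }",    // hypothetical stored procedure
    callable = true
)
public class Person {

    @Id
    private Long id;

    private String name;

    //Getters and setters omitted for brevity
}
----
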

View File

@ -11,8 +11,8 @@ Since 5.0, Hibernate Spatial is now part of the Hibernate ORM project,
and it allows you to deal with geographic data in a standardized way.
Hibernate Spatial provides a standardized, cross-database interface to geographic data storage and query functions.
It supports most of the functions described by the OGC Simple Feature Specification. Supported databases are: Oracle 10g/11g,
PostgreSql/PostGIS, MySQL, Microsoft SQL Server and H2/GeoDB.
It supports most of the functions described by the OGC Simple Feature Specification. Supported databases are Oracle 10g/11g,
PostgreSQL/PostGIS, MySQL, Microsoft SQL Server and H2/GeoDB.
Spatial data types are not part of the Java standard library, and they are absent from the JDBC specification.
Over the years, http://tsusiatsoftware.net/jts/main.html[JTS] has emerged as the _de facto_ standard to fill this gap. JTS is
@ -148,10 +148,10 @@ There are several dialects for MySQL:
MySQL versions before 5.6.1 had only limited support for spatial operators.
Most operators only took account of the minimum bounding rectangles (MBR) of the geometries, and not the geometries themselves.
This changed in version 5.6.1 were MySQL introduced `ST_*` spatial operators.
This changed in version 5.6.1, when MySQL introduced `ST_*` spatial operators.
The dialect `MySQLSpatial56Dialect` uses these newer, more precise operators.
These dialects may therefore produce results that differ from that of the other spatial dialects.
These dialects may, therefore, produce results that differ from that of the other spatial dialects.
For more information, see this page in the MySQL reference guide (esp. the section https://dev.mysql.com/doc/refman/5.7/en/spatial-relation-functions.html[Functions That Test Spatial Relations Between Geometry Objects])
====
@ -164,21 +164,22 @@ This dialect has been tested on both Oracle 10g and Oracle 11g with the `SDO_GEO
This dialect can be configured using the Hibernate property:
+
`hibernate.spatial.connection_finder`:::
the fully-qualified classname for the implementation of the `ConnectionFinder` to use (see below).
the fully-qualified class name for the implementation of the `ConnectionFinder` to use (see below).
.The `ConnectionFinder` interface
[NOTE]
====
The `SDOGeometryType` requires access to an `OracleConnection` object wehen converting a geometry to SDO_GEOMETRY.
The `SDOGeometryType` requires access to an `OracleConnection` object when converting a geometry to SDO_GEOMETRY.
In some environments, however, the `OracleConnection` is not available (e.g. because a Java EE container or connection pool proxy wraps the connection object in its own `Connection` implementation).
A `ConnectionFinder` knows how to retrieve the `OracleConnection` from the wrapper or proxy Connection object that is passed into prepared statements.
The default implementation will, when the passed object is not already an `OracleConnection`, attempt to retrieve the `OracleConnection` by recursive reflection.
When the passed object is not already an `OracleConnection`, the default implementation will attempt to retrieve the `OracleConnection` by recursive reflection.
It will search for methods that return `Connection` objects, execute these methods and check the result.
If the result is of type `OracleConnection` the object is returned, otherwise it recurses on it.
If the result is of type `OracleConnection`, the object is returned.
Otherwise, it recurses on it.
In may cases this strategy will suffice.
If not, you can provide your own implementation of this interface on the class path, and configure it in the `hibernate.spatial.connection_finder` property.
In many cases, this strategy will suffice.
If not, you can provide your own implementation of this interface on the classpath, and configure it in the `hibernate.spatial.connection_finder` property.
Note that implementations must be thread-safe and have a default no-args constructor.
====
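
For orientation only, wiring in a custom finder could look like the following sketch (both `com.example.MyConnectionFinder` and the persistence unit name are hypothetical placeholders):

[source, JAVA, indent=0]
----
// Register the custom ConnectionFinder implementation via the configuration property.
Map<String, Object> settings = new HashMap<>();
settings.put(
    "hibernate.spatial.connection_finder",
    "com.example.MyConnectionFinder"
);

EntityManagerFactory entityManagerFactory =
    Persistence.createEntityManagerFactory( "spatial-pu", settings );
----
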

View File

@ -119,7 +119,7 @@ include::{extrasdir}/schema-generation-database-checks-persist-example.sql[]
====
[[schema-generation-column-default-value]]
=== Default value for database column
=== Default value for a database column
With Hibernate, you can specify a default value for a given database column using the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/ColumnDefault.html[`@ColumnDefault`] annotation.
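
A minimal sketch of such a mapping (the entity, columns, and default expressions are purely illustrative) could look like this:

[source, JAVA, indent=0]
----
@Entity
@DynamicInsert // let the database apply the defaults by omitting null columns from the INSERT
public class Account {

    @Id
    private Long id;

    @ColumnDefault("'N/A'")
    private String description;

    @ColumnDefault("0.00")
    private BigDecimal balance;

    //Getters and setters omitted for brevity
}
----
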

View File

@ -17,10 +17,10 @@ This documentation largely treats the physical and logic notions of a transactio
[[transactions-physical]]
=== Physical Transactions
Hibernate uses the JDBC API for persistence. In the world of Java there are two well-defined mechanism for dealing with transactions in JDBC: JDBC itself and JTA.
Hibernate uses the JDBC API for persistence. In the world of Java, there are two well-defined mechanisms for dealing with transactions in JDBC: JDBC itself and JTA.
Hibernate supports both mechanisms for integrating with transactions and allowing applications to manage physical transactions.
Transaction handling per `Session` is handled by the `org.hibernate.resource.transaction.spi.TransactionCoordinator` contract,
The transaction handling per `Session` is handled by the `org.hibernate.resource.transaction.spi.TransactionCoordinator` contract,
which is built by the `org.hibernate.resource.transaction.spi.TransactionCoordinatorBuilder` service.
`TransactionCoordinatorBuilder` represents a strategy for dealing with transactions whereas TransactionCoordinator represents one instance of that strategy related to a Session.
Which `TransactionCoordinatorBuilder` implementation to use is defined by the `hibernate.transaction.coordinator_class` setting.
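
As a quick sketch, the setting can be supplied like any other configuration property (the persistence unit name below is a placeholder; `"jta"` or a fully-qualified `TransactionCoordinatorBuilder` class name may be used instead of `"jdbc"`):

[source, JAVA, indent=0]
----
Map<String, Object> settings = new HashMap<>();
settings.put( "hibernate.transaction.coordinator_class", "jdbc" );

EntityManagerFactory entityManagerFactory =
    Persistence.createEntityManagerFactory( "my-persistence-unit", settings );
----
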
@ -50,9 +50,9 @@ The Hibernate `Session` acts as a transaction-scoped cache providing repeatable
[IMPORTANT]
====
To reduce lock contention in the database, the physical database transaction needs to be as short as possible.
Long database transactions prevent your application from scaling to a highly-concurrent load.
Long-running database transactions prevent your application from scaling to a highly-concurrent load.
Do not hold a database transaction open during end-user-level work, but open it after the end-user-level work is finished.
This is concept is referred to as `transactional write-behind`.
This concept is referred to as `transactional write-behind`.
====
[[transactions-physical-jtaplatform]]
@ -99,11 +99,11 @@ To use this API, you would obtain the `org.hibernate.Transaction` from the Sessi
`markRollbackOnly`:: that works in both JTA and JDBC
`getTimeout` and `setTimeout`:: that again work in both JTA and JDBC
`registerSynchronization`:: that allows you to register JTA Synchronizations even in non-JTA environments.
In fact in both JTA and JDBC environments, these `Synchronizations` are kept locally by Hibernate.
In fact, in both JTA and JDBC environments, these `Synchronizations` are kept locally by Hibernate.
In JTA environments, Hibernate will only ever register one single `Synchronization` with the `TransactionManager` to avoid ordering problems.
Additionally, it exposes a getStatus method that returns an `org.hibernate.resource.transaction.spi.TransactionStatus` enum.
This method checks with the underlying transaction system if needed, so care should be taken to minimize its use; it can have a big performance impact in certain JTA set ups.
This method checks with the underlying transaction system if needed, so care should be taken to minimize its use; it can have a big performance impact in certain JTA setups.
Let's take a look at using the Transaction API in the various environments.
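
For orientation, a minimal sketch of the non-managed (JDBC) case might look like the following; the JTA and CMT variants shown in the examples below differ mainly in bootstrap configuration rather than in the Transaction API calls themselves:

[source, JAVA, indent=0]
----
Session session = sessionFactory.openSession();
try {
    session.getTransaction().begin();

    // ... work with the session ...

    session.getTransaction().commit();
}
catch (RuntimeException e) {
    // roll back only if the transaction can still be rolled back
    TransactionStatus status = session.getTransaction().getStatus();
    if ( status == TransactionStatus.ACTIVE || status == TransactionStatus.MARKED_ROLLBACK ) {
        session.getTransaction().rollback();
    }
    throw e;
}
finally {
    session.close();
}
----
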
@ -134,7 +134,7 @@ include::{sourcedir}/TransactionsTest.java[tags=transactions-api-bmt-example]
----
====
In the CMT case we really could have omitted all of the Transaction calls.
In the CMT case, we really could have omitted all of the Transaction calls.
But the point of the examples was to show that the Transaction API really does insulate your code from the underlying transaction mechanism.
In fact, if you strip away the comments and the single configuration setting supplied at bootstrap, the code is exactly the same in all 3 examples.
In other words, we could develop that code and drop it, as-is, in any of the 3 transaction environments.
@ -184,7 +184,7 @@ If you use JTA, you can utilize the JTA interfaces to demarcate transactions.
If you execute in an EJB container that supports CMT, transaction boundaries are defined declaratively and you do not need any transaction or session demarcation operations in your code. Refer to <<transactions>> for more information and code examples.
The `hibernate.current_session_context_class` configuration parameter defines which `org.hibernate.context.spi.CurrentSessionContext` implementation should be used.
For backwards compatibility, if this configuration parameter is not set but a `org.hibernate.engine.transaction.jta.platform.spi.JtaPlatform` is configured, Hibernate will use the `org.hibernate.context.internal.JTASessionContext`.
For backward compatibility, if this configuration parameter is not set but a `org.hibernate.engine.transaction.jta.platform.spi.JtaPlatform` is configured, Hibernate will use the `org.hibernate.context.internal.JTASessionContext`.
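
As a rough sketch, with `hibernate.current_session_context_class` set to `"thread"` (or `"jta"`/`"managed"`), obtaining and using the current session might look like this:

[source, JAVA, indent=0]
----
// The current session is scoped by the context implementation; with the
// "thread" strategy it is flushed and closed when the transaction completes.
Session session = sessionFactory.getCurrentSession();
session.beginTransaction();

// ... work with the current session ...

session.getTransaction().commit();
----
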
=== Transactional patterns (and anti-patterns)
@ -195,7 +195,7 @@ This is an anti-pattern of opening and closing a `Session` for each database cal
It is also an anti-pattern in terms of database transactions.
Group your database calls into a planned sequence.
In the same way, do not auto-commit after every SQL statement in your application.
Hibernate disables, or expects the application server to disable, auto-commit mode immediately.
Hibernate disables, or expects the application server to disable, auto-commit mode immediately.
Database transactions are never optional.
All communication with a database must be encapsulated by a transaction.
Avoid auto-commit behavior for reading data because many small transactions are unlikely to perform better than one clearly-defined unit of work, and are more difficult to maintain and extend.
@ -216,7 +216,7 @@ Web applications are a prime example of this type of system, though certainly no
At the beginning of handling such a request, the application opens a Hibernate Session, starts a transaction, performs all data-related work, ends the transaction and closes the Session.
The crux of the pattern is the one-to-one relationship between the transaction and the Session.
Within this pattern there is a common technique of defining a current session to simplify the need of passing this `Session` around to all the application components that may need access to it.
Within this pattern, there is a common technique of defining a current session to simplify the need of passing this `Session` around to all the application components that may need access to it.
Hibernate provides support for this technique through the `getCurrentSession` method of the `SessionFactory`.
The concept of a _current_ session has to have a scope that defines the bounds in which the notion of _current_ is valid.
This is the purpose of the `org.hibernate.context.spi.CurrentSessionContext` contract.
@ -230,7 +230,7 @@ Using this implementation, a `Session` will be opened the first time `getCurrent
This is best represented with the `org.hibernate.context.internal.ManagedSessionContext` implementation of the `org.hibernate.context.spi.CurrentSessionContext` contract.
Here an external component is responsible for managing the lifecycle and scoping of a _current_ session.
At the start of such a scope, the `ManagedSessionContext#bind()` method is called, passing in the `Session`.
At the end, its `unbind()` method is called.
At the end of that scope, its `unbind()` method is called.
Some common examples of such _external components_ include:
** `javax.servlet.Filter` implementation
** AOP interceptor with a pointcut on the service methods
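
A rough sketch of the `javax.servlet.Filter` variant (assuming `hibernate.current_session_context_class` is set to `managed` and that the `SessionFactory` is published under an application-specific attribute name) could look like this:

[source, JAVA, indent=0]
----
public class CurrentSessionFilter implements Filter {

    private SessionFactory sessionFactory;

    @Override
    public void init(FilterConfig filterConfig) {
        // Application-specific lookup; the attribute name is a placeholder.
        sessionFactory = (SessionFactory)
            filterConfig.getServletContext().getAttribute( "sessionFactory" );
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        Session session = sessionFactory.openSession();
        try {
            ManagedSessionContext.bind( session );
            chain.doFilter( request, response );
        }
        finally {
            ManagedSessionContext.unbind( sessionFactory );
            session.close();
        }
    }

    @Override
    public void destroy() {
    }
}
----
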
@ -285,7 +285,7 @@ Automatic versioning is used to isolate concurrent modifications.
|Extended `Session` |The Hibernate `Session` can be disconnected from the underlying JDBC connection after the database transaction has been committed and reconnected when a new client request occurs.
This pattern is known as session-per-conversation and makes even reattachment unnecessary.
Automatic versioning is used to isolate concurrent modifications and the `Session` will not be allowed to flush automatically, only explicitly.
Automatic versioning is used to isolate concurrent modifications, and the `Session` will not be allowed to flush automatically, only explicitly.
|=======================================================================
Session-per-request-with-detached-objects and session-per-conversation each have advantages and disadvantages.
@ -295,7 +295,7 @@ Session-per-request-with-detached-objects and session-per-conversation each have
The _session-per-application_ is also considered an anti-pattern.
The Hibernate `Session`, like the JPA `EntityManager`, is not a thread-safe object and it is intended to be confined to a single thread at once.
If the `Session` is shared among multiple threads, there will be race conditions as well as visibility issues , so beware of this.
If the `Session` is shared among multiple threads, there will be race conditions as well as visibility issues, so beware of this.
An exception thrown by Hibernate means you have to roll back your database transaction and close the `Session` immediately.
If your `Session` is bound to the application, you have to stop the application.

View File

@ -16,14 +16,14 @@ public class CallStatistics {
private final long total;
private final int min;
private final int max;
private final double abg;
private final double avg;
public CallStatistics(long count, long total, int min, int max, double abg) {
public CallStatistics(long count, long total, int min, int max, double avg) {
this.count = count;
this.total = total;
this.min = min;
this.max = max;
this.abg = abg;
this.avg = avg;
}
//Getters and setters omitted for brevity

View File

@ -120,7 +120,7 @@ public class BitSetUserTypeTest extends BaseCoreFunctionalTestCase {
@Type( type = "bitset" )
private BitSet bitSet;
//Constructors, getters and setters are omitted for brevity
//Constructors, getters, and setters are omitted for brevity
//end::basic-custom-type-BitSetUserType-mapping-example[]
public Product() {
}

View File

@ -53,7 +53,7 @@ public class CreationTimestampTest extends BaseEntityManagerFunctionalTestCase {
@CreationTimestamp
private Date timestamp;
//Constructors, getters and setters are omitted for brevity
//Constructors, getters, and setters are omitted for brevity
//end::mapping-generated-CreationTimestamp-example[]
public Event() {}

View File

@ -56,7 +56,7 @@ public class DatabaseValueGenerationTest extends BaseEntityManagerFunctionalTest
@FunctionCreationTimestamp
private Date timestamp;
//Constructors, getters and setters are omitted for brevity
//Constructors, getters, and setters are omitted for brevity
//end::mapping-database-generated-value-example[]
public Event() {}

View File

@ -56,7 +56,7 @@ public class InMemoryValueGenerationTest extends BaseEntityManagerFunctionalTest
@FunctionCreationTimestamp
private Date timestamp;
//Constructors, getters and setters are omitted for brevity
//Constructors, getters, and setters are omitted for brevity
//end::mapping-in-memory-generated-value-example[]
public Event() {}