User Guide grammatical corrections

parent da5aae74e6
commit 9570f110a1
@@ -38,7 +38,7 @@ hide the power of SQL from you and guarantees that your investment in
 relational technology and knowledge is as valid as always.
 
 Hibernate may not be the best solution for data-centric applications
-that only use stored-procedures to implement the business logic in the
+that only use stored procedures to implement the business logic in the
 database, it is most useful with object-oriented domain models and
 business logic in the Java-based middle-tier. However, Hibernate can
 certainly help you to remove or encapsulate vendor-specific SQL code and
@@ -109,7 +109,7 @@ Should generally only configure this or `hibernate.connection.acquisition_mode`,
 |`hibernate.c3p0.max_statements` | 5 | Maximum size of C3P0 statement cache. Refers to http://www.mchange.com/projects/c3p0/#maxStatements[c3p0 `maxStatements` setting].
 |`hibernate.c3p0.acquire_increment` | 2 | Number of connections acquired at a time when there's no connection available in the pool. Refers to http://www.mchange.com/projects/c3p0/#acquireIncrement[c3p0 `acquireIncrement` setting].
 |`hibernate.c3p0.idle_test_period` | 5 | Idle time before a C3P0 pooled connection is validated. Refers to http://www.mchange.com/projects/c3p0/#idleConnectionTestPeriod[c3p0 `idleConnectionTestPeriod` setting].
-|`hibernate.c3p0` | | A setting prefix used to indicate additional c3p0 properties that need to be passed ot the underlying c3p0 connection pool.
+|`hibernate.c3p0` | | A setting prefix used to indicate additional c3p0 properties that need to be passed to the underlying c3p0 connection pool.
 |===================================================================================================================================================================================================================================
 
 [[configurations-mapping]]
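Taken together, the c3p0 settings from this table would typically appear in `hibernate.properties` like this. The numeric values are illustrative, not recommendations:

```properties
hibernate.c3p0.max_statements=50
hibernate.c3p0.acquire_increment=2
hibernate.c3p0.idle_test_period=300
# the hibernate.c3p0 prefix forwards any other property to the pool,
# e.g. c3p0's own unreturnedConnectionTimeout setting:
hibernate.c3p0.unreturnedConnectionTimeout=30
```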
@@ -155,7 +155,7 @@ However, some JPA providers do need the discriminator for handling joined inheri
 However, we want to make sure that legacy applications continue to work as well, which puts us in a bind in terms of how to handle _implicit_ discriminator mappings.
 The solution is to assume that the absence of discriminator metadata means to follow the legacy behavior _unless_ this setting is enabled.
 
-With this setting enabled, Hibernate will interpret the absence of discriminator metadata as an indication to use the JPA defined defaults for these absent annotations.
+With this setting enabled, Hibernate will interpret the absence of discriminator metadata as an indication to use the JPA-defined defaults for these absent annotations.
 
 See Hibernate Jira issue https://hibernate.atlassian.net/browse/HHH-6911[HHH-6911] for additional background info.
@@ -173,7 +173,7 @@ See Hibernate Jira issue https://hibernate.atlassian.net/browse/HHH-6911[HHH-691
 |`hibernate.implicit_naming_strategy` |`default` (default value), `jpa`, `legacy-jpa`, `legacy-hbm`, `component-path` a|
 
 Used to specify the `org.hibernate.boot.model.naming.ImplicitNamingStrategy` class to use.
-The following short-names are defined for this setting:
+The following short names are defined for this setting:
 
 `default`:: Uses the `org.hibernate.boot.model.naming.ImplicitNamingStrategyJpaCompliantImpl`
 `jpa`:: Uses the `org.hibernate.boot.model.naming.ImplicitNamingStrategyJpaCompliantImpl`
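Choosing one of these short names in `hibernate.properties` could look like this (value illustrative):

```properties
# one of: default, jpa, legacy-jpa, legacy-hbm, component-path
hibernate.implicit_naming_strategy=jpa
```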
@@ -272,7 +272,7 @@ The maximum number of entries including:
 
 maintained by `org.hibernate.engine.query.spi.QueryPlanCache`.
 
-|`hibernate.query.plan_parameter_metadata_max_size` | `128` (default value) | The maximum number of strong references associated to `ParameterMetadata` maintained by `org.hibernate.engine.query.spi.QueryPlanCache`.
+|`hibernate.query.plan_parameter_metadata_max_size` | `128` (default value) | The maximum number of strong references associated with `ParameterMetadata` maintained by `org.hibernate.engine.query.spi.QueryPlanCache`.
 |`hibernate.order_by.default_null_ordering` |`none`, `first` or `last` |Defines precedence of null values in `ORDER BY` clause. Defaults to `none` which varies between RDBMS implementation.
 |`hibernate.discriminator.force_in_select` |`true` or `false` (default value) | For entities which do not explicitly say, should we force discriminators into SQL selects?
 |`hibernate.query.substitutions` | `true=1,false=0` |A comma-separated list of token substitutions to use when translating a Hibernate query to SQL.
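The `hibernate.query.substitutions` value is a plain comma-separated list of `token=replacement` pairs. A small sketch of how a list in that format could be parsed; this is an illustration of the format only, not Hibernate's actual parser:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class QuerySubstitutions {

    // Parse a "token=replacement,token=replacement" list into a map.
    static Map<String, String> parse(String setting) {
        Map<String, String> substitutions = new LinkedHashMap<>();
        for (String pair : setting.split(",")) {
            String[] parts = pair.trim().split("=", 2);
            substitutions.put(parts[0], parts[1]);
        }
        return substitutions;
    }

    public static void main(String[] args) {
        // The default value shown in the table above.
        Map<String, String> subs = parse("true=1,false=0");
        System.out.println(subs.get("true"));   // 1
        System.out.println(subs.get("false"));  // 0
    }
}
```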
@@ -371,7 +371,7 @@ In reality, you shouldn't probably enable this setting anyway.
 
 A setting to control whether to `org.hibernate.engine.internal.StatisticalLoggingSessionEventListener` is enabled on all `Sessions` (unless explicitly disabled for a given `Session`).
-The default value of this setting is determined by the value for `hibernate.generate_statistics`, meaning that if collection of statistics is enabled logging of Session metrics is enabled by default too.
+The default value of this setting is determined by the value for `hibernate.generate_statistics`, meaning that if statistics are enabled, then logging of Session metrics is enabled by default too.
 
 |===================================================================================================================================================================================================================================
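In configuration terms, the coupling described above means enabling statistics implicitly enables per-`Session` metric logging unless it is switched off explicitly. Assuming the `hibernate.session.events.log` setting name, that looks like:

```properties
hibernate.generate_statistics=true
# optional: opt out of per-Session metric logging while keeping statistics
hibernate.session.events.log=false
```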
@@ -507,7 +507,7 @@ In such cases, a value for this setting _must_ be specified.
 
 The value of this setting is expected to match the value returned by `java.sql.DatabaseMetaData#getDatabaseProductName()` for the target database.
 
-Additionally specifying `javax.persistence.database-major-version` and/or `javax.persistence.database-minor-version` may be required to understand exactly how to generate the required schema commands.
+Additionally, specifying `javax.persistence.database-major-version` and/or `javax.persistence.database-minor-version` may be required to understand exactly how to generate the required schema commands.
 
 |`javax.persistence.database-major-version` | |
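When generating schema without a live connection, these settings might be supplied together in `persistence.xml`. The product name and version values below are illustrative:

```xml
<properties>
    <!-- must match java.sql.DatabaseMetaData#getDatabaseProductName() -->
    <property name="javax.persistence.database-product-name" value="PostgreSQL"/>
    <property name="javax.persistence.database-major-version" value="9"/>
    <property name="javax.persistence.database-minor-version" value="5"/>
</properties>
```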
@@ -167,7 +167,7 @@ while ( iter.hasNext() ) {
 }
 ----
 
-Additionally you may manipulate the result set using a left outer join:
+Additionally, you may manipulate the result set using a left outer join:
 
 [source]
 ----
@@ -44,6 +44,6 @@ The default value of `undefined` indicates that Hibernate uses the identifier pr
 Database-based timestamps incur an overhead because Hibernate needs to query the database each time to determine the incremental next value.
 However, database-derived timestamps are safer to use in a clustered environment.
 Not all database dialects are known to support the retrieval of the database's current timestamp.
-Others may also be unsafe for locking, because of lack of precision.
+Others may also be unsafe for locking because of lack of precision.
 |generated |Whether the timestamp property value is generated by the database. Optional, defaults to `never`.
 |=======================================================================
@@ -218,7 +218,7 @@ You cannot use stored procedures with Hibernate unless you follow some procedure
 If they do not follow those rules they are not usable with Hibernate.
 If you still want to use these procedures you have to execute them via `session.doWork()`.
 
-The rules are different for each database, since database vendors have different stored procedure semantics/syntax.
+The rules are different for each database since database vendors have different stored procedure semantics/syntax.
 
 Stored procedure queries cannot be paged with `setFirstResult()/setMaxResults()`.
@@ -25,7 +25,7 @@ Session (`org.hibernate.Session`):: A single-threaded, short-lived object concep
 In JPA nomenclature, the `Session` is represented by an `EntityManager`.
 +
 Behind the scenes, the Hibernate `Session` wraps a JDBC `java.sql.Connection` and acts as a factory for `org.hibernate.Transaction` instances.
-It maintains a generally "repeatable read" persistence context (first level cache) of the application's domain model.
+It maintains a generally "repeatable read" persistence context (first level cache) of the application domain model.
 
 Transaction (`org.hibernate.Transaction`):: A single-threaded, short-lived object used by the application to demarcate individual physical transaction boundaries.
-`EntityTransaction` is the JPA equivalent and both act as an abstraction API to isolate the application from the underling transaction system in use (JDBC or JTA).
+`EntityTransaction` is the JPA equivalent and both act as an abstraction API to isolate the application from the underlying transaction system in use (JDBC or JTA).
@@ -17,7 +17,7 @@ The following settings control this behavior.
 `hibernate.jdbc.batch_versioned_data`::
 Some JDBC drivers return incorrect row counts when a batch is executed.
 If your JDBC driver falls into this category this setting should be set to `false`.
-Otherwise it is safe to enable this which will allow Hibernate to still batch the DML for versioned entities and still use the returned row counts for optimistic lock checks.
+Otherwise, it is safe to enable this which will allow Hibernate to still batch the DML for versioned entities and still use the returned row counts for optimistic lock checks.
 Since 5.0, it defaults to true. Previously (versions 3.x and 4.x), it used to be false.
 
 `hibernate.jdbc.batch.builder`::
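A typical batching configuration combining the settings above might look like this (values illustrative):

```properties
hibernate.jdbc.batch_size=20
# set to false only if your driver reports incorrect row counts for batches
hibernate.jdbc.batch_versioned_data=true
```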
@@ -48,14 +48,14 @@ include::{sourcedir}/BatchTest.java[tags=batch-session-batch-example]
 ----
 ====
 
-There are several problems associated to this example:
+There are several problems associated with this example:
 
 . Hibernate caches all the newly inserted `Customer` instances in the session-level c1ache, so, when the transaction ends, 100 000 entities are managed by the persistence context.
 If the maximum memory allocated to the JVM is rather low, this example could fails with an `OutOfMemoryException`.
 The Java 1.8 JVM allocated either 1/4 of available RAM or 1Gb, which can easily accommodate 100 000 objects on the heap.
 . long-running transactions can deplete a connection pool so other transactions don't get a chance to proceed.
 . JDBC batching is not enabled by default, so every insert statement requires a database roundtrip.
-To enable JDBC batching, set the property `hibernate.jdbc.batch_size` to an integer between 10 and 50.
+To enable JDBC batching, set the `hibernate.jdbc.batch_size` property to an integer between 10 and 50.
 
 [NOTE]
 ====
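The benefit of batching is easy to quantify: without it, each insert costs one database roundtrip; with it, roundtrips drop to the number of batches. A back-of-the-envelope sketch (pure arithmetic, no Hibernate involved):

```java
public class BatchRoundtrips {

    // Number of JDBC roundtrips needed to insert `rows` rows with a given batch size.
    static long roundtrips(long rows, long batchSize) {
        return (rows + batchSize - 1) / batchSize; // ceiling division
    }

    public static void main(String[] args) {
        // 100 000 inserts, one statement per roundtrip
        System.out.println(roundtrips(100_000, 1));   // 100000
        // the same inserts with hibernate.jdbc.batch_size=20
        System.out.println(roundtrips(100_000, 20));  // 5000
    }
}
```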
@@ -10,7 +10,7 @@ The process is very different for each.
 ====
 This chapter will not focus on all the possibilities of bootstrapping.
 Those will be covered in each specific more-relevant chapters later on.
-Instead we focus here on the API calls needed to perform the bootstrapping.
+Instead, we focus here on the API calls needed to perform the bootstrapping.
 ====
 
 [TIP]
@@ -30,13 +30,13 @@ For a discussion of the legacy bootstrapping API, see <<appendices/Legacy_Bootst
 
 The first step in native bootstrapping is the building of a `ServiceRegistry` holding the services Hibernate will need during bootstrapping and at run time.
 
-Actually we are concerned with building 2 different ServiceRegistries.
+Actually, we are concerned with building 2 different ServiceRegistries.
 First is the `org.hibernate.boot.registry.BootstrapServiceRegistry`.
 The `BootstrapServiceRegistry` is intended to hold services that Hibernate needs at both bootstrap and run time.
 This boils down to 3 services:
 
 `org.hibernate.boot.registry.classloading.spi.ClassLoaderService`:: which controls how Hibernate interacts with `ClassLoader`s
-`org.hibernate.integrator.spi.IntegratorService`:: which controls the management ands discovery of `org.hibernate.integrator.spi.Integrator` instances.
+`org.hibernate.integrator.spi.IntegratorService`:: which controls the management and discovery of `org.hibernate.integrator.spi.Integrator` instances.
 `org.hibernate.boot.registry.selector.spi.StrategySelector`:: which control how Hibernate resolves implementations of various strategy contracts.
 This is a very powerful service, but a full discussion of it is beyond the scope of this guide.
@@ -106,7 +106,7 @@ include::{sourcedir}/BootstrapTest.java[tags=bootstrap-event-listener-registrati
 [[bootstrap-native-metadata]]
 ==== Building the Metadata
 
-The second step in native bootstrapping is the building of a `org.hibernate.boot.Metadata` object containing the parsed representations of an application's domain model and its mapping to a database.
+The second step in native bootstrapping is the building of a `org.hibernate.boot.Metadata` object containing the parsed representations of an application domain model and its mapping to a database.
 The first thing we obviously need to build a parsed representation is the source information to be parsed (annotated classes, `hbm.xml` files, `orm.xml` files).
 This is the purpose of `org.hibernate.boot.MetadataSources`:
@@ -240,4 +240,4 @@ include::{sourcedir}/BootstrapTest.java[tags=bootstrap-native-EntityManagerFacto
 ----
 ====
 
-The `integrationSettings` allows the application develoepr to customize the bootstrapping process by specifying different `hibernate.integrator_provider` or `hibernate.strategy_registration_provider` integration providers.
+The `integrationSettings` allows the application developer to customize the bootstrapping process by specifying different `hibernate.integrator_provider` or `hibernate.strategy_registration_provider` integration providers.
@@ -10,7 +10,7 @@ It is possible to configure a JVM-level (`SessionFactory`-level) or even a clust
 
 [NOTE]
 ====
-Be aware that caches are not aware of changes made to the persistent store by another applications.
+Be aware that caches are not aware of changes made to the persistent store by other applications.
 They can, however, be configured to regularly expire cached data.
 ====
@@ -78,9 +78,9 @@ The following values are possible:
 `ENABLE_SELECTIVE` (Default and recommended value)::
 Entities are not cached unless explicitly marked as cacheable (with the https://docs.oracle.com/javaee/7/api/javax/persistence/Cacheable.html[`@Cacheable`] annotation).
 `DISABLE_SELECTIVE`::
-Entities are cached unless explicitly marked as not cacheable.
+Entities are cached unless explicitly marked as non-cacheable.
 `ALL`::
-Entities are always cached even if marked as non cacheable.
+Entities are always cached even if marked as non-cacheable.
 `NONE`::
 No entity is cached even if marked as cacheable.
 This option can make sense to disable second-level cache altogether.
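These values correspond to the JPA `javax.persistence.SharedCacheMode` enum; in `persistence.xml` the mode would be selected like this (the unit name is hypothetical):

```xml
<persistence-unit name="my-unit">
    <!-- only entities annotated @Cacheable are cached -->
    <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
</persistence-unit>
```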
@@ -90,14 +90,14 @@ The values for this property are:
 
 read-only::
 If your application needs to read, but not modify, instances of a persistent class, a read-only cache is the best choice.
-Application can still delete entities and these changes should be reflected in second-level cache, so that the cache
+Application can still delete entities and these changes should be reflected in second-level cache so that the cache
 does not provide stale entities.
 Implementations may use performance optimizations based on the immutability of entities.
 read-write::
 If the application needs to update data, a read-write cache might be appropriate.
 This strategy provides consistent access to single entity, but not a serializable transaction isolation level; e.g. when TX1 reads looks up an entity and does not find it, TX2 inserts the entity into cache and TX1 looks it up again, the new entity can be read in TX1.
 nonstrict-read-write::
-Similar to read-write strategy but there might be occasional stale reads upon concurrent access to an entity. The choice of this strategy might be appropriate if the application rarely updates the same data simultaneously and strict transaction isolation is not required. Implementation may use performance optimizations that make of use the relaxed consistency.
+Similar to read-write strategy but there might be occasional stale reads upon concurrent access to an entity. The choice of this strategy might be appropriate if the application rarely updates the same data simultaneously and strict transaction isolation is not required. Implementations may use performance optimizations that make use of the relaxed consistency guarantee.
 transactional::
 Provides serializable transaction isolation level.
@@ -115,8 +115,8 @@ region::
 Defines a cache region where entries will be stored
 include::
 If lazy properties should be included in the second level cache.
-Default value is "all", so lazy properties are cacheable.
-The other possible value is "non-lazy", so lazy properties are not cacheable.
+The default value is `all` so lazy properties are cacheable.
+The other possible value is `non-lazy` so lazy properties are not cacheable.
 
 [[caching-query]]
 === Entity cache
@@ -130,7 +130,7 @@ include::{sourcedir}/NonStrictReadWriteCacheTest.java[tags=caching-entity-mappin
 ----
 ====
 
-Hibernate stores cached entities in a dehydrated forms, which is similar to the database representation.
+Hibernate stores cached entities in a dehydrated form, which is similar to the database representation.
 Aside from the foreign key column values of the `@ManyToOne` or `@OneToOne` child-side associations,
 entity relationships are not stored in the cache,
@@ -208,7 +208,7 @@ Subsequent collection retrievals will use the cache instead of going to the data
 
 [NOTE]
 ====
-The collection cache is not write-through, so any modification will trigger a collection cache entry invalidation.
+The collection cache is not write-through so any modification will trigger a collection cache entry invalidation.
 On a subsequent access, the collection will be loaded from the database and re-cached.
 ====
@@ -369,7 +369,7 @@ The relationship between Hibernate and JPA cache modes can be seen in the follow
 |`CacheMode.IGNORE` |`CacheStoreMode.BYPASS` and `CacheRetrieveMode.BYPASS` | Doesn't read/write data from/into cache
 |======================================
 
-Setting the cache mode can be done wither when loading entities directly or when executing a query.
+Setting the cache mode can be done either when loading entities directly or when executing a query.
 
 [[caching-management-cache-mode-entity-jpa-example]]
 .Using custom cache modes with JPA
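The table's correspondence is just a lookup from one Hibernate `CacheMode` to a pair of JPA modes. The sketch below models it with plain strings rather than the real `CacheMode`/`CacheStoreMode`/`CacheRetrieveMode` enums, and only shows the two best-known rows:

```java
import java.util.Map;

public class CacheModeMapping {

    // Hibernate CacheMode name -> JPA { CacheStoreMode, CacheRetrieveMode } names
    static final Map<String, String[]> TO_JPA = Map.of(
            "NORMAL", new String[] { "USE", "USE" },
            "IGNORE", new String[] { "BYPASS", "BYPASS" }
    );

    public static void main(String[] args) {
        String[] jpa = TO_JPA.get("IGNORE");
        System.out.println(jpa[0] + "/" + jpa[1]); // BYPASS/BYPASS
    }
}
```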
@@ -426,7 +426,7 @@ include::{sourcedir}/SecondLevelCacheTest.java[tags=caching-management-evict-jpa
 ----
 ====
 
-Hibernate is much more flexible in this regard as it offers a fine-grained control over what needs to be evicted.
+Hibernate is much more flexible in this regard as it offers fine-grained control over what needs to be evicted.
 The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/Cache.html[`org.hibernate.Cache`] interface defines various evicting strategies:
 
 - entities (by their class or region)
@@ -610,7 +610,7 @@ If the Infinispan `CacheManager` is bound to JNDI, then the `JndiInfinispanRegio
 ===== Infinispan in JBoss AS/WildFly
 
 When using JPA in WildFly, region factory is automatically set upon configuring `hibernate.cache.use_second_level_cache=true` (by default second-level cache is not used).
-For more information please consult https://docs.jboss.org/author/display/WFLY9/JPA+Reference+Guide#JPAReferenceGuide-UsingtheInfinispansecondlevelcache[WildFly documentation].
+For more information, please consult https://docs.jboss.org/author/display/WFLY9/JPA+Reference+Guide#JPAReferenceGuide-UsingtheInfinispansecondlevelcache[WildFly documentation].
 
 [[caching-provider-infinispan-config]]
 ==== Configuration properties
@@ -680,7 +680,9 @@ Some options in the cache configuration can also be overridden directly through
 
 [NOTE]
 ====
-In versions prior to 5.1, `hibernate.cache.infinispan._something_.expiration.wake_up_interval` was called `hibernate.cache.infinispan._something_.eviction.wake_up_interval`. Eviction settings are checked upon each cache insert, it is expiration that needs to be triggered periodically. Old property still works, but its use is deprecated.
+In versions prior to 5.1, `hibernate.cache.infinispan._something_.expiration.wake_up_interval` was called `hibernate.cache.infinispan._something_.eviction.wake_up_interval`.
+Eviction settings are checked upon each cache insert, it is expiration that needs to be triggered periodically.
+The old property still works, but its use is deprecated.
 ====
 
 [NOTE]
|
@ -6,7 +6,7 @@ The term https://en.wikipedia.org/wiki/Domain_model[domain model] comes from the
|
|||
It is the model that ultimately describes the https://en.wikipedia.org/wiki/Problem_domain[problem domain] you are working in.
|
||||
Sometimes you will also hear the term _persistent classes_.
|
||||
|
||||
Ultimately the application's domain model is the central character in an ORM.
|
||||
Ultimately the application domain model is the central character in an ORM.
|
||||
They make up the classes you wish to map. Hibernate works best if these classes follow the Plain Old Java Object (POJO) / JavaBean programming model.
|
||||
However, none of these rules are hard requirements.
|
||||
Indeed, Hibernate assumes very little about the nature of your persistent objects. You can express a domain model in other ways (using trees of `java.util.Map` instances, for example).
|
||||
|
|
|
@ -33,7 +33,7 @@ To exclude a field from being part of the entity persistent state, the field mus
|
|||
====
|
||||
Another advantage of using field-based access is that some entity attributes can be hidden from outside the entity.
|
||||
An example of such attribute is the entity `@Version` field, which must not be manipulated by the data access layer.
|
||||
With field-based access, we can simply omit the the getter and the setter for this version field, and Hibernate can still leverage the optimistic concurrency control mechanism.
|
||||
With field-based access, we can simply omit the getter and the setter for this version field, and Hibernate can still leverage the optimistic concurrency control mechanism.
|
||||
====
|
||||
|
||||
[[property-based-access]]
|
||||
|
|
|
@ -176,7 +176,7 @@ From a relational database point of view, the underlying schema is identical to
|
|||
as the client-side controls the relationship based on the foreign key column.
|
||||
|
||||
But then, it's unusual to consider the `Phone` as a client-side and the `PhoneDetails` as the parent-side because the details cannot exist without an actual phone.
|
||||
A much more natural mapping would be if the `Phone` was the parent-side, therefore pushing the foreign key into the `PhoneDetails` table.
|
||||
A much more natural mapping would be if the `Phone` were the parent-side, therefore pushing the foreign key into the `PhoneDetails` table.
|
||||
This mapping requires a bidirectional `@OneToOne` association as you can see in the following example:
|
||||
|
||||
[[associations-one-to-one-bidirectional]]
|
||||
|
@@ -213,7 +213,7 @@ include::{extrasdir}/associations-one-to-one-bidirectional-lifecycle-example.sql
 ====
 
 When using a bidirectional `@OneToOne` association, Hibernate enforces the unique constraint upon fetching the child-side.
-If there are more than one children associated to the same parent, Hibernate will throw a constraint violation exception.
+If there are more than one children associated with the same parent, Hibernate will throw a constraint violation exception.
 Continuing the previous example, when adding another `PhoneDetails`, Hibernate validates the uniqueness constraint when reloading the `Phone` object.
 
 [[associations-one-to-one-bidirectional-constraint-example]]
@@ -251,7 +251,7 @@ include::{extrasdir}/associations-many-to-many-unidirectional-example.sql[]
 Just like with unidirectional `@OneToMany` associations, the link table is controlled by the owning side.
 
 When an entity is removed from the `@ManyToMany` collection, Hibernate simply deletes the joining record in the link table.
-Unfortunately, this operation requires removing all entries associated to a given parent and recreating the ones that are listed in the current running persistent context.
+Unfortunately, this operation requires removing all entries associated with a given parent and recreating the ones that are listed in the current running persistent context.
 
 [[associations-many-to-many-unidirectional-lifecycle-example]]
 .Unidirectional `@ManyToMany` lifecycle
@@ -273,7 +273,7 @@ For `@ManyToMany` associations, the `REMOVE` entity state transition doesn't mak
 Since the other side might be referenced by other entities on the parent-side, the automatic removal might end up in a `ConstraintViolationException`.
 
 For example, if `@ManyToMany(cascade = CascadeType.ALL)` was defined and the first person would be deleted,
-Hibernate would throw an exception because another person is still associated to the address that's being deleted.
+Hibernate would throw an exception because another person is still associated with the address that's being deleted.
 
 [source,java]
 ----
@@ -340,7 +340,7 @@ include::{extrasdir}/associations-many-to-many-bidirectional-lifecycle-example.s
 
 If a bidirectional `@OneToMany` association performs better when removing or changing the order of child elements,
 the `@ManyToMany` relationship cannot benefit from such an optimization because the foreign key side is not in control.
-To overcome this limitation, the the link table must be directly exposed and the `@ManyToMany` association split into two bidirectional `@OneToMany` relationships.
+To overcome this limitation, the link table must be directly exposed and the `@ManyToMany` association split into two bidirectional `@OneToMany` relationships.
 
 [[associations-many-to-many-bidirectional-with-link-entity]]
 ===== Bidirectional many-to-many with a link entity
|
@ -91,7 +91,7 @@ That is the purpose of the "BasicTypeRegistry key(s)" column in the previous tab
|
|||
==== The `@Basic` annotation
|
||||
|
||||
Strictly speaking, a basic type is denoted with with the `javax.persistence.Basic` annotation.
|
||||
Generally speaking the `@Basic` annotation can be ignored, as it is assumed by default.
|
||||
Generally speaking, the `@Basic` annotation can be ignored, as it is assumed by default.
|
||||
Both of the following examples are ultimately the same.
|
||||
|
||||
[[basic-annotation-explicit-example]]
|
||||
|
@@ -132,7 +132,7 @@ The JPA specification strictly limits the Java types that can be marked as basic
 * any other type that implements `Serializable` (JPA's "support" for `Serializable` types is to directly serialize their state to the database).
 
 If provider portability is a concern, you should stick to just these basic types.
-Note that JPA 2.1 did add the notion of an `javax.persistence.AttributeConverter` to help alleviate some of these concerns; see <<basic-jpa-convert>> for more on this topic.
+Note that JPA 2.1 did add the notion of a `javax.persistence.AttributeConverter` to help alleviate some of these concerns; see <<basic-jpa-convert>> for more on this topic.
 ====
 
 The `@Basic` annotation defines 2 attributes.
@@ -165,7 +165,7 @@ include::{sourcedir}/basic/ExplicitColumnNamingTest.java[tags=basic-annotation-e
 
 Here we use `@Column` to explicitly map the `description` attribute to the `NOTES` column, as opposed to the implicit column name `description`.
 
-The `@Column` annotation defines other mapping information as well. See its javadocs for details.
+The `@Column` annotation defines other mapping information as well. See its Javadocs for details.
 
 [[basic-registry]]
 ==== BasicTypeRegistry
@@ -194,7 +194,7 @@ As a baseline within `BasicTypeRegistry`, Hibernate follows the recommended mapp
 JDBC recommends mapping Strings to VARCHAR, which is the exact mapping that `StringType` handles.
 So that is the baseline mapping within `BasicTypeRegistry` for Strings.
 
-Applications can also extend (add new `BasicType` registrations) or override (replace an exiting `BasicType` registration) using one of the
+Applications can also extend (add new `BasicType` registrations) or override (replace an existing `BasicType` registration) using one of the
 `MetadataBuilder#applyBasicType` methods or the `MetadataBuilder#applyTypes` method during bootstrap.
 For more details, see <<basic-custom-type>> section.
@@ -218,7 +218,7 @@ include::{sourcedir}/basic/ExplicitTypeTest.java[tags=basic-type-annotation-exam
 This tells Hibernate to store the Strings as nationalized data.
 This is just for illustration purposes; for better ways to indicate nationalized character data see <<basic-nationalized>> section.
 
-Additionally the description is to be handled as a LOB. Again, for better ways to indicate LOBs see <<basic-lob>> section.
+Additionally, the description is to be handled as a LOB. Again, for better ways to indicate LOBs see <<basic-lob>> section.
 
 The `org.hibernate.annotations.Type#type` attribute can name any of the following:
@@ -511,7 +511,7 @@ For additional details on using AttributeConverters, see <<basic-jpa-convert>> s
 [NOTE]
 ====
 JPA explicitly disallows the use of an AttributeConverter with an attribute marked as `@Enumerated`.
-So if using the AttributeConverter approach, be sure to not mark the attribute as `@Enumerated`.
+So if using the AttributeConverter approach, be sure not to mark the attribute as `@Enumerated`.
 ====
 
 [[basic-enums-custom-type]]
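To make the constraint concrete: an `AttributeConverter` for an enum typically just maps each constant to a database token and back. The sketch below shows only that mapping logic in plain Java; the `Gender` enum and the `'M'`/`'F'` codes are hypothetical, and a real converter would implement `javax.persistence.AttributeConverter` on an attribute that is *not* annotated `@Enumerated`:

```java
public class GenderConverterSketch {

    // Hypothetical enum; in a real mapping this would be the entity attribute's type.
    enum Gender { MALE, FEMALE }

    // What convertToDatabaseColumn(...) would return.
    static Character toDatabaseColumn(Gender gender) {
        if (gender == null) return null;
        return gender == Gender.MALE ? 'M' : 'F';
    }

    // What convertToEntityAttribute(...) would return.
    static Gender toEntityAttribute(Character code) {
        if (code == null) return null;
        return code == 'M' ? Gender.MALE : Gender.FEMALE;
    }

    public static void main(String[] args) {
        System.out.println(toDatabaseColumn(Gender.FEMALE)); // F
        System.out.println(toEntityAttribute('M'));          // MALE
    }
}
```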
@@ -885,7 +885,7 @@ include::{extrasdir}/basic/basic-datetime-temporal-date-persist-example.sql[]
 ----
 ====
 
-Only the year, month and the day field were saved into the the database.
+Only the year, month and the day field were saved into the database.
 
 If we change the `@Temporal` type to `TIME`:
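A quick way to see what persisting with `TemporalType.DATE` implies: the time-of-day portion of the value is simply dropped. The sketch below reproduces that truncation with the JDK alone, no Hibernate involved:

```java
import java.util.Calendar;
import java.util.TimeZone;

public class TemporalDateTruncation {

    // Mimic TemporalType.DATE: keep year/month/day, zero out the time fields.
    static Calendar truncateToDate(Calendar source) {
        Calendar truncated = (Calendar) source.clone();
        truncated.set(Calendar.HOUR_OF_DAY, 0);
        truncated.set(Calendar.MINUTE, 0);
        truncated.set(Calendar.SECOND, 0);
        truncated.set(Calendar.MILLISECOND, 0);
        return truncated;
    }

    public static void main(String[] args) {
        Calendar now = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        Calendar dateOnly = truncateToDate(now);
        System.out.println(dateOnly.get(Calendar.HOUR_OF_DAY)); // 0
        System.out.println(dateOnly.get(Calendar.MINUTE));      // 0
    }
}
```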
@@ -907,7 +907,7 @@ include::{extrasdir}/basic/basic-datetime-temporal-time-persist-example.sql[]
 ----
 ====

-When the the `@Temporal` type is set to `TIMESTAMP`:
+When the `@Temporal` type is set to `TIMESTAMP`:

 [[basic-datetime-temporal-timestamp-example]]
 .`java.util.Date` mapped as `TIMESTAMP`
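The truncation behavior discussed in these hunks can be sketched in plain Java. This is illustrative only (it is not Hibernate's implementation): it mimics what a `TemporalType.DATE` mapping retains by zeroing the time-of-day fields before the value reaches the database.

```java
import java.util.Calendar;
import java.util.Date;

public class DateTruncation {

    // Mimics what a DATE column retains: only the year, month and day
    // fields survive; hour, minute, second and millisecond are dropped.
    public static Date truncateToDate(Date timestamp) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(timestamp);
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        return cal.getTime();
    }
}
```

With a `TIME` or `TIMESTAMP` mapping, progressively more of these fields would be preserved instead.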
@@ -1045,7 +1045,7 @@ include::{extrasdir}/basic/basic-quoting-persistence-example.sql[indent=0]
 Generated properties are properties that have their values generated by the database.
 Typically, Hibernate applications needed to `refresh` objects that contain any properties for which the database was generating values.
 Marking properties as generated, however, lets the application delegate this responsibility to Hibernate.
-When Hibernate issues an SQL INSERT or UPDATE for an entity that has defined generated properties, it immediately issues a select afterwards to retrieve the generated values.
+When Hibernate issues an SQL INSERT or UPDATE for an entity that has defined generated properties, it immediately issues a select to retrieve the generated values.

 Properties marked as generated must additionally be _non-insertable_ and _non-updateable_.
 Only `@Version` and `@Basic` types can be marked as generated.
@@ -11,7 +11,7 @@ or it might be a reference to another entity with its own life cycle.
 In the latter case, only the _link_ between the two objects is considered to be a state held by the collection.

 The owner of the collection is always an entity, even if the collection is defined by an embeddable type.
-Collections form one/many-to-many associations between types, so there can be:
+Collections form one/many-to-many associations between types so there can be:

 - value type collections
 - embeddable type collections
@@ -150,9 +150,9 @@ entity collections can represent both <<chapters/domain/associations.adoc#associ
 From a relational database perspective, associations are defined by the foreign key side (the child-side).
 With value type collections, only the entity can control the association (the parent-side), but for a collection of entities, both sides of the association are managed by the persistence context.

-For ths reason, entity collections can be devised into two main categories: unidirectional and bidirectional associations.
-Unidirectional associations are very similar to value type collections, since only the parent side controls this relationship.
-Bidirectional associations are more tricky, since, even if sides need to be in-sync at all times, only one side is responsible for managing the association.
+For this reason, entity collections can be devised into two main categories: unidirectional and bidirectional associations.
+Unidirectional associations are very similar to value type collections since only the parent side controls this relationship.
+Bidirectional associations are more tricky since, even if sides need to be in-sync at all times, only one side is responsible for managing the association.
 A bidirectional association has an _owning_ side and an _inverse (mappedBy)_ side.

 Another way of categorizing entity collections is by the underlying collection type, and so we can have:
@@ -219,7 +219,7 @@ In the example above, once the parent entity is persisted, the child entities ar
 [NOTE]
 ====
 Just like value type collections, unidirectional bags are not as efficient when it comes to modifying the collection structure (removing or reshuffling elements).
-Because the parent-side cannot uniquely identify each individual child, Hibernate might delete all child table rows associate to the parent entity and re-add them according to the current collection state.
+Because the parent-side cannot uniquely identify each individual child, Hibernate might delete all child table rows associated with the parent entity and re-add them according to the current collection state.
 ====

 [[collections-bidirectional-bag]]
@@ -483,7 +483,7 @@ Hibernate allows using the following map keys:
 `MapKey`:: the map key is either the primary key or another property of the entity stored as a map entry value
 `MapKeyEnumerated`:: the map key is an `Enum` of the target child entity
 `MapKeyTemporal`:: the map key is a `Date` or a `Calendar` of the target child entity
-`MapKeyJoinColumn`:: the map key is a an entity mapped as an association in the child entity that's stored as a map entry key
+`MapKeyJoinColumn`:: the map key is an entity mapped as an association in the child entity that's stored as a map entry key

 [[collections-map-value-type]]
 ===== Value type maps
@@ -566,7 +566,7 @@ include::{extrasdir}/collections-map-bidirectional-example.sql[]

 When it comes to arrays, there is quite a difference between Java arrays and relational database array types (e.g. VARRAY, ARRAY).
 First, not all database systems implement the SQL-99 ARRAY type, and, for this reason, Hibernate doesn't support native database array types.
-Second, Java arrays are relevant for basic types only, since storing multiple embeddables or entities should always be done using the Java Collection API.
+Second, Java arrays are relevant for basic types only since storing multiple embeddables or entities should always be done using the Java Collection API.

 [[collections-array-binary]]
 ==== Arrays as binary
@@ -4,7 +4,7 @@

 [IMPORTANT]
 ====
-JPA only acknowledges the entity model mapping, so if you are concerned about JPA provider portability it's best to stick to the strict POJO model.
+JPA only acknowledges the entity model mapping so, if you are concerned about JPA provider portability, it's best to stick to the strict POJO model.
 On the other hand, Hibernate can work with both POJO entities as well as with dynamic entity models.
 ====

@@ -145,7 +145,7 @@ If you are unfamiliar with these topics, they are explained in the <<chapters/pc
 Whether to implement `equals()` and `hashCode()` methods in your domain model, let alone how to implement them, is a surprisingly tricky discussion when it comes to ORM.

 There is really just one absolute case: a class that acts as an identifier must implement equals/hashCode based on the id value(s).
-Generally this is pertinent for user-defined classes used as composite identifiers.
+Generally, this is pertinent for user-defined classes used as composite identifiers.
 Beyond this one very specific use case and few others we will discuss below, you may want to consider not implementing equals/hashCode altogether.

 So what's all the fuss? Normally, most Java objects provide a built-in `equals()` and `hashCode()` based on the object's identity, so each new object will be different from all others.
@@ -155,7 +155,7 @@ Conceptually however this starts to break down when you start to think about the
 This is, in fact, exactly the case when dealing with data coming from a database.
 Every time we load a specific `Person` from the database we would naturally get a unique instance.
 Hibernate, however, works hard to make sure that does not happen within a given `Session`.
-In fact Hibernate guarantees equivalence of persistent identity (database row) and Java identity inside a particular session scope.
+In fact, Hibernate guarantees equivalence of persistent identity (database row) and Java identity inside a particular session scope.
 So if we ask a Hibernate `Session` to load that specific Person multiple times we will actually get back the same __instance__:

 .Scope of identity
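The one absolute case mentioned in these hunks, equals/hashCode based on the id value(s) for a composite identifier class, can be sketched as follows. The class and field names are hypothetical, not taken from the guide's examples.

```java
import java.util.Objects;

// Hypothetical composite identifier class: equals() and hashCode()
// are based strictly on the id values, as the text recommends.
public class PersonId {

    private final String registrationNumber;
    private final int countryCode;

    public PersonId(String registrationNumber, int countryCode) {
        this.registrationNumber = registrationNumber;
        this.countryCode = countryCode;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PersonId)) return false;
        PersonId other = (PersonId) o;
        return countryCode == other.countryCode
                && Objects.equals(registrationNumber, other.registrationNumber);
    }

    @Override
    public int hashCode() {
        return Objects.hash(registrationNumber, countryCode);
    }
}
```

Two instances built from the same id values compare equal, which is exactly what a persistence provider needs when matching a loaded row against an in-memory key.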
@@ -64,7 +64,7 @@ include::{sourcedir}/id/SimpleGenerated.java[]
 ----
 ====

-Additionally to the type restriction list above, JPA says that if using generated identifier values (see below) only integer types (short, int, long) will be portably supported.
+Additionally, to the type restriction list above, JPA says that if using generated identifier values (see below) only integer types (short, int, long) will be portably supported.

 The expectation for generated identifier values is that Hibernate will generate the value when the save/persist occurs.

@@ -95,7 +95,7 @@ Note especially that collections and one-to-ones are never appropriate.
 [[identifiers-composite-aggregated]]
 ==== Composite identifiers - aggregated (EmbeddedId)

-Modelling a composite identifier using an EmbeddedId simply means defining an embeddable to be a composition for the the one or more attributes making up the identifier,
+Modeling a composite identifier using an EmbeddedId simply means defining an embeddable to be a composition for the one or more attributes making up the identifier,
 and then exposing an attribute of that embeddable type on the entity.

 .Basic EmbeddedId
@@ -126,7 +126,7 @@ In JPA terms one would use "derived identifiers"; for details, see <<identifiers
 [[identifiers-composite-nonaggregated]]
 ==== Composite identifiers - non-aggregated (IdClass)

-Modelling a composite identifier using an IdClass differs from using an EmbeddedId in that the entity defines each individual attribute making up the composition.
+Modeling a composite identifier using an IdClass differs from using an EmbeddedId in that the entity defines each individual attribute making up the composition.
 The IdClass simply acts as a "shadow".

 .Basic IdClass
@@ -212,7 +212,7 @@ If applications set this to false the resolutions discussed here will be very di
 The rest of the discussion here assumes this setting is enabled (true).
 ====

-`AUTO` (the default):: Indicates that the persistence provider (Hibernate) should chose an appropriate generation strategy. See <<identifiers-generators-auto>>.
+`AUTO` (the default):: Indicates that the persistence provider (Hibernate) should choose an appropriate generation strategy. See <<identifiers-generators-auto>>.
 `IDENTITY`:: Indicates that database IDENTITY columns will be used for primary key value generation. See <<identifiers-generators-identity>>.
 `SEQUENCE`:: Indicates that database sequence should be used for obtaining primary key values. See <<identifiers-generators-sequence>>.
 `TABLE`:: Indicates that a database table should be used for obtaining primary key values. See <<identifiers-generators-table>>.
@@ -231,7 +231,7 @@ The fallback is to consult with the pluggable `org.hibernate.boot.model.IdGenera
 The default behavior is to look at the java type of the identifier attribute:

 * for UUID <<identifiers-generators-uuid>>
-* Otherwise <<identifiers-generators-sequence>>
+* Otherwise, <<identifiers-generators-sequence>>

 [[identifiers-generators-sequence]]
 ==== Using sequences
@@ -295,7 +295,7 @@ Because of the runtime imposition/inconsistency Hibernate suggest other forms of
 [NOTE]
 ====
 There is yet another important runtime impact of choosing IDENTITY generation: Hibernate will not be able to JDBC batching for inserts of the entities that use IDENTITY generation.
-The importance of this depends on the application's specific use cases.
+The importance of this depends on the application specific use cases.
 If the application is not usually creating many new instances of a given type of entity that uses IDENTITY generation, then this is not an important impact since batching would not have been helpful anyway.
 ====

@@ -367,7 +367,7 @@ Most of the Hibernate generators that separately obtain identifier values from d
 Optimizers help manage the number of times Hibernate has to talk to the database in order to generate identifier values.
 For example, with no optimizer applied to a sequence-generator, every time the application asked Hibernate to generate an identifier it would need to grab the next sequence value from the database.
 But if we can minimize the number of times we need to communicate with the database here, the application will be able to perform better.
-Which is in fact the role of these optimizers.
+Which is, in fact, the role of these optimizers.

 none:: No optimization is performed. We communicate with the database each and every time an identifier value is needed from the generator.

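The optimizer idea described in this hunk can be sketched in plain Java. This is a simplified illustration, not Hibernate's actual optimizer code: an in-memory counter stands in for a database sequence declared with `INCREMENT BY` equal to the pool size, and the intermediate values of each pool are handed out without touching the "database".

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of a pooled-style identifier optimizer.
public class PooledOptimizerSketch {

    private final AtomicLong fakeSequence = new AtomicLong(0); // stands in for a DB sequence
    private final int poolSize;
    private long hiValue = -1;   // highest value of the currently reserved pool
    private long nextValue = 0;  // next value to hand out in memory
    private int databaseCalls = 0;

    public PooledOptimizerSketch(int poolSize) {
        this.poolSize = poolSize;
    }

    public synchronized long next() {
        if (hiValue < 0 || nextValue > hiValue) {
            // Only here do we "talk to the database": one round trip
            // reserves a whole pool of poolSize values.
            hiValue = fakeSequence.addAndGet(poolSize);
            nextValue = hiValue - poolSize + 1;
            databaseCalls++;
        }
        return nextValue++;
    }

    public int databaseCalls() {
        return databaseCalls;
    }
}
```

Generating ten identifiers with a pool size of five costs two round trips instead of ten, which is the whole point of the optimizer.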
@@ -406,7 +406,7 @@ include::{sourcedir}/id/DerivedIdentifier.java[]
 ====

 In the example above, the `PersonDetails` entity uses the `id` column for both the entity identifier and for the many-to-one association to the `Person` entity.
-The value of the `PersonDetails` entity identifier is "derived" from the the identifier of its parent `Person` entity.
+The value of the `PersonDetails` entity identifier is "derived" from the identifier of its parent `Person` entity.
 The `@MapsId` annotation can also reference columns from an `@EmbeddedId` identifier as well.

 The previous example can also be mapped using `@PrimaryKeyJoinColumn`.
@@ -421,5 +421,5 @@ include::{sourcedir}/id/CompositeIdAssociationPrimaryKeyJoinColumn.java[]

 [NOTE]
 ====
-Unlike `@MapsId`, the application developer is responsible of ensuring that the identifier and the many-to-one (or one-to-one) association are in sync.
+Unlike `@MapsId`, the application developer is responsible for ensuring that the identifier and the many-to-one (or one-to-one) association are in sync.
 ====
@@ -17,7 +17,7 @@ In the following domain model class hierarchy, a 'DebitAccount' and a 'CreditAcc

 image:images/domain/inheritance/inheritance_class_diagram.svg[Inheritance class diagram]

-When using `MappedSuperclass`, the inheritance is visible in the domain model only and ach database table contains both the base class and the subclass properties.
+When using `MappedSuperclass`, the inheritance is visible in the domain model only and each database table contains both the base class and the subclass properties.

 [[entity-inheritance-mapped-superclass-example]]
 .`@MappedSuperclass` inheritance
@@ -104,7 +104,7 @@ This could for example occur when working with a legacy database.
 If `force` is set to true Hibernate will specify the allowed discriminator values in the SELECT query, even when retrieving all instances of the root class.

 The second option, `insert`, tells Hibernate whether or not to include the discriminator column in SQL INSERTs.
-Usually the column should be part of the INSERT statement, but if your discriminator column is also part of a mapped composite identifier you have to set this option to false.
+Usually, the column should be part of the INSERT statement, but if your discriminator column is also part of a mapped composite identifier you have to set this option to false.
 ====

 [IMPORTANT]
@@ -182,8 +182,8 @@ include::{extrasdir}/entity-inheritance-joined-table-example.sql[]
 ====
 The primary key of this table is also a foreign key to the superclass table and described by the `@PrimaryKeyJoinColumns`.

-The table name still defaults to the non qualified class name.
-Also if `@PrimaryKeyJoinColumn` is not set, the primary key / foreign key columns are assumed to have the same names as the primary key columns of the primary table of the superclass.
+The table name still defaults to the non-qualified class name.
+Also, if `@PrimaryKeyJoinColumn` is not set, the primary key / foreign key columns are assumed to have the same names as the primary key columns of the primary table of the superclass.
 ====

 [[entity-inheritance-joined-table-primary-key-join-column-example]]
@@ -218,7 +218,7 @@ include::{extrasdir}/entity-inheritance-joined-table-query-example.sql[]

 [IMPORTANT]
 ====
-Polymorphic queries can create cartesian products, so caution is advised.
+Polymorphic queries can create Cartesian Products, so caution is advised.
 ====

 [[entity-inheritance-table-per-class]]
@@ -246,7 +246,7 @@ include::{extrasdir}/entity-inheritance-table-per-class-example.sql[]
 ----
 ====

-When using polymorphic queries, a UNION is required to fetch the the base class table along with all subclass tables as well.
+When using polymorphic queries, a UNION is required to fetch the base class table along with all subclass tables as well.

 .Table per class polymorphic query
 ====
@@ -21,7 +21,7 @@ Historically Hibernate defined just a single `org.hibernate.cfg.NamingStrategy`.
 NamingStrategy contract actually combined the separate concerns that are now modeled individually
 as ImplicitNamingStrategy and PhysicalNamingStrategy.

-Also the NamingStrategy contract was often not flexible enough to properly apply a given naming
+Also, the NamingStrategy contract was often not flexible enough to properly apply a given naming
 "rule", either because the API lacked the information to decide or because the API was honestly
 not well defined as it grew.

@@ -37,12 +37,12 @@ repetitive information a developer must provide for mapping a domain model.
 .JPA Compatibility
 ====
 JPA defines inherent rules about implicit logical name determination. If JPA provider
-portability is a major concern, or if you really just like the JPA defined implicit
+portability is a major concern, or if you really just like the JPA-defined implicit
 naming rules, be sure to stick with ImplicitNamingStrategyJpaCompliantImpl (the default)

 Also, JPA defines no separation between logical and physical name. Following the JPA
 specification, the logical name *is* the physical name. If JPA provider portability
-is important, applications should prefer to not specify a PhysicalNamingStrategy.
+is important, applications should prefer not to specify a PhysicalNamingStrategy.
 ====


@@ -71,10 +71,10 @@ the implementation using the `hibernate.implicit_naming_strategy` configuration
 `legacy-jpa`:: for `org.hibernate.boot.model.naming.ImplicitNamingStrategyLegacyJpaImpl` - compliant with the legacy NamingStrategy developed for JPA 1.0, which was unfortunately unclear in many respects regarding implicit naming rules.
 `component-path`:: for `org.hibernate.boot.model.naming.ImplicitNamingStrategyComponentPathImpl` - mostly follows `ImplicitNamingStrategyJpaCompliantImpl` rules, except that it uses the full composite paths, as opposed to just the ending property part
 +
-* reference to a Class that implements the the `org.hibernate.boot.model.naming.ImplicitNamingStrategy` contract
-* FQN of a class that implements the the `org.hibernate.boot.model.naming.ImplicitNamingStrategy` contract
+* reference to a Class that implements the `org.hibernate.boot.model.naming.ImplicitNamingStrategy` contract
+* FQN of a class that implements the `org.hibernate.boot.model.naming.ImplicitNamingStrategy` contract

-Secondly applications and integrations can leverage `org.hibernate.boot.MetadataBuilder#applyImplicitNamingStrategy`
+Secondly, applications and integrations can leverage `org.hibernate.boot.MetadataBuilder#applyImplicitNamingStrategy`
 to specify the ImplicitNamingStrategy to use. See
 <<chapters/bootstrap/Bootstrap.adoc#Bootstrap,Bootstrap>> for additional details on bootstrapping.

@@ -119,10 +119,10 @@ include::{sourcedir}/AcmeCorpPhysicalNamingStrategy.java[]
 There are multiple ways to specify the PhysicalNamingStrategy to use. First, applications can specify
 the implementation using the `hibernate.physical_naming_strategy` configuration setting which accepts:

-* reference to a Class that implements the the `org.hibernate.boot.model.naming.PhysicalNamingStrategy` contract
-* FQN of a class that implements the the `org.hibernate.boot.model.naming.PhysicalNamingStrategy` contract
+* reference to a Class that implements the `org.hibernate.boot.model.naming.PhysicalNamingStrategy` contract
+* FQN of a class that implements the `org.hibernate.boot.model.naming.PhysicalNamingStrategy` contract

-Secondly applications and integrations can leverage `org.hibernate.boot.MetadataBuilder#applyPhysicalNamingStrategy`.
+Secondly, applications and integrations can leverage `org.hibernate.boot.MetadataBuilder#applyPhysicalNamingStrategy`.
 See <<chapters/bootstrap/Bootstrap.adoc#Bootstrap,Bootstrap>> for additional details on bootstrapping.


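The kind of transformation a custom `PhysicalNamingStrategy` typically applies can be sketched in plain Java. This shows only the name-mangling logic, not the Hibernate contract itself, and the conversion rule (camelCase to snake_case) is just one common choice.

```java
// Sketch of the camelCase -> snake_case conversion a custom
// PhysicalNamingStrategy might apply to a logical name.
public class SnakeCaseSketch {

    public static String toSnakeCase(String logicalName) {
        StringBuilder physical = new StringBuilder();
        for (int i = 0; i < logicalName.length(); i++) {
            char c = logicalName.charAt(i);
            // Insert an underscore before every interior capital letter.
            if (Character.isUpperCase(c) && i > 0) {
                physical.append('_');
            }
            physical.append(Character.toLowerCase(c));
        }
        return physical.toString();
    }
}
```

A real implementation would wrap results like this in `org.hibernate.boot.model.naming.Identifier` instances, one method per database object kind (table, column, sequence, and so on).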
@@ -43,7 +43,7 @@ This is represented by the `org.hibernate.NaturalIdLoadAccess` contract obtained

 [NOTE]
 ====
-If the entity does not define a natural id, trying to load an entity by its natural id will thrown an exception.
+If the entity does not define a natural id, trying to load an entity by its natural id will throw an exception.
 ====

 .Using NaturalIdLoadAccess
@@ -92,12 +92,12 @@ It is possible to configure various aspects of Hibernate Envers behavior, such a
 |`org.hibernate.envers.track_entities_changed_in_revision` |`false` |Should entity types, that have been modified during each revision, be tracked.
 The default implementation creates `REVCHANGES` table that stores entity names of modified persistent objects.
 Single record encapsulates the revision identifier (foreign key to `REVINFO` table) and a string value.
-For more information refer to <<envers-tracking-modified-entities-revchanges>> and <<envers-tracking-modified-entities-queries>>.
+For more information, refer to <<envers-tracking-modified-entities-revchanges>> and <<envers-tracking-modified-entities-queries>>.

 |`org.hibernate.envers.global_with_modified_flag` |`false`, can be individually overridden with `@Audited( withModifiedFlag=true )` |Should property modification flags be stored for all audited entities and all properties.
 When set to true, for all properties an additional boolean column in the audit tables will be created, filled with information if the given property changed in the given revision.
 When set to false, such column can be added to selected entities or properties using the `@Audited` annotation.
-For more information refer to <<envers-tracking-properties-changes>> and <<envers-tracking-properties-changes-queries>>.
+For more information, refer to <<envers-tracking-properties-changes>> and <<envers-tracking-properties-changes-queries>>.

 |`org.hibernate.envers.modified_flag_suffix` |`_MOD` |The suffix for columns storing "Modified Flags".
 For example: a property called "age", will by default get modified flag with column name "age_MOD".
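Taken together, enabling both tracking features described in this hunk might look like this in a `hibernate.properties` file (a sketch: the `true` values are the non-default choices, and the suffix line simply restates the default for clarity):

```properties
org.hibernate.envers.track_entities_changed_in_revision=true
org.hibernate.envers.global_with_modified_flag=true
org.hibernate.envers.modified_flag_suffix=_MOD
```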
@@ -125,9 +125,9 @@ The name of the audit table can be set on a per-entity basis, using the `@AuditT
 It may be tedious to add this annotation to every audited entity, so if possible, it's better to use a prefix/suffix.

 If you have a mapping with secondary tables, audit tables for them will be generated in the same way (by adding the prefix and suffix).
-If you wish to overwrite this behaviour, you can use the `@SecondaryAuditTable` and `@SecondaryAuditTables` annotations.
+If you wish to overwrite this behavior, you can use the `@SecondaryAuditTable` and `@SecondaryAuditTables` annotations.

-If you'd like to override auditing behaviour of some fields/properties inherited from `@MappedSuperclass` or in an embedded component,
+If you'd like to override auditing behavior of some fields/properties inherited from `@MappedSuperclass` or in an embedded component,
 you can apply the `@AuditOverride( s )` annotation on the subtype or usage site of the component.

 If you want to audit a relation mapped with `@OneToMany` and `@JoinColumn`,
@@ -138,7 +138,7 @@ just annotate it with `@Audited( targetAuditMode = RelationTargetAuditMode.NOT_A
 Then, while reading historic versions of your entity, the relation will always point to the "current" related entity.
 By default Envers throws `javax.persistence.EntityNotFoundException` when "current" entity does not exist in the database.
 Apply `@NotFound( action = NotFoundAction.IGNORE )` annotation to silence the exception and assign null value instead.
-Hereby solution causes implicit eager loading of to-one relations.
+The hereby solution causes implicit eager loading of to-one relations.

 If you'd like to audit properties of a superclass of an entity, which are not explicitly audited (they don't have the `@Audited` annotation on any properties or on the class),
 you can set the `@AuditOverride( forClass = SomeEntity.class, isAudited = true/false )` annotation.
@@ -152,7 +152,7 @@ The `@Audited` annotation also features an `auditParents` attribute but it's now

 After the basic configuration, it is important to choose the audit strategy that will be used to persist and retrieve audit information.
 There is a trade-off between the performance of persisting and the performance of querying the audit information.
-Currently there are two audit strategies.
+Currently, there are two audit strategies.

 . The default audit strategy persists the audit data together with a start revision.
 For each row inserted, updated or deleted in an audited table, one or more rows are inserted in the audit tables, together with the start revision of its validity.
@@ -782,7 +782,7 @@ Bags are not supported because they can contain non-unique elements.
 Persisting, a bag of `String`s violates the relational database principle that each table is a set of tuples.

 In case of bags, however (which require a join table), if there is a duplicate element, the two tuples corresponding to the elements will be the same.
-Hibernate allows this, however Envers (or more precisely: the database connector) will throw an exception when trying to persist two identical elements, because of a unique constraint violation.
+Hibernate allows this, however Envers (or more precisely: the database connector) will throw an exception when trying to persist two identical elements because of a unique constraint violation.

 There are at least two ways out if you need bag semantics:

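The mismatch described in this hunk can be illustrated in plain Java: a bag (`List`) happily keeps a duplicate element, while a set of tuples, which is effectively what a join table with a unique constraint is, only has room for one copy.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class BagVersusSet {

    // A bag keeps both copies of a duplicate element.
    public static int bagSize(String element) {
        List<String> bag = new ArrayList<>();
        bag.add(element);
        bag.add(element);
        return bag.size();
    }

    // A set of tuples (the relational model of the audit join table,
    // given its unique constraint) holds only one copy.
    public static int setSize(String element) {
        Set<String> tuples = new HashSet<>();
        tuples.add(element);
        tuples.add(element);
        return tuples.size();
    }
}
```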
@@ -796,12 +796,12 @@ There are at least two ways out if you need bag semantics:
 === `@OneToMany` with `@JoinColumn`

 When a collection is mapped using these two annotations, Hibernate doesn't generate a join table.
-Envers, however, has to do this, so that when you read the revisions in which the related entity has changed, you don't get false results.
+Envers, however, has to do this so that when you read the revisions in which the related entity has changed, you don't get false results.

 To be able to name the additional join table, there is a special annotation: `@AuditJoinTable`, which has similar semantics to JPA `@JoinTable`.

 One special case are relations mapped with `@OneToMany` with `@JoinColumn` on the one side, and `@ManyToOne` and `@JoinColumn( insertable=false, updatable=false`) on the many side.
-Such relations are in fact bidirectional, but the owning side is the collection.
+Such relations are, in fact, bidirectional, but the owning side is the collection.

 To properly audit such relations with Envers, you can use the `@AuditMappedBy` annotation.
 It enables you to specify the reverse property (using the `mappedBy` element).
@@ -825,7 +825,7 @@ SQL table partitioning offers a lot of advantages including, but certainly not l
 [[envers-partitioning-columns]]
 === Suitable columns for audit table partitioning

-Generally SQL tables must be partitioned on a column that exists within the table.
+Generally, SQL tables must be partitioned on a column that exists within the table.
 As a rule it makes sense to use either the _end revision_ or the _end revision timestamp_ column for partitioning of audit tables.

 [NOTE]
@@ -901,9 +901,9 @@ To partition this data, the 'level of relevancy' must be defined. Consider the f
 . For fiscal year 2006 there is only one revision.
 It has the oldest _revision timestamp_ of all audit rows, but should still be regarded as relevant because it's the latest modification for this fiscal year in the salary table (its _end revision timestamp_ is null).
 +
-Also note that it would be very unfortunate if in 2011 there would be an update of the salary for fiscal year 2006 (which is possible in until at least 10 years after the fiscal year),
+Also, note that it would be very unfortunate if in 2011 there would be an update of the salary for fiscal year 2006 (which is possible in until at least 10 years after the fiscal year),
 and the audit information would have been moved to a slow disk (based on the age of the __revision timestamp__).
-Remember that in this case Envers will have to update the _end revision timestamp_ of the most recent audit row.
+Remember that, in this case, Envers will have to update the _end revision timestamp_ of the most recent audit row.
 . There are two revisions in the salary of fiscal year 2007 which both have nearly the same _revision timestamp_ and a different __end revision timestamp__.
 On first sight, it is evident that the first revision was a mistake and probably not relevant.
 The only relevant revision for 2007 is the one with _end revision timestamp_ null.
@@ -42,7 +42,7 @@ include::{sourcedir}/InterceptorTest.java[tags=events-interceptors-session-scope
 A `SessionFactory`-scoped interceptor is registered with the `Configuration` object prior to building the `SessionFactory`.
 Unless a session is opened explicitly specifying the interceptor to use, the `SessionFactory`-scoped interceptor will be applied to all sessions opened from that `SessionFactory`.
 `SessionFactory`-scoped interceptors must be thread safe.
-Ensure that you do not store session-specific states, since multiple sessions will use this interceptor potentially concurrently.
+Ensure that you do not store session-specific states since multiple sessions will use this interceptor potentially concurrently.

 [[events-interceptors-session-factory-scope-example]]
 ====
@@ -7,7 +7,7 @@ Tuning how an application does fetching is one of the biggest factors in determi
 Fetching too much data, in terms of width (values/columns) and/or depth (results/rows),
 adds unnecessary overhead in terms of both JDBC communication and ResultSet processing.
 Fetching too little data might cause additional fetching to be needed.
-Tuning how an application fetches data presents a great opportunity to influence the application's overall performance.
+Tuning how an application fetches data presents a great opportunity to influence the application overall performance.

 [[fetching-basics]]
 === The basics
@@ -68,7 +68,7 @@ Hibernate, as a JPA provider, honors that default.
 [[fetching-strategies-no-fetching]]
 === No fetching
 
-For the first use case, consider the application's login process for an `Employee`.
+For the first use case, consider the application login process for an `Employee`.
 Let's assume that login only requires access to the `Employee` information, not `Project` nor `Department` information.
 
 [[fetching-strategies-no-fetching-example]]
@@ -213,7 +213,7 @@ include::{extrasdir}/flushing-always-flush-sql-example.sql[]
 === `MANUAL` flush
 
 Both the `EntityManager` and the Hibernate `Session` define a `flush()` method that, when called, triggers a manual flush.
-Hibernate also defines a `MANUAL` flush mode, so the persistence context can only be flushed manually.
+Hibernate also defines a `MANUAL` flush mode so the persistence context can only be flushed manually.
 
 [[flushing-manual-flush-example]]
 .`MANUAL` flushing
@@ -7,10 +7,10 @@
 As an ORM tool, probably the single most important thing you need to tell Hibernate is how to connect to your database so that it may connect on behalf of your application.
 This is ultimately the function of the `org.hibernate.engine.jdbc.connections.spi.ConnectionProvider` interface.
 Hibernate provides some out of the box implementations of this interface.
-`ConnectionProvider` is also an extension point, so you can also use custom implementations from third parties or written yourself.
+`ConnectionProvider` is also an extension point so you can also use custom implementations from third parties or written yourself.
 The `ConnectionProvider` to use is defined by the `hibernate.connection.provider_class` setting. See the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/cfg/AvailableSettings.html#CONNECTION_PROVIDER[`org.hibernate.cfg.AvailableSettings#CONNECTION_PROVIDER`]
 
-Generally speaking applications should not have to configure a `ConnectionProvider` explicitly if using one of the Hibernate-provided implementations.
+Generally speaking, applications should not have to configure a `ConnectionProvider` explicitly if using one of the Hibernate-provided implementations.
 Hibernate will internally determine which `ConnectionProvider` to use based on the following algorithm:
 
 1. If `hibernate.connection.provider_class` is set, it takes precedence
@@ -34,7 +34,8 @@ For JPA applications, note that `hibernate.connection.datasource` corresponds to
 ====
 
 The `DataSource` `ConnectionProvider` also (optionally) accepts the `hibernate.connection.username` and `hibernate.connection.password`.
-If specified, the https://docs.oracle.com/javase/8/docs/api/javax/sql/DataSource.html#getConnection-java.lang.String-java.lang.String-[`DataSource#getConnection(String username, String password)`] will be used. Otherwise the no-arg form is used.
+If specified, the https://docs.oracle.com/javase/8/docs/api/javax/sql/DataSource.html#getConnection-java.lang.String-java.lang.String-[`DataSource#getConnection(String username, String password)`] will be used.
+Otherwise, the no-arg form is used.
 
 [[database-connectionprovider-c3p0]]
 === Using c3p0
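The `DataSource` lookup plus optional credentials described in this hunk can be sketched as a configuration fragment. This is an illustrative example, not part of the commit; the JNDI name and credentials below are invented placeholders.

```properties
# Hypothetical settings — the DataSource name and credentials are placeholders.
hibernate.connection.datasource=java:comp/env/jdbc/AppDS
# Optional; if both are set, DataSource#getConnection(username, password) is used,
# otherwise the no-arg DataSource#getConnection() is used.
hibernate.connection.username=appuser
hibernate.connection.password=secret
```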
@@ -2,8 +2,8 @@
 == JNDI
 :sourcedir: extras
 
-Hibernate does optionally interact with JNDI on the applications behalf.
-Generally it does this when the application:
+Hibernate does optionally interact with JNDI on the application's behalf.
+Generally, it does this when the application:
 
 * has asked the SessionFactory be bound to JNDI
 * has specified a DataSource to use by JNDI name
@@ -21,7 +21,7 @@ Hibernate provides mechanisms for implementing both types of locking in your app
 === Optimistic
 
 When your application uses long transactions or conversations that span several database transactions,
-you can store versioning data, so that if the same entity is updated by two conversations, the last to commit changes is informed of the conflict,
+you can store versioning data so that if the same entity is updated by two conversations, the last to commit changes is informed of the conflict,
 and does not override the other conversation's work.
 This approach guarantees some isolation, but scales well and works particularly well in _read-often-write-sometimes_ situations.
 
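The versioned-update check this hunk describes can be sketched outside Hibernate. This is an illustrative stand-in (the class and method names are invented, not Hibernate API): an update only succeeds if the row still carries the version the conversation originally read, which is how the last committer is informed of a conflict.

```java
// Minimal sketch of the optimistic check performed on update:
//   UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?
// If no row matches the expected version, the other conversation won.
class VersionedRow {
    long version = 0;

    // Returns true if the update "succeeded"; false signals a conflict
    // (in Hibernate this would surface as an optimistic-lock failure).
    boolean tryUpdate(long expectedVersion) {
        if (version != expectedVersion) {
            return false;
        }
        version++;
        return true;
    }
}

public class OptimisticDemo {
    public static void main(String[] args) {
        VersionedRow row = new VersionedRow();
        long seenByA = row.version; // conversation A reads version 0
        long seenByB = row.version; // conversation B reads version 0
        boolean aCommitted = row.tryUpdate(seenByA); // A commits first
        boolean bCommitted = row.tryUpdate(seenByB); // B detects the conflict
        System.out.println(aCommitted + " " + bCommitted); // prints "true false"
    }
}
```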
@@ -55,7 +55,7 @@ All data is kept in a single database schema.
 The data for each tenant is partitioned by the use of partition value or discriminator.
 The complexity of this discriminator might range from a simple column value to a complex SQL formula.
 Again, this approach would use a single Connection pool to service all tenants.
-However, in this approach the application needs to alter each and every SQL statement sent to the database to reference the "tenant identifier" discriminator.
+However, in this approach, the application needs to alter each and every SQL statement sent to the database to reference the "tenant identifier" discriminator.
 
 [[multitenacy-hibernate]]
 === Multitenancy in Hibernate
@@ -161,7 +161,7 @@ There are two situations where CurrentTenantIdentifierResolver is used:
 In the case of the current-session feature, Hibernate will need to open a session if it cannot find an existing one in scope.
 However, when a session is opened in a multitenant environment, the tenant identifier has to be specified.
 This is where the `CurrentTenantIdentifierResolver` comes into play; Hibernate will consult the implementation you provide to determine the tenant identifier to use when opening the session.
-In this case, it is required that a `CurrentTenantIdentifierResolver` be supplied.
+In this case, it is required that a `CurrentTenantIdentifierResolver` is supplied.
 * The other situation is when you do not want to have to explicitly specify the tenant identifier all the time.
 If a `CurrentTenantIdentifierResolver` has been specified, Hibernate will use it to determine the default tenant identifier to use when opening the session.
 
@@ -28,7 +28,7 @@ Hibernate produces and releases its own `features.xml` that defines a core `hibe
 This is included in the binary distribution, as well as deployed to the JBoss Nexus repository (using the `org.hibernate` groupId and `hibernate-osgi` with the `karaf.xml` classifier).
 
 Note that our features are versioned using the same ORM artifact versions they wrap.
-Also note that the features are heavily tested against Karaf 3.0.3 as a part of our PaxExam-based integration tests.
+Also, note that the features are heavily tested against Karaf 3.0.3 as a part of our PaxExam-based integration tests.
 However, they'll likely work on other versions as well.
 
 hibernate-osgi, theoretically, supports a variety of OSGi containers, such as Equinox.
@@ -162,12 +162,12 @@ Your bundle's manifest will need to import, at a minimum,
 
 === Obtaining an SessionFactory
 
-`hibernate-osgi` registers an OSGi service, using the `SessionFactory` interface name, that bootstraps and creates an `SessionFactory` specific for OSGi environments.
+`hibernate-osgi` registers an OSGi service, using the `SessionFactory` interface name, that bootstraps and creates a `SessionFactory` specific for OSGi environments.
 
 [IMPORTANT]
 ====
 It is VITAL that your `SessionFactory` be obtained through the service, rather than creating it manually. The service handles the OSGi `ClassLoader`, discovered extension points, scanning, etc.
-Manually creating an `SessionFactory` is guaranteed to NOT work during runtime!
+Manually creating a `SessionFactory` is guaranteed to NOT work during runtime!
 ====
 
 .Discover/Use `SessionFactory`
@@ -98,7 +98,7 @@ Bytecode-enhanced bi-directional association management makes that first example
 [[BytecodeEnhancement-dirty-tracking-optimizations]]
 ===== Internal performance optimizations
 
-Additionally we use the enhancement process to add some additional code that allows us to optimized certain performance characteristics of the persistence context.
+Additionally, we use the enhancement process to add some additional code that allows us to optimized certain performance characteristics of the persistence context.
 These are hard to discuss without diving into a discussion of Hibernate internals.
 
 [[BytecodeEnhancement-enhancement]]
@@ -107,7 +107,7 @@ These are hard to discuss without diving into a discussion of Hibernate internal
 [[BytecodeEnhancement-enhancement-runtime]]
 ===== Run-time enhancement
 
-Currently run-time enhancement of the domain model is only supported in managed JPA environments following the JPA defined SPI for performing class transformations.
+Currently, run-time enhancement of the domain model is only supported in managed JPA environments following the JPA-defined SPI for performing class transformations.
 Even then, this support is disabled by default.
 To enable run-time enhancement, specify `hibernate.ejb.use_class_enhancer`=`true` as a persistent unit property.
 
@@ -147,7 +147,7 @@ Enhancement is disabled by default in preparation for additions capabilities (hb
 Hibernate provides a Maven plugin capable of providing build-time enhancement of the domain model as they are compiled as part of a Maven build.
 See the section on the <<BytecodeEnhancement-enhancement-gradle>> for details on the configuration settings. Again, the default for those 3 is `true`.
 
-The Maven plugin supports one additional configuration settings: failOnError, which controls what happens in case of an error.
+The Maven plugin supports one additional configuration settings: failOnError, which controls what happens in case of error.
 Default behavior is to fail the build, but it can be set so that only a warning is issued.
 
 .Apply the Maven plugin
@@ -59,7 +59,7 @@ include::{sourcedir}/PersistenceContextTest.java[tags=pc-persist-native-example]
 `org.hibernate.Session` also has a method named persist which follows the exact semantic defined in the JPA specification for the persist method.
 It is this `org.hibernate.Session` method to which the Hibernate `javax.persistence.EntityManager` implementation delegates.
 
-If the `DomesticCat` entity type has a generated identifier, the value is associated to the instance when the save or persist is called.
+If the `DomesticCat` entity type has a generated identifier, the value is associated with the instance when the save or persist is called.
 If the identifier is not automatically generated, the manually assigned (usually natural) key value has to be set on the instance before the save or persist methods are called.
 
 [[pc-remove]]
@@ -21,7 +21,7 @@ If you find that your particular database is not among them, it is not terribly
 === Dialect resolution
 
 Originally, Hibernate would always require that users specify which dialect to use. In the case of users looking to simultaneously target multiple databases with their build that was problematic.
-Generally this required their users to configure the Hibernate dialect or defining their own method of setting that value.
+Generally, this required their users to configure the Hibernate dialect or defining their own method of setting that value.
 
 Starting with version 3.2, Hibernate introduced the notion of automatically detecting the dialect to use based on the `java.sql.DatabaseMetaData` obtained from a `java.sql.Connection` to that database.
 This was much better, expect that this resolution was limited to databases Hibernate know about ahead of time and was in no way configurable or overrideable.
@@ -124,7 +124,7 @@ include::{sourcedir}/CriteriaTest.java[tags=criteria-typedquery-multiselect-arra
 Just as we saw in <<criteria-typedquery-multiselect-array-explicit-example>> we have a typed criteria query returning an `Object` array.
 Both queries are functionally equivalent.
 This second example uses the `multiselect()` method which behaves slightly differently based on the type given when the criteria query was first built,
-but in this case it says to select and return an __Object[]__.
+but, in this case, it says to select and return an __Object[]__.
 
 [[criteria-typedquery-wrapper]]
 === Selecting a wrapper
@@ -143,7 +143,7 @@ include::{sourcedir}/CriteriaTest.java[tags=criteria-typedquery-wrapper-example,
 ----
 ====
 
-First we see the simple definition of the wrapper object we will be using to wrap our result values.
+First, we see the simple definition of the wrapper object we will be using to wrap our result values.
 Specifically, notice the constructor and its argument types.
 Since we will be returning `PersonWrapper` objects, we use `PersonWrapper` as the type of our criteria query.
 
@@ -170,7 +170,7 @@ The example uses the explicit `createTupleQuery()` of `javax.persistence.criteri
 An alternate approach is to use `createQuery( Tuple.class )`.
 
 Again we see the use of the `multiselect()` method, just like in <<criteria-typedquery-multiselect-array-implicit-example>>.
-The difference here is that the type of the `javax.persistence.criteria.CriteriaQuery` was defined as `javax.persistence.Tuple` so the compound selections in this case are interpreted to be the tuple elements.
+The difference here is that the type of the `javax.persistence.criteria.CriteriaQuery` was defined as `javax.persistence.Tuple` so the compound selections, in this case, are interpreted to be the tuple elements.
 
 The javax.persistence.Tuple contract provides three forms of access to the underlying elements:
 
@@ -228,8 +228,8 @@ include::{sourcedir}/CriteriaTest.java[tags=criteria-from-root-example]
 ----
 ====
 
-Criteria queries may define multiple roots, the effect of which is to create a cartesian product between the newly added root and the others.
-Here is an example defining a cartesian product betweem `Person` and `Partner` entities:
+Criteria queries may define multiple roots, the effect of which is to create a Cartesian Product between the newly added root and the others.
+Here is an example defining a Cartesian Product between `Person` and `Partner` entities:
 
 [[criteria-from-multiple-root-example]]
 .Adding multiple roots example
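The multiple-roots behavior this hunk corrects the wording of can be sketched in plain Java: with no correlating predicate, two roots behave like a SQL cross join, pairing every row of one root with every row of the other. The class and data below are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Two uncorrelated query roots produce a Cartesian product:
// every "Person" row is paired with every "Partner" row.
public class CrossJoinDemo {
    static List<String> crossJoin(List<String> persons, List<String> partners) {
        List<String> rows = new ArrayList<>();
        for (String person : persons) {
            for (String partner : partners) {
                rows.add(person + "/" + partner);
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String> rows = crossJoin(List.of("p1", "p2"), List.of("a", "b", "c"));
        System.out.println(rows.size()); // 2 x 3 = 6 rows
    }
}
```

Adding a restriction correlating the two roots (the SQL join condition) is what trims this product down to the meaningful pairs.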
@@ -141,7 +141,7 @@ include::{sourcedir}/HQLTest.java[tags=jpql-api-positional-parameter-example]
 
 [NOTE]
 ====
-It's good practice to not mix forms in a given query.
+It's good practice not to mix forms in a given query.
 ====
 
 In terms of execution, JPA `Query` offers 2 different methods for retrieving a result set.
@@ -217,7 +217,7 @@ Query hints here are database query hints.
 They are added directly to the generated SQL according to `Dialect#getQueryHintString`.
 The JPA notion of query hints, on the other hand, refer to hints that target the provider (Hibernate).
 So even though they are called the same, be aware they have a very different purpose.
-Also be aware that Hibernate query hints generally make the application non-portable across databases unless the code adding them first checks the Dialect.
+Also, be aware that Hibernate query hints generally make the application non-portable across databases unless the code adding them first checks the Dialect.
 ====
 
 Flushing is covered in detail in <<chapters/flushing/Flushing.adoc#flushing,Flushing>>.
@@ -241,7 +241,7 @@ include::{sourcedir}/HQLTest.java[tags=hql-api-parameter-example]
 ====
 
 Hibernate generally understands the expected type of the parameter given its context in the query.
-In the previous example, since we are using the parameter in a `LIKE` comparison against a String-typed attribute Hibernate would automatically infer the type; so the above could be simplified.
+In the previous example since we are using the parameter in a `LIKE` comparison against a String-typed attribute Hibernate would automatically infer the type; so the above could be simplified.
 
 [[hql-api-parameter-inferred-type-example]]
 .Hibernate name parameter binding (inferred type)
@@ -318,7 +318,7 @@ The `scroll` method is overloaded.
 See the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/ScrollMode.html[Javadocs] for the details on each.
 * The second form takes no argument and will use the `ScrollMode` indicated by `Dialect#defaultScrollMode`.
 `Query#scroll` returns a `org.hibernate.ScrollableResults` which wraps the underlying JDBC (scrollable) `ResultSet` and provides access to the results.
-Since this form holds the JDBC `ResultSet` open, the application should indicate when it is done with the `ScrollableResults` by calling its `close()` method (as inherited from `java.io.Closeable`, so that `ScrollableResults` will work with try-with-resources blocks!).
+Since this form holds the JDBC `ResultSet` open, the application should indicate when it is done with the `ScrollableResults` by calling its `close()` method (as inherited from `java.io.Closeable` so that `ScrollableResults` will work with try-with-resources blocks!).
 If left unclosed by the application, Hibernate will automatically close the `ScrollableResults` when the current transaction completes.
 
 [NOTE]
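The try-with-resources point this hunk touches can be shown without a database. The stand-in class below is invented for illustration (it is not `ScrollableResults` itself); it only demonstrates the `Closeable` contract the text relies on: `close()` runs when the block exits, even on exception.

```java
// Stand-in for a Closeable result cursor; records that close() ran.
class FakeScrollableResults implements AutoCloseable {
    boolean closed = false;

    boolean next() { return false; } // no rows in this stand-in

    @Override
    public void close() { closed = true; }
}

public class ScrollDemo {
    public static void main(String[] args) {
        FakeScrollableResults results = new FakeScrollableResults();
        try (FakeScrollableResults rs = results) {
            while (rs.next()) {
                // process the current row
            }
        } // close() is invoked here, even if processing threw
        System.out.println(results.closed); // prints "true"
    }
}
```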
@@ -658,7 +658,7 @@ as opposed to the other queries in this section where the HQL/JPQL conditions ar
 ====
 
 The distinction in this specific example is probably not that significant.
-The `with clause` is sometimes necessary in more complicated queries.
+The `with clause` is sometimes necessary for more complicated queries.
 
 Explicit joins may reference association or component/embedded attributes.
 In the case of component/embedded attributes, the join is simply logical and does not correlate to a physical (SQL) join.
@@ -687,7 +687,7 @@ In the example, using an inner join instead would have resulted in customers wit
 Fetch joins are not valid in sub-queries.
 
 Care should be taken when fetch joining a collection-valued association which is in any way further restricted (the fetched collection will be restricted too).
-For this reason it is usually considered best practice to not assign an identification variable to fetched joins except for the purpose of specifying nested fetch joins.
+For this reason, it is usually considered best practice not to assign an identification variable to fetched joins except for the purpose of specifying nested fetch joins.
 
 Fetch joins should not be used in paged queries (e.g. `setFirstResult()` or `setMaxResults()`), nor should they be used with the `scroll()` or `iterate()` features.
 ====
@@ -695,7 +695,7 @@ Fetch joins should not be used in paged queries (e.g. `setFirstResult()` or `set
 [[hql-implicit-join]]
 === Implicit joins (path expressions)
 
-Another means of adding to the scope of object model types available to the query is through the use of implicit joins, or path expressions.
+Another means of adding to the scope of object model types available to the query is through the use of implicit joins or path expressions.
 
 [[hql-implicit-join-example]]
 .Simple implicit join example
@@ -757,7 +757,7 @@ include::{sourcedir}/HQLTest.java[tags=hql-collection-valued-associations]
 ----
 ====
 
-In the example, the identification variable `ph` actually refers to the object model type `Phone` which is the type of the elements of the `Person#phones` association.
+In the example, the identification variable `ph` actually refers to the object model type `Phone`, which is the type of the elements of the `Person#phones` association.
 
 The example also shows the alternate syntax for specifying collection association joins using the `IN` syntax.
 Both forms are equivalent.
@@ -790,7 +790,7 @@ INDEX::
 JPQL however, reserves this for use in the `List` case and adds `KEY` for the `Map` case.
 Applications interested in JPA provider portability should be aware of this distinction.
 KEY::
-Valid only for `Maps`. Refers to the map's key. If the key is itself an entity, can be further navigated.
+Valid only for `Maps`. Refers to the map's key. If the key is itself an entity, it can be further navigated.
 ENTRY::
 Only valid for `Maps`. Refers to the map's logical `java.util.Map.Entry` tuple (the combination of its key and value).
 `ENTRY` is only valid as a terminal path and it's applicable to the `SELECT` clause only.
@@ -812,8 +812,8 @@ include::{sourcedir}/HQLTest.java[tags=hql-polymorphism-example, indent=0]
 
 This query names the `Payment` entity explicitly.
 However, all subclasses of `Payment` are also available to the query.
-So if the `CreditCardPayment` entity and `WireTransferPayment` entity each extend from `Payment` all three types would be available to the query.
-And the query would return instances of all three.
+So if the `CreditCardPayment` and `WireTransferPayment` entities extend the `Payment` class, all three types would be available to the entity query,
+and the query would return instances of all three.
 
 [NOTE]
 ====
@@ -840,8 +840,8 @@ Again, see <<hql-from-clause>>.
 [[hql-literals]]
 === Literals
 
-String literals are enclosed in single-quotes.
-To escape a single-quote within a string literal, use double single-quotes.
+String literals are enclosed in single quotes.
+To escape a single quote within a string literal, use double single quotes.
 
 [[hql-string-literals-example]]
 .String literals examples
@@ -973,7 +973,7 @@ HQL can also understand additional functions defined by the Dialect as well as t
 [[jpql-standardized-functions]]
 === JPQL standardized functions
 
-Here are the list of functions defined as supported by JPQL.
+Here is the list of functions defined as supported by JPQL.
 Applications interested in remaining portable between JPA providers should stick to these functions.
 
 CONCAT::
@@ -1194,7 +1194,7 @@ Such function declarations are made by using the `addSqlFunction()` method of `o
 === Collection-related expressions
 
 There are a few specialized expressions for working with collection-valued associations.
-Generally these are just abbreviated forms or other expressions for the sake of conciseness.
+Generally, these are just abbreviated forms or other expressions for the sake of conciseness.
 
 SIZE::
 Calculate the size of a collection. Equates to a subquery!
@@ -1246,7 +1246,7 @@ We can also refer to the type of an entity as an expression.
 This is mainly useful when dealing with entity inheritance hierarchies.
 The type can expressed using a `TYPE` function used to refer to the type of an identification variable representing an entity.
 The name of the entity also serves as a way to refer to an entity type.
-Additionally the entity type can be parametrized, in which case the entity's Java Class reference would be bound as the parameter value.
+Additionally, the entity type can be parameterized, in which case the entity's Java Class reference would be bound as the parameter value.
 
 [[hql-entity-type-exp-example]]
 .Entity type expression examples
@@ -1489,8 +1489,8 @@ include::{sourcedir}/HQLTest.java[tags=hql-like-predicate-escape-example]
 [[hql-between-predicate]]
 === Between predicate
 
-Analogous to the SQL between expression.
-Perform a evaluation that a value is within the range of 2 other values.
+Analogous to the SQL `BETWEEN` expression,
+it checks if the value is within boundaries.
 All the operands should have comparable types.
 
 [[hql-between-predicate-example]]
@@ -1589,20 +1589,20 @@ If the predicate is true, NOT resolves to false. If the predicate is unknown (e.
 
 The `AND` operator is used to combine 2 predicate expressions.
 The result of the AND expression is true if and only if both predicates resolve to true.
-If either predicate resolves to unknown, the AND expression resolves to unknown as well. Otherwise, the result is false.
+If either predicates resolves to unknown, the AND expression resolves to unknown as well. Otherwise, the result is false.
 
 [[hql-or-predicate]]
 === OR predicate operator
 
 The `OR` operator is used to combine 2 predicate expressions.
-The result of the OR expression is true if either predicate resolves to true.
+The result of the OR expression is true if one predicate resolves to true.
 If both predicates resolve to unknown, the OR expression resolves to unknown.
 Otherwise, the result is false.
 
 [[hql-where-clause]]
 === The `WHERE` clause
 
-The `WHERE` clause of a query is made up of predicates which assert whether values in each potential row match the predicated checks.
+The `WHERE` clause of a query is made up of predicates which assert whether values in each potential row match the current filtering criteria.
 Thus, the where clause restricts the results returned from a select query and limits the scope of update and delete queries.
 
 [[hql-group-by]]
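The AND/OR truth rules this hunk edits follow SQL's three-valued logic. They can be sketched in plain Java, using `null` to stand in for SQL's "unknown" (the class and method names below are invented for illustration).

```java
// SQL predicates use three-valued logic; null stands in for "unknown" here.
public class TernaryLogicDemo {
    static Boolean and3(Boolean a, Boolean b) {
        // false short-circuits AND regardless of the other operand
        if (Boolean.FALSE.equals(a) || Boolean.FALSE.equals(b)) return false;
        if (a == null || b == null) return null; // unknown
        return true;
    }

    static Boolean or3(Boolean a, Boolean b) {
        // true short-circuits OR regardless of the other operand
        if (Boolean.TRUE.equals(a) || Boolean.TRUE.equals(b)) return true;
        if (a == null || b == null) return null; // unknown
        return false;
    }

    public static void main(String[] args) {
        System.out.println(and3(true, null));  // null: true AND unknown is unknown
        System.out.println(and3(false, null)); // false
        System.out.println(or3(true, null));   // true
        System.out.println(or3(null, null));   // null: unknown OR unknown is unknown
    }
}
```

This is also why a `WHERE` clause drops rows whose predicate evaluates to unknown, not just those where it is false.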
@@ -1622,7 +1622,7 @@ include::{sourcedir}/HQLTest.java[tags=hql-group-by-example]
 The first query retrieves the complete total of all orders.
 The second retrieves the total for each customer, grouped by each customer.
 
-In a grouped query, the where clause applies to the non aggregated values (essentially it determines whether rows will make it into the aggregation).
+In a grouped query, the where clause applies to the non-aggregated values (essentially it determines whether rows will make it into the aggregation).
 The `HAVING` clause also restricts results, but it operates on the aggregated values.
 In the <<hql-group-by-example>>, we retrieved `Call` duration totals for all persons.
 If that ended up being too much data to deal with, we might want to restrict the results to focus only on customers with a summed total of more than 1000:
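The WHERE-before-aggregation versus HAVING-after-aggregation distinction in this hunk has a direct in-memory analogue. The classes and sample data below are invented for illustration; grouping happens first, then the "HAVING" filter runs on the aggregated totals.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// In-memory analogue of GROUP BY person HAVING SUM(duration) > :threshold.
public class GroupByDemo {
    record Call(String person, long duration) {}

    static Map<String, Long> totalsOver(List<Call> calls, long threshold) {
        // GROUP BY person, SUM(duration)
        Map<String, Long> totals = calls.stream()
                .collect(Collectors.groupingBy(Call::person,
                        Collectors.summingLong(Call::duration)));
        // HAVING SUM(duration) > threshold — filters aggregated values
        return totals.entrySet().stream()
                .filter(e -> e.getValue() > threshold)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        List<Call> calls = List.of(
                new Call("John", 600), new Call("John", 700),
                new Call("Mary", 200));
        System.out.println(totalsOver(calls, 1000)); // {John=1300}
    }
}
```

A WHERE-style filter would instead be applied to the stream before `groupingBy`, excluding rows from the aggregation entirely.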
@@ -1655,7 +1655,7 @@ Additionally, JPQL says that all values referenced in the `ORDER BY` clause must
 HQL does not mandate that restriction, but applications desiring database portability should be aware that not all databases support referencing values in the `ORDER BY` clause that are not referenced in the select clause.
 
 Individual expressions in the order-by can be qualified with either `ASC` (ascending) or `DESC` (descending) to indicated the desired ordering direction.
-Null values can be placed in front or at the end of sorted set using `NULLS FIRST` or `NULLS LAST` clause respectively.
+Null values can be placed in front or at the end of the sorted set using `NULLS FIRST` or `NULLS LAST` clause respectively.
 
 [[hql-order-by-example]]
 .Order by example
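The `NULLS FIRST` / `NULLS LAST` behavior mentioned in this hunk has a direct analogue in `java.util.Comparator`, which can make the semantics concrete (the sample values are invented for illustration):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// NULLS FIRST / NULLS LAST, expressed with java.util.Comparator.
public class NullsOrderingDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Mihalcea", null, "Adams");

        // NULLS FIRST: nulls sort before all non-null values
        names.sort(Comparator.nullsFirst(Comparator.naturalOrder()));
        System.out.println(names); // [null, Adams, Mihalcea]

        // NULLS LAST: nulls sort after all non-null values
        names.sort(Comparator.nullsLast(Comparator.naturalOrder()));
        System.out.println(names); // [Adams, Mihalcea, null]
    }
}
```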
@@ -254,7 +254,7 @@ Problems can arise when returning multiple entities of the same type or when the
 === Returning multiple entities
 
 Until now, the result set column names are assumed to be the same as the column names specified in the mapping document.
-This can be problematic for SQL queries that join multiple tables, since the same column names can appear in more than one table.
+This can be problematic for SQL queries that join multiple tables since the same column names can appear in more than one table.
 
 Column alias injection is needed in the following query which otherwise throws `NonUniqueDiscoveredSqlAliasException`.
 
@@ -297,7 +297,7 @@ include::{sourcedir}/SQLTest.java[tags=sql-hibernate-multi-entity-query-alias-ex
 There's no such equivalent in JPA because the `Query` interface doesn't define an `addEntity` method equivalent.
 ====
 
-The `{pr.*}` and `{pt.*}` notation used above is a shorthand for "all properties".
+The `{pr.*}` and `{pt.*}` notation used above is shorthand for "all properties".
 Alternatively, you can list the columns explicitly, but even in this case Hibernate injects the SQL column aliases for each property.
 The placeholder for a column alias is just the property name qualified by the table alias.
 
@@ -760,7 +760,7 @@ we need to use the JDBC syntax.
 
 [NOTE]
 ====
-This limitation is acknowledged and it will be addressed by the https://hibernate.atlassian.net/browse/HHH-10530[HHH-10530] issue.
+This limitation is acknowledged and will be addressed by the https://hibernate.atlassian.net/browse/HHH-10530[HHH-10530] issue.
 ====
 
 [[sql-call-function-mysql-example]]
@@ -794,7 +794,7 @@ The following example shows how to define custom SQL operations using annotation
 `@SQLInsert`, `@SQLUpdate` and `@SQLDelete` override the INSERT, UPDATE, DELETE statements of a given entity.
 For the SELECT clause, a `@Loader` must be defined along with a `@NamedNativeQuery` used for loading the underlying table record.
 
-For collections, Hibernate allows defining a custom `@SQLDeleteAll` which is used for removing all child records associated to a given parent entity.
+For collections, Hibernate allows defining a custom `@SQLDeleteAll` which is used for removing all child records associated with a given parent entity.
 To filter collections, the `@Where` annotation allows customizing the underlying SQL WHERE clause.
 
 [[sql-custom-crud-example]]
@@ -146,7 +146,7 @@ The dialects `MySQLSpatial56Dialect` and `MySQLSpatial5InnoDBDialect` use these
 
 These dialects may therefore produce results that differ from that of the other spatial dialects.
 
-For more information see this page in the MySQL reference guide (esp. the section https://dev.mysql.com/doc/refman/5.7/en/spatial-relation-functions.html[Functions That Test Spatial Relations Between Geometry Objects])
+For more information, see this page in the MySQL reference guide (esp. the section https://dev.mysql.com/doc/refman/5.7/en/spatial-relation-functions.html[Functions That Test Spatial Relations Between Geometry Objects])
 ====
 [[spatial-configuration-dialect-oracle]]
 Oracle10g/11g::
@@ -180,7 +180,8 @@ Note that implementations must be thread-safe and have a default no-args constru
[NOTE]
====
The Oracle Spatial dialect can be configured to run in either OGC strict or non-strict mode.
-In OGC strict mode, the Open Geospatial compliant functions of Oracle Spatial are used in spatial operations (they exists in Oracle 10g, but are not documented). In non-strict mode the usual Oracle Spatial functions are used directly, and mimic the OGC semantics.The default is OGC strict mode.
+In OGC strict mode, the Open Geospatial compliant functions of Oracle Spatial are used in spatial operations (they exists in Oracle 10g, but are not documented).
+In non-strict mode, the usual Oracle Spatial functions are used directly, and mimic the OGC semantics.The default is OGC strict mode.
You can change this to non-strict mode by setting the hibernate.spatial.ogc_strict property to false.

Note that changing from strict to non-strict mode changes the semantics of the spatial operation.
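The property named in this hunk is set like any other Hibernate setting, for example in `hibernate.properties` (a sketch, not part of the commit):

```properties
# Switch the Oracle Spatial dialect from the default OGC strict mode
# to non-strict mode (changes the semantics of spatial operations)
hibernate.spatial.ogc_strict=false
```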
@@ -211,7 +212,7 @@ Hibernate Spatial comes with the following types:
jts_geometry::
Handled by `org.hibernate.spatial.JTSGeometryType` it maps a database geometry column type to a `com.vividsolutions.jts.geom.Geometry` entity property type.
geolatte_geometry::
-Handled by `org.hibernate.spatial.GeolatteGeometryType`, it maps a database geometry column type to a `org.geolatte.geom.Geometry` entity property type.
+Handled by `org.hibernate.spatial.GeolatteGeometryType`, it maps a database geometry column type to an `org.geolatte.geom.Geometry` entity property type.

The following entity uses the `jts_geometry` to map the PostgreSQL geometry type to a `com.vividsolutions.jts.geom.Point`.
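A sketch of such an entity (illustrative only, not part of the commit; with `hibernate-spatial` on the classpath and a spatial dialect configured, `Geometry`-typed properties are handled by `JTSGeometryType`):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import com.vividsolutions.jts.geom.Point;

// Hypothetical entity: the location property is backed by a
// PostgreSQL geometry column and mapped through jts_geometry.
@Entity
public class Event {

    @Id
    private Long id;

    private String name;

    // Mapped by Hibernate Spatial's JTSGeometryType
    private Point location;

    // getters and setters omitted for brevity
}
```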
@@ -11,13 +11,13 @@ In most use-cases these definitions align, but that is not always the case.

[NOTE]
====
-This documentation largely treats the physical and logic notions of transaction as one-in-the-same.
+This documentation largely treats the physical and logic notions of a transaction as one-in-the-same.
====

[[transactions-physical]]
=== Physical Transactions

-Hibernate uses the JDBC API for persistence. In the world of Java there are 2 well defined mechanism for dealing with transactions in JDBC: JDBC itself and JTA.
+Hibernate uses the JDBC API for persistence. In the world of Java there are two well-defined mechanism for dealing with transactions in JDBC: JDBC itself and JTA.
Hibernate supports both mechanisms for integrating with transactions and allowing applications to manage physical transactions.

Transaction handling per `Session` is handled by the `org.hibernate.resource.transaction.TransactionCoordinator` contract, which are built by the `org.hibernate.resource.transaction.TransactionCoordinatorBuilder` service.
@@ -53,18 +53,18 @@ and `javax.transaction.UserTransaction` for that system as well as exposing the

[NOTE]
====
-Generally `JtaPlatfor`m will need access to JNDI to resolve the JTA `TransactionManager`, `UserTransaction`, etc.
+Generally, `JtaPlatform` will need access to JNDI to resolve the JTA `TransactionManager`, `UserTransaction`, etc.
See <<chapters/jndi/JNDI.adoc#jndi,JNDI chapter>> for details on configuring access to JNDI.
====

Hibernate tries to discover the `JtaPlatform` it should use through the use of another service named `org.hibernate.engine.transaction.jta.platform.spi.JtaPlatformResolver`.
If that resolution does not work, or if you wish to provide a custom implementation you will need to specify the `hibernate.transaction.jta.platform` setting.
-Hibernate provides many implementations of the `JtaPlatform` contract, all with short-names:
+Hibernate provides many implementations of the `JtaPlatform` contract, all with short names:

`Borland`:: `JtaPlatform` for the Borland Enterprise Server.
`Bitronix`:: `JtaPlatform` for Bitronix.
-`JBossAS`:: `JtaPlatform` for Arjuna/JBossTransactions/Narnya when used within the JBoss/WildFly Application Server.
-`JBossTS`:: `JtaPlatform` for Arjuna/JBossTransactions/Narnya when used standalone.
+`JBossAS`:: `JtaPlatform` for Arjuna/JBossTransactions/Narayana when used within the JBoss/WildFly Application Server.
+`JBossTS`:: `JtaPlatform` for Arjuna/JBossTransactions/Narayana when used standalone.
`JOnAS`:: `JtaPlatform` for JOTM when used within JOnAS.
`JOTM`:: `JtaPlatform` for JOTM when used standalone.
`JRun4`:: `JtaPlatform` for the JRun 4 Application Server.
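Selecting one of these platforms explicitly uses the `hibernate.transaction.jta.platform` setting mentioned above, for example (a config sketch, not part of the commit; here the standalone JBossTS/Narayana platform):

```properties
# hibernate.properties (or the equivalent persistence.xml property)
hibernate.transaction.jta.platform=JBossTS
```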
@@ -92,7 +92,7 @@ In fact in both JTA and JDBC environments, these `Synchronizations` are kept loc
In JTA environments, Hibernate will only ever register one single `Synchronization` with the `TransactionManager` to avoid ordering problems.

Additionally, it exposes a getStatus method that returns an `org.hibernate.resource.transaction.spi.TransactionStatus` enum.
-This method checks with the underling transaction system if needed, so care should be taken to minimize its use; it can have a big performance impact in certain JTA set ups.
+This method checks with the underlying transaction system if needed, so care should be taken to minimize its use; it can have a big performance impact in certain JTA set ups.

Let's take a look at using the Transaction API in the various environments.
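In a plain JDBC (non-JTA) environment, for instance, usage follows the familiar begin/commit/rollback idiom. A sketch, not part of the commit; the `SessionFactory` setup is assumed to exist elsewhere:

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

// Sketch of the Transaction API in a JDBC environment.
public class TransactionExample {

    public static void saveInTransaction(SessionFactory sessionFactory, Object entity) {
        Session session = sessionFactory.openSession();
        Transaction txn = session.getTransaction();
        try {
            txn.begin();           // starts the physical JDBC transaction
            session.persist(entity);
            txn.commit();          // flushes the Session and commits
        } catch (RuntimeException e) {
            if (txn.isActive()) {
                txn.rollback();    // an exception means the transaction must be rolled back
            }
            throw e;               // exceptions are not recoverable; rethrow
        } finally {
            session.close();
        }
    }
}
```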
@@ -182,7 +182,7 @@ In the same way, do not auto-commit after every SQL statement in your applicatio
Hibernate disables, or expects the application server to disable, auto-commit mode immediately.
Database transactions are never optional.
All communication with a database must be encapsulated by a transaction.
-Avoid auto-commit behavior for reading data, because many small transactions are unlikely to perform better than one clearly-defined unit of work, and are more difficult to maintain and extend.
+Avoid auto-commit behavior for reading data because many small transactions are unlikely to perform better than one clearly-defined unit of work, and are more difficult to maintain and extend.

[NOTE]
====
@@ -247,7 +247,7 @@ Even though we have multiple databases access here, from the point of view of th
There are many ways to implement this in your application.

A first naive implementation might keep the `Session` and database transaction open while the user is editing, using database-level locks to prevent other users from modifying the same data and to guarantee isolation and atomicity.
-This is an anti-pattern, because lock contention is a bottleneck which will prevent scalability in the future.
+This is an anti-pattern because lock contention is a bottleneck which will prevent scalability in the future.

Several database transactions are used to implement the conversation.
In this case, maintaining isolation of business processes becomes the partial responsibility of the application tier.
@@ -285,7 +285,7 @@ An exception thrown by Hibernate means you have to rollback your database transa
If your `Session` is bound to the application, you have to stop the application.
Rolling back the database transaction does not put your business objects back into the state they were at the start of the transaction.
This means that the database state and the business objects will be out of sync.
-Usually this is not a problem, because exceptions are not recoverable and you will have to start over after rollback anyway.
+Usually, this is not a problem because exceptions are not recoverable and you will have to start over after rollback anyway.

The `Session` caches every object that is in a persistent state (watched and checked for dirty state by Hibernate).
If you keep it open for a long time or simply load too much data, it will grow endlessly until you get an `OutOfMemoryException`.