cannibalize some information from javadoc

Gavin 2023-05-11 23:05:49 +02:00 committed by Christian Beikov
parent ed04cae295
commit e1cdd99135
3 changed files with 242 additions and 13 deletions


@@ -183,7 +183,7 @@ Alternatively, the venerable class `org.hibernate.cfg.Configuration` allows an i
[source,java]
----
SessionFactory factory = new Configuration()
SessionFactory sf = new Configuration()
.addAnnotatedClass(Book.class)
.addAnnotatedClass(Author.class)
.setProperty(AvailableSettings.JAKARTA_JDBC_URL, "jdbc:postgresql://localhost/example")


@@ -60,7 +60,7 @@ On the other hand, stateful sessions come with some very important restrictions,
- persistence contexts aren't threadsafe, and can't be shared across threads, and
- a persistence context can't be reused across unrelated transactions, since that would break the isolation and atomicity of the transactions.
[WARNING]
[IMPORTANT]
.This is important
====
If you didn't quite understand the point above, then go back and re-read it until you do understand.
@@ -73,3 +73,158 @@ For this reason Hibernate provides both stateful and stateless sessions.
[[creating-session]]
=== Creating a session
Sticking with standard JPA-defined APIs, we saw how to obtain an `EntityManagerFactory` in <<configuration-jpa>>.
It's quite unsurprising that we may use this object to create an `EntityManager`:
[source,java]
----
EntityManager em = emf.createEntityManager();
----
When we're finished with the `EntityManager`, we should explicitly clean it up:
[source,java]
----
em.close();
----
On the other hand, if we're starting from a `SessionFactory`, as described in <<configuration-api>>, we may use:
[source,java]
----
Session s = sf.openSession();
----
But we still need to clean up:
[source,java]
----
s.close();
----
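Alternatively, since the native `Session` is `AutoCloseable`, a try-with-resources block is one way to make sure the session always gets cleaned up:
[source,java]
----
try (Session s = sf.openSession()) {
    // work with the session here; it's closed automatically when the block exits
}
----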
[NOTE]
.Injecting the `EntityManager`
====
If you're writing code for some sort of container environment, you'll probably obtain the `EntityManager` by some sort of dependency injection.
For example, in Java (or Jakarta) EE you would write:
[source,java]
----
@PersistenceContext EntityManager em;
----
In Quarkus, injection is handled by CDI:
[source,java]
----
@Inject EntityManager em;
----
====
[[managing-transactions]]
=== Managing transactions
Using JPA-standard APIs, the `EntityTransaction` interface allows us to control database transactions.
[source,java]
----
EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction();
try {
tx.begin();
//do some work
...
tx.commit();
}
catch (Exception e) {
if (tx.isActive()) tx.rollback();
throw e;
}
finally {
em.close();
}
----
Using Hibernate's native APIs we might write something really similar,
// [source,java]
// ----
// Session s = sf.openSession();
// Transaction tx = null;
// try {
// tx = s.beginTransaction();
// //do some work
// ...
// tx.commit();
// }
// catch (Exception e) {
// if (tx!=null) tx.rollback();
// throw e;
// }
// finally {
// s.close();
// }
// ----
but since this sort of code is extremely tedious, we have a much nicer option:
[source,java]
----
sf.inTransaction(s -> {
//do the work
...
});
----
[NOTE]
.Container-managed transactions
====
In a container environment, the container itself is usually responsible for managing transactions.
In Java EE or Quarkus, you'll probably indicate the boundaries of the transaction using the `@Transactional` annotation.
====
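In that case, the explicit transaction demarcation shown above collapses to a single annotation. A minimal sketch, assuming `jakarta.transaction.Transactional` and CDI injection (the `BookService` class and `addBook()` method are purely illustrative):
[source,java]
----
public class BookService {
    @Inject EntityManager em;

    // the container begins a transaction before this method runs,
    // and commits it (or rolls it back if the method fails) when it returns
    @Transactional
    public void addBook(Book book) {
        em.persist(book);
    }
}
----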
=== Operations on the persistence context
Of course, the main reason we need an `EntityManager` is to do stuff to the database.
The following operations let us interact with the persistence context:
.Important methods of the `EntityManager`
[cols="2,5"]
|===
| Method name and parameters | Effect
| `find(Class,Object)` and `find(Class,Object,LockModeType)`
| Obtain a persistent object given its type and its id
| `persist(Object)`
| Make a transient object persistent and schedule a SQL `insert` statement for later execution
| `remove(Object)`
| Make a persistent object transient and schedule a SQL `delete` statement for later execution
| `merge(Object)`
| Copy the state of a given detached object to a corresponding managed persistent instance and return
the persistent object
| `refresh(Object)` and `refresh(Object,LockModeType)`
| Refresh the persistent state of an object using a new SQL `select` to retrieve the current state from the
database
| `lock(Object, LockModeType)`
| Obtain a <<optimistic-and-pessimistic-locking,pessimistic lock>> on a persistent object
| `flush()`
| Detect changes made to persistent objects associated with the session and synchronize the database state with the state of the session by executing SQL `insert`, `update`, and `delete` statements
| `detach(Object)`
| Disassociate a persistent object from a session without
affecting the database
| `getReference(Class,Object)` or
`getReference(Object)`
| Obtain a reference to a persistent object without actually loading its state from the database
|===
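To see a few of these operations working together, here's a sketch using the `inTransaction()` idiom shown earlier (the no-argument `Book` constructor, the `setTitle()` method, and the `bookId` variable are purely illustrative):
[source,java]
----
// make a new transient object persistent: an insert is scheduled
sf.inTransaction(s -> {
    Book book = new Book();
    s.persist(book);
});

// in a later transaction, fetch it by id and modify it;
// the update is executed when the session flushes
sf.inTransaction(s -> {
    Book book = s.find(Book.class, bookId);  // bookId obtained elsewhere
    book.setTitle("An Even Better Title");
});

// finally, make it transient again: a delete is scheduled
sf.inTransaction(s -> {
    Book book = s.find(Book.class, bookId);
    s.remove(book);
});
----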
If an exception occurs while interacting with the database, there's no good way to resynchronize the state of the current persistence context with the state held in database tables.
Therefore, a session is considered to be unusable after any of its methods throws an exception.
[IMPORTANT]
.The persistence context is fragile
====
If you receive an exception from Hibernate, you should immediately close and discard the current session. Open a new session if you need to, but throw the bad one away first.
====
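As a very rough sketch of that rule (the `doWork()` helper is purely illustrative):
[source,java]
----
Session s = sf.openSession();
try {
    doWork(s);
}
finally {
    // the session is closed whether or not doWork() failed;
    // if it did fail, any retry must use a brand new session
    s.close();
}
----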
[[flush]]
=== Flushing the session


@@ -119,14 +119,66 @@ You can find much more information about association fetching in the
:second-level-cache: https://docs.jboss.org/hibernate/orm/6.2/userguide/html_single/Hibernate_User_Guide.html#caching
A classic way to reduce the number of accesses to the database is to
use a second-level cache, allowing cached data to be shared between
sessions.
A classic way to reduce the number of accesses to the database is to use a second-level cache, allowing cached data to be shared between sessions.
Configuring Hibernate's second-level cache is a rather involved topic,
and quite outside the scope of this document. But in case it helps, we
often test Hibernate with the following configuration, which uses
EHCache as the cache implementation, as above in <<optional-dependencies>>:
By nature, a second-level cache tends to undermine the ACID properties of transaction processing in a relational database. A second-level cache is often by far the easiest way to improve the performance of a system, but only at the cost of making it much more difficult to reason about concurrency. And so the cache is a potential source of bugs which are difficult to isolate and reproduce.
[IMPORTANT]
.Caching is disabled by default
====
Therefore, by default, an entity is not eligible for storage in the second-level cache.
We must explicitly mark each entity that will be stored in the second-level cache with the `@Cache` annotation from `org.hibernate.annotations`.
====
For example:
[source,java]
----
@Cache(usage=NONSTRICT_READ_WRITE, region="Publishers")
@Entity
class Publisher { ... }
----
Hibernate segments the second-level cache into named _regions_, one for each:
- mapped entity hierarchy or
- collection role.
Each region is permitted its own policies for expiry, persistence, and replication. These policies must be configured externally to Hibernate.
An entity hierarchy or collection role may be explicitly assigned a region using the `@Cache` annotation, but, by default, the region name is just the name of the entity class or collection role.
The appropriate policies depend on the kind of data an entity represents. For example, a program might have different caching policies for "reference" data, for transactional data, and for data used for analytics. Ordinarily, the implementation of those policies is the responsibility of the underlying cache implementation.
The `@Cache` annotation also specifies a `CacheConcurrencyStrategy`, a policy governing access to the second-level cache by concurrent transactions.
|===
| Concurrency policy | Interpretation | Use case
| `READ_ONLY` | Read-only access | Immutable data
| `NONSTRICT_READ_WRITE` | Read/write access with no locking | When concurrent updates are extremely improbable
| `READ_WRITE` | Read/write access using soft locks | When concurrent updates are possible but not common
| `TRANSACTIONAL` | Transactional access | When concurrent updates are frequent
|===
Which policies make sense may also depend on the underlying second-level cache implementation.
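For example, purely immutable reference data is a natural fit for `READ_ONLY`. A minimal sketch, assuming a simple `Country` entity (which is illustrative, not part of the example model above):
[source,java]
----
@Cache(usage=READ_ONLY, region="Countries")
@Entity
class Country { ... }
----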
[NOTE]
.The JPA-defined `@Cacheable` annotation
====
JPA has a similar annotation, named `@Cacheable`.
Unfortunately, it's almost useless to us, since:
- it provides no way to specify any information about the nature of the cached entity and how its cache should be managed, and
- it may not be used to annotate associations, and so we can't even use it to mark collection roles as eligible for storage in the second-level cache.
====
Once we've marked some entities and collections as eligible for storage in the second-level cache, we still need to set up an actual cache.
[[second-level-cache-configuration]]
=== Configuring the second-level cache provider
Configuring Hibernate's second-level cache is a rather involved topic, and quite outside the scope of this document. But in case it helps, we often test Hibernate with the following configuration, which uses EHCache as the cache implementation, as above in <<optional-dependencies>>:
|===
| Configuration property name | Property value
@@ -141,13 +193,35 @@ If you're using EHCache, you'll also need to include an `ehcache.xml` file
that explicitly configures the behavior of each cache region belonging to
your entities and collections.
TIP: Don't forget that you need to explicitly mark each entity that will
be stored in the second-level cache with the `@Cache` annotation from
`org.hibernate.annotations`.
You can find much more information about the second-level cache in the
{second-level-cache}[User Guide].
[[second-level-cache-management]]
=== Second-level cache management
For the most part, the second-level cache is transparent.
Program logic which interacts with the Hibernate session is unaware of the cache, and is not impacted by changes to caching policies.
At worst, interaction with the cache may be controlled by specification of an explicit `CacheMode`.
[source,java]
----
s.setCacheMode(CacheMode.IGNORE);
----
Very occasionally, it's necessary or advantageous to control the cache explicitly, for example, to evict some data that we know to be stale.
The `Cache` interface allows programmatic eviction of cached items.
[source,java]
----
sf.getCache().evictEntityData(Book.class, bookId);
----
[NOTE]
.Second-level cache management is not transaction-aware
====
None of the operations of the `Cache` interface respect any isolation or transactional semantics associated with the underlying caches. In particular, eviction via the methods of this interface causes an immediate "hard" removal outside any current transaction and/or locking scheme.
====
[[session-cache-management]]
=== Session cache management