more verbiage surrounding cache concurrency

this is an important thing, and in the past we've been
very stingy on the documentation side
Gavin King 2022-11-09 12:23:12 +01:00
parent 90e6a8b698
commit a12ba4c2e4
3 changed files with 47 additions and 13 deletions


@@ -60,10 +60,30 @@ package org.hibernate;
* semantics associated with the underlying caches. In particular, eviction via
* the methods of this interface causes an immediate "hard" removal outside any
* current transaction and/or locking scheme.
* <p>
* The {@link org.hibernate.annotations.Cache} annotation also specifies a
* {@link org.hibernate.annotations.CacheConcurrencyStrategy}, a policy governing
* access to the second-level cache by concurrent transactions. One of:
* <ul>
* <li>{@linkplain org.hibernate.annotations.CacheConcurrencyStrategy#READ_ONLY
* read-only access} for immutable data,
* <li>{@linkplain org.hibernate.annotations.CacheConcurrencyStrategy#NONSTRICT_READ_WRITE
* read/write access with no locking}, when concurrent updates are
* extremely improbable,
* <li>{@linkplain org.hibernate.annotations.CacheConcurrencyStrategy#READ_WRITE
* read/write access using soft locks} when concurrent updates are possible
* but not common, or
* <li>{@linkplain org.hibernate.annotations.CacheConcurrencyStrategy#TRANSACTIONAL
* transactional access} when concurrent updates are frequent.
* </ul>
* It's important to always specify an appropriate policy explicitly, taking
* into account the expected patterns of data access and, most importantly,
* the frequency of updates.
*
* @author Steve Ebersole
*
* @see org.hibernate.annotations.Cache
* @see org.hibernate.annotations.CacheConcurrencyStrategy
*/
public interface Cache extends jakarta.persistence.Cache {
/**

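The "hard" eviction semantics described above can be sketched as follows. This is a minimal illustration, not code from the commit: the `Book` entity is hypothetical, and the `SessionFactory` is assumed to be already configured; the calls shown (`getCache()`, the JPA-inherited `evict`, and Hibernate's `evictEntityData`) are part of the real API.

```java
import org.hibernate.Cache;
import org.hibernate.SessionFactory;

// Hypothetical mapped entity, for illustration only.
class Book {}

public class EvictionExample {
    public static void evictBook(SessionFactory sessionFactory, Long bookId) {
        Cache cache = sessionFactory.getCache();
        // Immediate "hard" removal of one cached instance, outside any
        // current transaction and/or locking scheme.
        cache.evict(Book.class, bookId);
        // Or drop all cached state for the entity type at once:
        cache.evictEntityData(Book.class);
    }
}
```

Because eviction bypasses the transaction, it's typically reserved for administrative operations or for invalidating data changed outside Hibernate.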

@@ -20,7 +20,7 @@ import static java.lang.annotation.RetentionPolicy.RUNTIME;
* <ul>
* <li>a {@linkplain #region named cache region} in which to store
* the state of instances of the entity or collection, and
* <li>an appropriate {@linkplain #usage cache concurrency policy},
* given the expected data access patterns affecting the entity
* or collection.
* </ul>
@@ -42,8 +42,8 @@ import static java.lang.annotation.RetentionPolicy.RUNTIME;
@Retention(RUNTIME)
public @interface Cache {
/**
* The appropriate {@linkplain CacheConcurrencyStrategy concurrency
* policy} for the annotated root entity or collection.
*/
CacheConcurrencyStrategy usage();

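A sketch of the annotation in use, under stated assumptions: the `Country` entity and the `"reference-data"` region name are hypothetical, chosen to illustrate the most common case of effectively immutable reference data, where `READ_ONLY` is the appropriate policy.

```java
import jakarta.persistence.Cacheable;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Hypothetical reference-data entity: never updated after load,
// so read-only caching is safe and cheapest.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY, region = "reference-data")
public class Country {
    @Id
    private String isoCode;
    private String name;
}
```

For data that is occasionally or frequently updated, substitute `NONSTRICT_READ_WRITE`, `READ_WRITE`, or `TRANSACTIONAL` for `READ_ONLY`, per the criteria in the enum's Javadoc below.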

@@ -9,7 +9,7 @@ package org.hibernate.annotations;
import org.hibernate.cache.spi.access.AccessType;
/**
* Identifies policies for managing concurrent access to the shared
* second-level cache.
* <p>
* A second-level cache is shared between all concurrent active
@@ -17,8 +17,8 @@ import org.hibernate.cache.spi.access.AccessType;
* state between transactions, while bypassing the database's
* locking or multi-version concurrency control. This tends to
* undermine the ACID properties of transaction processing, which
* are only guaranteed when all sharing of data is mediated by the
* database.
* <p>
* Of course, as a general rule, the only sort of data that really
* belongs in a second-level cache is data that is both:
@@ -26,16 +26,20 @@ import org.hibernate.cache.spi.access.AccessType;
* <li>read extremely frequently, and
* <li>written infrequently.
* </ul>
* When an entity or collection is marked {@linkplain Cache cacheable},
* it must indicate the policy which governs concurrent access to its
* second-level cache, by selecting a {@code CacheConcurrencyStrategy}
* appropriate to the expected patterns of data access. The most
* important consideration is the frequency of updates which mutate
* the state of the cached entity or collection.
* <p>
* For example, if an entity is immutable, {@link #READ_ONLY} is the
* most appropriate policy, and the entity should be annotated
* {@code @Cache(usage=READ_ONLY)}.
*
* @author Emmanuel Bernard
*
* @see AccessType The corresponding SPI.
*/
public enum CacheConcurrencyStrategy {
/**
@@ -47,6 +51,8 @@ public enum CacheConcurrencyStrategy {
NONE( null ),
/**
* Read-only access to the shared second-level cache.
* <p>
* Indicates that the cached object is immutable, and is
* never updated. If an entity with this cache concurrency
* is updated, an exception is thrown. This is the simplest,
@@ -58,6 +64,9 @@ public enum CacheConcurrencyStrategy {
READ_ONLY( AccessType.READ_ONLY ),
/**
* Read/write access to the shared second-level cache with no
* locking.
* <p>
* Indicates that the cached object is sometimes updated, but
* that it is <em>extremely</em> unlikely that two transactions
* will attempt to update the same item of data at the same
@@ -77,6 +86,9 @@ public enum CacheConcurrencyStrategy {
NONSTRICT_READ_WRITE( AccessType.NONSTRICT_READ_WRITE ),
/**
* Read/write access to the shared second-level cache using
* soft locks.
* <p>
* Indicates a non-vanishing likelihood that two concurrent
* transactions attempt to update the same item of data
* simultaneously. This strategy uses "soft" locks to prevent
@@ -104,6 +116,8 @@ public enum CacheConcurrencyStrategy {
READ_WRITE( AccessType.READ_WRITE ),
/**
* Transactional access to the shared second-level cache.
* <p>
* Indicates that concurrent writes are common, and the only
* way to maintain synchronization between the second-level
* cache and the database is via the use of a fully